Selective reporting of research findings (March 14, 2005)

I have talked extensively about publication bias in my weblog, and I address this issue in detail in my book on Statistical Evidence, which I hope to finish sometime soon.

A related problem arises when researchers decide whether or not to report particular data analyses based on how impressive the results appear. Phillips (2004) calls this publication bias in situ (PBIS).

Examples of this problem occur when researchers have discretion over

(1) Which exposures and outcomes to consider in datasets with many variables.
(2) Which functional forms to use to represent variables (e.g., how to divide continuous variables into categories).
(3) Whether to conduct separate analyses by subgroup, and which subgroup results to emphasize.

and they then focus their attention on those analyses that produce the more interesting results. The more interesting results, of course, are the ones that tend to show that a new drug or therapy is better; the less interesting results are the ones that tend to show that the new drug or therapy is about the same as the standard treatment. I make a joke about this sort of thing when I tell people how wonderful statistical software is. It allows you to run ten separate analyses and then choose the one that gives you the smallest p-value.
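There is a real statistical cost behind that joke. A small simulation (a sketch, assuming ten independent analyses and p-values that are uniform under the null hypothesis) shows how often "pick the smallest p-value" crosses the 0.05 threshold even when there is no effect at all:

```python
import random

random.seed(1)

def smallest_of_k_pvalues(k):
    # Under the null hypothesis, each p-value is Uniform(0, 1).
    # Cherry-picking means we keep only the smallest of k of them.
    return min(random.random() for _ in range(k))

n_sims = 100_000
false_positives = sum(smallest_of_k_pvalues(10) < 0.05 for _ in range(n_sims))
rate = false_positives / n_sims

# Theory: P(min of 10 p-values < 0.05) = 1 - 0.95**10, about 0.40,
# not the nominal 0.05.
print(round(rate, 3))
```

So a researcher who quietly runs ten analyses and reports only the best one has roughly a 40% chance of a "significant" finding from pure noise, eight times the advertised 5% error rate. The independence assumption is a simplification; correlated analyses inflate the error rate less, but it still climbs well above 0.05.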

Evaluating PBIS is difficult because you normally don't have access to the original research protocol to see which analyses the researchers had originally intended.