Publication bias is
the tendency for researchers whose data lead to a negative conclusion to leave
their work unpublished. These pages discuss the problems that publication bias
causes, especially for researchers who are performing a systematic
overview. Also see Category: Information
searching and Category: Systematic overviews.
Outside resources
- U Ravnskov. Cholesterol lowering trials in coronary heart disease:
frequency of citation and outcome. BMJ. 1992;305(6844):15-19. Abstract:
OBJECTIVE--To see if the claim that lowering cholesterol values prevents
coronary heart disease is true or if it is based on citation of supportive
trials only. DESIGN--Comparison of frequency of citation with outcome of all
controlled cholesterol lowering trials using coronary heart disease or death,
or both, as end point. SUBJECTS--22 controlled cholesterol lowering trials.
RESULTS--Trials considered by their directors as supportive of the contention
were cited almost six times more often than others, according to Science
Citation Index. Apart from trials discontinued because of alleged side effects
of treatment, unsupportive trials were not cited after 1970, although their
number almost equalled the number considered supportive. In three supportive
reviews the outcome of the selected trials was more favourable than the
outcome of the excluded and ignored trials. In the 22 controlled cholesterol
lowering trials studied total and coronary heart disease mortality was not
changed significantly either overall or in any subgroup. A statistically
significant 0.32% reduction in non-fatal coronary heart disease seemed to be
due to bias as event frequencies were unrelated to trial length and to mean
net reduction in cholesterol value; individual changes in cholesterol values
were unsystematically or not related to outcome; and after correction for a
small but significant increase in non-medical deaths in the intervention
groups total mortality remained unchanged (odds ratio 1.02).
CONCLUSIONS--Lowering serum cholesterol concentrations does not reduce
mortality and is unlikely to prevent coronary heart disease. Claims of the
opposite are based on preferential citation of supportive trials.
[Accessed November 30, 2009]. Available at:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1882525/.
- Christine Laine, Richard Horton, Catherine De Angelis, et al. Clinical
Trial Registration: Looking Back and Moving Ahead. The Medical Journal of
Australia. 2007;186(12):612-613. Excerpt: In 2005, the International
Committee of Medical Journal Editors (ICMJE) initiated a policy requiring
investigators to deposit information about trial design into an accepted
clinical trials registry before the onset of patient enrolment.1 This policy
aimed to ensure that information about the existence and design of clinically
directive trials was publicly available, an ideal that leaders in
evidence-based medicine have advocated for decades.2 The policy precipitated
much angst among research investigators and sponsors, who feared that
registration would be burdensome and would stifle competition. Yet, the
response to this policy has been overwhelming. The ICMJE promised to
re-evaluate the policy 2 years after implementation. Here, we summarise that
re-evaluation, specifically commenting on registries that meet the policy
requirements, the types of studies that require registration, and the
registration of trial results. As is always the case, the ICMJE establishes
policy only for the 12 member journals (a detailed description of the ICMJE
and its purpose is available at http://www.icmje.org), but many other journals
have adopted our initial trial registration recommendations, and we hope that
they will also adopt the modifications discussed in this update. [Accessed
November 30, 2009]. Available at:
http://www.mja.com.au/public/issues/186_12_180607/lai10586_fm.html.
- Erin D. Williams. Clinical Trials Reporting and Publication.
Excerpt: The central issue before Congress with respect to clinical trials
reporting and publication is how to balance the potential beneficial public
health effects of requiring that clinical trials data be made public with the
burdens that such requirements may place on companies and their innovation.
Clinical trials, which are conducted regularly to test the effects of new
pharmaceuticals and medical devices, cost a significant amount of money, and
by their nature may present some risk to the people who participate in them.
Manufacturers as well as medical journal editors have been reluctant to
publish clinical trial data indicating that products in development are
harmful or ineffective. The availability of such information might save a
duplication of effort and studies that harm or fail to help patients.
[Accessed November 30, 2009]. Available at:
http://www.fas.org/sgp/crs/misc/RL32832.pdf.
- Mike Clarke. Commentary: Searching for trials for systematic reviews:
what difference does it make? Int. J. Epidemiol. 2002;31(1):123-124.
Excerpt: "Systematic reviews are increasingly common as a means of assessing
the relative effectiveness of health care interventions. This is not least
because of the formation of the Cochrane Collaboration with its aim to
prepare, maintain and promote the accessibility of reviews in all areas of
health care.1 Just as systematic reviews contribute to the practice of
evidence-based health care, so should the conduct of reviews be based on good
evidence, ideally from empirical research." [Accessed May 19, 2010].
Available at:
http://ije.oxfordjournals.org.
- Iain Chalmers. Current Controlled Trials: an opportunity to help
improve the quality of clinical research. Current Controlled Trials in
Cardiovascular Medicine. 2000;1(1):3-8. Abstract: "Some problems with the
quality of controlled clinical trials can be addressed by following these
procedures: registering all trials at inception; using systematic reviews to
inform the design of new studies; posting and obtaining feedback on preprints;
reporting all well conducted trials, regardless of their results; reducing
biased and inefficient assessment of reports submitted for publication;
publishing sufficiently detailed reports; linking trial reports to relevant
external information; providing readier access to reports; and reviewing and
amending reports after initial publication. The launch of a new range of
electronic journals by Current Controlled Trials offers an opportunity to
contribute to progress in these ways." [Accessed May 19, 2010]. Available
at:
http://cvm.controlled-trials.com/content/1/1/003.
- Antje Timmer, Robert Hilsden, Lloyd Sutherland. Determinants of
abstract acceptance for the Digestive Diseases Week - a cross sectional study.
BMC Medical Research Methodology. 2001;1(1):13. Abstract: "BACKGROUND: The
Digestive Diseases Week (DDW) is the major meeting for presentation of
research in gastroenterology. The acceptance of an abstract for presentation
at this meeting is the most important determinant of subsequent full
publication. We wished to examine the determinants of abstract acceptance for
this meeting. METHODS: A cross-sectional study was performed, based on
abstracts submitted to the DDW. All 17,205 abstracts submitted from 1992 to
1995 were reviewed for acceptance, country of origin and research type
(controlled clinical trials (CCT), other clinical research (OCR), basic
science (BSS)). A random sub-sample (n = 1,000) was further evaluated for
formal abstract quality, statistical significance of study results and sample
size. RESULTS: 326 CCT, 455 OCR and 219 BSS abstracts were evaluated in
detail. Abstracts from N/W Europe (OR 0.4, 95% CI 0.3-0.6), S/E Europe (OR
0.4, 95% CI 0.2-0.6) and non-Western countries (OR 0.3, 95% CI 0.2-0.5) were
less likely to be accepted than North-American contributions when controlling
for research type. In addition, the OR for the acceptance for studies with
negative results as compared to those with positive results was 0.4 (95% CI
0.3-0.7). A high abstract quality score was also weakly associated with
acceptance rates (OR 1.4, 95% CI 1.0-2.0). CONCLUSIONS: North-American
contributions and reports with statistically positive results have higher
acceptance rates at the AGA. Formal abstract quality was also predictive for
acceptance." [Accessed May 19, 2010]. Available at:
http://www.biomedcentral.com/1471-2288/1/13.
- Peter Juni, Franziska Holenstein, Jonathan Sterne, Christopher Bartlett,
Matthias Egger. Direction and impact of language bias in meta-analyses of
controlled trials: empirical study. Int. J. Epidemiol. 2002;31(1):115-123.
Abstract: "Background: Excluding clinical trials reported in languages
other than English from meta-analyses may introduce bias and reduce the
precision of combined estimates of treatment effects. We examined the
influence of trials published in languages other than English on combined
estimates and conclusions of published meta-analyses. Methods: We searched
journals and the Cochrane Database of Systematic Reviews for meta-analyses of
at least five trials with binary outcomes that were based on comprehensive
literature searches without language restrictions. We compared estimates of
treatment effects from trials published in languages other than English to
those from trials published in English, and assessed the impact of restricting
meta-analyses to trials published in English. Results: We identified 303
meta-analyses: 159 (52.4%) employed comprehensive literature searches of which
50 included 485 English and 115 non-English language trials. Non-English
language trials included fewer participants (median 88 versus 116, P = 0.006)
and were more likely to produce significant results at P < 0.05 (41.7% versus
31.3%, P = 0.033). The methodological quality of non-English language trials
tended to be lower than that of trials published in English. Estimates of
treatment effects were on average 16% (95% CI : 3-26%) more beneficial in
non-English-language trials than in English-language trials. In 29 (58.0%)
meta-analyses the change in effect estimates after exclusion of non-English
language trials was less than 5%. In the remaining meta-analyses, 5 (10.0%)
showed more benefit and 16 (32.0%) less benefit after exclusion of non-English
language trials. Conclusions: This retrospective analysis suggests that
excluding trials published in languages other than English has generally
little effect on summary treatment effect estimates. The importance of
non-English language trials is, however, difficult to predict for individual
systematic reviews. Comprehensive literature searches followed by a careful
assessment of trial quality are required to assess the contribution of all
relevant trials, independent of language of publication." [Accessed May
19, 2010]. Available at:
http://ije.oxfordjournals.org/cgi/content/abstract/31/1/115.
- A Vickers, N Goyal, R Harland, R Rees. Do certain countries produce
only positive results? A systematic review of controlled trials. Control
Clin Trials. 1998;19(2):159-166. Abstract: "OBJECTIVE: To determine whether
clinical trials originating in certain countries always have positive results.
DATA SOURCES: Abstracts of trials from Medline (January 1966-June 1995). STUDY
SELECTION: Two separate studies were conducted. The first included trials in
which the clinical outcome of a group of subjects receiving acupuncture was
compared to that of a group receiving placebo, no treatment, or a
nonacupuncture intervention. In the second study, randomized or controlled
trials of interventions other than acupuncture that were published in China,
Japan, Russia/USSR, or Taiwan were compared to those published in England.
DATA EXTRACTION: Blinded reviewers determined inclusion and outcome and
separately classified each trial by country of origin. DATA SYNTHESIS: In the
study of acupuncture trials, 252 of 1085 abstracts met the inclusion criteria.
Research conducted in certain countries was uniformly favorable to
acupuncture; all trials originating in China, Japan, Hong Kong, and Taiwan
were positive, as were 10 out of 11 of those published in Russia/USSR. In
studies that examined interventions other than acupuncture, 405 of 1100
abstracts met the inclusion criteria. Of trials published in England, 75% gave
the test treatment as superior to control. The results for China, Japan,
Russia/USSR, and Taiwan were 99%, 89%, 97%, and 95%, respectively. No trial
published in China or Russia/USSR found a test treatment to be ineffective.
CONCLUSIONS: Some countries publish unusually high proportions of positive
results. Publication bias is a possible explanation. Researchers undertaking
systematic reviews should consider carefully how to manage data from these
countries." [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/9551280.
- L McAuley, B Pham, P Tugwell, D Moher. Does the inclusion of grey
literature influence estimates of intervention effectiveness reported in
meta-analyses? Lancet. 2000;356(9237):1228-1231. Abstract:
"BACKGROUND: The inclusion of only a subset of all available evidence in a
meta-analysis may introduce biases and threaten its validity; this is
particularly likely if the subset of included studies differ from those not
included, which may be the case for published and grey literature (unpublished
studies, with limited distribution). We set out to examine whether exclusion
of grey literature, compared with its inclusion in meta-analysis, provides
different estimates of the effectiveness of interventions assessed in
randomised trials. METHODS: From a random sample of 135 meta-analyses, we
identified and retrieved 33 publications that included both grey and published
primary studies. The 33 publications contributed 41 separate meta-analyses
from several disease areas. General characteristics of the meta-analyses and
associated studies and outcome data at the trial level were collected. We
explored the effects of the inclusion of grey literature on the quantitative
results using logistic-regression analyses. FINDINGS: 33% of the meta-analyses
were found to include some form of grey literature. The grey literature, when
included, accounts for between 4.5% and 75% of the studies in a meta-analysis.
On average, published work, compared with grey literature, yielded
significantly larger estimates of the intervention effect by 15% (ratio of
odds ratios=1.15 [95% CI 1.04-1.28]). Excluding abstracts from the analysis
further compounded the exaggeration (1.33 [1.10-1.60]). INTERPRETATION: The
exclusion of grey literature from meta-analyses can lead to exaggerated
estimates of intervention effectiveness. In general, meta-analysts should
attempt to identify, retrieve, and include all reports, grey and published,
that meet predefined inclusion criteria." [Accessed May 19, 2010].
Available at:
http://www.ncbi.nlm.nih.gov/pubmed/11072941.
- I Sim, D K Owens, P W Lavori, G D Rennels. Electronic trial banks: a
complementary method for reporting randomized trials. Med Decis Making.
2000;20(4):440-450. Abstract: "BACKGROUND: Randomized clinical trial (RCT)
results are often difficult to find, interpret, or apply to clinical care. The
authors propose that RCTs be reported into electronic knowledge bases-trial
banks-in addition to being reported in text. What information should these
trial-bank reports contain? METHODS: Using the competency decomposition
method, the authors specified the ideal trial-bank contents as the information
necessary and sufficient for completing the task of systematic reviewing.
RESULTS: They decomposed the systematic reviewing task into four top-level
tasks and 62 subtasks. 162 types of trial information were necessary and
sufficient for completing these subtasks. These items relate to a trial's
design, execution, administration, and results. CONCLUSION: Trial-bank
publishing of these 162 items would capture into computer-understandable form
all the trial information needed for critically appraising and synthesizing
trial results. Decision-support systems that access shared, up-to-date trial
banks could help clinicians manage, synthesize, and apply RCT evidence more
effectively." [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/11059477.
- A J Sutton, S J Duval, R L Tweedie, K R Abrams, D R Jones. Empirical
assessment of effect of publication bias on meta-analyses. BMJ.
2000;320(7249):1574-1577. Abstract: "Objective: To assess the effect of
publication bias on the results and conclusions of systematic reviews and
meta-analyses. Design: Analysis of published meta-analyses by trim and fill
method. Studies: 48 reviews in Cochrane Database of Systematic Reviews that
considered a binary endpoint and contained 10 or more individual studies. Main
outcome measures: Number of reviews with missing studies and effect on
conclusions of meta-analyses. Results: The trim and fill fixed effects
analysis method estimated that 26 (54%) of reviews had missing studies and in
10 the number missing was significant. The corresponding figures with a random
effects model were 23 (48%) and eight. In four cases, statistical inferences
regarding the effect of the intervention were changed after the overall
estimate for publication bias was adjusted for. Conclusions: Publication or
related biases were common within the sample of meta-analyses assessed. In
most cases these biases did not affect the conclusions. Nevertheless,
researchers should check routinely whether conclusions of systematic reviews
are robust to possible non-random selection mechanisms." [Accessed May 19,
2010]. Available at:
http://www.bmj.com/cgi/content/abstract/320/7249/1574.
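The trim and fill method used in the study above estimates how many studies appear to be missing from one side of the funnel plot, "trims" the unmatched extreme studies, re-estimates the pooled effect, and then "fills" in mirror-image studies before computing an adjusted estimate. The following is a simplified sketch in Python, not the authors' code: it uses Duval and Tweedie's L0 estimator with a fixed-effect model and assumes the missing studies lie on the negative side, so the largest observed effects are the ones trimmed.

```python
def fixed_effect_mean(y, v):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    w = [1.0 / vi for vi in v]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

def rank_abs(dev):
    """Ranks (1..n) of |deviations|; ties receive their average rank."""
    a = [abs(d) for d in dev]
    order = sorted(range(len(a)), key=lambda i: a[i])
    ranks = [0.0] * len(a)
    for pos, i in enumerate(order, start=1):
        ranks[i] = float(pos)
    for val in set(a):  # average the ranks within each tied group
        idx = [i for i, x in enumerate(a) if x == val]
        avg = sum(ranks[i] for i in idx) / len(idx)
        for i in idx:
            ranks[i] = avg
    return ranks

def trim_and_fill(y, v, max_iter=50):
    """Simplified one-sided trim and fill (L0 estimator).

    y: study effect estimates; v: their variances.
    Returns (k0, unadjusted pooled estimate, adjusted pooled estimate).
    """
    n = len(y)
    k0 = 0
    for _ in range(max_iter):
        # trim the k0 largest effects, then re-center at the trimmed estimate
        keep = sorted(range(n), key=lambda i: y[i])[: n - k0]
        center = fixed_effect_mean([y[i] for i in keep], [v[i] for i in keep])
        dev = [yi - center for yi in y]
        ranks = rank_abs(dev)
        Tn = sum(r for r, d in zip(ranks, dev) if d > 0)
        L0 = (4.0 * Tn - n * (n + 1)) / (2.0 * n - 1.0)
        new_k0 = max(0, int(round(L0)))
        if new_k0 == k0:
            break
        k0 = new_k0
    y_all, v_all = list(y), list(v)
    if k0 > 0:  # "fill" with mirror images of the trimmed studies
        trimmed = sorted(range(n), key=lambda i: y[i])[n - k0:]
        y_all += [2.0 * center - y[i] for i in trimmed]
        v_all += [v[i] for i in trimmed]
    return k0, fixed_effect_mean(y, v), fixed_effect_mean(y_all, v_all)
```

For example, three large precise studies near zero plus three small imprecise studies with big positive effects (a classic publication-bias funnel) yield k0 > 0 and an adjusted estimate pulled back toward zero, while a symmetric funnel yields k0 = 0 and no adjustment.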
- An-Wen Chan, Asbjorn Hrobjartsson, Mette T. Haahr, Peter C. Gotzsche,
Douglas G. Altman. Empirical Evidence for Selective Reporting of Outcomes
in Randomized Trials: Comparison of Protocols to Published Articles. JAMA.
2004;291(20):2457-2465. Abstract: "Context: Selective reporting of outcomes
within published studies based on the nature or direction of their results has
been widely suspected, but direct evidence of such bias is currently limited
to case reports. Objective To study empirically the extent and nature of
outcome reporting bias in a cohort of randomized trials. Design: Cohort study
using protocols and published reports of randomized trials approved by the
Scientific-Ethical Committees for Copenhagen and Frederiksberg, Denmark, in
1994-1995. The number and characteristics of reported and unreported trial
outcomes were recorded from protocols, journal articles, and a survey of
trialists. An outcome was considered incompletely reported if insufficient
data were presented in the published articles for meta-analysis. Odds ratios
relating the completeness of outcome reporting to statistical significance
were calculated for each trial and then pooled to provide an overall estimate
of bias. Protocols and published articles were also compared to identify
discrepancies in primary outcomes. Main Outcome Measures: Completeness of
reporting of efficacy and harm outcomes and of statistically significant vs
nonsignificant outcomes; consistency between primary outcomes defined in the
most recent protocols and those defined in published articles. Results: One
hundred two trials with 122 published journal articles and 3736 outcomes were
identified. Overall, 50% of efficacy and 65% of harm outcomes per trial were
incompletely reported. Statistically significant outcomes had a higher odds of
being fully reported compared with nonsignificant outcomes for both efficacy
(pooled odds ratio, 2.4; 95% confidence interval [CI], 1.4-4.0) and harm
(pooled odds ratio, 4.7; 95% CI, 1.8-12.0) data. In comparing published
articles with protocols, 62% of trials had at least 1 primary outcome that was
changed, introduced, or omitted. Eighty-six percent of survey responders
(42/49) denied the existence of unreported outcomes despite clear evidence to
the contrary. Conclusions: The reporting of trial outcomes is not only
frequently incomplete but also biased and inconsistent with protocols.
Published articles, as well as reviews that incorporate them, may therefore be
unreliable and overestimate the benefits of an intervention. To ensure
transparency, planned trials should be registered and protocols should be made
publicly available prior to trial completion." [Accessed February 25,
2009]. Available at:
http://jama.ama-assn.org/cgi/content/abstract/291/20/2457.
- DM Lane, WP Dunlap. Estimating effect size: Bias resulting from the
significance criterion in editorial decisions. British Journal of
Mathematical and Statistical Psychology. 1978;31(2):107-112. Abstract:
Experiments that find larger differences between groups than actually exist in
the population are more likely to pass stringent tests of significance and be
published than experiments that find smaller differences. Published measures
of the magnitude of experimental effects will therefore tend to overestimate
these effects. This bias was investigated as a function of sample size, actual
population difference, and alpha level. The overestimation of experimental
effects was found to be quite large with the commonly employed significance
levels of 5 per cent and 1 per cent. Further, the recently recommended
measure, omega squared, was found to depend much more heavily on the alpha
level employed than on the true population omega squared value. Hence, it was
concluded that effect size estimation is impractical unless scientific
journals drop the consideration of statistical significance as one of the
criteria of publication. [Accessed May 19, 2010]. Available at:
http://psych.rice.edu/papers/estimating_effect_size.pdf.
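The mechanism Lane and Dunlap describe is easy to demonstrate by simulation: if only statistically significant experiments get "published," the average published effect overstates the true effect. A hypothetical sketch (one-sample two-sided z-tests with known standard deviation; the parameter values are arbitrary choices for illustration):

```python
import math
import random
import statistics

def publication_filter_demo(true_mean=0.2, sigma=1.0, n=20,
                            trials=5000, seed=42):
    """Simulate many small experiments and 'publish' only the significant
    ones. Returns (mean of all sample means, mean of published means)."""
    random.seed(seed)
    # |sample mean| needed for p < 0.05, two-sided, sigma known
    cutoff = 1.96 * sigma / math.sqrt(n)
    every, published = [], []
    for _ in range(trials):
        xbar = random.gauss(true_mean, sigma / math.sqrt(n))
        every.append(xbar)
        if abs(xbar) > cutoff:
            published.append(xbar)
    return statistics.mean(every), statistics.mean(published)
```

With these settings the full set of experiments averages close to the true effect of 0.2, while the published subset averages well above it, illustrating the overestimation the abstract describes.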
- P A Rochon, J H Gurwitz, C M Cheung, J A Hayes, T C Chalmers.
Evaluating the quality of articles published in journal supplements compared
with the quality of those published in the parent journal. JAMA.
1994;272(2):108-113. Abstract: OBJECTIVES: To determine the relationship between the
quality of articles and whether they were published in a supplement or in the
parent journal. DATA SOURCES AND STUDY SELECTION: All randomized control
trials of drug therapies in adults published in the American Journal of
Cardiology, the American Journal of Medicine and the American Heart Journal
from January 1990 and obtained in November 1992 by means of a MEDLINE search.
A total of 318 abstracts appeared to meet our inclusion criteria, and these
articles were obtained and reviewed in further detail. An additional 76 were
excluded. DATA EXTRACTION: Three reviewers who were "blinded" and thus unaware
of supplement status independently assessed the quality of each of the
remaining 242 articles according to a standard quality scoring system. DATA
SYNTHESIS: Overall, 67 (27.7%) of the articles were published in journal
supplements. Article quality scores ranged from 4.2% to 87.5%, with a mean
(+/- SD) score of 37.2% +/- 13.1%. Quality scores were lower in articles
published in journal supplements than in those published in the parent journal
(t[240] = 2.61, P = .01). The mean quality score for articles published in
journal supplements was 33.6% +/- 12.8% compared with a score of 38.5% +/-
13.1% for articles published in the parent journal. Supplement articles
included in their final analysis a smaller proportion of the patients
initially randomized (t[75] = 2.8, P = .007). CONCLUSION: Our findings suggest
that randomized control trials published in journal supplements are generally
of inferior quality compared with articles published in the parent journal.
The review process surrounding the publication of journal supplements should
be consistent with that of the parent journal. [Accessed May 19, 2010].
Available at:
http://www.ncbi.nlm.nih.gov/pubmed/8015117.
- D Helmer, I Savoie, C Green, A Kazanjian. Evidence-based practice:
extending the search to find material for the systematic review. Bull Med
Libr Assoc. 2001;89(4):346-352. Abstract: "BACKGROUND: Cochrane-style
systematic reviews increasingly require the participation of librarians.
Guidelines on the appropriate search strategy to use for systematic reviews
have been proposed. However, research evidence supporting these
recommendations is limited. OBJECTIVE: This study investigates the
effectiveness of various systematic search methods used to uncover randomized
controlled trials (RCTs) for systematic reviews. Effectiveness is defined as
the proportion of relevant material uncovered for the systematic review using
extended systematic review search methods. The following extended systematic
search methods are evaluated: searching subject-specific or specialized
databases (including trial registries), hand searching, scanning reference
lists, and communicating personally. METHODS: Two systematic review projects
were prospectively monitored regarding the method used to identify items as
well as the type of items retrieved. The proportion of RCTs identified by each
systematic search method was calculated. RESULTS: The extended systematic
search methods uncovered 29.2% of all items retrieved for the systematic
reviews. The search of specialized databases was the most effective method,
followed by scanning of reference lists, communicating personally, and hand
searching. Although the number of items identified through hand searching was
small, these unique items would otherwise have been missed. CONCLUSIONS:
Extended systematic search methods are effective tools for uncovering material
for the systematic review. The quality of the items uncovered has yet to be
assessed and will be key in evaluating the value of the systematic search
methods." [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/11837256.
- K Dickersin. The existence of publication bias and risk factors for its
occurrence. JAMA. 1990;263(10):1385-1389. Abstract: "Publication bias
is the tendency on the parts of investigators, reviewers, and editors to
submit or accept manuscripts for publication based on the direction or
strength of the study findings. Much of what has been learned about
publication bias comes from the social sciences, less from the field of
medicine. In medicine, three studies have provided direct evidence for this
bias. Prevention of publication bias is important both from the scientific
perspective (complete dissemination of knowledge) and from the perspective of
those who combine results from a number of similar studies (meta-analysis). If
treatment decisions are based on the published literature, then the literature
must include all available data that is of acceptable quality. Currently,
obtaining information regarding all studies undertaken in a given field is
difficult, even impossible. Registration of clinical trials, and perhaps other
types of studies, is the direction in which the scientific community should
move." [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/2406472.
- Fujian Song, Sheetal Parekh-Bhurke, Lee Hooper, et al. Extent of
publication bias in different categories of research cohorts: a meta-analysis
of empirical studies. BMC Medical Research Methodology. 2009;9(1):79. Abstract:
BACKGROUND: The validity of research synthesis is threatened if published
studies comprise a biased selection of all studies that have been conducted.
We conducted a meta-analysis to ascertain the strength and consistency of the
association between study results and formal publication. METHODS: The Cochrane
Methodology Register Database, MEDLINE and other electronic bibliographic
databases were searched (to May 2009) to identify empirical studies that
tracked a cohort of studies and reported the odds of formal publication by
study results. Reference lists of retrieved articles were also examined for
relevant studies. Odds ratios were used to measure the association between
formal publication and significant or positive results. Included studies were
separated into subgroups according to starting time of follow-up, and results
from individual cohort studies within the subgroups were quantitatively
pooled. RESULTS: We identified 12 cohort studies that followed up research from
inception, four that included trials submitted to a regulatory authority, 28
that assessed the fate of studies presented as conference abstracts, and four
cohort studies that followed manuscripts submitted to journals. The pooled
odds ratio of publication of studies with positive results, compared to those
without positive results (publication bias) was 2.78 (95% CI: 2.10 to 3.69) in
cohorts that followed from inception, 5.00 (95% CI: 2.01 to 12.45) in trials
submitted to regulatory authority, 1.70 (95% CI: 1.44 to 2.02) in abstract
cohorts, and 1.06 (95% CI: 0.80 to 1.39) in cohorts of manuscripts.
CONCLUSIONS: Dissemination of research findings is likely to be a biased
process. Publication bias appears to occur early, mainly before the
presentation of findings at conferences or submission of manuscripts to
journals. [Accessed November 29, 2009]. Available at:
http://www.biomedcentral.com/1471-2288/9/79.
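Several of the papers above summarize publication bias as an odds ratio relating positive results to formal publication, pooled across studies by inverse-variance weighting of the log odds ratios. A minimal sketch of that standard calculation (the 2x2 counts in the example are made up for illustration, not taken from any of the papers):

```python
import math

def pooled_odds_ratio(tables, z=1.96):
    """Fixed-effect (inverse-variance) pooled odds ratio with 95% CI.

    Each table is a 2x2 tuple (a, b, c, d):
      a = positive results, published     b = positive results, unpublished
      c = negative results, published     d = negative results, unpublished
    """
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1.0/a + 1.0/b + 1.0/c + 1.0/d  # Woolf's variance of a log OR
        num += log_or / var                  # inverse-variance weight = 1/var
        den += 1.0 / var
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            (math.exp(pooled - z * se), math.exp(pooled + z * se)))
```

For instance, a single cohort with counts (10, 5, 5, 10) gives an odds ratio of (10*10)/(5*5) = 4.0; adding further cohorts tightens the confidence interval around the weighted average of the log odds ratios.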
- K Dickersin, Y I Min, C L Meinert. Factors influencing publication of
research results. Follow-up of applications submitted to two institutional
review boards. JAMA. 1992;267(3):374-378. Abstract: "OBJECTIVE:--To
investigate factors associated with the publication of research findings, in
particular, the association between "significant" results and publication.
DESIGN:--Follow-up study. SETTING:--Studies approved in 1980 or prior to 1980
by the two institutional review boards that serve The Johns Hopkins Health
Institutions--one that serves the School of Medicine and Hospital and the
other that serves the School of Hygiene and Public Health. POPULATION:--A
total of 737 studies were followed up. RESULTS:--Of the studies for which
analyses had been reported as having been performed at the time of interview,
81% from the School of Medicine and Hospital and 66% from the School of
Hygiene and Public Health had been published. Publication was not associated
with sample size, presence of a comparison group, or type of study (eg,
observational study vs clinical trial). External funding and multiple data
collection sites were positively associated with publication. There was
evidence of publication bias in that for both institutional review boards
there was an association between results reported to be significant and
publication (adjusted odds ratio, 2.54; 95% confidence interval, 1.63 to
3.94). Contrary to popular opinion, publication bias originates primarily with
investigators, not journal editors: only six of the 124 studies not published
were reported to have been rejected for publication. CONCLUSION:--There is a
statistically significant association between significant results and
publication." [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/1727960.
- D Rennie. Fair conduct and fair reporting of clinical trials. JAMA.
1999;282(18):1766-1768. Description: This is an accompanying editorial to
Johansen HK, Gotzsche PC. Problems in the design and reporting of trials of
antifungal agents encountered during meta-analysis. JAMA 1999 Nov 10; 282(18):
1752-9. [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/10568651.
- Owen Dyer. GlaxoSmithKline faces US lawsuit over concealment of trial
results. BMJ. 2004;328(7453):1395. Excerpt: "The British pharmaceutical
giant GlaxoSmithKline (GSK) is facing a major lawsuit in the United States
over alleged concealment of negative trial results involving its
antidepressant paroxetine (marketed there as Paxil and in Britain as Seroxat).
The civil suit, filed by New York state's attorney general, Eliot Spitzer,
charges the drug company with "repeated and persistent fraud" in concealing
the results of studies that suggested that paroxetine was ineffective in
treating depression in adolescents. It is the first time a US public authority
has pursued a drug company for misreporting trial data." [Accessed May 19,
2010]. Available at:
http://www.bmj.com/cgi/content/full/328/7453/1395.
- M Egger, P Juni, C Bartlett, F Holenstein, J Sterne. How important are
comprehensive literature searches and the assessment of trial quality in
systematic reviews? Empirical study. Excerpt: "The inclusion of an
unbiased sample of relevant studies is central to the validity of systematic
reviews and meta-analyses. Time-consuming and costly literature searches,
which cover the grey literature and all relevant languages and databases, are
normally recommended to prevent reporting biases. However, the size and
direction of these effects is unclear at present. There may be trade-offs
between timeliness, cost and the quality of systematic reviews." [Accessed
May 19, 2010]. Available at:
http://www.hta.ac.uk/execsumm/summ701.shtml.
- Catherine H MacLean, Sally C Morton, Joshua J Ofman, Elizabeth A Roth,
Paul G Shekelle. How useful are unpublished data from the Food and Drug
Administration in meta-analysis? J Clin Epidemiol. 2003;56(1):44-51.
Abstract: "The goals of this systematic review and meta-analysis were to
ascertain whether studies of nonsteroidal anti-inflammatory drugs (NSAIDs)
summarized in the FDA reviews are ultimately published, to compare the
methodologic and population characteristics of studies summarized in the FDA
reviews with those reported in peer reviewed literature, and to compare the
pooled relative risk of dyspepsia from NSAIDs in each data source. Summary
measures of risk difference were calculated with a random effects model;
meta-regression was used to assess the effect of study covariates. Among 37
studies described in the FDA reviews, one was published. Sample size, gender
distribution, indication for drug use, and methodologic quality did not vary
significantly between the published and FDA data. The pooled risk ratio for
dyspepsia obtained using published data (1.21) or FDA data (1.07) did not
differ significantly or practically. Data from FDA reviews may be a viable
data source for systematic reviews and meta-analyses but only after being
subjected to the same methodologic scrutiny as published data." [Accessed
May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/12589869.
- Martin R Tramer, D John M Reynolds, R Andrew Moore, Henry J McQuay.
Impact of covert duplicate publication on meta-analysis: a case study. BMJ.
1997;315(7109):635-640. Abstract: "Objective: To quantify the impact of
duplicate data on estimates of efficacy. Design: Systematic search for
published full reports of randomised controlled trials investigating
ondansetron's effect on postoperative emesis. Abstracts were not considered.
Data sources: Eighty four trials (11 980 patients receiving ondansetron)
published between 1991 and September 1996. Main outcome measures: Percentage
of duplicated trials and patient data. Estimation of antiemetic efficacy
(prevention of emesis) of the most duplicated ondansetron regimen. Comparison
between the efficacy of non-duplicated and duplicated data. Results: Data from
nine trials had been published in 14 further reports, duplicating data from
3335 patients receiving ondansetron; none used a clear cross reference.
Intravenous ondansetron 4 mg versus placebo was investigated in 16 reports not
subject to duplicate publication, three reports subject to duplicate
publication, and six duplicates of those three reports. The number needed to
treat to prevent vomiting within 24 hours was 9.5 (95% confidence interval 6.9
to 15) in the 16 non-duplicated reports and 3.9 (3.3 to 4.8) in the three
reports which were duplicated (P<0.00001). When these 19 were combined the
number needed to treat was 6.4 (5.3 to 7.9). When all original and duplicate
reports were combined (n=25) the apparent number needed to treat improved to
4.9 (4.4 to 5.6). Conclusions: By searching systematically we found 17% of
published full reports of randomised trials and 28% of the patient data were
duplicated. Trials reporting greater treatment effect were significantly more
likely to be duplicated. Inclusion of duplicated data in meta-analysis led to
a 23% overestimation of ondansetron's antiemetic efficacy. Key messages:
Although publishing the same data more than once is strongly discouraged, there
is no evidence of the impact of duplicate data on meta-analysis. Re-analysing
an important trial, and cross referencing to original reports (overt
duplication), may be necessary and valuable in some circumstances. Covert
duplication, masked by change of authors, of language, or by adding extra
data, causes problems; one danger is that patient data are analysed more than
once in meta-analysis. 17% of systematically searched randomised trials of
ondansetron as a postoperative antiemetic were covert duplicates and resulted
in 28% of patient data being duplicated. None of these reports cross
referenced the original source. Duplication led to an overestimation of
ondansetron's antiemetic efficacy of 23%. Trials reporting greater treatment
effect were significantly more likely to be duplicated. Covert duplication of
data has major implications for the assessment of drug efficacy and safety."
[Accessed May 19, 2010]. Available at:
http://www.bmj.com/cgi/content/abstract/315/7109/635.
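The number-needed-to-treat figures in the Tramer abstract are just reciprocals of absolute risk differences. A minimal sketch of that arithmetic, using hypothetical event rates chosen only to reproduce NNTs of roughly the magnitudes reported (these are not the trial's actual rates):

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treated_rate)

# Hypothetical event rates, chosen so the NNTs roughly mirror those reported.
print(round(nnt(0.40, 0.295), 1))  # ~9.5, like the non-duplicated reports
print(round(nnt(0.40, 0.144), 1))  # ~3.9, like the duplicated reports
```

Because pooling weights each report as if it were independent, folding the duplicates back in drags the combined NNT toward the duplicated trials' more optimistic value, which is the 23% overestimation the authors describe.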
- Narayana S Murali, Hema R Murali, Paranee Auethavekiat, et al. Impact
of FUTON and NAA bias on visibility of research. Mayo Clin. Proc.
2004;79(8):1001-1006. Abstract: "OBJECTIVE: To determine whether
availability of journals on MEDLINE as FUTON (full text on the Net) affects
their impact factor. MATERIAL AND METHODS: A comprehensive search identified
324 cardiology, nephrology, and rheumatology/immunology journals on-line until
May 2003. The status of these journals was ascertained in MEDLINE as having
FUTON, abstracts only, and NAA (no abstract available). Impact factors for all
available journals from the Institute for Scientific Information (ISI) were
abstracted. RESULTS: Of the 324 journals, 124 (38.3%) were FUTON, 138 (42.6%)
had abstracts only, and 62 (19.1%) had NAA. The mean (+/-SEM) impact factor
was 3.24 (+/-0.32), 1.64 (+/-0.30), and 0.14 (+/-0.45), respectively. Of the
324 current journals, 159 existed in both the pre- and the post-Internet era.
An analysis of the change (ie, delta) in impact factor from the pre- to
post-Internet era revealed a trend between journals with FUTON and abstracts
only (P=.17, Wilcoxon rank sum test). Similar analyses of the delta of
cardiology journals revealed a statistically significant difference between
journals with FUTON and abstracts only (P=.04, Wilcoxon rank sum test).
CONCLUSION: FUTON bias is the tendency to peruse what is more readily
available. This is the first study to show that on-line availability of
medical literature may increase the impact factor and that such increase tends
to be greater in FUTON journals. Failure to consider this bias may affect a
journal's impact factor. Also, it could limit consideration of medical
literature by ignoring relevant NAA articles and thereby influence medical
education akin to publication or language bias." [Accessed May 19, 2010].
Available at:
http://www.ncbi.nlm.nih.gov/pubmed/15301326.
- M Egger, T Zellweger-Zähner, M Schneider, et al. Language bias in
randomised controlled trials published in English and German. Lancet.
1997;350(9074):326-329. Abstract: "BACKGROUND: Some randomised controlled
trials (RCTs) done in German-speaking Europe are published in international
English-language journals and others in national German-language journals. We
assessed whether authors are more likely to report trials with statistically
significant results in English than in German. METHODS: We studied pairs of
RCT reports, matched for first author and time of publication, with one report
published in German and the other in English. Pairs were identified from
reports found in a manual search of five leading German-language journals and
from reports published by the same authors in English found on Medline.
Quality of methods and reporting were assessed with two different scales by
two investigators who were unaware of authors' identities, affiliations, and
other characteristics of trial reports. Main study endpoints were selected by
two investigators who were unaware of trial results. Our main outcome was the
number of pairs of studies in which the levels of significance (shown by p
values) were discordant. FINDINGS: 62 eligible pairs of reports were
identified but 19 (31%) were excluded because they were duplicate
publications. A further three pairs (5%) were excluded because no p values
were given. The remaining 40 pairs were analysed. Design characteristics and
quality features were similar for reports in both languages. Only 35% of
German-language articles, compared with 62% of English-language articles,
reported significant (p < 0.05) differences in the main endpoint between study
and control groups (p = 0.002 by McNemar's test). Logistic regression showed
that the only characteristic that predicted publication in an English-language
journal was a significant result. The odds ratio for publication of trials
with significant results in English was 3.75 (95% CI 1.25-11.3).
INTERPRETATION: Authors were more likely to publish RCTs in an
English-language journal if the results were statistically significant.
English language bias may, therefore, be introduced in reviews and
meta-analyses if they include only trials reported in English. The effort of
the Cochrane Collaboration to identify as many controlled trials as possible,
through the manual search of many medical journals published in different
languages will help to reduce such bias." [Accessed May 19, 2010].
Available at:
http://www.ncbi.nlm.nih.gov/pubmed/9251637.
- Desmond Julian, Jacques Le Lorier, Genevieve Gregoire. Meta-analysis
and the meta-epidemiology of clinical research. BMJ. 1998;316(7127):311b.
Excerpt: "Both the editorial by Naylor and the paper by Stern and Simes
propose the prospective registration of clinical trials. Such registration
would not only avoid publication bias but also provide the basis for
evaluating the final conclusions of trials in the light of the original
hypothesis and design." [Accessed May 19, 2010]. Available at:
http://www.bmj.com/cgi/content/full/316/7127/311/b.
- Matthias Egger, George Davey Smith. Meta-analysis bias in location and
selection of studies. BMJ. 1998;316(7124):61-66. Excerpt: Meta-analysis
has received a mixed reception since the outset. Some people have rejected
what they see as exercises in "mega-silliness," while the purveyors of a
highly distinguished series of meta-analyses of perinatal medical care have
been dismissed as "an obstetrical Baader-Meinhof gang." To some clinicians
objecting to the findings of meta-analyses, "a tool has become a weapon." At
the other end of the spectrum, the application of a technique that basically
consists of calculating a weighted average has been hailed as "Newtonian," and
it has been suggested that meta-analysis has left no place for the narrative
review article. The truth is likely to lie somewhere between these extreme
views. [Accessed November 30, 2009]. Available at:
http://www.bmj.com/cgi/content/full/316/7124/61.
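The "weighted average" Egger and Davey Smith allude to is, in its simplest fixed-effect form, an inverse-variance weighted mean of the study effect estimates. A sketch with made-up effect sizes (not data from any cited trial):

```python
# Fixed-effect inverse-variance pooling; values are illustrative only.
effects = [0.30, 0.10, 0.25]    # e.g. log odds ratios from three studies
variances = [0.04, 0.01, 0.09]  # their squared standard errors

# Each study is weighted by the reciprocal of its variance, so precise
# studies dominate the pooled estimate.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
print(round(pooled, 3), round(pooled_se, 3))
```

The pooled estimate lands nearest the most precise study, which is exactly why selective non-publication of small negative studies can still distort the average when the missing studies are numerous.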
- T C Chalmers, C S Frank, D Reitman. Minimizing the three stages of
publication bias. JAMA. 1990;263(10):1392-1395. Publication bias can be
considered to have three stages: (1) Prepublication bias occurs in the
performance of research, caused by ignorance, sloth, greed, or the double
standard applied to clinical trials but not to clinical practice. (2)
Publication bias refers to basing acceptance or rejection of a manuscript on
whether it supports the treatment tested. Potentially biased reviewers are of
equal concern. (3) Postpublication bias occurs in publishing interpretations,
reviews, and meta-analyses of published clinical trials. Bias can be minimized
by (1) insisting on high-quality research and thorough literature reviews, (2)
eliminating the double standard concerning peer review and informed consent
applied to clinical research and practice, (3) publishing legitimate trials
regardless of their results, (4) requiring peer reviewers to acknowledge
conflicts of interest, (5) replacing ordinary review articles with
meta-analyses, and (6) requiring the authors of reviews to acknowledge
possible conflicts of interest. [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/2406473.
- J L Tang, J L Liu. Misleading funnel plot for detection of bias in
meta-analysis. J Clin Epidemiol. 2000;53(5):477-484. Abstract:
"Publication and other forms of selection biases pose a threat to the validity
of meta-analysis. Funnel plots are usually used to detect such biases;
asymmetrical plots are interpreted to suggest that biases are present. Using
198 published meta-analyses, we demonstrate that the shape of a funnel plot is
largely determined by the arbitrary choice of the method to construct the
plot. When a different definition of precision and/or effect measure were
used, the conclusion about the shape of the plot was altered in 37 (86%) of
the 43 meta-analyses with an asymmetrical plot suggesting selection bias. In
the absence of a consensus on how the plot should be constructed, asymmetrical
funnel plots should be interpreted cautiously. These findings also suggest
that the discrepancies between large trials and corresponding meta-analyses
and heterogeneity in meta-analyses may also be determined by how they are
evaluated." [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/10812319.
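Tang and Liu's point is that "precision" on a funnel plot's vertical axis has no single agreed definition. A sketch, with hypothetical studies, showing that ranking by inverse standard error versus ranking by sample size can order the same studies differently, which changes where each point sits vertically and hence how symmetric the plot looks:

```python
# Each study: (effect, standard error, sample size). Illustrative values only:
# the second study is large-n but has an unusually small SE for its size.
studies = [(0.50, 0.30, 50), (0.20, 0.10, 400), (0.35, 0.25, 500)]

by_inverse_se = sorted(studies, key=lambda s: 1.0 / s[1])  # precision = 1/SE
by_sample_size = sorted(studies, key=lambda s: s[2])       # precision = n

# Effects listed from least to most "precise" under each definition.
print([s[0] for s in by_inverse_se])
print([s[0] for s in by_sample_size])
```

The two orderings disagree, so the same meta-analysis can yield an apparently symmetric plot under one construction and an asymmetric one under another, which is the paper's argument for interpreting asymmetry cautiously.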
- Erik von Elm, Michael Costanza, Bernhard Walder, Martin Tramer. More
insight into the fate of biomedical meeting abstracts: a systematic review.
BMC Medical Research Methodology. 2003;3(1):12. Abstract: "BACKGROUND: It
has been estimated that about 45% of abstracts that are accepted for
presentation at biomedical meetings will subsequently be published in full.
The acceptance of abstracts at meetings and their fate after initial rejection
are less well understood. We set out to estimate the proportion of abstracts
submitted to meetings that are eventually published as full reports, and to
explore factors that are associated with meeting acceptance and successful
publication. METHODS: Studies analysing acceptance of abstracts at biomedical
meetings or their subsequent full publication were searched in MEDLINE,
OLDMEDLINE, EMBASE, Cochrane Library, CINAHL, BIOSIS, Science Citation Index
Expanded, and by hand searching of bibliographies and proceedings. We
estimated rates of abstract acceptance and of subsequent full publication, and
identified abstract and meeting characteristics associated with acceptance and
publication, using logistic regression analysis, survival-type analysis, and
meta-analysis. RESULTS: Analysed meetings were held between 1957 and 1999. Of
14945 abstracts that were submitted to 43 meetings, 46% were accepted. The
rate of full publication was studied with 19123 abstracts that were presented
at 234 meetings. Using survival-type analysis, we estimated that 27% were
published after two, 41% after four, and 44% after six years. Of 2412
abstracts that were rejected at 24 meetings, 27% were published despite
rejection. Factors associated with both abstract acceptance and subsequent
publication were basic science and positive study outcome. Large meetings and
those held outside the US were more likely to accept abstracts. Abstracts were
more likely to be published subsequently if presented either orally, at small
meetings, or at a US meeting. Abstract acceptance itself was strongly
associated with full publication. CONCLUSIONS: About one third of abstracts
submitted to biomedical meetings were published as full reports. Acceptance at
meetings and publication were associated with specific characteristics of
abstracts and meetings." [Accessed May 19, 2010]. Available at:
http://www.biomedcentral.com/1471-2288/3/12.
- P C Gøtzsche. Multiple publication of reports of drug trials. Eur.
J. Clin. Pharmacol. 1989;36(5):429-432. Abstract: "Forty-four multiple
publications of 31 comparative trials of nonsteroidal anti-inflammatory drugs
in rheumatoid arthritis were examined for mutual agreement. Thirty-two of the
papers were published in the same language as the primary version. Important
discrepancies were seen in 14 trials, involving description of the study
design in two, exclusion of protocol violators in two, inconsistency in the
number of effect variables in five, in the number of side-effects in five, and
in the significance level in one. In three articles the conclusion became more
favourable for the new drug with time. Only half of the trials had the same
first author and number of authors. For six trials, multiple publication was
difficult to detect. Adherence to the manuscript guidelines published by the
International Committee of Medical Journal Editors should diminish the risk of
inflated meta-analyses, reference lists and curricula vitae, and inexplicable
discrepancies in articles based on the same data." [Accessed May 19,
2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/2666138.
- C B Begg, M Mazumdar. Operating characteristics of a rank correlation
test for publication bias. Biometrics. 1994;50(4):1088-1101. Abstract:
"An adjusted rank correlation test is proposed as a technique for identifying
publication bias in a meta-analysis, and its operating characteristics are
evaluated via simulations. The test statistic is a direct statistical analogue
of the popular "funnel-graph." The number of component studies in the
meta-analysis, the nature of the selection mechanism, the range of variances
of the effect size estimates, and the true underlying effect size are all
observed to be influential in determining the power of the test. The test is
fairly powerful for large meta-analyses with 75 component studies, but has
only moderate power for meta-analyses with 25 component studies. However, in
many of the configurations in which there is low power, there is also
relatively little bias in the summary effect size estimate. Nonetheless, the
test must be interpreted with caution in small meta-analyses. In particular,
bias cannot be ruled out if the test is not significant. The proposed
technique has potential utility as an exploratory tool for meta-analysts, as a
formal procedure to complement the funnel-graph." [Accessed May 19, 2010].
Available at:
http://www.ncbi.nlm.nih.gov/pubmed/7786990.
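The Begg-Mazumdar statistic is, in essence, Kendall's rank correlation between the standardized effect sizes (each deviation from the pooled estimate, scaled by its adjusted variance) and the variances themselves. A minimal sketch under that description, with invented effects and variances rather than data from the paper:

```python
def kendall_tau(x, y):
    """Plain Kendall rank correlation (no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Illustrative effects and variances: small (high-variance) studies show
# the largest effects, the classic funnel-plot asymmetry.
effects = [0.8, 0.5, 0.4, 0.2, 0.1]
variances = [0.20, 0.10, 0.08, 0.04, 0.02]

# Pooled inverse-variance estimate, then standardized deviations from it.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
vstar = [v - 1.0 / sum(weights) for v in variances]  # var of (effect - pooled)
t = [(e - pooled) / s ** 0.5 for e, s in zip(effects, vstar)]

print(round(kendall_tau(t, variances), 2))  # 1.0: perfect rank correlation
```

A strong positive correlation between standardized effect and variance, as in this deliberately extreme example, is the pattern the test flags as possible publication bias; in practice a p-value for tau would be computed as well.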
- Daniel K. Benjamin, Philip Brian Smith, M. Dianne Murphy, et al.
Peer-Reviewed Publication of Clinical Trials Completed for Pediatric
Exclusivity. JAMA. 2006;296(10):1266-1273. Abstract: "Context: Much of
pediatric drug use is off-label because appropriate pediatric studies have not
been conducted and the drugs have not been labeled by the US Food and Drug
Administration (FDA) for use in children. In 1997, Congress authorized the FDA
to grant extensions of marketing rights known as "pediatric exclusivity" if
FDA-requested pediatric trials were conducted. As a result, there have been
over 100 product labeling changes. The publication status of studies completed
for pediatric exclusivity has not been evaluated. Objective: To quantify the
dissemination of results of studies conducted for pediatric exclusivity into
the peer-review literature. Design: Cohort study of all trials conducted for
pediatric exclusivity between 1998 and 2004 as determined by MEDLINE and
EMBASE searches through 2005, the subsequent labeling changes, and the
publication of those studies in peer-reviewed journals. We categorized any
labeling changes resulting from the studies as positive or negative for the
drug under study. We then evaluated aspects of the studies and product label
changes that were associated with subsequent publication in peer-reviewed
medical journals. Main Outcome Measures: Publication of the trial data in
peer-reviewed journals. Results: Between 1998 and 2004, 253 studies were
submitted to the FDA for pediatric exclusivity: 125 (50%) evaluated efficacy,
51 (20%) were multi-dose pharmacokinetic, 34 (13%) were single-dose
pharmacokinetic, and 43 (17%) were safety studies. Labeling changes were
positive for 127/253 (50%) of studies; only 113/253 (45%) were published.
Efficacy studies and those with a positive labeling change were more likely to
be published. Conclusions: The pediatric exclusivity program has been
successful in encouraging drug studies in children. However, the dissemination
of these results in the peer-reviewed literature is limited. Mechanisms to
more widely disperse this information through publication warrant further
evaluation." [Accessed May 19, 2010]. Available at:
http://jama.ama-assn.org/cgi/content/abstract/296/10/1266.
- G Taubes. A plan to register unpublished studies. Science.
1997;277(5333):1754. [Accessed May 19, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/9324758.
- Michael L. Callaham, Robert L. Wears, Ellen J. Weber, Christopher Barton,
Gary Young. Positive-Outcome Bias and Other Limitations in the Outcome of
Research Abstracts Submitted to a Scientific Meeting. JAMA.
1998;280(3):254-257. Abstract: "Context: Studies with positive results are
more likely to be published in biomedical journals than are studies with
negative results. However, many studies submitted for consideration at
scientific meetings are never published in full; bias in this setting is
poorly studied. Objective: To identify features associated with the fate of
research abstracts submitted to a scientific meeting. Design and Setting:
Prospective observational cohort, with 5-year follow-up of all research
submitted for consideration to the major annual 1991 US research meeting in
the specialty of emergency medicine. Participants: All research abstracts
submitted for consideration at the meeting for possible presentation. Main
Outcome Measures: Characteristics associated with acceptance for presentation
at the meeting and subsequent publication as a full manuscript. Results: A
total of 492 research abstracts were submitted from programs in emergency
medicine and other specialties affiliated with 103 US medical schools. A total
of 179 (36%) were accepted for presentation and 214 (43%) were published in 44
journals. Of the 179 abstracts accepted for presentation, 111 studies were
published. Scientific quality of abstracts or prestige of the journal in which
the study was eventually published did not predict either of these outcomes.
The best predictors (by logistic regression) of meeting acceptance were a
subjective "originality" factor (odds ratio [OR], 2.07; 95% confidence
interval [CI], 1.13-3.89) and positive results (OR, 1.99; 95% CI, 1.07-3.84),
and, for publication, meeting acceptance (OR, 2.49; 95% CI, 1.49-4.35) and
large sample size (OR, 2.26; 95% CI, 1.23-4.31). Forty-nine percent (241) of
abstracts did not report on blinding, and 24% (118) did not report on
randomization. Acceptance and publication were both more likely for positive
outcomes (P=.03). Funnel plots showed the classic distribution of
positive-outcome ("publication") bias at each of the submission, acceptance,
and publication phases. Meeting acceptance predicted publication with a
sensitivity of only 51%, specificity of 71%, positive predictive value of 57%,
and negative predictive value of 66%. Conclusions: Positive-outcome bias was
evident when studies were submitted for consideration and was amplified in the
selection of abstracts for both presentation and publication, neither of which
was strongly related to study design or quality." [Accessed May 19, 2010].
Available at:
http://jama.ama-assn.org/cgi/content/abstract/280/3/254.
- H K Johansen, P C Gotzsche. Problems in the design and reporting of
trials of antifungal agents encountered during meta-analysis. JAMA.
1999;282(18):1752-1759. Abstract: "Meta-analyses may become biased if the
reported data in the individual trials are biased and if overlap among trials
cannot be identified. We describe the unanticipated problems we encountered in
collecting data for a meta-analysis comparing a new antifungal agent,
fluconazole, with amphotericin B in patients with cancer complicated by
neutropenia. In 3 large trials that comprised 43% of the patients identified
for the meta-analysis, results for amphotericin B were combined with results
for nystatin in a "polyene" group. Because nystatin is recognized as an
ineffective drug in these circumstances, this approach creates a bias in favor
of fluconazole. Furthermore, 79% of the patients were randomized to receive
oral amphotericin B, which is poorly absorbed and not an established
treatment, in contrast to intravenous amphotericin B, which was administered
in 4 of 5 placebo-controlled trials, or 86% of patients. It was unclear
whether there was overlap among the "polyene" trials, and it is possible that
results from single-center trials were included in multicenter trial reports.
We were unable to obtain information to clarify these issues from the trial
authors or the manufacturer of fluconazole. Two of 11 responding authors
replied that the data were with the drug manufacturer and two indicated that
they did not have access to their data because of change of affiliation. In
the meta-analyses, fluconazole and amphotericin B (mostly given orally) had
similar effects (13 trials), whereas nystatin was no better than placebo (3
trials). Since individual trials are rarely conclusive, investigators,
institutions, and pharmaceutical companies should provide essential details
about their work to ensure that meta-analyses can accurately reflect the
studies conducted and that patients will realize maximum benefits from
treatments. We recommend that investigators keep copies of their trial data to
help facilitate accurate and unbiased meta-analyses." [Accessed May 19,
2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/10568648.
- J A Sterne, D Gavaghan, M Egger. Publication and related bias in
meta-analysis: power of statistical tests and prevalence in the literature.
J Clin Epidemiol. 2000;53(11):1119-1129. Abstract: "Publication and
selection biases in meta-analysis are more likely to affect small studies,
which also tend to be of lower methodological quality. This may lead to
"small-study effects," where the smaller studies in a meta-analysis show
larger treatment effects. Small-study effects may also arise because of
between-trial heterogeneity. Statistical tests for small-study effects have
been proposed, but their validity has been questioned. A set of typical
meta-analyses containing 5, 10, 20, and 30 trials was defined based on the
characteristics of 78 published meta-analyses identified in a hand search of
eight journals from 1993 to 1997. Simulations were performed to assess the
power of a weighted regression method and a rank correlation test in the
presence of no bias, moderate bias or severe bias. We based evidence of
small-study effects on P < 0.1. The power to detect bias increased with
increasing numbers of trials. The rank correlation test was less powerful than
the regression method. For example, assuming a control group event rate of 20%
and no treatment effect, moderate bias was detected with the regression test
in 13.7%, 23.5%, 40.1% and 51.6% of meta-analyses with 5, 10, 20 and 30
trials. The corresponding figures for the correlation test were 8.5%, 14.7%,
20.4% and 26.0%, respectively. Severe bias was detected with the regression
method in 23.5%, 56.1%, 88.3% and 95.9% of meta-analyses with 5, 10, 20 and 30
trials, as compared to 11.9%, 31.1%, 45.3% and 65.4% with the correlation
test. Similar results were obtained in simulations incorporating moderate
treatment effects. However the regression method gave false-positive rates
which were too high in some situations (large treatment effects, or few events
per trial, or all trials of similar sizes). Using the regression method,
evidence of small-study effects was present in 21 (26.9%) of the 78 published
meta-analyses. Tests for small-study effects should routinely be performed in
meta-analysis. Their power is however limited, particularly for moderate
amounts of bias or meta-analyses based on a small number of small studies.
When evidence of small-study effects is found, careful consideration should be
given to possible explanations for these in the reporting of the
meta-analysis." [Accessed May 20, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/11106885.
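The "weighted regression method" evaluated by Sterne and colleagues is Egger's test: regress each study's standardized effect (effect divided by its standard error) on its precision (1/SE); an intercept well away from zero signals small-study effects. A sketch with fabricated data contrasting a symmetric set of studies with an asymmetric one:

```python
def egger_intercept(effects, ses):
    """Ordinary least-squares intercept of (effect/SE) on (1/SE).
    An intercept far from zero suggests small-study effects."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar

ses = [0.4, 0.3, 0.2, 0.1]
unbiased = [0.2, 0.2, 0.2, 0.2]   # same effect regardless of study size
biased = [0.8, 0.5, 0.35, 0.2]    # smaller (high-SE) studies show more effect

print(round(egger_intercept(unbiased, ses), 2))  # near 0
print(round(egger_intercept(biased, ses), 2))    # clearly positive
```

In a full application the intercept's standard error gives a t-test, and, as the abstract warns, the test's false-positive rate can be inflated with large effects, few events, or similarly sized trials.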
- K Dickersin, S Chan, T C Chalmers, H S Sacks, H Smith. Publication bias
and clinical trials. Control Clin Trials. 1987;8(4):343-353. Abstract:
"A study was performed to evaluate the extent to which the medical literature
may be misleading as a result of selective publication of randomized clinical
trials (RCTs) with results showing a statistically significant treatment
effect. Three hundred eighteen authors of published trials were asked whether
they had participated in any unpublished RCTs. The 156 respondents reported
271 unpublished and 1041 published trials. Of the 178 completed unpublished
RCTs with a trend specified, 26 (14%) favored the new therapy compared to 423
of 767 (55%) published reports (p less than 0.001). For trials that were
completed but not published, the major reasons for nonpublication were
"negative" results and lack of interest. From the data provided, it appears
that nonpublication was primarily a result of failure to write up and submit
the trial results rather than rejection of submitted manuscripts. The results
of this study imply the existence of a publication bias of importance both to
meta-analysis and the interpretation of statistically significant positive
trials." [Accessed May 20, 2010]. Available at:
http://www.ncbi.nlm.nih.gov/pubmed/3442991.
- Anastasia L. Misakian, Lisa A. Bero. Publication Bias and Research on
Passive Smoking: Comparison of Published and Unpublished Studies. JAMA.
1998;280(3):250-253. Abstract: "Context: The results of reviews may be
biased by delays in publication and failure to publish nonsignificant results.
Objective: To determine the extent of unpublished results on the health
effects of passive smoking and whether passive smoking studies with
statistically nonsignificant results would have longer time to publication
than those with statistically significant results. Design: Semistructured
telephone interviews of principal investigators of published or unpublished
studies funded between 1981 and 1995, identified by information obtained from
76 (85%) of 89 organizations contacted that potentially funded research on
passive smoking. Participants: Seventy-eight investigators were eligible and
could be located; 65 (83%) responded. They had conducted 61 studies of the
health effects of passive smoke in humans or animals between 1981 and 1995
that met the criteria for the analysis of time to publication. Main Outcome
Measure: Time to publication for published studies and statistical
significance of results of published and unpublished studies. Results:
Fourteen of the 61 studies were unpublished. Median time to publication was 5
years (95% confidence interval [CI], 4-7 years) for statistically
nonsignificant studies and 3 years (95% CI, 3-5 years) for statistically
significant studies (P=.004). Statistically significant results (P=.004),
experimental study design (P=.01), study size less than or equal to 500
(P=.01), and animals as subjects (P=.03) were predictive of time to
publication. When the studies with human participants were analyzed
separately, only statistically significant data were predictive of publication
(P=.007). Multivariate analysis of all studies indicated that statistical
significance (P=.001) and study design (P=.01) were the only independent
predictors of time to publication, while for the human studies only
statistical significance was predictive of publication (P=.007). Conclusion:
There is a publication delay for passive smoking studies with nonsignificant
results compared with those with significant results." [Accessed May 20,
2010]. Available at:
http://jama.ama-assn.org/cgi/content/abstract/280/3/250.
- Jerome M Stern, R John Simes. Publication bias: evidence of delayed
publication in a cohort study of clinical research projects. BMJ.
1997;315(7109):640-645. Abstract: "Objectives: To determine the extent to
which publication is influenced by study outcome. Design: A cohort of studies
submitted to a hospital ethics committee over 10 years were examined
retrospectively by reviewing the protocols and by questionnaire. The primary
method of analysis was Cox's proportional hazards model. Setting: University
hospital, Sydney, Australia. Studies: 748 eligible studies submitted to Royal
Prince Alfred Hospital Ethics Committee between 1979 and 1988. Main outcome
measures: Time to publication. Results: Response to the questionnaire was
received for 520 (70%) of the eligible studies. Of the 218 studies analysed
with tests of significance, those with positive results (P<0.05) were much
more likely to be published than those with negative results (P≥0.10)
(hazard ratio 2.32 (95% confidence interval 1.47 to 3.66), P=0.0003), with a
significantly shorter time to publication (median 4.8 v 8.0 years). This
finding was even stronger for the group of 130 clinical trials (hazard ratio
3.13 (1.76 to 5.58), P=0.0001), with median times to publication of 4.7 and
8.0 years respectively. These results were not materially changed after
adjusting for other significant predictors of publication. Studies with
indefinite conclusions (0.05≤P<0.10) tended to have an even lower
publication rate and longer time to publication than studies with negative
results (hazard ratio 0.39 (0.13 to 1.12), P=0.08). For the 103 studies in
which outcome was rated qualitatively, there was no clear cut evidence of
publication bias, although the number of studies in this group was not large.
Conclusions: This study confirms the evidence of publication bias found in
other studies and identifies delay in publication as an additional important
factor. The study results support the need for prospective registration of
trials to avoid publication bias and also support restricting the selection of
trials to those started before a common date in undertaking systematic
reviews. Key messages This retrospective cohort study of clinical research
projects confirms the findings of publication bias found in previous studies
Delay in the publication of studies with negative results has been identified
as an additional important factor in publication bias With the recognised
importance of evidence based medicine, these results have important
implications for the selection of studies included in systematic reviews
Prospective registration of clinical research projects will avoid many of the
problems associated with publication bias However, it is also important to
restrict inclusion in systematic reviews to studies started before a certain
date to allow for the delay in completing studies with negative results."
[Accessed May 20, 2010]. Available at:
http://www.bmj.com/cgi/content/abstract/315/7109/640.
- P J Easterbrook, J A Berlin, R Gopalan, D R Matthews. Publication bias
in clinical research. Lancet. 1991;337(8746):867-872. Abstract: "In a
retrospective survey, 487 research projects approved by the Central Oxford
Research Ethics Committee between 1984 and 1987, were studied for evidence of
publication bias. As of May, 1990, 285 of the studies had been analysed by the
investigators, and 52% of these had been published. Studies with statistically
significant results were more likely to be published than those finding no
difference between the study groups (adjusted odds ratio [OR] 2.32; 95%
confidence interval [CI] 1.25-4.28). Studies with significant results were
also more likely to lead to a greater number of publications and presentations
and to be published in journals with a high citation impact factor. An
increased likelihood of publication was also associated with a high rating by
the investigator of the importance of the study results, and with increasing
sample size. The tendency towards publication bias was greater with
observational and laboratory-based experimental studies (OR = 3.79; 95% CI =
1.47-9.76) than with randomised clinical trials (OR = 0.84; 95% CI =
0.34-2.09). We have confirmed the presence of publication bias in a cohort of
clinical research studies. These findings suggest that conclusions based only
on a review of published data should be interpreted cautiously, especially for
observational studies. Improved strategies are needed to identify the results
of unpublished as well as published studies." [Accessed May 20, 2010].
Available at:
http://www.ncbi.nlm.nih.gov/pubmed/1672966.
- Antje Timmer, Robert J. Hilsden, John Cole, David Hailey, Lloyd R. Sutherland. Publication bias in gastroenterological research: a retrospective cohort study based on abstracts submitted to a scientific meeting. BMC Medical Research Methodology. 2002;2(1):7. The aim of this study was to examine the determinants of publication and whether publication bias occurred in gastroenterological research. [Accessed on January 28, 2013]. http://www.biomedcentral.com/1471-2288/2/7
- Mildred K. Cho, Lisa A. Bero. The Quality of Drug Studies Published in Symposium Proceedings. Ann Intern Med. 1996;124(5):485-489. Objective: To compare the quality, relevance, and structure of drug studies published in symposium proceedings that are sponsored by drug companies with 1) articles from symposia with other sponsors and 2) articles in the peer-reviewed parent journals of symposium proceedings; and to study the relation between drug company sponsorship and study outcome. Design: Cross-sectional studies of clinical drug studies published in symposium proceedings or their parent medical journals. Measurements: The proportion of articles with no methods sections (which are necessary to assess quality); methodologic quality and clinical relevance scores; and the proportion of articles with outcomes favoring the drug of interest. Results: Symposia sponsored by single drug companies had more articles without methods sections (10%; 108 of 1064) than did symposia that had other sponsors (3%; 58 of 2314) or symposia that had no mentioned sponsor (2%; 29 of 1663) (P < 0.001). The mean methodologic quality and relevance scores of articles were similar both by type of sponsorship and between articles published in symposia sponsored by single drug companies and articles from the parent journals. Significantly more articles with drug company support (98%; 39 of 40) than without drug company support (79%; 89 of 112) had outcomes favoring the drug of interest (P = 0.01). Conclusions: Articles in symposia sponsored by single drug companies were similar in quality and clinical relevance to articles with other sponsors and to articles published in the parent journals. Articles with drug company support are more likely than articles without drug company support to have outcomes favoring the drug of interest. [Accessed on January 28, 2013]. http://www.ncbi.nlm.nih.gov/pubmed/8602706
- A. Richard Palmer. Quasi-replication and the contract of error: lessons from sex ratios, heritabilities and fluctuating asymmetry. Annual Review of Ecology and Systematics. 2000;31(1):441-480. Selective reporting (e.g., the preferential publication of results that are statistically significant, or consistent with theory or expectation) presents a challenge to meta-analysis and seriously undermines the quest for generalizations. Funnel graphs (scatterplots of effect size vs. sample size) help reveal the extent of selective reporting. They also allow the strength of biological effects to be judged easily, and they reaffirm the value of graphical presentations of data over statistical summaries. Funnel graphs of published results, including (a) sex-ratio variation in birds, (b) field estimates of heritabilities, and (c) relations between fluctuating asymmetry and individual attractiveness or fitness, suggest selective reporting is widespread and raise doubts about the true magnitude of these phenomena. Quasi-replication, the "replication" of previous studies using different species or systems, has almost completely supplanted replicative research in ecology and evolution. Without incentives for formal replicative studies, which could come from changes to editorial policies, graduate training programs, and research funding priorities, the contract of error will continue to thwart attempts at robust generalizations. "For as knowledges are now delivered, there is a kind of contract of error between the deliverer and the receiver: for he that delivereth knowledge desireth to deliver it in such a form as may be best believed, and not as may be best examined; and he that receiveth knowledge desireth rather present satisfaction than expectant inquiry; and so rather not to doubt than not to err: glory making the author not to lay open his weakness, and sloth making the disciple not to know his strength."
The Advancement of Learning, Francis Bacon, 1605 (8:170-171) [Accessed on January 28, 2013]. http://www.annualreviews.org/doi/abs/10.1146/annurev.ecolsys.31.1.441
- Laurence Hirsch. Randomized clinical trials: What gets published, and when? CMAJ. 2004;170(4):481-483. [Accessed on January 28, 2013].
http://www.cmaj.ca/content/170/4/481
- Kassirer JP, Angell M. Redundant Publication: A Reminder. New England Journal of Medicine. 1995;333(7):449-450.
- Dickersin K, Rennie D. Registering Clinical Trials. JAMA. 2003;290(4):516-523.
- Hetherington J, Dickersin K, Chalmers I, Meinert CL. Retrospective and prospective identification of unpublished controlled trials: lessons from a survey of obstetricians and pediatricians. Pediatrics. 1989;84(2):374-380.
- S. Garattini. The risk of bias from omitted research. BMJ. 2000;321(7265):845-846. [Accessed on January 28, 2013]. http://www.bmj.com/content/321/7265/845.full
- Pich J, Carné X, Arnaiz J-A, Gómez B, Trilla A, Rodés J. Role of a research ethics committee in follow-up and publication of results. The Lancet. 2003;361(9362).
- Deborah E. Barnes, Lisa A. Bero. Scientific quality of original research articles on environmental tobacco smoke. Tob Control. 1997;6(1):19-26. Objective: To evaluate the scientific quality of original research articles on the health effects of environmental tobacco smoke; to determine whether poor article quality is associated with publication in non-peer-reviewed symposium proceedings or with other article characteristics.
Design: Cross sectional study of original research articles on the health effects of environmental tobacco smoke published in peer reviewed journals and non-peer-reviewed symposium proceedings from 1980 to 1994. Article quality was assessed by two independent reviewers who used a valid and reliable instrument, were unaware of study hypotheses, were blinded to identifying characteristics of articles, and had no disclosed conflicts of interest.
Participants: All symposium articles (n = 68) and a random sample of peer reviewed journal articles (n = 68) that satisfied inclusion/exclusion criteria.
Main Outcome Measure: Mean quality scores, which could range from 0 (lowest quality) to 1 (highest quality).
Results: Using multivariate regression analysis, symposium articles were of poorer scientific quality than peer reviewed journal articles when controlling simultaneously for the effects of study design, article conclusion, article topic, and source of funding acknowledged (P = 0.027). Article quality was not associated with either source of funding acknowledged or article conclusion in multivariate analyses.
Conclusions: In published reports on environmental tobacco smoke, non-peer-reviewed symposium articles tend to be of poor quality. These articles should not be used in scientific, legal, or policy settings unless their quality has been independently assessed. [Accessed on January 28, 2013]. http://tobaccocontrol.bmj.com/content/6/1/19.abstract
- Gregoire G, Derderian F, Le Lorier J. Selecting the language of the publications included in a meta-analysis: is there a Tower of Babel bias? J Clin Epidemiol. 1995;48(1):159-163.
- Lorna Gibson, Miriam Brazzelli, Brenda Thomas, Peter Sandercock. A
systematic review of clinical trials of pharmacological interventions for
acute ischaemic stroke (1955-2008) that were completed, but not published in
full. Trials. 2010;11(1):43. Abstract: "BACKGROUND: We assessed the
prevalence, and potential impact of, trials of pharmacological agents for
acute stroke that were completed but not published in full. Failure to publish
trial data is to be deprecated as it sets aside the altruism of participants'
consent to be exposed to the risks of experimental interventions, potentially
biases the assessment of the effects of therapies, and may lead to premature
discontinuation of research into promising treatments. METHODS: We searched
the Cochrane Stroke Group's Specialised Register of Trials in June 2008 for
completed trials of pharmacological interventions for acute ischaemic stroke,
and searched MEDLINE and EMBASE (January 2007 - March 2009) for references to
recent full publications. We assessed trial completion status from trial
reports, online trials registers and correspondence with experts. RESULTS: We
identified 940 trials. Of these, 125 (19.6%, 95% confidence interval
16.5-22.6) were completed but not published in full by the point prevalence
date. They included 16,058 participants (16 trials had over 300 participants
each) and tested 89 different interventions. Twenty-two trials with a total of
4,251 participants reported the number of deaths. In these trials, 636/4251
(15.0%) died. CONCLUSIONS: Our data suggest that, at the point prevalence
date, a substantial body of evidence that was of relevance both to clinical
practice in acute stroke and future research in the field was not published in
full. Over 16,000 patients had given informed consent and were exposed to the
risks of therapy. Responsibility for non-publication lies with investigators,
but pharmaceutical companies, research ethics committees, journals and
governments can all encourage the timely publication of trial data."
[Accessed May 19, 2010]. Available at:
http://www.trialsjournal.com/content/11/1/43.
- Sterne JA, Egger M, Smith GD. Systematic reviews in health care: investigating and dealing with publication and other biases in meta-analysis. BMJ. 2001;323(7304):101-105.
- M. L. Callaham, E. Weber, G. Young, R. Wears, C. Barton. Time to publication of studies was not affected by whether results were positive. BMJ. 1998;316(7143):1536. [Accessed on January 28, 2013].
- Deborah A. Zarin, Tony Tse, Nicholas C. Ide. Trial Registration at ClinicalTrials.gov between May and October 2005. New England Journal of Medicine. 2005;353(26):2779-2787. Concern about previously undisclosed safety problems with drugs such as paroxetine (Paxil, GlaxoSmithKline) and rofecoxib (Vioxx, Merck) has increased the public's desire for more complete information about clinical research studies.1,2 The provision of basic information about clinical trial protocols in a publicly accessible registry and the public identification of all trials, whether or not their results are subsequently published, have been advocated as ways to address this issue.3-6 Numerous groups have called for comprehensive registration by issuing statements or convening meetings to discuss policy and implementation details.7-15 In the United States, the Food and Drug Administration (FDA) . . . [Accessed on January 28, 2013]. http://www.nejm.org/doi/full/10.1056/NEJMsa053234
- I Chalmers. Underreporting research is scientific misconduct. JAMA. 1990;263(10):1405-1408. Substantial numbers of clinical trials are never reported in print, and among those that are, many are not reported in sufficient detail to enable judgments to be made about the validity of their results. Failure to publish an adequate account of a well-designed clinical trial is a form of scientific misconduct that can lead those caring for patients to make inappropriate treatment decisions. Investigators, research ethics committees, funding bodies, and scientific editors all have responsibilities to reduce underreporting of clinical trials. An extended use of prospective registration of trials at inception, as well as benefiting clinical research in other ways, could help people to play their respective roles in reducing underreporting of clinical trials.
- Gerd Antes, Iain Chalmers. Under-reporting of clinical trials is unethical. Lancet. 2003;361(9362):978-979. http://www.ncbi.nlm.nih.gov/pubmed/12660049
- Weber EJ, Callaham ML, Wears RL, Barton C, Young G. Unpublished Research from a Medical Specialty Meeting: Why Investigators Fail to Publish. JAMA. 1998;280:257-259.
- Reinhard Wentz. Visibility of research: FUTON bias. Lancet. 2002;360(9341):1256. http://www.ncbi.nlm.nih.gov/pubmed/12401287
- Moher D, Pham, Klassen T, Schulz K, Berlin J, Jadad A, Liberati A. What contributions do languages other than English make on the results of meta-analysis? Journal of Clinical Epidemiology. 2000;53(9):964-972.
- Smith M. Publication bias and meta-analysis. Evaluation in Education: An International Review Series. 1980;4:22-24.
All of the material above this paragraph is licensed under a
Creative Commons Attribution 3.0 United States License. This page was written by
Steve Simon and was last modified on
2017-06-15. The material
below this paragraph links to my
old website, StATS. Although I wrote all of the material
listed below, my ex-employer, Children's Mercy Hospital, has claimed copyright
ownership of this material. The brief excerpts shown here are included under
the fair use provisions of U.S. Copyright laws.
2005
5. Stats: Registration of
clinical trials (July 22, 2005). The International Committee of Medical
Journal Editors (ICMJE) has called for mandatory registration of
clinical trials. All clinical trials that start recruiting on or after July
1, 2005 must be placed in a public registry before enrollment of the first
patient. This includes "trials that test a clinical hypothesis about
health outcomes" but not phase 1 trials that only assess toxicity and/or
pharmacokinetics.
4. Stats: Selective reporting of research findings (March 14, 2005). I
have talked extensively about publication bias in my weblog and address this
issue in detail in my book on Statistical Evidence, which I hope to finish
sometime soon. A related problem is when researchers decide to report or not
report particular data analyses based on how impressive the results appear.
This is called publication bias in situ (PBIS) by Phillips: Phillips CV.
Publication bias in situ. BMC Med Res Methodol. 2004;4(1):20.
2004
3. Stats: Publication bias (June
25, 2004). Eliot Spitzer, New York's state attorney general, has filed a
lawsuit against GlaxoSmithKline for concealing negative trials of an
antidepressant that the company manufactures (Paxil).
2. Stats: Publication Bias
references (June 25, 2004). Publication bias is the tendency for negative
research to get published less frequently, less prominently, or more slowly,
and the tendency for positive research to get published more than once.
Here are some references that document the problems caused by publication
bias.
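The first of these tendencies is easy to see in a toy simulation. This is my own sketch with made-up numbers, not taken from any of the references above: when only trials that reach nominal significance are published, the average published effect overstates the true effect.

```python
import random
import statistics

# Toy model: many small trials of a treatment with a modest true benefit.
# Only trials reaching nominal significance (z > 1.96) are "published".
random.seed(1)

TRUE_EFFECT = 0.2    # hypothetical standardized mean difference
SE = 0.25            # hypothetical standard error of each small trial

all_trials, published = [], []
for _ in range(1000):
    estimate = random.gauss(TRUE_EFFECT, SE)
    all_trials.append(estimate)
    if estimate / SE > 1.96:                 # "statistically significant"
        published.append(estimate)

print(f"mean of all trials: {statistics.mean(all_trials):.2f}")  # near 0.2
print(f"mean of published:  {statistics.mean(published):.2f}")   # inflated
```

A reader who pools only the published trials in this toy setup would conclude the treatment is roughly three times as effective as it really is; this is the distortion that the references below document and that prospective trial registration is meant to expose.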
1. Stats: A serious problem of
publication bias (April 9, 2004). I was browsing a paper by Leonard
Leibovici about alternative medicine when a comment by Andrew Vickers
in the e-letters section caught my eye. He noted the serious problem of
publication bias in Chinese journals. Apparently, the Chinese journals almost
never publish a negative trial. He cites two sources: Vickers et al 1998 CCT
and Tang et al 1999 BMJ.
What now?
Browse other categories at this site
Browse through the most recent entries
Get help