Publication and related bias in meta-analysis: power of statistical tests and prevalence in the literature

J A Sterne et al. J Clin Epidemiol. 2000 Nov;53(11):1119-29. doi: 10.1016/s0895-4356(00)00242-0.

Abstract

Publication and selection biases in meta-analysis are more likely to affect small studies, which also tend to be of lower methodological quality. This may lead to "small-study effects," where the smaller studies in a meta-analysis show larger treatment effects. Small-study effects may also arise from between-trial heterogeneity. Statistical tests for small-study effects have been proposed, but their validity has been questioned. A set of typical meta-analyses containing 5, 10, 20, and 30 trials was defined based on the characteristics of 78 published meta-analyses identified in a hand search of eight journals from 1993 to 1997. Simulations were performed to assess the power of a weighted regression method and a rank correlation test in the presence of no bias, moderate bias, or severe bias. We based evidence of small-study effects on P < 0.1. The power to detect bias increased with the number of trials, and the rank correlation test was less powerful than the regression method. For example, assuming a control group event rate of 20% and no treatment effect, moderate bias was detected with the regression test in 13.7%, 23.5%, 40.1%, and 51.6% of meta-analyses with 5, 10, 20, and 30 trials; the corresponding figures for the correlation test were 8.5%, 14.7%, 20.4%, and 26.0%. Severe bias was detected with the regression method in 23.5%, 56.1%, 88.3%, and 95.9% of meta-analyses with 5, 10, 20, and 30 trials, compared with 11.9%, 31.1%, 45.3%, and 65.4% for the correlation test. Similar results were obtained in simulations incorporating moderate treatment effects. However, the regression method gave false-positive rates that were too high in some situations (large treatment effects, few events per trial, or all trials of similar sizes). Using the regression method, evidence of small-study effects was present in 21 (26.9%) of the 78 published meta-analyses. Tests for small-study effects should be performed routinely in meta-analysis. Their power is limited, however, particularly for moderate amounts of bias or for meta-analyses based on a small number of small studies. When evidence of small-study effects is found, possible explanations should be carefully considered in the reporting of the meta-analysis.
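The two tests compared in the abstract correspond to the standard funnel-plot asymmetry tests: a weighted regression of the standardized effect on precision (an Egger-type test, where a non-zero intercept indicates small-study effects) and a rank correlation between standardized effect estimates and their variances (a Begg-and-Mazumdar-type test). As a rough illustration of how such tests work, here is a minimal Python sketch assuming those standard formulations; the function names and toy data are illustrative, not taken from the paper, and the P < 0.1 threshold follows the abstract.

import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Weighted-regression (Egger-type) test for small-study effects.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    The slope estimates the underlying effect; the intercept captures
    funnel-plot asymmetry, i.e. smaller, less precise studies showing
    systematically different effects. Returns the intercept and a
    two-sided P value for intercept = 0.
    """
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    z = effects / ses              # standardized effect estimates
    precision = 1.0 / ses          # large studies have high precision
    res = stats.linregress(precision, z)
    df = len(effects) - 2
    t_stat = res.intercept / res.intercept_stderr  # needs scipy >= 1.6
    p = 2.0 * stats.t.sf(abs(t_stat), df)
    return res.intercept, p

def rank_correlation_test(effects, variances):
    """Rank-correlation (Begg-and-Mazumdar-type) test.

    Correlates standardized deviates from the fixed-effect pooled
    estimate with the study variances using Kendall's tau; an
    association suggests small-study effects.
    """
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances
    pooled = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled mean
    v_dev = variances - 1.0 / np.sum(w)        # variance of each deviation
    t = (effects - pooled) / np.sqrt(v_dev)    # standardized deviates
    tau, p = stats.kendalltau(t, variances)
    return tau, p

# Toy data: log odds ratios and standard errors for five trials, with
# the smallest (highest-SE) trials showing the largest effects.
log_or = np.array([-0.8, -0.6, -0.5, -0.2, -0.1])
se = np.array([0.45, 0.35, 0.30, 0.15, 0.10])
print(egger_test(log_or, se))                  # flag bias at P < 0.1
print(rank_correlation_test(log_or, se**2))

As the abstract's simulations show, with only five trials neither test has much power, so a non-significant result here would not rule out bias.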
