Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study
Critical Care volume 17, Article number: R2 (2013)
Small-study effects refer to the phenomenon that trials with small sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. The present study therefore aimed to examine the presence and extent of small-study effects in critical care medicine.
Critical care meta-analyses involving randomized controlled trials and reporting mortality as an outcome measure were considered eligible. Component trials were classified as large (≥100 patients per arm) or small (<100 patients per arm) according to their sample sizes. A ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach; ROR <1 indicated a larger beneficial effect in small trials. Small and large trials were also compared on methodological quality items, including sequence generation, blinding, allocation concealment, intention-to-treat analysis and sample size calculation.
A total of 27 critical care meta-analyses involving 317 trials were included. Of these, five meta-analyses showed statistically significant RORs <1; the others did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); heterogeneity was moderate, with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention-to-treat analysis, sample size calculation and incomplete follow-up data.
Small trials are more likely than large trials to report larger beneficial effects in critical care medicine, which could be partly explained by their lower methodological quality. Caution should be exercised when interpreting meta-analyses involving small trials.
Small-study effects refer to the pattern that small studies are more likely to report a beneficial effect in the intervention arm, as first described by Sterne et al. This effect can be explained, at least partly, by a combination of the lower methodological quality of small studies and publication bias [2, 3]. Typically, small-study effects can be evaluated with a funnel plot, which depicts the effect size against its precision. Small studies, whose effect sizes have wider standard errors, should scatter widely and symmetrically at the bottom of the plot, while large studies should cluster at the top, giving the plot the shape of an inverted funnel. If a funnel plot appears asymmetrical, publication bias is assumed to be present.
In critical care medicine, studies are conducted in intensive care units (ICU) where the number of beds is limited. Due to the nature of the population and the care setting, studies in critical care frequently have small sample sizes. Meta-analysis is considered an important tool for combining the effect sizes of small trials, providing more statistical power to detect the beneficial effects of a new intervention. However, according to meta-epidemiological studies conducted in other biomedical fields, meta-analyses of small trials should be interpreted cautiously, as they may overestimate the true effect of an intervention [3, 4]. Small-study effects have been observed in meta-analyses with both binary and continuous outcomes. In critical care medicine, however, small-study effects have never been quantitatively assessed. We therefore conducted this systematic review of critical care meta-analyses to examine the presence and extent of small-study effects in the field.
Materials and methods
Search strategy and study selection
Medline and Embase databases were searched from inception to August 2012, with no language restriction. The core search terms were critical care, mortality and meta-analysis (the detailed search strategy is shown in Additional file 1). Inclusion criteria were: critical care meta-analyses involving randomized controlled trials; mortality among the end points; and at least one component trial with more than 100 subjects per arm on average. Exclusion criteria were: systematic reviews without meta-analysis; meta-analyses whose component trials were exclusively large (sample sizes ≥100 per arm) or exclusively small (sample sizes <100 per arm); and meta-analyses including duplicated component trials. If several meta-analyses addressed the same clinical issue, we included the most recently updated one. Two reviewers (XX and ZZ) independently assessed the literature, and disagreement was settled by a third opinion (HN).
The following data were extracted from eligible meta-analyses: the lead author, year of publication, number of trials, treatment strategy in the experimental arm, proportion of large trials, effect size with its 95% confidence interval (CI), and heterogeneity as represented by I2. For each component trial, we extracted sequence generation, allocation concealment, blinding, incomplete follow-up data, intention-to-treat analysis, sample size calculation, and year of publication. Sequence generation was considered adequate when the trial reported the method used to generate the randomization sequence (for example, a computer or a randomization table). Allocation concealment was considered adequate when the investigator responsible for patient selection was unable to predict the allocation of the next patient; commonly used techniques included central randomization and sequentially numbered, opaque, sealed envelopes. Blinding was considered adequate if the experimental and control interventions were described as indistinguishable by patients or investigators.
Small and large trials were distinguished by a cutoff of an average of 100 subjects per arm. For example, a two-arm trial with 113 patients in one arm and 87 in the other was considered a large trial. This definition was somewhat arbitrary; however, a sample size of 200 patients provides 80% statistical power to detect an absolute difference of 20% in a binary outcome (assuming that the proportion in the control group was 50%) at two-sided α = 0.05. Another reason for this cutoff was that critical care trials are usually small, and a higher cutoff would have substantially reduced the number of meta-analyses eligible for the analysis.
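The power figure above can be checked with a short calculation. The following sketch uses the standard normal approximation for a two-sided two-proportion test; the function name and implementation are illustrative, not taken from the study:

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation, equal arm sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    # standard error under the null (common proportion) and the alternative
    se0 = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5
    se1 = (p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm) ** 0.5
    return z.cdf((abs(p1 - p2) - z_alpha * se0) / se1)

# 100 patients per arm, 50% control-group mortality, 20% absolute difference
print(power_two_proportions(0.50, 0.30, 100))  # roughly 0.8, as stated above
```

With 100 patients per arm the approximation gives a power of just over 80%, consistent with the justification for the cutoff.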
Treatment effects were expressed as odds ratios (OR) for mortality. The number of events and the total number of patients in each arm were extracted for each component trial; an OR <1 indicated a beneficial effect in the experimental arm. A standard logistic regression model was used to examine whether estimated treatment effects differed between large and small trials [6, 7], and the ratio of odds ratios (ROR) was estimated from this model. ROR <1 indicates a larger effect size in small trials and ROR >1 a larger effect size in large trials. The ROR was calculated separately for each meta-analysis, and the RORs were then combined using a meta-analytic approach with inverse variance weighting under fixed effect or random effects models. Meta-analyses involving exclusively large or exclusively small trials did not contribute to the analysis. Heterogeneity between trials was estimated using I2, interpreted roughly as follows: 0 to 40%, unimportant; 30% to 60%, moderate; 50% to 90%, substantial; and 75% to 100%, considerable heterogeneity. To account for differences in estimated effects between large and small trials, reporting quality items, including sequence generation, blinding, allocation concealment, incomplete follow-up data, sample size calculation and intention-to-treat analysis, were compared between large and small trials. The proportions of large and small trials were also compared according to publication before or after 2002; this cutoff was arbitrary, but large multicenter trials have increased rapidly over the last 10 years. Finally, we analyzed the association between sample size and treatment effects, stratified according to the significance of the effect size and the heterogeneity within each meta-analysis.
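As a rough illustration of the pooling step, per-meta-analysis log-RORs can be combined with inverse-variance weights under a fixed effect model, with Cochran's Q and I2 quantifying heterogeneity. This is a minimal sketch with hypothetical inputs; the function name and the example values are ours, not the study's:

```python
import math

def pool_fixed_effect(log_rors, ses):
    """Inverse-variance fixed effect pooling of per-meta-analysis log-RORs,
    with Cochran's Q and the I-squared heterogeneity statistic."""
    w = [1.0 / se ** 2 for se in ses]                      # inverse-variance weights
    pooled = sum(wi * x for wi, x in zip(w, log_rors)) / sum(w)
    se_pooled = (1.0 / sum(w)) ** 0.5
    q = sum(wi * (x - pooled) ** 2 for wi, x in zip(w, log_rors))
    df = len(log_rors) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I^2 in percent
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci, i2

# hypothetical log-RORs and standard errors for three meta-analyses
ror, ci, i2 = pool_fixed_effect([-0.5, -0.3, -0.7], [0.2, 0.25, 0.3])
print(ror, ci, i2)
```

A pooled ROR below 1 with a confidence interval excluding 1 would indicate, as in the study's main result, that small trials report systematically larger beneficial effects.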
All statistical analyses were performed using Stata software version 11.0 (StataCorp LP, College Station, TX, USA). Statistical significance was considered at two-sided P <0.05.
Study selection and characteristics
Our initial search identified 371 citations. Of these, 329 were excluded on review of the title and abstract because they were duplicate meta-analyses, included non-randomized trials, did not report data on mortality, or were otherwise irrelevant. The full text of the remaining 42 citations was reviewed, and 15 were excluded: eight did not include a large trial, four did not report mortality as an end point, and three were duplicated meta-analyses. A total of 27 meta-analyses [9–35] involving randomized controlled trials were finally included in our analysis (Figure 1).
Characteristics of the included meta-analyses are shown in Table 1. All meta-analyses were published after 2005 and covered all subspecialties of critical care medicine, including mechanical ventilation, sedation, fluid resuscitation, prevention of nosocomial infection, nutrition and critical care nephrology. For the number of component trials in each meta-analysis, we counted only those reporting mortality as an end point. Of note, because one meta-analysis included seven trials by Joachim Boldt that had been retracted, Boldt's trials were excluded from the analysis. Eight meta-analyses [10, 13, 17, 18, 25, 28, 31, 34] included only one large trial, and the meta-analysis by Zhongheng included mostly large trials (83.3%). Most meta-analyses reported non-significant effect sizes; only six [19, 20, 22, 25, 30, 31] reported statistically significant effect sizes (for example, for mortality). Eight meta-analyses [12, 15, 16, 20, 21, 30, 32, 35] reported high heterogeneity, while the remaining 19 showed no significant heterogeneity among component trials. Seventeen meta-analyses [9–11, 14, 17, 18, 22–29, 31, 33, 34] reported an I2 of 0%, most of which were meta-analyses without significant effect sizes.
RORs were estimated with the logistic regression model separately for each meta-analysis (Figure 2); the RORs and their 95% CIs are shown in the left column of Figure 2. Five meta-analyses [12, 19–22] showed statistically significant RORs <1, indicating significantly larger beneficial effects in small studies; nine meta-analyses [10, 11, 13, 15, 16, 26, 27, 31, 33] showed RORs >1 without statistical significance; and the remaining 13 showed RORs <1, again without statistical significance. RORs were combined using a fixed effect model with inverse variance weighting. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); heterogeneity was moderate, with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Meta-regression was performed to test whether the observed RORs depended on the quality of the meta-analyses; covariates including allocation concealment (P = 0.88), blinding (P = 0.82), intention to treat (P = 0.72), sequence generation (P = 0.48) and sample size calculation (P = 0.57) could not explain the heterogeneity between meta-analyses.
To explore possible explanations for the difference in effect sizes between large and small trials, we compared their reporting qualities (Table 2). As expected, large trials showed significantly better reporting quality than small trials, with more large trials well conducted in terms of sequence generation, allocation concealment, blinding, intention-to-treat analysis, sample size calculation and incomplete follow-up data. For instance, 82% of large trials explicitly reported allocation concealment, versus only 51% of small trials (P <0.001). Intention-to-treat analysis was used in 83% of large trials versus 52% of small trials (P <0.001). Sample size calculation was performed a priori in 88% of large trials versus only 44% of small trials (P <0.001). Some 75% of large trials were published after 2002, compared with 52% of small trials (P <0.001).
We also estimated the ROR between large and small trials stratified according to the characteristics of the individual meta-analyses (Table 3). Eight meta-analyses [16, 17, 19, 20, 22, 25, 30, 31] reported significant effect sizes; their pooled ROR was 0.49 (95% CI: 0.38 to 0.60). The remaining 19 meta-analyses reported non-significant effect sizes, with a pooled ROR of 0.69 (95% CI: 0.59 to 0.79). Stratified by heterogeneity, the eight meta-analyses [12, 15, 16, 20, 21, 30, 32, 35] with high heterogeneity had a combined ROR of 0.46 (95% CI: 0.36 to 0.55), while the remaining 19 showed an ROR of 0.79 (95% CI: 0.68 to 0.90). These results indicate that small-study effects were more prominent in meta-analyses with high heterogeneity.
This is the first meta-epidemiological study in the field of critical care medicine to demonstrate that smaller trials may overestimate treatment effect sizes. We included 27 meta-analyses of 317 randomized controlled trials covering all subspecialties of critical care medicine. The results showed that small trials were more likely than large trials to report larger estimated treatment effects, and that this was more prominent in meta-analyses involving highly heterogeneous component trials. Furthermore, small trials were of lower methodological quality than large trials, which may partly account for the small-study effects.
In a meta-epidemiological study in osteoarthritis, the authors used the difference in effect sizes between large and small trials to explore small-study effects. In line with the present study, they concluded that small trials were more likely than large trials to report larger beneficial treatment effects. In that study, however, small trials did not differ significantly from large trials in terms of blinding or intention-to-treat analysis; thus, the small-study effects could not be fully explained by methodological quality. In osteoarthritis trials, blinding is probably much more easily achieved because drugs can be made indistinguishable in appearance, so small studies, usually with limited financial support, can still achieve good methodological quality. In contrast, blinding in critical care medicine is sometimes impossible or complex due to the nature of the intervention, for example pulmonary artery catheterization, the intensity of continuous renal replacement therapy, prone ventilation, and subglottic secretion drainage. In these situations, blinding may be difficult to achieve, or only large trials with more planning and methodological support can make it possible. Thus, small trials in critical care medicine are of limited design quality compared with large trials.
Possible explanations for small-study effects have been explored previously. Kjaergard and colleagues demonstrated that small studies of lower quality significantly exaggerated the intervention effect compared with large trials, whereas small trials with adequate sequence generation, allocation concealment and blinding did not differ from large trials. This is consistent with our finding that small studies had lower methodological quality than large trials. However, the impact of quality items such as allocation concealment and blinding varies with the outcome. A meta-epidemiological study of 146 meta-analyses demonstrated that in trials with subjective outcomes the estimated effect sizes were exaggerated when concealment was inadequate or blinding absent, whereas in trials with objective outcomes such as mortality there was little evidence that inadequate concealment or lack of blinding distorted the estimated effect sizes. This contrasts with our findings: although we used mortality as the end point, our results indicated that lack of blinding and inadequate allocation concealment might contribute to exaggerated effect sizes in small trials. This remains an unsettled question, however, and other studies support our finding [38, 39]. Most probably, individual quality measures such as blinding and allocation concealment are not consistently associated with effect sizes across study areas, and each medical area should be investigated specifically. Our analysis focused on critical care medicine and showed, for the first time, a possible association of methodological quality with effect sizes in this field.
There are several limitations to the present study. First, although it aimed to investigate small-study effects in critical care medicine, critical care is an extremely heterogeneous subspecialty involving a wide variety of diseases, and this heterogeneity may impair the quality of a meta-epidemiological study. Second, the heterogeneity could not be fully accounted for in the present analysis. We tried to explore its sources by incorporating factors related to the quality of study design in a meta-regression model, but failed to identify a significant covariate. We propose that the explanatory factors may be ones that are not readily accessible. Studies with negative results are less likely to be published than studies with positive results, particularly when the studies are small; if so, it is not surprising that small studies are more likely to report beneficial effects. However, this kind of publication bias could not be systematically investigated here. Another explanation for small-study effects may be that interventions are implemented more rigorously in smaller studies.
In conclusion, our study included 27 critical care meta-analyses covering all subspecialties of critical care medicine. The results showed that small studies tended to report larger beneficial effects than large trials. Meta-analyses of small trials should therefore be interpreted with caution, and sometimes definitive conclusions cannot be drawn until a large multicenter trial has been conducted.
Interpretation of critical care meta-analyses involving small studies should be cautious due to the small-study effects.
Small-study effects may be attributable to the poor methodological quality of the small studies.
ICU: intensive care unit
ROR: ratio of odds ratio.
Sterne JA, Egger M: Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol 2001, 54: 1046-1055. 10.1016/S0895-4356(01)00377-8
Chan AW, Altman DG: Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ 2005, 330: 753. 10.1136/bmj.38356.424606.8F
Kjaergard LL, Villumsen J, Gluud C: Reported methodologic quality and discrepancies between large and small randomized trials in meta-analyses. Ann Intern Med 2001, 135: 982-989.
Nüesch E, Trelle S, Reichenbach S, Rutjes AW, Tschannen B, Altman DG, Egger M, Jüni P: Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study. BMJ 2010, 341: c3515. 10.1136/bmj.c3515
Schulz KF, Altman DG, Moher D, CONSORT Group: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010, 340: c332. 10.1136/bmj.c332
Sterne JA, Jüni P, Schulz KF, Altman DG, Bartlett C, Egger M: Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med 2002, 21: 1513-1524. 10.1002/sim.1184
Siersma V, Als-Nielsen B, Chen W, Hilden J, Gluud LL, Gluud C: Multivariable modelling for meta-epidemiological assessment of the association between trial quality and treatment effects estimated in randomized clinical trials. Stat Med 2007, 26: 2745-2758. 10.1002/sim.2752
Higgins JPT, Green S, (editors): Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 (updated March 2011). The Cochrane Collaboration 2011. [http://www.cochrane-handbook.org]
Abroug F, Ouanes-Besbes L, Dachraoui F, Ouanes I, Brochard L: An updated study-level meta-analysis of randomised controlled trials on proning in ARDS and acute lung injury. Crit Care 2011, 15: R6. 10.1186/cc9403
Afshari A, Wetterslev J, Brok J, Møller A: Antithrombin III in critically ill patients: systematic review with meta-analysis and trial sequential analysis. BMJ 2007, 335: 1248-1251. 10.1136/bmj.39398.682500.25
Afshari A, Brok J, Møller AM, Wetterslev J: Inhaled nitric oxide for acute respiratory distress syndrome and acute lung injury in adults and children: a systematic review with meta-analysis and trial sequential analysis. Anesth Analg 2011, 112: 1411-1421. 10.1213/ANE.0b013e31820bd185
Annane D, Bellissant E, Bollaert PE, Briegel J, Confalonieri M, De Gaudio R, Keh D, Kupfer Y, Oppert M, Meduri GU: Corticosteroids in the treatment of severe sepsis and septic shock in adults: a systematic review. JAMA 2009, 301: 2362-2375. 10.1001/jama.2009.815
Augustes R, Ho KM: Meta-analysis of randomised controlled trials on daily sedation interruption for critically ill adult patients. Anaesth Intensive Care 2011, 39: 401-409.
Barkun AN, Bardou M, Pham CQ, Martel M: Proton pump inhibitors vs. histamine 2 receptor antagonists for stress-related mucosal bleeding prophylaxis in critically ill patients: a meta-analysis. Am J Gastroenterol 2012, 107: 507-520. 10.1038/ajg.2011.474
Blackwood B, Alderdice F, Burns K, Cardwell C, Lavery G, O'Halloran P: Use of weaning protocols for reducing duration of mechanical ventilation in critically ill adult patients: Cochrane systematic review and meta-analysis. BMJ 2011, 342: c7237. 10.1136/bmj.c7237
Burns KE, Adhikari NK, Slutsky AS, Guyatt GH, Villar J, Zhang H, Zhou Q, Cook DJ, Stewart TE, Meade MO: Pressure and volume limited ventilation for the ventilatory management of patients with acute lung injury: a systematic review and meta-analysis. PLoS One 2011, 6: e14623. 10.1371/journal.pone.0014623
Delaney AP, Dan A, McCaffrey J, Finfer S: The role of albumin as a resuscitation fluid for patients with sepsis: a systematic review and meta-analysis. Crit Care Med 2011, 39: 386-391. 10.1097/CCM.0b013e3181ffe217
Kopterides P, Siempos II, Tsangaris I, Tsantes A, Armaganidis A: Procalcitonin-guided algorithms of antibiotic therapy in the intensive care unit: a systematic review and meta-analysis of randomized controlled trials. Crit Care Med 2010, 38: 2229-2241. 10.1097/CCM.0b013e3181f17bf9
Landoni G, Mizzi A, Biondi-Zoccai G, Bignami E, Prati P, Ajello V, Marino G, Guarracino F, Zangrillo A: Levosimendan reduces mortality in critically ill patients. A meta-analysis of randomized controlled studies. Minerva Anestesiol 2010, 76: 276-286.
Laupland KB, Kirkpatrick AW, Delaney A: Polyclonal intravenous immunoglobulin for the treatment of severe sepsis and septic shock in critically ill adults: a systematic review and meta-analysis. Crit Care Med 2007, 35: 2686-2692. 10.1097/01.CCM.0000295312.13466.1C
Marik PE, Zaloga GP: Immunonutrition in critically ill patients: a systematic review and analysis of the literature. Intensive Care Med 2008, 34: 1980-1990. 10.1007/s00134-008-1213-6
Phoenix SI, Paravastu S, Columb M, Vincent JL, Nirmalan M: Does a higher positive end expiratory pressure decrease mortality in acute respiratory distress syndrome? A systematic review and meta-analysis. Anesthesiology 2009, 110: 1098-1105. 10.1097/ALN.0b013e31819fae06
Pileggi C, Bianco A, Flotta D, Nobile CG, Pavia M: Prevention of ventilator-associated pneumonia, mortality and all intensive care unit acquired infections by topically applied antimicrobial or antiseptic agents: a meta-analysis of randomized controlled trials in intensive care units. Crit Care 2011, 15: R155. 10.1186/cc10285
Puskarich MA, Runyon MS, Trzeciak S, Kline JA, Jones AE: Effect of glucose-insulin-potassium infusion on mortality in critical care settings: a systematic review and meta-analysis. J Clin Pharmacol 2009, 49: 758-767. 10.1177/0091270009334375
Serpa Neto A, Nassar Junior AP, Cardoso SO, Manettta JA, Pereira VG, Esposito DC, Damasceno MC, Russell JA: Vasopressin and terlipressin in adult vasodilatory shock: a systematic review and meta-analysis of nine randomized controlled trials. Crit Care 2012, 16: R154. 10.1186/cc11469
Shah MR, Hasselblad V, Stevenson LW, Binanay C, O'Connor CM, Sopko G, Califf RM: Impact of the pulmonary artery catheter in critically ill patients: meta-analysis of randomized clinical trials. JAMA 2005, 294: 1664-1670. 10.1001/jama.294.13.1664
Shan L, Hao PP, Chen YG: Efficacy and safety of intensive insulin therapy for critically ill neurologic patients: a meta-analysis. J Trauma 2011, 71: 1460-1464. 10.1097/TA.0b013e3182250515
Siempos II, Ntaidou TK, Falagas ME: Impact of the administration of probiotics on the incidence of ventilator-associated pneumonia: a meta-analysis of randomized controlled trials. Crit Care Med 2010, 38: 954-962. 10.1097/CCM.0b013e3181c8fe4b
Tan JA, Ho KM: Use of dexmedetomidine as a sedative and analgesic agent in critically ill adult patients: a meta-analysis. Intensive Care Med 2010, 36: 926-939. 10.1007/s00134-010-1877-6
Vasu TS, Cavallazzi R, Hirani A, Kaplan G, Leiby B, Marik PE: Norepinephrine or dopamine for septic shock: systematic review of randomized clinical trials. J Intensive Care Med 2012, 27: 172-178. 10.1177/0885066610396312
Visser J, Labadarios D, Blaauw R: Micronutrient supplementation for critically ill adults: a systematic review and meta-analysis. Nutrition 2011, 27: 745-758. 10.1016/j.nut.2010.12.009
Wang F, Wu Y, Bo L, Lou J, Zhu J, Chen F, Li J, Deng X: The timing of tracheotomy in critically ill patients undergoing mechanical ventilation: a systematic review and meta-analysis of randomized controlled trials. Chest 2011, 140: 1456-1465. 10.1378/chest.11-2024
Wang F, Bo L, Tang L, Lou J, Wu Y, Chen F, Li J, Deng X: Subglottic secretion drainage for preventing ventilator-associated pneumonia: an updated meta-analysis of randomized controlled trials. J Trauma Acute Care Surg 2012, 72: 1276-1285.
Zarychanski R, Turgeon AF, Fergusson DA, Cook DJ, Hébert P, Bagshaw SM, Monsour D, McIntyre L: Renal outcomes and mortality following hydroxyethyl starch resuscitation of critically ill patients: systematic review and meta-analysis of randomized trials. Open Med 2009, 3: e196-209.
Zhongheng Z, Xiao X, Hongyang Z: Intensive- vs less-intensive-dose continuous renal replacement therapy for the intensive care unit-related acute kidney injury: a meta-analysis and systematic review. J Crit Care 2010, 25: 595-600. 10.1016/j.jcrc.2010.05.030
Reilly C: Retraction. Notice of formal retraction of articles by Dr. Joachim Boldt. Br J Anaesth 2011, 107: 116-117.
Wood L, Egger M, Gluud LL, Schulz KF, Jüni P, Altman DG, Gluud C, Martin RM, Wood AJ, Sterne JA: Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 2008, 336: 601-605. 10.1136/bmj.39465.451748.AD
Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, Tugwell P, Klassen TP: Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 1998, 352: 609-613. 10.1016/S0140-6736(98)01085-X
Egger M, Juni P, Bartlett C, Holenstein F, Sterne J: How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess 2003, 7: 1-76.
Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J: Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA 2002, 287: 2973-2982. 10.1001/jama.287.22.2973
The authors declare that they have no competing interests.
ZZ conceived the idea, collected data and drafted the manuscript; XX helped collect data and analyses; HN helped collect data, and analyze and interpret the results. All authors have read and approved the manuscript for publication.
Zhang, Z., Xu, X. & Ni, H. Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study. Crit Care 17, R2 (2013). https://doi.org/10.1186/cc11919