Meta-research matters: Meta-spin cycles, the blindness of bias, and rebuilding trust

Republished in JPR Vol 11 #4 February 2, 2022


Citation: Bero L (2018) Meta-research matters: Meta-spin cycles, the blindness of bias, and rebuilding trust. PLoS Biol 16(4): e2005972. https://doi.org/10.1371/journal.pbio.2005972

Published: April 2, 2018

Copyright: © 2018 Lisa Bero. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


 

Author

Lisa Bero*

Charles Perkins Centre, Faculty of Pharmacy, The University of Sydney, Sydney, Australia
* [email protected]

Abstract
Meta-research is research about research. Meta-research may not be as click-worthy as a
meta-pug—a pug dog dressed up in a pug costume—but it is crucial to understanding
research. A particularly valuable contribution of meta-research is to identify biases in a body
of evidence. Bias can occur in the design, conduct, or publication of research and is a systematic deviation from the truth in results or inferences. The findings of meta-research can
tell us which evidence to trust and what must be done to improve future research. We should
be using meta-research to provide the evidence base for implementing systemic changes to
improve research, not for discrediting it.

‘That’s so meta!’ exclaimed my student. And I thought, ‘Now she’s really got it!’ We were working on a review of studies of ‘spin’—research reporting practices that distort the interpretation
of results and mislead readers by making findings appear more favourable than they are. At
the time of our epiphany, we were discussing how to avoid putting spin on the conclusions of
our review of studies of spin [1]. If that’s not ‘meta’, I don’t know what is [2]. Meta-research
may not be as click-worthy as a meta-pug—a pug dog dressed up in a pug costume—but it is
crucial to understanding research.

Meta-research is research about research. A particularly valuable contribution of this discipline is identifying biases in a body of evidence. Bias can occur in the design, conduct, or publication of research and is a systematic deviation from the truth in results or inferences. The
findings of meta-research can tell us which evidence to trust and what must be done to
improve future research.

While biases related to internal validity, such as the appropriateness of randomisation or blinding, can be detected in a single study, the impact of such biases on a research area can only be determined by examining multiple studies. By examining the association between the appropriateness of randomisation in clinical trials and the magnitude of the intervention effect estimate in multiple groups of studies across different clinical areas, for example, it has been shown that trials with inadequate sequence generation or inadequate concealment of allocation overestimate effect estimates by about 7% and 10%, respectively [3]. This means that when we come across the findings of a randomised controlled trial on any topic and see that the randomisation is inadequate, we have empirical evidence to support scepticism about the findings. In such cases, the results are probably exaggerated.
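
To unpack what an overestimate of ‘about 7%’ means in practice, meta-epidemiological studies of this kind typically express the exaggeration as a ratio of odds ratios (ROR) comparing flawed with adequately conducted trials. The sketch below is an illustrative reading under that assumption, using the approximate figures quoted above; it is not an additional result from [3].

% Illustrative sketch only: assumes the ratio-of-odds-ratios (ROR)
% framing common in meta-epidemiology. OR_inadequate and OR_adequate
% denote pooled intervention effect estimates from trials with
% inadequate and adequate sequence generation, respectively.
\[
\mathrm{ROR} \;=\; \frac{\mathrm{OR}_{\text{inadequate}}}{\mathrm{OR}_{\text{adequate}}} \;\approx\; 0.93
\]
% For a beneficial intervention (OR < 1), an ROR of about 0.93 means
% effects from the flawed trials appear roughly 7% larger than those
% from adequately randomised trials; an ROR of about 0.90 corresponds
% to the ~10% figure quoted for inadequate allocation concealment.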

 

Fig 1

To avoid bias, the mouse was blinded when self-reporting outcomes. Image credit: Lorris Williams.

 

Meta-research has also quantified the biases associated with inappropriate randomisation
and blinding in trials with animals (see Fig 1). Analyses of preclinical animal studies examining
interventions for stroke, multiple sclerosis, and trauma have shown that lack of randomisation
and blinding, inadequate statistical power, and use of comorbid animals are associated with
inflated effect estimates of the tested interventions [4, 5].

Meta-research has been used to identify important biases in research that are not related to
study methodology, such as publication bias, outcome reporting bias, funding bias, and spin.
Reporting bias, including the failure to publish entire studies or the selective reporting of outcomes or analyses in published studies, has been detected across a variety of health fields and among animal studies [6–8]. This results in the published body of evidence being overwhelmingly ‘positive’ or showing statistically significant results. Researchers, not editors, are usually
to blame, as publication bias often results from a failure to submit negative studies, not from
journals rejecting them [9, 10].

Funding bias has been demonstrated in pharmaceutical, tobacco, chemical, and other
research areas. For example, a Cochrane meta-research review included 75 studies examining
the association between sponsorship and research outcomes of drug or device studies across different clinical areas. Industry-sponsored studies more often had efficacy results (relative risk [RR] 1.27, 95% CI 1.17–1.37), harm results (RR 1.37, 95% CI 0.64–2.93), and conclusions (RR 1.34, 95% CI 1.19–1.51) that favoured the sponsor [11]. Industry- and non-industry-sponsored studies did not differ in methodological biases, such as random sequence generation or follow-up, although industry-sponsored studies were at a lower risk of bias for blinding. This suggests that other factors, such as the use of unfair comparator doses or the failure to publish studies with unfavourable results, are associated with the different outcomes of industry- and non-industry-sponsored studies. Existing tools for assessing risk of bias or internal validity of studies
are not sufficient for identifying the mechanisms of bias associated with funding source.
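
As a quick reminder of how to read those numbers (standard confidence-interval logic, not a further finding of the review [11]):

% Standard interpretation of relative risks and confidence intervals,
% applied to the figures quoted above.
\[
\mathrm{RR}_{\text{efficacy}} = 1.27,\qquad 95\%\ \text{CI}\ [1.17,\,1.37] \not\ni 1 \;\Rightarrow\; \text{statistically significant}
\]
\[
\mathrm{RR}_{\text{harms}} = 1.37,\qquad 95\%\ \text{CI}\ [0.64,\,2.93] \ni 1 \;\Rightarrow\; \text{not statistically significant}
\]

In words, favourable efficacy results were about 27% more frequent in industry-sponsored studies, and that interval excludes 1, whereas the wide harm interval spans 1 and is therefore compatible with no difference between sponsorship types.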

Meta-research has been critical in identifying biases in interpretation, known as ‘spin’. Our
review found 35 reports that had investigated spin in clinical trials, observational studies, diagnostic accuracy tests, systematic reviews, and meta-analyses [1]. The highest prevalence of spin was in trials. Some of the common practices used to spin results included detracting from statistically nonsignificant results and inappropriately using causal language. Current peer review
practices are not preventing spin, so editors and peer reviewers should become more familiar
with the prevalence and manifestations of spin in their area of research.

Has all of this navel-gazing by researchers been useful? Meta-researchers interested in bias
may have pulled the rug out from under themselves. Because meta-research is such a great tool
for detecting biases that are present across a body of evidence, it has led to proclamations that
all research is biased and should not be trusted. Rather than using the findings of meta-research on bias to discredit research, we should use them to identify what needs to be fixed.

We have made great strides towards systemic solutions, particularly in the area of clinical
research. The identification of biases associated with industry conflicts of interest has led to
policies for disclosure, management, and elimination of the conflicts. Compliance with mechanisms to reduce publication and selective reporting bias—such as study registration, protocol
availability, and open access to data—is now required by many clinical journals as a condition
of publication. Tools for assessing risks of bias in studies have been informed by empirical
investigation of methodological biases and are being improved based on empirical evidence.
As meta-research discovers biases in a variety of fields, we need to expand and adapt the solutions we are using in clinical research to reduce bias in all areas.

Ironically, the identification of biases in bodies of evidence has led to criticisms of the use of
another type of evidence synthesis—systematic review and meta-analysis—to support policy
decisions and guidelines. The argument is that if all research is flawed, why should we bother
to summarise it? In the not-too-distant past, if you were seeking advice on a medical condition
from a healthcare practitioner, they would likely have been informed by practice guidelines
developed using the ‘good old boys sit around the table’ (GOBSAT) method [12]. It was the
opinions of healthcare practitioners, often supported by their own financial conflicts of interest, that formulated the recommendations made in clinical practice guidelines [13]. Nowadays, healthcare practitioners are more likely to be informed by guidelines based on multiple systematic reviews and meta-analyses addressing all the questions relevant to the guideline [14].
High-quality reviews always include an assessment of the risk of bias of the included studies so
that it is transparent that recommendations are based on strong, weak, or no evidence [15].
Thus, meta-research has boosted the quality of systematic reviews by enabling the identification of biases in the included studies.

Meta-research will point out the imperfections in a body of evidence. It will identify the
flaws so we can focus on the best available evidence. Methodological advances in how we do
systematic reviews and meta-analysis will provide better ways to deal with the biases identified.
Now that we know the extent of publication bias across various fields, we should not limit
meta-analysis to only published, peer-reviewed data. For years, Cochrane has been recommending searching for unpublished data as part of a comprehensive search strategy [15]. Meta-research has provided more specific guidance by showing that certain types of meta-analyses, such as those of drug efficacy or harm, should include data from clinical study reports and regulatory databases [16, 17].

Identifying biases in research will help us identify systemic changes that are needed to
improve the quality and trustworthiness of research. We should be using meta-research to provide the evidence base for implementing these changes, not for discrediting research. When
using meta-research to bolster criticisms of systematic reviews or meta-analysis for informing
health decisions, we need to think carefully about the alternatives. I’m not ready to turn the clock back to making decisions based on the GOBSAT method. Give me the best available evidence any day.

 

References

1. Chiu K, Grundy Q, Bero L. ’Spin’ in published biomedical literature: A methodological systematic review.
PLoS Biol. 2017; 15(9):e2002173. https://doi.org/10.1371/journal.pbio.2002173 PMID: 28892482;
PubMed Central PMCID: PMC5593172.
2. Zimmer B. Dude, this headline is so meta. Boston Globe. 2012 May 6.
3. Page MJ, Higgins JP, Clayton G, Sterne JA, Hrobjartsson A, Savovic J. Empirical Evidence of Study
Design Biases in Randomized Trials: Systematic Review of Meta-Epidemiological Studies. PLoS ONE.
2016; 11(7):e0159267. https://doi.org/10.1371/journal.pone.0159267 PMID: 27398997; PubMed Central PMCID: PMC4939945.
4. Crossley NA, Sena E, Goehler J, Horn J, van der Worp B, Bath PM, et al. Empirical evidence of bias in
the design of experimental stroke studies: a metaepidemiologic approach. Stroke. 2008; 39(3):929–34.
https://doi.org/10.1161/STROKEAHA.107.498725 PMID: 18239164.
5. Sena ES, Briscoe CL, Howells DW, Donnan GA, Sandercock PA, Macleod MR. Factors affecting the
apparent efficacy and safety of tissue plasminogen activator in thrombotic occlusion models of stroke:
systematic review and meta-analysis. J Cereb Blood Flow Metab. 2010; 30(12):1905–13. https://doi.org/10.1038/jcbfm.2010.116 PMID: 20648038; PubMed Central PMCID: PMC3002882.
6. Chan AW, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: review of
publications and survey of authors. BMJ. 2005; 330(7494):753. https://doi.org/10.1136/bmj.38356.424606.8F PMID: 15681569.
7. Misakian AL, Bero LA. Publication bias and research on passive smoking: comparison of published and
unpublished studies. JAMA. 1998; 280(3):250–3. PMID: 9676672.
8. Sena ES, van der Worp HB, Bath PM, Howells DW, Macleod MR. Publication bias in reports of animal
stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010; 8(3):e1000344. https://doi.org/10.1371/journal.pbio.1000344 PMID: 20361022; PubMed Central PMCID: PMC2846857.
9. Dickersin K. The existence of publication bias and risk factors for its occurrence. JAMA. 1990; 263(10):1385–9. PMID: 2406472.
10. Lee K, Boyd E, Holroyd-Leduc J, Bacchetti P, Bero L. Predictors of publication: Characteristics of submitted manuscripts associated with acceptance at major biomedical journals. Medical Journal of Australia.
2006; 184(12):621–6. PMID: 16803442
11. Lundh A, Lexchin J, Mintzes B, Schroll JB, Bero L. Industry sponsorship and research outcome.
Cochrane Database Syst Rev. 2017; 2:MR000033. https://doi.org/10.1002/14651858.MR000033.pub3 PMID: 28207928.
12. James Colquhoun Petrie (Obituary). BMJ. 2001; 323:636.
13. Norris SL, Holmer HK, Ogden LA, Burda BU. Conflict of interest in clinical practice guideline development: a systematic review. PLoS ONE. 2011; 6(10):e25153. https://doi.org/10.1371/journal.pone.0025153 PMID: 22039406; PubMed Central PMCID: PMC3198464.

14. Institute of Medicine (U.S.), Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E, editors. Clinical practice guidelines we can trust. Washington, DC: National Academies Press; 2011. xxxiv, 290 p.
15. Higgins J, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Chichester,
United Kingdom: The Cochrane Collaboration and Wiley-Blackwell; 2008.
16. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ. 2012; 344:d7202. https://doi.org/10.1136/bmj.d7202 PMID: 22214754.
17. Jefferson T, Jones M, Doshi P, Spencer EA, Onakpoya I, Heneghan CJ. Oseltamivir for influenza in
adults and children: systematic review of clinical study reports and summary of regulatory comments.
BMJ. 2014; 348:g2545. https://doi.org/10.1136/bmj.g2545 PMID: 24811411; PubMed Central PMCID: PMC3981975.
