April 3, 2019

The 4 common biases that lead to bad science. (They're likely not what you think.)

Daily Briefing

    There's an often overlooked bias in scientific research that can perpetuate "bad science" for a long time to come, and it centers around "how research is published and used in supporting future work," Aaron Carroll writes in the New York Times' "The Upshot."


    Carroll is a professor of pediatrics at Indiana University School of Medicine and a prominent health care columnist.

    The four research publication biases and their consequences

    According to Carroll, biases that influence how research is published and spun can be "even more pernicious" than commonly discussed biases like a researcher's undisclosed ties or financial conflicts.

    For instance, Carroll cites a recent study that identified four common publication biases in research on antidepressants. The study, published in Psychological Medicine, analyzed 105 FDA-registered studies of antidepressants to determine which trials were eventually published in medical literature and which remained hidden from the public.  

    All told, the researchers identified four common biases that could influence whether a trial is ultimately published and how it's spun.

    1. Publication bias. According to Carroll, publication bias comes into play when a study's outcome influences the decision of whether to publish it. The research review found that half of the antidepressant studies were considered "positive" by FDA and the other half were "negative." However, only 48% of the negative studies were published, compared with 98% of the positive studies.

    2. Outcome reporting bias. The authors also detected outcome reporting bias, which "refers to writing up only the results in a trial that appear positive, while failing to report those that appear negative," Carroll writes. In 10 of the 25 published "negative" antidepressant studies, the authors reported the outcomes as positive, either by omitting the negative results or by focusing on a positive secondary outcome.

    3. Spin bias. Of the 15 remaining negative articles, 11 used spin, which Carroll writes is the use of "language, often in the abstract or summary of the study, to make negative results"—that is, statistically insignificant results—appear positive. Carroll explains that this spin can have a significant impact, citing a randomized controlled trial which found that clinicians who read abstracts in which nonsignificant results were spun to appear positive were more likely to think the treatment was beneficial.

    4. Citation bias. Bias can continue after publication, Carroll writes, as the more a study is cited and discussed, the more it is circulated. Positive studies were cited three times more than negative studies, so these positive results get amplified even more, Carroll writes.
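    Taken together, these four filters sharply shrink the share of trials that ever reach readers as negative. The compounding effect can be sketched with the figures above; note that the exact positive/negative split of the 105 trials is an assumption here (the source says "half" of the studies were rated positive):

    ```python
    # Back-of-the-envelope model of how the four biases compound, using the
    # figures reported from the Psychological Medicine review of 105 trials.
    # The 53/52 positive/negative split is an assumption ("half" per the source).

    TOTAL_TRIALS = 105
    positive = 53                         # rated "positive" by FDA (assumed split)
    negative = TOTAL_TRIALS - positive    # 52 rated "negative"

    # 1. Publication bias: 98% of positive vs. 48% of negative trials published.
    published_positive = round(0.98 * positive)   # ~52
    published_negative = round(0.48 * negative)   # ~25

    # 2. Outcome reporting bias: 10 of the 25 published negative trials were
    # written up as if positive.
    reported_as_positive = 10
    still_negative = published_negative - reported_as_positive  # 15

    # 3. Spin bias: 11 of the remaining 15 negative articles used spin.
    spun = 11
    clearly_negative = still_negative - spun  # 4

    print(f"Reaching readers as positive or spun: "
          f"{published_positive + reported_as_positive + spun}")
    print(f"Reaching readers as clearly negative: {clearly_negative}")
    ```

    Under these assumptions, only a handful of the 105 registered trials would reach the literature as unambiguously negative, even though the FDA rated roughly half of them negative—before citation bias amplifies the positive results further.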

    These biases are not unique to antidepressant research, Carroll writes.

    According to Carroll, systematic review of studies on research biases "provides empirical evidence that the biases are widespread and cover many domains." These biases often paint a more positive picture of study results than what was actually found, which can result in the dissemination of biased research.

    A possible solution: Preregistration

    According to Carroll, study preregistration could help researchers control for these biases.

    Study preregistration requires authors to describe the study, the hypothesis, the data that will be collected, and the analysis process before any data is collected for the study.

    When the study is complete, reviewers compare the completed study to the preregistered version. If the versions are similar, the results are published—regardless of the outcome.

    However, Carroll notes that preregistration only "works sporadically." A 2011 study of preregistered research found that up to half of the publications omitted primary outcomes after the study was complete. While there could be valid reasons for the adjustments, Carroll says that "too often, there are no explanations."

    What else can be done

    While a lot of medical studies are influenced by research bias, Carroll writes that we shouldn't "discount all results from medical trials." Instead, "we need, more than ever, to reproduce research to make sure it's robust," he writes.

    Carroll believes authors should be held to more "rigorous standards" to report results that are accurate and transparent, regardless of whether they are negative or positive. Doing so requires building an accepting culture: "We can celebrate and elevate negative results, in both our arguments and reporting, as we do positive ones."

    Unfortunately, this is easier said than done, as "[t]hese actions might make for more boring news and more tempered enthusiasm," Carroll explains. "But they might also lead to more accurate science" (Carroll, "The Upshot," New York Times, 9/24).

    Learn more about why study design matters

    Been a while since your last statistics class? It can be difficult to judge the quality of studies, the significance of data, or the importance of new findings when you don't know the basics.

    Download our cheat sheets to get a quick, one-page refresher on some of the foundational components of evidence-based medicine.

