October 28, 2019

Why a popular algorithm showed bias toward white patients (and how it can be fixed)

Daily Briefing

    An algorithm that is widely used in U.S. hospitals and health systems is more likely to classify white patients as needing follow-up care than comparably ill black patients, according to a study published Friday in the journal Science.

    Why biased algorithms are a growing concern in health care

    Hospitals and health systems are increasingly relying on algorithms to guide their treatment decisions, hoping that sophisticated machine-learning techniques can uncover associations that humans might miss while avoiding human biases.

    But researchers are discovering that, because these algorithms are typically trained using data from past patients, they may inadvertently reinforce existing racial disparities—even when they don't explicitly consider race at all.

    "These algorithms are built on data, and those reflect systemic biases," according to Sendhil Mullainathan, computational and behavioral science researcher at the University of Chicago's Booth School of Business and senior author of the new study.

    About the new study

    For the new study, Mullainathan and his team examined an algorithm created by Optum that is used by more than 50 organizations. The algorithm, which was trained using insurance claims data, is intended to identify patients who are likely to require additional care in the next year and thus could benefit from high-risk care management programs.

    (The Daily Briefing is published by Advisory Board, a division of Optum.)

    For the study, the researchers examined how the algorithm ranked patients at one hospital: 6,079 who identified as black and 43,539 who identified as white. They found that "at the same level of algorithm-predicted risk, blacks have significantly more illness burden than whites," the authors wrote.

    As the Wall Street Journal explained, "the algorithm gave healthier white patients the same ranking as black patients who had one more chronic illness as well as poorer laboratory results and vital signs."

    All told, the algorithm failed to account for about 50,000 cases of chronic conditions among black patients.

    How the algorithm's bias arose

    The algorithm's bias is notable because it didn't actually consider race in its calculations at all. Instead, it used insurance claims data, a seemingly "race-blind" metric, the Washington Post reports.

    But the study reveals a limitation of that approach. The algorithm was trained by looking at "dollars spent … rather than on the underlying physiology," according to Mullainathan—and for a variety of reasons, the U.S. health system has historically spent less money on black patients than on white patients.

    As a result, the algorithm "took the system-wide problem … and [it] expand[ed] that and magnif[ied] it," Mullainathan said.
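
    For intuition, here is a minimal Python sketch of that mechanism. The data-generating process, feature names, and coefficients are all invented for illustration; this is not Optum's model or data, just a toy showing how a cost-based training label can skew a "race-blind" risk score.

        # Toy illustration of the proxy-label problem: the model never sees
        # race, but its training label (dollars spent) reflects a world in
        # which equally sick patients generate unequal costs.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 20_000

        group_b = rng.random(n) < 0.15    # unobserved group; never a feature
        burden = rng.poisson(3.0, n)      # true count of chronic conditions

        # Assumed disparity: equally sick group B patients see the doctor
        # less often, so visit counts -- and therefore costs -- understate
        # their illness, while coded diagnoses track it about equally.
        access = np.where(group_b, 0.7, 1.0)
        visits = rng.poisson(1.0 + burden * access)
        coded_dx = rng.binomial(burden, 0.95)

        cost = 600.0 * visits + 100.0 * coded_dx + rng.normal(0.0, 300.0, n)

        X = np.column_stack([visits, coded_dx])  # "race-blind" claims features
        score = LinearRegression().fit(X, cost).predict(X)

        # Audit: refer the top decile by predicted cost, then compare the
        # true illness burden of referred patients across groups.
        top = score >= np.quantile(score, 0.9)
        print("referred group A, mean chronic conditions:",
              round(float(burden[top & ~group_b].mean()), 2))
        print("referred group B, mean chronic conditions:",
              round(float(burden[top & group_b].mean()), 2))

    In this toy world, referred group B patients carry a visibly higher true illness burden than referred group A patients, which is the pattern the study reports: at the same predicted risk, black patients were sicker.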

    The researchers emphasized that other algorithms used elsewhere in health care likely have similar issues. "It's truly inconceivable to me that anyone else's algorithm doesn't suffer from this," Mullainathan said.

    What can be done to fix the problem?

    Even as the researchers highlighted the algorithm's bias, they also proposed a possible solution: Rather than training the algorithm to identify sick patients by measuring how much money was spent on them, it could instead be trained on physiological measures of illness, such as high blood pressure.

    When the researchers retrained the algorithm using biological data, overall bias fell by 84%.
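
    As a rough illustration, the toy sketch above can be rerun with only the training label swapped, from dollars spent to a physiological stand-in (here, the synthetic chronic-condition count). Everything remains invented, and the printed gap bears no relation to the study's 84% figure; the point is only that the label choice, not the features, drives most of the skew.

        # Same invented setup as before; the only change is the label passed
        # to the model. A health-based label shrinks the referral gap.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 20_000
        group_b = rng.random(n) < 0.15
        burden = rng.poisson(3.0, n)
        access = np.where(group_b, 0.7, 1.0)
        visits = rng.poisson(1.0 + burden * access)
        coded_dx = rng.binomial(burden, 0.95)
        cost = 600.0 * visits + 100.0 * coded_dx + rng.normal(0.0, 300.0, n)
        X = np.column_stack([visits, coded_dx])

        def referral_gap(label):
            """Gap in true burden (group B minus A) among the referred decile."""
            score = LinearRegression().fit(X, label).predict(X)
            top = score >= np.quantile(score, 0.9)
            return float(burden[top & group_b].mean()
                         - burden[top & ~group_b].mean())

        print("gap with cost label:  ", round(referral_gap(cost), 2))
        print("gap with health label:", round(referral_gap(burden), 2))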

    "The lesson here is to be very careful about the data the algorithm is being trained on," Mullainathan said. "Concepts that we as humans tend to take synonymously—like care in dollars and care in biological terms—algorithms take them literally."

    Optum is working with the researchers to update the algorithm, according to the Post.

    "Predictive algorithms that power these tools should be continually reviewed and refined, and supplemented by information such as socio-economic data, to help clinicians make the best-informed care decisions for each patient," a spokesperson for the company said. "As we advise our customers, these tools should never be viewed as a substitute for a doctor's expertise and knowledge of their patients’ individual needs" (Chakradhar, STAT News, 10/24; Johnson, Washington Post, 10/24; Evans/Wilde Mathews, Wall Street Journal, 10/25).
