IT Forefront

Is it possible to develop an 'unbiased' AI model for health care?

by Mackenzie Rech

Artificial intelligence (AI), once a novel technology, has now entered mainstream conversations in health care. As AI becomes a reality rather than an emerging idea, health care leaders must understand not only the opportunities but also the risks. One ethical risk that is top of mind for stakeholders is the potential for biased AI models to create or perpetuate health disparities.


I sat down with Doug Hague, Executive Director of the School of Data Science at the University of North Carolina at Charlotte, to discuss AI in health care and how organizations can develop fair models. Before entering academia, Hague worked in data science for over 20 years and led large analytics organizations in corporate settings. He also runs his own consulting business, advising leaders in the health care, finance, retail, and manufacturing industries on how to effectively implement and validate AI models.

Question: You've worked on AI implementations across many industries. When it comes to applying and using AI, how does health care compare to other industries?

Doug Hague: Most health care organizations are using analytics but are not really doing advanced predictive modeling yet, except for the most progressive systems. Health systems are looking at a number of areas to test AI, such as population health, sepsis detection, or radiology. I think one area that saw early AI adoption was streamlining claims management with payers, because that's where business cases are easy to justify. Other industries have often looked to make decisions fully autonomous, but health care still focuses on keeping a human in the loop.

Q: Having a human in the loop is important, especially in clinical applications of AI. But providers are not immune to human bias and could potentially reintroduce bias into a process. How do we address this issue?

Hague: The model by itself isn't always the best solution. Humans can improve on a model because they can take exogenous variables into account when making a decision, but humans can absolutely reintroduce bias after you've built a model.

When you're evaluating your model, you need to measure the inputs and outputs, but also take a more holistic view of the entire process. Don't stop with "is my model correct?" Make sure to measure process outcomes that include the human decisions.

Q: What are some of the most important considerations when it comes to validating AI models?

Hague: Most organizations spend a lot of time developing a model, gathering the data, and building something as accurate and fair as they can. For example, you may predict sepsis risk from a set of inputs, determine whether a patient meets a given threshold, and then alert a nurse. That calculation is all based on historical data, and let's say it's 70% accurate. Over time, the model will degrade; a year later, it might be 60% accurate. Your predictive capability will always decline with time. It never gets better.

Performance degradation is something you need to monitor. The question is, how much degradation will you accept before tuning the model or getting new data to rebuild it? And if a model fails, then what? Sometimes your workflow becomes so dependent on a model that if you don't have a fallback, you're in trouble.
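To make that concrete, here is a minimal sketch of what that kind of ongoing degradation monitoring could look like in Python. It assumes you log each prediction alongside the outcome that is later confirmed in the chart; the 90-day window and 0.65 accuracy floor are illustrative assumptions, not recommendations.

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import List

    @dataclass
    class LoggedPrediction:
        scored_on: date            # when the model scored the patient
        predicted_positive: bool   # model alerted (e.g., predicted sepsis)
        actual_positive: bool      # outcome confirmed later in the chart

    def windowed_accuracy(log: List[LoggedPrediction], start: date, end: date) -> float:
        """Share of correct predictions among patients scored within [start, end)."""
        in_window = [p for p in log if start <= p.scored_on < end]
        if not in_window:
            return float("nan")
        correct = sum(p.predicted_positive == p.actual_positive for p in in_window)
        return correct / len(in_window)

    def needs_rebuild(log: List[LoggedPrediction], today: date,
                      window_days: int = 90, accuracy_floor: float = 0.65) -> bool:
        """Flag the model for retuning or rebuilding once recent accuracy
        drops below the degradation you have agreed to accept."""
        recent = windowed_accuracy(log, today - timedelta(days=window_days), today)
        return recent < accuracy_floor

A real deployment would also need a fallback workflow for when that flag fires, which is exactly the dependency risk Hague describes.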

There are also risks in taking AI models from the literature and implementing them in a hospital without validation, because your patients and clients will be different from the research population. Model risk management (MRM) is often a foreign concept in health care. Ongoing performance monitoring is part of MRM, and most organizations don't think about it.

Q: When we talk about reducing bias in AI, one significant challenge is how to define and optimize for fairness in the algorithm. Are there ways to define fairness quantitatively?

Hague: You can define bias mathematically, but fairness is more of a social concept; every stakeholder has a different definition of it. If you say, for example, that an underserved patient population deserves a greater share of resources, are you essentially introducing bias in a way that some stakeholders could view as unfair?
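One example of how bias can be defined mathematically, even if fairness itself cannot, is demographic parity: the gap in alert rates between two patient groups. The sketch below uses made-up group labels and numbers and shows only one such metric; choosing among the competing definitions is exactly the social question Hague describes.

    from typing import Sequence

    def alert_rate(predictions: Sequence[bool]) -> float:
        """Share of patients for whom the model raised an alert."""
        return sum(predictions) / len(predictions) if predictions else float("nan")

    def demographic_parity_difference(group_a: Sequence[bool],
                                      group_b: Sequence[bool]) -> float:
        """Absolute gap in alert rates between two groups; 0.0 means equal rates."""
        return abs(alert_rate(group_a) - alert_rate(group_b))

    # Illustrative numbers: the model alerts on 30% of group A but 15% of group B.
    gap = demographic_parity_difference([True] * 30 + [False] * 70,
                                        [True] * 15 + [False] * 85)
    print(f"Demographic parity difference: {gap:.2f}")  # prints 0.15

Other common definitions, such as equalized odds, compare error rates rather than alert rates, and different definitions often conflict with one another.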

I haven't seen anyone effectively define fairness in health care yet. Financial services tried because of redlining and the mortgage business, but after 40 years, they still struggle with it. When that industry started building models, it removed all protected variables (race, gender, etc.) to make sure things were fair, but as analytics and other techniques have evolved, bias has crept back in because of cross-correlation among the variables.
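To illustrate that cross-correlation problem, the sketch below checks whether the variables left in a model still correlate with a protected attribute that was removed. The feature names, synthetic data, and 0.3 threshold are all assumptions for the example, not a recommended procedure.

    import numpy as np

    def flag_proxy_features(features: dict, protected: np.ndarray,
                            threshold: float = 0.3) -> list:
        """Return names of features whose absolute correlation with the
        removed protected attribute exceeds the threshold (possible proxies)."""
        flagged = []
        for name, values in features.items():
            corr = np.corrcoef(values, protected)[0, 1]
            if abs(corr) > threshold:
                flagged.append(name)
        return flagged

    # Synthetic example: income by zip code tracks group membership, age does not.
    rng = np.random.default_rng(0)
    protected = rng.integers(0, 2, size=500)  # 0/1 protected-group membership
    features = {
        "age": rng.normal(50, 15, size=500),
        "median_income_zip": protected * 20_000 + rng.normal(60_000, 5_000, size=500),
    }
    print(flag_proxy_features(features, protected))  # ['median_income_zip']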

Q: The finance industry has legislation to address fairness issues in models. How can we prepare for, or even prevent, the introduction of similar fairness regulations in health care?

Hague: Finance had a number of fairness laws passed back in the 1970s, the industry made mistakes along the way, and regulation has become extensive and, at times, burdensome. Health care needs to develop best practices without overdoing it.

You can protect yourself by recognizing the reality of bias during development and documenting your efforts to be fair. Sometimes when you mitigate a model's bias, your profit may go down, and you may face pressure from leadership over that, but with documentation and evidence you won't get sued. Defend yourself in court before you actually get to court.

Just released: How to combat AI bias

In this infographic, we outline a number of challenges to look out for when developing and using AI models, and various steps health care stakeholders can take to reduce the risk of algorithmic bias.

Download Now