I believe that artificial intelligence can have an outsized, positive impact on health care. I also believe that health care has reached a point where we can't afford not to embrace AI. We've tried to get a handle on things like total cost of care, administrative burden, clinician burnout, and patient experience. But our existing solution set simply isn't making enough progress.
One of the major limiting factors for broader adoption of artificial intelligence is simply separating hype from reality. I hear similar questions about this in nearly every conversation I have with health care leaders, across all segments of the industry.
Below, I've answered some of the questions I'm hearing most often along these lines. They consistently touch on understanding the limitations of the technology, which I think is a good starting point for unlocking the full value of AI.
Here are the questions:
1. What about the big disappointments in health AI, like IBM Watson? Is that just overhyped tech, or is there something fundamentally wrong with that kind of AI?
Absolutely, this was overhyped. But I think it also reflects an all-too-common focus on the wrong end of the challenge: the team thought technology first, and then went looking for a problem. They picked cancer because it's such a big issue, largely assuming that the vast capabilities of the technology would swarm the problem and find an answer. As it turned out, they had a terrible time integrating with legacy systems and getting good data, and those problems, at the very least, prevented Watson from living up to its own expectations.
This isn't a fundamental failing of AI, but it did create a mini-"techlash," a backlash against technology. That's one of the biggest problems with overhyping: it sets adoption back and can even lead to disinvestment.
2. What's the next big thing for AI to tackle in health care?
I understand where this question is coming from, and I get asked this all the time, both from people in the industry and people in my own company. And I have a not-very-exciting answer.
Health care already has plenty to digest and learn from in the AI that's in place and in development now. We certainly need to sustain innovation, but our reflexive urge to jump to the next new thing before we've really mastered what's in front of us doesn't usually serve any business well, much less health care.
3. Other industries are ahead of health care in AI. Are there things they've learned or things that we can skip as this rolls out in health care?
I often worry that we think health care is so different from other industries that we risk reinventing the wheel every time we try to adapt technology. There are plenty of things we can learn from other industries, and—maybe more importantly—things that these industries have already done that won't have to be repeated.
One of the things health care gets to skip is training patients to be digital consumers. Other industries have already established how to engage consumers digitally; health care ignores those lessons at its peril, but it also has the opportunity to learn from them. The biggest mindset shift required is to think about bringing technology to patients, instead of the way we've usually thought about it, which is bringing patients to the technology.
4. How do we know if the data we're training AI on is biased or not?
You should probably assume that any data you are training AI on is biased. Structural bias is so strong in our institutions, our culture, and in the data that those create that we simply cannot assume that we are working with unbiased data. It is much more likely that health care data is biased against those who are already underserved—low-income patients, rural patients, non-White patients, and non-cisgender patients.
Two examples of this kind of structural bias revealing itself through AI have recently appeared in journal publications, one from health care and one from outside the industry. The health care example involved predictive overbooking: using predictive models built on existing patient data to identify which patients are most likely to be no-shows. Those patients' appointment times are then selectively overbooked so that productivity and revenue for the slot are less likely to be lost. The researchers found that at one US health system, this approach was likely to lead to much longer wait times for Black patients than for White patients.
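The overbooking mechanism can be sketched in a few lines of Python. The risk scores, threshold, and group labels below are entirely hypothetical, purely to illustrate the point: if the model's no-show scores are skewed against one group, that group's slots get double-booked more often, and its patients who do show up end up sharing slots and waiting longer.

```python
# Hypothetical sketch of predictive overbooking. Not the actual model
# from the study; illustrative data and threshold only.

def overbook(slots, threshold=0.5):
    """Double-book any slot whose patient's predicted no-show risk
    exceeds the threshold; return the booking count for each slot."""
    return [2 if s["no_show_risk"] > threshold else 1 for s in slots]

# Made-up scores: suppose the model has learned proxy features that
# inflate predicted risk for group B, even at equal true no-show rates.
slots = [
    {"group": "A", "no_show_risk": 0.2},
    {"group": "A", "no_show_risk": 0.3},
    {"group": "B", "no_show_risk": 0.6},
    {"group": "B", "no_show_risk": 0.7},
]

bookings = overbook(slots)
# Only group B's slots are double-booked here, so when those patients
# do arrive, they share their appointment time and wait longer.
for slot, booked in zip(slots, bookings):
    print(slot["group"], booked)
```

The bias never has to be explicit: the booking rule itself is group-blind, but it faithfully amplifies whatever skew is already baked into the model's scores.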
The non-health care example comes from research on OpenAI's GPT-3, a large language model. When asked to complete sentences about people who are Muslim, it overwhelmingly made associations with violence and terrorism, based largely on what it learned from the online text it was trained on.
5. What do you think will be the "tipping point" for AI? How will we know when it's really integrated into health care?
We'll know when I'm not writing blogs like this anymore. That is, we'll know AI has gone truly mainstream when it's invisible. When it's as much a part of the clinical toolkit as the stethoscope, so familiar that no one even notices it, AI will be truly integrated into health care.