Daily Briefing

The future of AI in health care, according to Microsoft's AI chief


Equity is top of mind for health leaders, but revelations about bias in artificial intelligence are an increasing cause for concern. Radio Advisory's Rachel Woods sat down with Microsoft's Director for Artificial Intelligence Tom Lawry to discuss how bias creeps into AI.

Read a lightly edited excerpt from the interview below and download the episode for the full conversation.

Rachel Woods: I'm not sure that the average health leader even understands how AI works or how often it is used today. So without getting too technical, what kind of basics do you want to make sure our listeners know about artificial intelligence?

Tom Lawry: When you think about AI, it's gotten a lot of attention. I mean, frankly, it's been very much the shiny object in health care for the last few years. But its roots actually go back to the 1840s, when, by the way, a woman, Ada Lovelace, the daughter of the poet Lord Byron, wrote the first algorithm. So as much as we think about this being a male-dominated industry, women drove most of the early progress.

Anyway, the second part is how I like everyone to think about AI. First of all, we have to have a definition, so I'm going to give you one now. AI, simply put, is intelligence demonstrated by software with the ability to depict or mimic human brain functions, and the operative word there is "mimic."

But really, look at it this way: the brain is such an awesome organ. It's what brought humans to the top of the food chain, and we've become so smart that we've actually started to look at how to outsource certain parts of our brain function to let machines do it, not all, but certain parts. So it's kind of about outsourcing some of the capabilities that, up until now, have been unique to humans.

Woods: And when it comes to health care, how do you explain the difference between what AI can do really well today and what its limitations still are?

Lawry: Yeah, I think that's one of the most important points for everyone to know, because we often hear about how AI is going to take over radiology and everything else, and it's not. So simply put, AI is really good at certain things, better than humans. It's great at things like data analysis, pattern recognition, image analysis, and information processing. On the other hand, humans are good at, and will always be better than AI at, things like reasoning, judgment, imagination, empathy, creativity, and problem solving.

So bring that together: if you're a physician or a nurse and you really evaluate the work you do, so much of it does involve things like information processing. But so much of diagnosis, treatment recommendation, the whole care process, involves all those other things unique to humans, things machines can't do now and, in my view, won't be able to do in our lifetime.

So it's really about bringing those two things together to create value: getting AI in behind those humans, those caregivers, anyone who's a knowledge worker, and asking, "How do we help them be better at what they care about by using the things AI is really good at?"

Woods: Now, if I'm honest, in health care, the truth is that we are battling against some pretty serious inertia, inertia that presumes white and male as the default. The world is biased, health care included, and we know that algorithms are biased too. But how exactly is bias introduced into AI?

Lawry: Okay, I'm chuckling now, Rae. I'm going to push back on you just a little bit already. First of all, AI has the ability to become biased or to demonstrate patterns that we as humans would recognize as bias.

So, two things. First, not all AI is biased. Second, there's an important distinction here, so this is a good time to bring it up. When we talk or read about bias in things like algorithms used for clinical decision support, bias means there's a variance in what's being predicted between one group of patients or people and another.

When you think of bias in AI, we're really referring to statistical variance: an algorithm whose predictive capabilities are different for one population versus another.
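To make that concrete, here is a minimal Python sketch of how a team might audit a model for this kind of variance. Everything in it, the field names, the groups, the toy records, is hypothetical and only illustrates the idea of comparing predictive performance across populations:

```python
from collections import defaultdict

def sensitivity_by_group(records):
    """Per-group sensitivity: of the patients who actually had an adverse
    event, what fraction did the model flag ahead of time?"""
    caught = defaultdict(int)   # true positives per group
    events = defaultdict(int)   # actual adverse events per group
    for r in records:
        if r["had_event"]:
            events[r["group"]] += 1
            if r["predicted_event"]:
                caught[r["group"]] += 1
    return {g: caught[g] / events[g] for g in events}

# Hypothetical audit data: one dict per inpatient stay.
records = [
    {"group": "white_male", "had_event": True, "predicted_event": True},
    {"group": "white_male", "had_event": True, "predicted_event": True},
    {"group": "hispanic_female", "had_event": True, "predicted_event": False},
    {"group": "hispanic_female", "had_event": True, "predicted_event": True},
    # ...in practice, thousands of stays
]

print(sensitivity_by_group(records))
# {'white_male': 1.0, 'hispanic_female': 0.5} -- a gap like this is the
# statistical variance between populations that Lawry describes.
```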

Woods: Maybe give us an example of how that plays out in health care in an unintended way, because I do want to be clear that it is not that there are necessarily nefarious people building algorithms with the express purpose of making sure one group is left out. But we know that the things we build reflect the world around us, and natural biases can come through as we're designing, developing, and deploying these kinds of systems. Maybe give us an example of how that can play out in health care.

Lawry: Absolutely. Well, first of all, thank you for pointing out that, to the best of my knowledge, there is no Dr. Evil of AI out there doing nefarious things. So let me do a quick story.

Everyone tuned in to Radio Advisory right now is part of this new clinical AI team working in a hospital, and our goal is to create an algorithm that, as we admit patients to the hospital as inpatients, allows us to follow them and risk-rate their propensity to develop or experience an adverse event.

So the idea is, today, we put 50 people in as inpatients. We're following their care. Some of them are doing fine, and it looks like we're going to discharge them. And all of a sudden, almost at random, they have an adverse event. The code team comes in and stabilizes them. They're now in the ICU instead of going home.

So we're going to create an algorithm that follows those patients, predicts the ones that are highly likely to have an adverse event, and then gets our intensivists and others in there to do something. So we've created the algorithm. We've run a pilot, and I'm proud to report we've been able to predict and reduce adverse events by 40%.

So think about that for a second: a 40% reduction. The quality people are very happy. If those patients are fixed-pay patients, say they're Medicare on a DRG, and I can keep them out of the ICU, lower ICU usage, and lower length of stay, then the CFO is going to be very happy. So with a 40% reduction, we're improving quality and we're improving financial performance.

Here's what's important: 40% is a statistical average. If I were to go to the board and give them that story, they'd be very happy. But if I went to the next level, I'd have to say, "Here's the deal. In getting that 40% statistical average, what we're seeing is that the algorithm is three times better at predicting and preventing adverse events in white males versus Hispanic females."

And it meets all legal requirements. It meets all regulatory requirements, such as HIPAA and, in Europe, GDPR. It's totally legal, totally compliant with everything, and yet, is that okay? So that would be an example of bias: an algorithm that produces good overall, but produces a higher rate of value for one population versus another. When we talk about bias, that is statistical variation, and the question is, again: it's legal, it's compliant, but is it right?
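As a back-of-the-envelope illustration of how a threefold subgroup gap can hide inside a healthy-looking 40% average, here is a sketch with invented population shares and per-group rates (none of these numbers come from the interview):

```python
# Hypothetical numbers: the overall figure is a volume-weighted average
# of per-group reductions, so a big subgroup gap can still average to 40%.
groups = {
    # group: (share of inpatient volume, adverse-event reduction in group)
    "white_male":      (0.50, 0.54),
    "hispanic_female": (0.20, 0.18),  # 3x worse than white_male (0.54/0.18)
    "everyone_else":   (0.30, 0.32),
}

overall = sum(share * reduction for share, reduction in groups.values())
print(f"Overall reduction: {overall:.0%}")  # Overall reduction: 40%
```

The board-level number looks the same whether or not the gap exists, which is why subgroup reporting has to be asked for explicitly.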

Woods: I think we would all agree that it is not right. And the biggest fear that at least I have is that as the world moves toward more technological advancement, whether that's AI or otherwise, we actually make inequities worse, not better.

And my understanding is that a lot of this can come down to questions like: What data source are we using? Again, is it unintended if we're using data from, say, wearable devices, and there are certain populations that aren't using those devices or aren't getting genomic tests? Or what is the makeup of the team developing the algorithm; do you have a diverse team, and things like that?

Lawry: Yeah. Well, you've kind of answered the question already. It's a great question, and when you boil it down or net it out, you're on the right track. So how does bias creep into AI? It creeps in in one of several ways.

Most often, it comes from the conscious or unconscious bias of the people developing those algorithms, and it comes from the data being used to create the algorithm. So let's pull back. I love America, and we have amazing caregivers, but in America, unlike other parts of the world, health care is not legally a right.

If you just look at how things happen in America, the people going to the hospital are people who are insured. As a byproduct of those people going through the system, all of the data is coming from people who are well insured or on Medicare. But if there are others at the margins who don't have appropriate access to care, then the data alone is not going to reflect those populations.

So we have many ways of balancing dirty data, but if that's not done, then the data the algorithms are developed around reflects the historical patterns of the past.
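As one illustration of the kind of balancing Lawry alludes to, here is a minimal sketch of naive upsampling, assuming each training record carries a hypothetical group field; it is one crude option among many, not a prescribed fix:

```python
import random

def upsample_to_balance(records, key="group", seed=0):
    """Resample records so every group appears as often as the largest one.
    A deliberately blunt instrument: real pipelines also have to ask why a
    group is underrepresented (e.g., unequal access to care), not just
    equalize row counts."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[key], []).append(r)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        if len(rows) < target:
            balanced.extend(rng.choices(rows, k=target - len(rows)))
    return balanced
```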

In America, it starts with just being aware of that and asking, "How do we fine-tune things to make sure that we're not only not perpetuating the bias that exists in the real world, but actually using these algorithms to start mitigating some of the biases happening in the real world today?"

So the challenge for everyone to think about is simply this: let's own it, and you said it. There are many biases and inequities happening in the real, physical world of health care today. What we want to do is be very aware of them and make sure they don't cross over into the digital world in the form of algorithms.

I mean, to me, it's kind of like squeezing a balloon. If you're working on diversity, equity, and inclusion in the physical realm and you're making progress, good for you. But if you're not paying attention to the digital side, you're squeezing that balloon, and the bias is popping up somewhere else.

 

