Generative AI is a looming disaster for the healthcare industry. Tools such as ChatGPT lie to our faces, misdiagnose our patients’ diseases, steal our private health information, and — one of these days — might even take our jobs.
… or at least that’s what you’d believe if you listened to the technology’s harshest critics. It’s certainly what we’ve heard from AI-skeptical managers and executives at healthcare organizations around the country.
But is this parade of horribles actually true? Or does it, perhaps, represent the excessive suspicion of an industry that’s feeling burned by the underperformance of past AI technologies?
Here’s our take: Generative AI tools pose real risks — especially if they’re rushed to market — but they’re already transforming our industry. The right path forward is not to exaggerate their risks but to ask: Which dangers are real? Which are overblown? Which can we avoid? Which must we contain?
To help orient the conversation, here are four of the most commonly cited risks of AI that we’ve heard from healthcare stakeholders, plus why we think the concerns are often overstated.
To be sure, large language models (LLMs) make stuff up all the time; we’ve run into several such errors in just the last week of our own work with these tools.
So if AI models invent nonsense so often, how can we claim that hallucinations are an overblown risk?
Because context matters.
We encountered those hallucinations while doing early, exploratory research, a stage at which we expect to run into inaccurate claims from any source. We vetted the output carefully, took what was helpful, and discarded the rest — just as we do with the information we hear from human experts.
It’s also worth remembering that “hallucinations” aren’t equally dangerous in every context. For instance, when asked to propose a slogan for a multistate health system, ChatGPT suggests “Compassion in every touch.” Even if that claim isn’t literally true, it’s harmless marketing puffery, not a dangerous fabrication.
As long as you use AI outputs in the right ways, at the right times, and via processes that are robust to error, even hallucination-prone AIs can create huge value.
As a practical matter, nobody is (or should be) using technologies like ChatGPT to unilaterally diagnose patients. But early evidence suggests that AI-assisted diagnosis holds promise.
For instance, a research letter in JAMA found that ChatGPT, when presented with especially tricky cases, identified the right diagnosis as a possibility in 64% of cases — and ChatGPT’s top diagnosis was right 39% of the time. That isn’t perfect, but then neither are human doctors.
The question to ask isn’t “Can AI make perfect diagnoses every time?,” but rather, “Can AI contribute to the diagnostic process?”
Consider a recent paper from Google proposing a “CoDoC” system that pairs AI and human doctors. In an example using de-identified patient data, AI took the first pass at reviewing X-rays but deferred to human diagnosticians on challenging calls. The method reduced false positives by 25% while requiring two-thirds less clinician time.
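To make the division of labor concrete, here’s a minimal Python sketch of that kind of deferral workflow. To be clear, this is not Google’s CoDoC implementation (which learns when to defer rather than relying on fixed cutoffs); the thresholds, field names, and routing messages below are purely illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    ai_probability: float  # the model's estimated probability of disease

def triage(finding: Finding,
           negative_threshold: float = 0.05,
           positive_threshold: float = 0.95) -> str:
    # Confident calls stay with the AI; everything ambiguous goes to a clinician.
    if finding.ai_probability <= negative_threshold:
        return "AI call: likely negative (clinician spot-checks a sample)"
    if finding.ai_probability >= positive_threshold:
        return "AI call: likely positive (flagged for confirmation)"
    return "Deferred: route to a human diagnostician"

# Only the ambiguous middle band consumes significant clinician time.
for f in (Finding("case-a", 0.02), Finding("case-b", 0.50), Finding("case-c", 0.98)):
    print(f.case_id, "->", triage(f))

The specifics don’t matter; the point is that “AI plus human” systems can be designed so the machine absorbs routine volume while people keep the judgment calls.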
To be fair, other studies have found that AI doesn’t always improve human diagnoses, which argues for caution in bringing AI into clinical workflows. But the benefits, in the right use cases, could be substantial.
(A separate risk is that patients might expect ChatGPT to accurately diagnose their conditions — much as patients in the past decade have overrelied on “Dr. Google.” As an industry, we’ll need to help our patients understand the value and limitations of “Dr. ChatGPT.”)
Healthcare data privacy is a complicated topic, fraught with legal, ethical, and practical considerations, and nothing we say here can replace the advice of your legal team.
That said, some healthcare stakeholders seem to imagine that their inputs to AI models have zero privacy protections — that everything they type will be used to train future AIs, leaked to their competitors, or published online. And that’s just not true.
Consider ChatGPT. Under its default settings, its owner, OpenAI, can use your inputs “to improve model performance” — an admittedly vague phrase, but one that OpenAI insists doesn’t include using your data for “selling our services, advertising, or building profiles of people.”
If you’re not reassured by that guidance, you have options. You can turn off chat history for individual conversations, which OpenAI says means those conversations “won’t be used to train and improve our models.” Alternatively, you can entirely opt out of having your data used for model improvement.
To be clear, this is no excuse to enter HIPAA-protected data into ChatGPT (or to ignore your organization’s lawyers!). But when ChatGPT is used with its most restrictive settings, its privacy practices appear comparable to those of other services used in day-to-day healthcare administration.
If you’re in charge of establishing your organization’s IT policies, we’d encourage you to think twice before banning tools like ChatGPT entirely. Trust us: Your employees are using AI — whether you allow it or not. If you prohibit these tools on their work computers, they’ll use them on their phones or personal computers, in ways you can’t track and that may not follow good privacy practices.
Instead of banning AI, we’d urge you to carefully review the policies of AI vendors, offer reasonable options to your employees, and give clear guidance on how they can use AI tools responsibly.
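As one example of what “clear guidance” might look like in practice, here’s a toy Python sketch of a pre-submission scrub that strips obvious identifiers before staff paste text into an external AI tool. This is a hypothetical illustration only: a handful of regex patterns is nowhere near true de-identification (HIPAA’s Safe Harbor method alone covers 18 identifier categories), and nothing here substitutes for review by your compliance and legal teams.

import re

# Toy patterns for a few obvious identifiers; real de-identification requires far more.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+", re.IGNORECASE),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    # Replace anything that matches a known pattern with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

print(scrub("Pt MRN: 204981, cell 555-123-4567, needs follow-up re: imaging."))
# -> Pt [MRN REMOVED], cell [PHONE REMOVED], needs follow-up re: imaging.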
The first three risks we discussed relate to AI’s dangers for our patients. This one is different: It’s about the danger to us as healthcare workers.
If you’re feeling scared, we get it. The first time you enter a question into ChatGPT and see it respond with eerily human guidance, it’s easy to imagine that similar technology could soon take the place of many human workers.
But we’ll offer a few reflections that may set your mind at ease.
First, ChatGPT can’t do your job yet. As Sam Altman, the CEO of OpenAI, said upon releasing the newest version of ChatGPT, “[I]t still seems more impressive on first use than it does after you spend more time with it.” The longer you work with generative AI, the more you recognize that — while it can do some things as well as or better than humans — it’s still a limited tool that can’t solve the complex problems facing most healthcare workers.
Second, healthcare is likely better protected from AI job disruption than most other industries, due to a combination of regulatory restrictions, the high-touch nature of patient care, and patients’ preference for human interactions.
Still, there’s no telling what AI technologies will be capable of in five or 10 years. So if you’re worried about future AIs taking your job, there’s only one thing to do: Get really good at using AI!
No matter how the technology evolves — whether it offers only minor workflow benefits or utterly transforms our industry — the workers who know how to use these technologies will be the ones positioned to survive the disruption.
Let’s take a step back. All four of the risks we’ve discussed above contain a grain of truth, and they may genuinely offer reasons not to adopt AI in certain settings or without reasonable precautions.
But we suspect there’s a deeper reason so many stakeholders cite these dangers: They offer an easy way to object to a frightening new technology.
If, as healthcare leaders, we can find plausible-sounding reasons to avoid using AI, we give ourselves permission to avoid the fear and sense of loss that come with an innovation reshaping our industry in ways we can’t yet predict.
Avoidance may be tempting. But it’s not the right thing for our patients or for ourselves.
Rather than rejecting AI because it presents risks, we need to do the hard work of risk-benefit analysis: When are its risks truly greater than its benefits? And when are they not?
Because the potential benefits are enormous — and besides, AI is coming for our industry whether we’re ready or not.