November 25, 2019

Geisinger researchers have developed an algorithm that uses electrocardiogram (ECG) data to predict whether a person will die within a year, the health system recently announced.

Study details

Scientists from Geisinger examined how well artificial intelligence (AI) could predict whether patients would die within one year based on their ECG readings. To build the AI, the researchers used data from 1.77 million ECGs collected over three decades from nearly 400,000 patients who had not yet developed any irregular heart rhythms.

The researchers trained two versions of the AI model. One used only raw ECG data—which captures voltage over time—while the other used "human-derived measures," such as the ECG features a cardiologist would record, as well as other disease patterns.
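To make the two kinds of input concrete, here is a minimal, hypothetical sketch of what they might look like as model inputs. The shapes, sampling rate, and feature names are assumptions for illustration only, not details from the study:

```python
import numpy as np

# Hypothetical example of the two kinds of model input described above.
# None of these values, shapes, or names come from the Geisinger study.

# Raw ECG data: voltage sampled over time,
# e.g. 12 leads x 10 seconds at an assumed 500 Hz sampling rate
raw_ecg = np.random.randn(12, 5000)

# "Human-derived measures": summary features a cardiologist might record (assumed names)
derived_features = {
    "heart_rate_bpm": 72,
    "qrs_duration_ms": 96,
    "qt_interval_ms": 400,
    "left_ventricular_hypertrophy": False,
}
```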

To analyze the models' performance, the researchers used a metric known as AUC, or area under the curve, which measures how well a model distinguishes between two groups. Here, the groups were patients who were alive and those who had died one year after their ECG.
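For context, here is a minimal sketch of how an AUC for a binary outcome like this is commonly computed, using scikit-learn's roc_auc_score. The labels and predicted probabilities are invented for illustration and are not data from the study:

```python
# Illustrative only: the outcomes and scores below are invented.
from sklearn.metrics import roc_auc_score

# 1 = died within one year, 0 = alive at one year (hypothetical outcomes)
y_true = [0, 0, 1, 0, 1, 1, 0, 1]

# Model's predicted probability of death within one year (hypothetical scores)
y_score = [0.10, 0.35, 0.80, 0.20, 0.65, 0.90, 0.45, 0.70]

auc = roc_auc_score(y_true, y_score)
print(f"AUC = {auc:.2f}")  # 1.0 = perfect discrimination, 0.5 = no better than chance
```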

Key findings

The researchers presented their findings on Nov. 16 at the American Heart Association's Scientific Sessions in Dallas.

They found that the model trained only on raw ECG data was far superior at predicting whether a patient would die within one year—consistently scoring above 0.85, where 1 is a perfect score and 0.5 indicates no ability to distinguish between the two groups. In contrast, the AUCs for risk models currently used by doctors range from 0.65 to 0.8.

Surprisingly, the model was able to predict the risk of death accurately even among patients whose ECGs a physician had read as normal. According to the researchers, three cardiologists separately reviewed ECGs that had been read as normal but that the AI had correctly flagged as high risk, and they generally could not identify the risk patterns the AI had detected.

Brandon Fornwalt, co-senior author on the study and chair of the Department of Imaging Science and Innovation at Geisinger, said, "This is the most important finding of this study." He added, "This could completely alter the way we interpret ECGs in the future."

Further, when reviewing the results, Fornwalt noted that "No matter what, the voltage-based model [trained on the raw ECG data] was always better than any model you could build out of things that we already measure from an ECG."

Therefore, "The model is seeing things that humans probably can't see, or at least that we just ignore and think are normal." He added, "AI can potentially teach us things that we've been maybe misinterpreting for decades."

Still, the fact that physicians don't know what patterns the AI is picking up makes some of them reluctant to rely on such models in clinical practice. This reflects a common problem with new AI applications—they often operate as a "black box" without providing insight into their inner workings. Because those inner workings are shielded from human eyes within layers of computations and lines of code, it can be hard to diagnose errors or biases. This presents a major hurdle to FDA approval and widespread use, at least until further advances in "explainability," or the ability of models to provide clues about which parts of a patient's record or test were the most important factors in drawing their conclusions (Tangerman, Futurism, 11/11; Lu, NewScientist, 11/11; Geisinger release, 11/15; American Heart Association release, 11/11).

Advisory Board's take

Deirdre Saulet, Practice Manager, Oncology Roundtable

I'm excited to see the results from Geisinger's algorithm, as I believe it brings us one step closer to being able to predict patients' risk of short-term mortality—and to provide them with the clinical support (as well as the palliative and other end-of-life services) that they may need.

While many organizations have made significant advancements in working with physicians to screen for short-term mortality risk, we know that this can sometimes be an imperfect screening method—especially as doctors tend to err on the side of optimism when projecting patients' lifespans.

Therefore, we are excited to see other organizations using machine learning and other automated methods to objectively predict short-term mortality. In addition to Geisinger, a number of other organizations have made significant progress. For instance, researchers at the Denver Veterans Administration Medical Center created prognostic criteria to identify patients with the highest mortality risk. The criteria rely on data that any patient could report to their provider or that could easily be extracted from the patient's chart. Among a hospitalized veteran population, the tool demonstrated 79% sensitivity (the share of true positives it correctly identified) and 75% specificity (the share of true negatives it correctly identified).
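As a rough illustration of what those two figures mean, here is a minimal sketch of how sensitivity and specificity are calculated from a confusion matrix. The counts are invented for the example, not data from the VA study:

```python
# Illustrative only: hypothetical counts, not results from the Denver VA study.
true_positives = 79    # high-risk patients the tool correctly flagged
false_negatives = 21   # high-risk patients the tool missed
true_negatives = 75    # lower-risk patients the tool correctly ruled out
false_positives = 25   # lower-risk patients the tool incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # 0.79
specificity = true_negatives / (true_negatives + false_positives)   # 0.75

print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")
```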

And, to make this process even more sophisticated, researchers at Stanford are partnering with Google to leverage AI for predicting patients' risk of mortality for a number of different conditions.

But just creating predictive tools is not enough. Importantly, these predictions must be communicated back to the patient—forcing difficult conversations that can make physicians highly uncomfortable. It's not surprising that many physicians feel unprepared for these conversations, as many simply haven't been trained to have them. For example, just 2% of the oncology board certification exam relates to end-of-life issues.

To guide providers through these conversations, Gundersen Health System created the "Respecting Choices Person-Centered Care Model," which has since been implemented at organizations across the country. The curriculum includes modules that help team members learn how to start conversations about patients' wishes and manage communication throughout the advance directive process.

Organizations can also leverage their EHRs to support these conversations—and can take a page from many oncology programs leading the way. For instance, UAB Medicine, AtlantiCare, and the USA Mitchell Cancer Center are using treatment planning software called Carevive to flag when cancer patients do not have an advance care plan documented. Using patient-reported and clinical data, the tool alerts providers when a patient should have a care plan but lacks one. Since the tool was implemented, documentation of advance directives for breast cancer patients has increased from 19% to 81%.
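For a sense of what such an EHR-based flag can look like in practice, here is a minimal, hypothetical sketch. The field names, threshold, and criteria are invented for illustration and do not reflect Carevive's actual logic:

```python
# Hypothetical sketch of a rule that flags patients who likely need an advance
# care plan but have none documented. Field names and criteria are invented.
def needs_care_plan_alert(patient: dict) -> bool:
    high_risk = (
        patient.get("predicted_one_year_mortality", 0.0) >= 0.5
        or patient.get("metastatic_disease", False)
    )
    has_plan = patient.get("advance_care_plan_documented", False)
    return high_risk and not has_plan

# Example: this hypothetical patient would trigger an alert for the care team.
patient = {
    "predicted_one_year_mortality": 0.62,
    "metastatic_disease": True,
    "advance_care_plan_documented": False,
}
if needs_care_plan_alert(patient):
    print("Alert: document an advance care plan for this patient.")
```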

Of course, these process changes can be uncomfortable for organizations as they force patients and providers to have difficult conversations. But as our ability to predict mortality using AI increases, we have to make sure our ability to communicate and act on that information advances in lockstep.
