December 4, 2018

Google's AI can diagnose tumors as well as a human—and do something humans can't

Daily Briefing

    For years, cancer diagnoses have relied on the eyes of pathologists, but new artificial intelligence (AI) from Google can look at tumors in a way humans can't, Megan Molteni writes for Wired.

    Dec. 13 webconference: What 125 health care leaders told us about the next stage of their AI strategy

    How it works

    Research published earlier this year in Nature Medicine showed that scientists at New York University (NYU) were able to retrain Google's deep learning algorithm to accurately diagnose cancerous tumors and to identify subtle genetic mutations from just a picture—which is beyond what any human is able to do, Molteni reports.

    The researchers used Google's Inception v3 AI, which was trained to identify 1,000 different classes of objects, Molteni writes. They showed the AI thousands of pictures of cancerous and healthy tissue gathered from the Cancer Genome Atlas—a public patient tissue sample library.

    Once the AI could identify cancerous cells with 99% accuracy, the researchers then taught it to differentiate between adenocarcinoma and squamous cell carcinoma—two different types of lung cancer. The cancers look very similar under a microscope but have different treatments, meaning a correct diagnosis is crucial, Molteni writes.
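    The retraining approach described above follows the standard transfer-learning recipe: keep the pretrained network's feature extractor and fit only a new classifier on the new images. Below is a minimal toy sketch of that idea, using random numpy vectors as stand-ins for Inception v3 features of tissue images; all names, dimensions, and numbers here are illustrative assumptions, not details from the study.

    ```python
    # Toy sketch of transfer learning: the pretrained backbone is treated as a
    # frozen feature extractor, and only a small new classifier "head" is
    # trained on the new task (labels 0 = healthy, 1 = cancerous).
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend these are feature vectors from a frozen Inception v3 backbone
    # (the real ones are 2048-dimensional; 32 keeps the toy example fast).
    n, d = 200, 32
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true > 0).astype(float)          # synthetic labels

    # New head: logistic regression trained by gradient descent on log loss.
    w = np.zeros(d)
    b = 0.0
    lr = 0.5
    for _ in range(300):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / n              # gradient of the log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b

    acc = np.mean(((X @ w + b) > 0) == (y == 1))
    print(f"training accuracy: {acc:.2f}")
    ```

    Because only the small head is trained, this kind of retraining needs far less data and compute than training a network from scratch, which is why the NYU team could adapt a general-purpose image model to pathology slides.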

    The researchers then tested the AI on tissue samples taken from cancer patients at NYU. According to the research, the AI was able to correctly diagnose the images between 83% and 97% of the time.

    Aristotelis Tsirigos, the lead author on the study and a pathologist at the NYU School of Medicine, noted that the AI was less effective on actual tissue samples than on the training pictures because the samples weren't as straightforward, containing inflammation, dead tissue, and white blood cells.

    The researchers then gave the AI each tumor's genetic profile along with the images, and found that the AI was not only able to identify which picture showed cancerous tissue, but could also notice very small differences in the samples' appearances that pathologists couldn't see.

    What's next?

    According to Molteni, the researchers intend to train their AI with more data in the coming months, and then might consider creating a company to seek FDA approval for the device.

    Daniel Rubin, director of biomedical informatics at the Stanford Cancer Institute, said the research shows "there's a lot more information in the images than what a human being can pull out." However, he added, "The big question is, will this be trustworthy enough to replace current practice?"

    Tsirigos said he's excited about the possibility of "being able to ask deeper questions because the machine is taking care of the simple ones." He said, "Leave the object recognition to the machines, and there's still plenty [of] medicine left for the humans" (Molteni, Wired, 9/17).

    What 125 health care leaders told us about the next stage of their AI strategy

    The Advisory Board recently conducted a national study of 125 health care leaders to better understand how they plan to apply advanced analytics—including artificial intelligence (AI) and machine learning—to simplify operations, improve patient interactions, and manage the total cost of care.

    Join the webconference on Thursday, Dec. 13 at 3 pm ET to learn what business and clinical problems are the top priorities for advanced analytics techniques, including AI.

    Register Now

    Have a Question?

    Ask our experts a question on any topic in health care by visiting our member portal, AskAdvisory.