When Meredith Broussard, an associate professor at the Arthur L. Carter Journalism Institute at New York University (NYU), was diagnosed with breast cancer in 2019, she noticed her scans had been reviewed by artificial intelligence (AI). Writing for Wired, Broussard details how she devised her own experiment to see if AI could accurately diagnose her cancer.
Devising an experiment
When Broussard was diagnosed with breast cancer, she decided to look through her medical chart online. She noticed a note attached to her mammography report that said her film was read by a doctor — and an AI algorithm.
A year later, following an eight-hour surgery and months of recovery, Broussard is now cancer free, but writes that she was "still curious about the AI that read my films." So, she decided to "investigate what was really going on with breast cancer AI detection."
Many patients don't know their care can involve AI systems, as few people read the medical consent agreements they sign before treatment, Broussard writes.
"I think that patients will find out that we are using these approaches," said Justin Sanders, a palliative care physician at Dana-Farber Cancer Institute and Brigham and Women's Hospital. "It has the potential to become an unnecessary distraction and undermine trust in what we're trying to do in ways that are probably avoidable."
Broussard wanted to see if AI would agree with her doctor's cancer diagnosis, so she contacted a colleague in NYU's data science department named Krzysztof Geras who was building breast cancer detection AI.
Can AI detect cancer?
After struggling to download the breast scans in her electronic medical record, Broussard took a screenshot of the images to give to Geras's AI.
According to Broussard, every cancer detection program works differently, using its own specific set of variables. "Geras's program takes two different views of a breast," Broussard writes. "They are semi-circular images with light-colored blobs inside."
Broussard notes that AI doesn't diagnose cancer the same way a human would. "A radiologist looks at multiple pictures of the affected area, reads a patient's history, and may watch multiple videos taken from different perspectives," she writes. "An AI takes in a static image, evaluates it relative to mathematical patterns found in the AI's training data, and generates a prediction that parts of the image are mathematically similar to areas labeled (by humans) in the training data."
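The idea Broussard describes — scoring parts of an image by their mathematical similarity to regions humans labeled in the training data — can be illustrated with a toy sketch. This is not Geras's actual model; the patches, labels, and similarity measure here are all hypothetical stand-ins for the much richer patterns a real system learns.

```python
import math

# Toy illustration (NOT Geras's model): score an image patch by its
# similarity to patches that humans labeled "suspicious" in training data.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors of pixel values."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Hypothetical pixel-intensity patches labeled suspicious by humans.
labeled_suspicious_patches = [
    [200, 210, 205, 198],
    [190, 220, 215, 200],
]

def suspicion_score(patch):
    """Highest similarity to any human-labeled suspicious patch (0 to 1)."""
    return max(cosine_similarity(patch, p) for p in labeled_suspicious_patches)

# A new patch that closely resembles the labeled examples scores near 1.
print(round(suspicion_score([195, 212, 208, 199]), 3))
```

The key point of the sketch matches Broussard's description: the program never "sees" a tumor, it only measures how mathematically close a static image is to examples labeled by people.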
Initially, Broussard provided the AI with her scans, but the model found no significant signs of cancer. After more research, Broussard learned that her image was too low-resolution for the model. And although the scan appeared black and white to the human eye, the computer had stored it as a full-color image rather than the single-channel black and white image the AI was expecting.
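The channel mismatch Broussard ran into is a common one: a screenshot stores three numbers (red, green, blue) per pixel even when the picture looks gray, while a model trained on mammography films expects one intensity value per pixel. A minimal, hypothetical converter (not part of Geras's pipeline) using the standard ITU-R BT.601 luminance weights:

```python
# A "black and white" screenshot still carries three color channels per
# pixel; a grayscale-trained model expects a single intensity channel.
# This toy converter collapses (R, G, B) tuples to one value per pixel
# using the standard ITU-R BT.601 luminance weights.

def rgb_to_grayscale(pixels):
    """Convert a list of (R, G, B) tuples to single-channel intensities."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

# Gray-looking pixels have three equal channel values, but are still RGB:
screenshot_row = [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
print(rgb_to_grayscale(screenshot_row))  # [0, 128, 255]
```

In practice, even a correct channel conversion would not have fixed Broussard's other problem, the screenshot's low resolution, which is why she ultimately needed the original hospital images.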
Broussard then acquired the high-resolution black and white images of her scans from her hospital and ran them through the AI, which then correctly identified the area where Broussard's cancer was. It also generated a score on a scale of 0 to 1, marking Broussard's as a 0.213.
According to Geras, a 0.213 score is "really high." The number isn't a percentage, he explained, but a score on a 0-to-1 scale measured against a threshold for concern. Geras couldn't recall the exact threshold, only that it was lower than 0.2.
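The way such a score becomes a flag can be sketched in a few lines. Since Geras recalled only that the model's threshold was somewhere below 0.2, the 0.2 used here is illustrative, not the model's actual cutoff:

```python
# Minimal sketch of turning a 0-to-1 detection score into a review flag.
# THRESHOLD is illustrative: Geras recalled only that the real value
# was lower than 0.2.
THRESHOLD = 0.2

def flag_for_review(score: float) -> bool:
    """Return True if the model's suspicion score crosses the threshold."""
    return score >= THRESHOLD

print(flag_for_review(0.213))  # True: Broussard's score exceeded the threshold
```

Under any threshold below 0.2, Broussard's 0.213 would have been flagged, which is consistent with Geras calling it "really high."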
"Smart people disagree about the future of AI diagnosis and its potential," Broussard writes. "I remain skeptical that this or any AI could work well enough outside highly constrained circumstances to replace physicians, however. Someday? Maybe. Soon? Unlikely. As I found in my own inquiry, machine learning models tend to perform well in lab situations and deteriorate dramatically outside the lab." (Broussard, Wired, 3/15)