Breast density notification laws now exist in 38 states, and in February of this year, President Trump signed into law a directive to implement breast density reporting requirements nationally. This continued focus on breast density reporting stems in large part from breast density being a factor in whether a patient is at high risk of developing breast cancer.
However, it is difficult for physicians to classify this risk accurately. Breast density is only one of many factors that contribute to risk, and the most accurate current model for estimating breast cancer risk, the Tyrer-Cuzick model (version 8), is not without flaws. The area under the curve (AUC) is a common measure of accuracy for models designed to separate inputs into two distinct groups, which in this case are low-risk and high-risk. An AUC of 0.5 means the model is no better than a coin flip at separating the two groups, whereas a perfect model would have an AUC of 1. The Tyrer-Cuzick model's AUC is 0.62 for white women, but it performs significantly worse for African American women, with an AUC of just 0.45, below even the coin-flip threshold.
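To make the AUC concrete, here is a minimal Python sketch that computes it as the probability that a randomly chosen future-cancer case receives a higher risk score than a randomly chosen cancer-free case. The scores and labels are invented for illustration, not data from the study:

```python
# AUC from first principles: for every (positive, negative) pair, count a
# "win" when the positive case outscores the negative one, half a win on
# ties, then average. Labels: 1 = developed cancer, 0 = did not.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every future-cancer case above every cancer-free
# case is perfect (AUC = 1.0); identical scores for everyone give 0.5.
assert auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]) == 1.0
assert auc([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]) == 0.5
```

This pairwise-ranking view is why an AUC below 0.5, like the 0.45 above, indicates a model that tends to rank cases in the wrong order.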
In recent years, the rise of artificial intelligence and deep learning models has led many to predict that these technologies will transform radiology as we know it.
Deep learning used to develop a better model
Researchers at Massachusetts General Hospital and MIT used deep learning to develop a new, more accurate model for breast cancer risk prediction. They trained a deep learning model on full-field mammography images paired with a five-year outcome: whether or not the patient developed breast cancer. At the same time, the researchers created a logistic regression model that mapped a patient's risk factors to the same five-year outcome. They then developed a hybrid model that combined the image and risk-factor information into a single prediction of risk.
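The combination step can be pictured with a minimal sketch. Everything here is illustrative: the function names and weights are invented, and the researchers' actual hybrid model learns the fusion from the data rather than from hand-set coefficients:

```python
import math

def sigmoid(x):
    """Squash any real number into a (0, 1) probability-like score."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical fusion step: combine the image model's risk score with the
# risk-factor logistic regression's score into one prediction. The
# weights below are made up; in practice they would be learned from the
# five-year outcome labels.
def hybrid_risk(image_score, factor_score, w_img=2.0, w_fac=1.0, bias=-1.5):
    return sigmoid(w_img * image_score + w_fac * factor_score + bias)
```

The design intuition is simple: when both inputs signal elevated risk, the combined score rises above what either input would produce alone, which is how the hybrid can outperform each single-source model.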
The result? A model significantly better at predicting breast cancer risk, particularly for African American women, who have a 40% higher mortality rate from breast cancer than white women despite similar levels of cancer incidence. The hybrid deep learning model is equally accurate for white and African American women, with an AUC of 0.71 for both subgroups.
Determining risk from limited information
This was not the study's only significant result: the deep learning model trained on mammography images alone also outperformed the Tyrer-Cuzick model, placing 31% of all patients who later developed breast cancer in the top-risk decile, compared with only 18% for the Tyrer-Cuzick model. This means that patients who do not know all of their risk factors, such as family history, can still receive an accurate assessment of their future risk.
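The top-decile comparison boils down to a simple question: of all patients who went on to develop breast cancer, what fraction did the model place in its highest-risk 10%? A sketch of that metric (function name and example data are invented for illustration):

```python
def top_decile_capture(scores, outcomes):
    """Fraction of future-cancer patients (outcome == 1) whose risk
    score falls in the top 10% of all scores. Higher is better."""
    ranked = sorted(zip(scores, outcomes), key=lambda pair: -pair[0])
    cutoff = max(1, len(ranked) // 10)          # size of the top decile
    captured = sum(outcome for _, outcome in ranked[:cutoff])
    return captured / sum(outcomes)

# Toy cohort of 20 patients with strictly descending risk scores.
scores = [1.0 - i / 20 for i in range(20)]
# If both future cancers get the two highest scores, capture is 100%.
assert top_decile_capture(scores, [1, 1] + [0] * 18) == 1.0
```

By this metric, the image-only model scored 31% versus 18% for Tyrer-Cuzick on the study's cohort.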
This new image-only model would also allow screening programs to provide a more accurate risk assessment automatically, based solely on the mammogram images. Immediate feedback on potential risk is a large part of why breast density notification laws have proliferated, so the image-only deep learning model gives screening programs an opportunity to go beyond basic density notification and offer patients a much more accurate view of their future risk.
Subscribe to the Reading Room
To get more of our top insights, make sure you're subscribed to the "Reading Room" blog.