Artificial intelligence (AI) has the potential to transform the healthcare industry, but it comes with a unique set of challenges. Leaders must understand the risks and work to mitigate them to harness the power of AI. The five most important of these challenges are data bias, data accuracy and reliability, data sharing, transparency and explainability, and liability. By addressing each of these challenges, healthcare organizations can improve care delivery and efficiency while ensuring the ethical deployment of AI models.
Many of the challenges organizations face with AI are not new. Workflow disruption, patient and clinician hesitancy, and reliability concerns have all accompanied previous health technologies. But the broad applicability of AI to healthcare will require leaders to think more critically about AI development, implementation, and utilization than they have in the past.
Despite the challenges AI presents in healthcare, its positives will outweigh the potential negatives. Dismissing the technology, or using these challenges as an excuse to ignore its potential, will only hurt your organization, because AI will be necessary to stay competitive.
Although AI challenges vary by use case, common themes, such as data and ethics, run across them. Below, we explain which challenges are unique to AI and how healthcare leaders can overcome them. Investing the time and resources to mitigate these challenges will allow organizations to fully harness AI and improve their organizational efficiency and care delivery.
AI could perpetuate societal bias and deliver incorrect medical information to clinicians and patients. Bias can creep into all AI solutions, and this concern is further heightened with predictive and generative AI models.
AI models learn to encode bias from biased training data. Biases in training data are often due to second-order effects of societal or historical inequities in healthcare. Healthcare data can include bias if data is incomplete, fails to capture all factors that influence health outcomes, lacks representation of certain demographics, includes outdated medical practices, or overemphasizes specific treatments. Additionally, the healthcare industry has a long history of disparate health outcomes, especially across racial lines, which contributes to biased data.
Biased AI models will vary in their predictive capabilities from one population to the next. If models are not equally effective for all users, there will be direct implications on health equity and fairness. Encoded bias can negatively affect clinical decision-making and lead to poorer health outcomes for historically marginalized and underrepresented communities.
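One common safeguard is to audit model performance by demographic subgroup before and after deployment. The sketch below is a minimal, hypothetical example, assuming a pandas DataFrame with placeholder columns "group," "y_true," and "y_pred" from a binary clinical prediction model:

```python
# Minimal sketch of a per-subgroup performance audit for a binary
# clinical prediction model. Column names ("group", "y_true",
# "y_pred") are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def audit_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Report sensitivity (recall) and precision per demographic group."""
    rows = []
    for group, sub in df.groupby("group"):
        rows.append({
            "group": group,
            "n": len(sub),
            "sensitivity": recall_score(sub["y_true"], sub["y_pred"]),
            "precision": precision_score(sub["y_true"], sub["y_pred"]),
        })
    return pd.DataFrame(rows)
```

Large gaps in sensitivity or precision between groups are an early signal that a model may widen, rather than narrow, existing disparities.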
Accuracy and reliability are important for any health technology, but AI adds another level of complexity to this ever-present challenge. AI models can produce “hallucinations”: outputs that are not grounded in the model’s training data. These outputs are fabricated by the model and are neither factual nor reliable. Even when an AI model does not hallucinate, the generated output may be too vague or broad to be useful to the end user.
AI hallucinations often stem from contradictory information in training data, which leads the model to misclassify and misinterpret data. Model degradation caused by population shifts or outdated data can also increase the likelihood of hallucinations. Hallucinations are most dangerous when users without adequate AI experience rely too heavily on the technology. Additionally, AI may be unreliable if models produce vague answers that are not technically wrong but are too broad to be useful. When an AI model tries to correct for potential inaccuracies, it can produce predictions with large margins of error.
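Population shift, at least, is measurable. As a minimal sketch (not a production monitoring pipeline), the example below assumes you retained a sample of a numeric input feature, such as patient age, from training time and compares it against live production data with a two-sample Kolmogorov-Smirnov test:

```python
# Minimal sketch of input-drift monitoring, one way to catch the
# population shifts that degrade deployed models over time.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(reference: np.ndarray, live: np.ndarray,
                alpha: float = 0.01) -> bool:
    """Flag drift when the live feature distribution differs
    significantly from the training-era distribution."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

# Synthetic example: simulate a shift in patient age.
rng = np.random.default_rng(0)
reference = rng.normal(55, 12, size=5_000)  # training-era ages
live = rng.normal(62, 12, size=5_000)       # shifted production ages
print(has_drifted(reference, live))         # True: review or retrain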
AI outputs are conveyed with a high level of confidence, making it easy for users to accept them as truth. If users over-rely on AI and become complacent with the automation, they will not verify the AI output. This could contribute to the spread of scientific misinformation and negatively affect health outcomes.
Data sharing is necessary to push AI innovation forward in healthcare. Combining data sets from various sources can improve the quality, diversity, and representativeness of training data and, in turn, algorithm performance. However, combining data sets from various sources can also expose patient data and threaten organizational intellectual property (IP). Sharing sensitive patient data has a substantial impact on patient autonomy, AI trustworthiness, and legal and ethical compliance.
Because AI models benefit from vast amounts of training data and continuous learning, development places a heavy emphasis on sharing or purchasing data. But the movement of large quantities of protected health information (PHI) and personally identifiable information (PII) increases the risk of data privacy breaches. On the intellectual property side, it is hard to determine who holds copyright in the work an AI model generates, especially if it resembles an existing creation. Therefore, when working with a vendor on AI model development, healthcare providers have a responsibility to evaluate potential partners with significant scrutiny.
Data sharing increases the risk of a data breach or leak, which leads to serious legal and ethical concerns. Additionally, open models, such as ChatGPT, are not HIPAA compliant. If clinicians enter PHI into an open AI model, the open model will use this data to learn, and the information will no longer be private. Such data leaks could have legal implications and negative effects on patient trust and acceptance of AI. In addition to concerns with patient privacy, data sharing can lead to a loss or leak of organizational IP.
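One common mitigation is to scrub obvious identifiers from free text before it ever reaches an external model. The sketch below is purely illustrative; regex-based scrubbing is not sufficient for HIPAA compliance or true de-identification, which require vetted tooling and legal review:

```python
# Illustrative sketch of scrubbing obvious identifiers from free text
# before it leaves the organization. NOT sufficient for HIPAA
# compliance or true de-identification on its own.
import re

PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[MRN]":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace common identifier patterns with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

print(scrub("Pt MRN: 4471023, call 555-867-5309 re: follow-up."))
# -> "Pt [MRN], call [PHONE] re: follow-up."
```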
AI models are not transparent or easily explainable because users do not know where the training data comes from and do not understand the inner workings of the algorithm. This makes some AI a “black box,” a term for users’ inability to see how and why these deep learning algorithms produce their outputs. Lack of transparency and explainability can decrease the trustworthiness of an AI model.
Users act on AI decisions with little to no understanding of how or why the outputs were generated. Black box AI models use billions of parameters to inform their decisions, making it impossible, even for data scientists, to understand how a model produces its outputs. This lack of transparency can negatively impact a model’s trustworthiness. Models with continuous learning capabilities exacerbate the problem further because they keep learning from data that was never part of their original training set.
Lack of transparency and explainability can lead to distrust or improper use of a model, which may negatively impact patient care. The black box problem also limits the effectiveness of a human-in-the-loop approach because users cannot make informed decisions when they do not understand why an AI model is generating a certain output. AI models will never be fully explainable due to the black box problem and AI's continuous learning capabilities. However, organizations should provide some level of transparency around data collection and management to help users understand what data the AI will use to generate outputs. Increasing transparency can help mitigate bias, increase reliability, and ensure ethical use of the model.
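Post-hoc explanation techniques can provide part of that transparency even for opaque models. As one hedged example, the sketch below trains a classifier on synthetic data (the feature names are hypothetical) and uses scikit-learn's permutation importance to show which inputs most influence predictions:

```python
# Minimal sketch of post-hoc explainability via permutation importance.
# Uses synthetic data and hypothetical feature names; this reveals which
# inputs drive predictions, not the model's full inner workings.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=6, random_state=0)
feature_names = ["age", "prior_admits", "a1c", "bmi", "sbp", "los_days"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops indicate features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:>12}: {importance:.3f}")
```

Permutation importance does not open the black box, but it gives clinicians and governance teams a defensible summary of what the model is paying attention to.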
New medical technologies typically generate new waves of liability concerns. AI might do the right thing most of the time, but there will be instances where the technology fails. It’s hard to determine who will be responsible when this happens. To date, no court in the United States has considered the question of liability for medical injuries caused by relying on AI-generated information. Many people will try to blame the data, the algorithm, or the user, but healthcare leaders will have to be ready to accept some responsibility and know there will be some level of risk associated with these models.
The lack of current federal and state guidance on AI liability shifts the responsibility of determining liability and regulation of AI onto healthcare leaders. AI algorithms could subject physicians, health systems, and/or algorithm designers to liability. Physicians and health systems may be liable under malpractice or other negligence theories, while developers could be subject to product liability. Models with continuous learning capabilities make it even more difficult to trace liability back to an operator or designer.
Manufacturers and providers may find it difficult to predict the liability risks associated with creating or using AI models in healthcare. This lack of clarity around who will be held liable in the case of AI-caused harm could create hesitancy around using AI among health systems, physicians, and patients. Providers should be prepared to face increased liability risk when implementing AI: given the Hippocratic oath and their duty to protect human life, they are often held to a different standard than technology developers.
While these five challenges may feel hard to address, there is no excuse to ignore them or throw your hands up in defeat. AI has the potential to provide innovative solutions for some of healthcare's greatest problems, such as clinician administrative burden and clinician burnout. In many instances, you will be able to rely upon your organization's existing structures, personnel, and resources to address these challenges.
You don’t need to overcomplicate low-risk use cases of AI, which require little to no data management, need less scrutiny around bias and reliability, and are more transparent. Starting with a low-risk, high-impact use case will allow your organization to become more familiar with AI without the challenges of more complex models. This incremental and thoughtful approach to AI and its challenges will help your organization create a solid foundation to implement more advanced AI use cases moving forward. Most importantly, it will help keep your organization competitive and relevant in our ever-changing healthcare landscape.