
The board’s guide to 5 AI challenges

Artificial intelligence (AI) has the potential to transform care delivery, but healthcare board and C-suite leaders must understand the risks they face and work to mitigate them to unlock AI's full potential. Use this guide to get a baseline understanding of the major risks and learn how your organization can ethically deploy AI models.

AI comes with a unique set of challenges, and leaders must understand and mitigate them to harness its power. The five most important are data bias, data accuracy and reliability, data sharing, transparency and explainability, and liability. By addressing each of these challenges, healthcare organizations can improve care delivery and efficiency while ensuring ethical deployment of AI models.


Introduction

Many of the challenges organizations face with AI are not new. Workflow disruption, patient and clinician hesitancy, and reliability concerns have all accompanied previous health technologies. But AI's broad applicability across healthcare will require leaders to think more critically about its development, implementation, and use than they have in the past.

Despite the challenges AI presents in healthcare, its benefits are likely to outweigh the potential negatives. Using these challenges as an excuse to dismiss the technology will only hurt your organization, because AI will be necessary to stay competitive.

Although AI challenges will vary depending on the use case, there are some common themes across challenges, such as data and ethics. Below, we explain which challenges are unique to AI and how healthcare leaders can overcome them. Investing the time and resources into mitigating these challenges will allow organizations to fully harness AI and improve their organizational efficiency and care delivery.


The challenges

Challenge 1: Data bias

Why is this a problem?

AI could perpetuate societal bias and deliver incorrect medical information to clinicians and patients. Bias can creep into all AI solutions, and this concern is further heightened with predictive and generative AI models.

How does it manifest?

AI models learn to encode bias from biased training data. Biases in training data are often due to second-order effects of societal or historical inequities in healthcare. Healthcare data can include bias if data is incomplete, fails to capture all factors that influence health outcomes, lacks representation of certain demographics, includes outdated medical practices, or overemphasizes specific treatments. Additionally, the healthcare industry has a long history of disparate health outcomes, especially across racial lines, which contributes to biased data.

What are the potential impacts?

Biased AI models will vary in their predictive capabilities from one population to the next. If models are not equally effective for all users, there will be direct implications on health equity and fairness. Encoded bias can negatively affect clinical decision-making and lead to poorer health outcomes for historically marginalized and underrepresented communities.

How can you act on this challenge:

  • Question the AI model’s training data. Data must be diverse and representative of the population your organization serves. An algorithm that is unbiased for one organization may not translate well to your organization depending on your patient demographics.
  • Create a diverse, multidisciplinary team for evaluating and testing AI algorithms. You cannot judge an AI model only by the effectiveness of the technology, you must also consider the potential societal impacts of the model. For example, UNC Health has formed an AI & Automation Advisory (AAA) group that includes data scientists, clinical leaders, ethicists, privacy and security experts, and operational experts.
  • Consider how your organization will monitor and manage model outputs. Creating a standardized list of metrics or KPIs for monitoring bias in model outputs will ensure that you can quickly uncover and mitigate biased results (see the sketch below). Different AI use cases require different levels of scrutiny. Models providing users with diagnoses or treatment recommendations, for example, will be more susceptible to bias than a model used for ambient listening.
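As a concrete illustration, here is a minimal sketch of one such bias KPI: the gap in true positive rate between demographic groups (a measure related to equal opportunity). The column names, data, and alert threshold are all hypothetical; your own metrics and thresholds should come from your governance process.

```python
import pandas as pd

# Hypothetical audit extract: one row per patient, with the model's
# prediction, the observed outcome, and a demographic group label.
audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   0,   0,   1,   0,   1],
    "actual":    [1,   1,   1,   1,   0,   1,   0,   0],
})

# True positive rate per group: among patients who have the condition,
# what share does the model correctly flag?
positives = audit[audit["actual"] == 1]
tpr_by_group = positives.groupby("group")["predicted"].mean()
gap = tpr_by_group.max() - tpr_by_group.min()

print(tpr_by_group)
print(f"Equal-opportunity gap: {gap:.2f}")

# Hypothetical governance policy: gaps above 0.10 trigger a review.
if gap > 0.10:
    print("Bias KPI exceeded threshold -- route to the AI advisory group for review.")
```

Running a check like this on a regular cadence, for every deployed model, is what turns "monitor for bias" from a principle into an operational process.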

Challenge 2: Data accuracy and reliability

Why is this a problem?

Accuracy and reliability are important for any health technology, but AI adds another level of complexity to this ever-present challenge. AI models can produce “hallucinations,” which are AI outputs that are not grounded in the model’s training data. These outputs are made up by the model and are not factual or reliable. Even when an AI model does not hallucinate, the generated output may be too vague or broad to be reliable to the end user.

How does it manifest?

AI hallucinations usually occur due to contradictory information in training data, which leads the model to misclassify and misinterpret data. Model degradation caused by population shifts or outdated data can also increase the likelihood of hallucinations. Hallucinations are most damaging when users without adequate AI experience rely too heavily on the technology. Additionally, AI may be unreliable if it produces answers that are not technically wrong but are too vague or broad to be useful. And when a model tries to correct for potential inaccuracies, it can produce predictions with large margins of error.

What are the potential impacts?

AI outputs are conveyed with a high level of confidence, making it easy for the user to accept them as truth. If users over-rely on AI and become complacent with the automation, they will not verify its outputs. This could contribute to the spread of scientific misinformation and negatively affect health outcomes.

How can you act on this challenge:

  • Question the quality of the AI model’s training data. Like the data bias challenge, ensuring accuracy will also depend on the quality of training data. Training data should be clean, verified, and contextually relevant to the AI model’s use case.
  • Use AI as an assistant, not as an autonomous actor. Currently, there is no way to fully eliminate AI hallucinations. Therefore, this technology requires frequent user feedback and monitoring to mitigate the potential effects of hallucinations. 
  • Question how AI is reaching its conclusions. Determine whether users can ask the AI they are working with (especially generative AI) to cite its sources. Cited sources can themselves be hallucinated, so verifying that each citation actually exists, and supports the claim, is one way to check whether an output is reliable (see the sketch below).
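As a lightweight first pass on that verification, the sketch below extracts any URLs from a model's response and checks that each one actually resolves. The `get_model_response` function is a hypothetical stand-in for whatever AI interface your organization uses, and the cited URL is a placeholder. Note that a resolving link proves only that the source exists, not that it supports the claim, so a clinician still needs to read it.

```python
import re
import requests

def get_model_response(prompt: str) -> str:
    """Hypothetical stand-in for a call to your organization's AI model."""
    return ("Beta blockers reduce mortality after myocardial infarction "
            "[source: https://example.org/placeholder-study].")

def extract_urls(text: str) -> list[str]:
    """Pull anything that looks like a cited URL out of the response."""
    return re.findall(r"https?://[^\s\]\)]+", text)

def url_resolves(url: str) -> bool:
    """Check that the cited page exists (HTTP status below 400)."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=5)
        return resp.status_code < 400
    except requests.RequestException:
        return False

answer = get_model_response("What do beta blockers do after a heart attack? Cite sources.")
for url in extract_urls(answer):
    status = "resolves" if url_resolves(url) else "MAY BE HALLUCINATED"
    print(f"{url}: {status}")
```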

Challenge 3: Data sharing

Why is this a problem?

Data sharing is necessary to push AI innovation forward in healthcare. Combining data sets from various sources can improve the quality, diversity, and representativeness of training data, and with it algorithm performance. However, it can also lead to patient data leaks and threats to organizational intellectual property (IP). Sharing sensitive patient data has a substantial impact on patient autonomy, AI trustworthiness, and legal and ethical compliance.

How does it manifest?

AI models benefit from vast amounts of training data and continuous learning, which puts a premium on sharing or purchasing data. The movement of large quantities of protected health information (PHI) and personally identifiable information (PII) increases the risk of data privacy breaches. On the intellectual property side, it is hard to determine who owns the copyright to work an AI model generates, especially if it resembles an existing creation. Therefore, when working with a vendor on AI model development, healthcare providers have a responsibility to evaluate potential partners with significant scrutiny.

What are the potential impacts?

Data sharing increases the risk of a data breach or leak, which raises serious legal and ethical concerns. Additionally, open models, such as ChatGPT, are not HIPAA compliant. If clinicians enter PHI into an open AI model, the model may use that data to learn, and the information will no longer be private. Such data leaks could have legal implications and erode patient trust and acceptance of AI. Beyond patient privacy, data sharing can also lead to a loss or leak of organizational IP.

How can you act on this challenge:

  • Create organization-wide guidelines on which AI models can and cannot be used. To protect against data privacy leaks into open AI models, your organization should consider blocking these models on work devices.
  • Have a plan for how you will de-identify and encrypt data to protect patient information. De-identification preserves the clinical content needed for accurate training data while preventing others from identifying the patient (see the sketch after this list).
  • Reach agreements with vendors to run AI models on local servers or otherwise separate out your use of the AI models to avoid sharing data.
  • Thoroughly evaluate vendors or partners that you may share data with for AI model development. Ensure these partners have the infrastructure and expertise to protect PHI.
  • Give patients the autonomy to opt out of using AI services. Currently, views surrounding AI are polarized. Organizations should disclose to patients when AI is being used and how.
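As a minimal sketch of one common de-identification step, keyed pseudonymization, the example below drops direct identifiers and replaces the medical record number with a keyed hash, so records can still be linked across data sets without exposing the patient. The field names and key are hypothetical, and real de-identification must satisfy HIPAA's Safe Harbor or Expert Determination standards.

```python
import hashlib
import hmac

# Hypothetical secret key held by the data-governance team and never shared
# with the AI vendor. Destroying or rotating it prevents re-identification.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and keep a pseudonymous linkage key."""
    direct_identifiers = {"mrn", "name", "address", "phone", "email"}
    clean = {k: v for k, v in record.items() if k not in direct_identifiers}
    clean["patient_key"] = pseudonymize(record["mrn"])
    return clean

record = {"mrn": "12345", "name": "Jane Doe", "phone": "555-0100",
          "age": 54, "diagnosis": "type 2 diabetes"}
print(deidentify(record))
# Keeps age and diagnosis; identity becomes a hash only the key holder can reproduce.
```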

Challenge 4: Transparency and explainability

Why is this a problem?

AI models are not transparent or easily explainable because users do not know where the training data comes from and do not understand the inner workings of the algorithm. This makes some AI a "black box," a term for users' inability to see how and why a deep learning algorithm produces its output. Lack of transparency and explainability can decrease the trustworthiness of an AI model.

How does it manifest?

Users act on AI decisions with little to no understanding of how or why those outputs were generated. Black box AI models use billions of parameters to inform their decisions, making it impossible, even for data scientists, to trace how the model produces a given output. This lack of transparency can undermine a model's trustworthiness. Models with continuous learning capabilities exacerbate the problem further, because they keep updating on data that was never reviewed during initial training.

What are the potential impacts?

Lack of transparency and explainability can lead to distrust or improper use of a model, which may negatively impact patient care. The black box problem also limits the effectiveness of a human-in-the-loop approach: users cannot make informed decisions when they do not understand why an AI model is generating a certain output. AI models will never be fully explainable, given the black box problem and continuous learning capabilities. However, organizations can provide some transparency around data collection and management to help users understand what data the AI will use to generate outputs. Increasing transparency can help mitigate bias, increase reliability, and ensure ethical use of the model.

How can you act on this challenge:

  • Be transparent where you can be, such as about your data collection and data management processes. Providing end users with transparency wherever possible will help mitigate the potential negative consequences of AI model misuse.
  • Educate users based on digital literacy and use case. Algorithm users should have access to understandable descriptions of the models they are using. Consider who is given access to the model and how much training they need based on their digital literacy. For example, Stanford Health communicates AI implementation broadly to the whole organization while providing in-depth training for model users. Consider how communications and training around AI will fit into your organization's current structures.
  • Think about ways you could build alerts into the AI system. If your AI model can alert the user when it is drawing on information outside its training data, users can more easily identify when to scrutinize an output more heavily (see the sketch after this list).
  • Use black box solutions only if necessary. White box solutions, or models that only include a few simple rules, are easily understandable and often just as accurate as black box models.
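One hedged sketch of how such an alert might work: flag inputs that sit far from the training distribution, since that is where the model's output is least trustworthy. This example uses a simple Mahalanobis-distance check over feature vectors; the features, data, and threshold policy are all hypothetical, and a production system would need a validated out-of-distribution detection method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors the model was trained on (e.g., embeddings
# or structured clinical features), plus one new input to score.
train_features = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
new_input = np.array([4.0, -3.5, 5.0, 0.2])  # deliberately atypical

# Summarize the training distribution with its mean and covariance.
mean = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))

def mahalanobis(x: np.ndarray) -> float:
    """Distance of x from the center of the training distribution."""
    diff = x - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

# Hypothetical policy: alert on anything farther out than 99% of training data.
train_distances = np.array([mahalanobis(x) for x in train_features])
threshold = np.quantile(train_distances, 0.99)

score = mahalanobis(new_input)
if score > threshold:
    print(f"ALERT: atypical input (distance {score:.1f} > {threshold:.1f}); "
          "scrutinize this output closely.")
```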

Challenge 5: Liability

Why is this a problem?

New medical technologies typically generate new waves of liability concerns. AI might do the right thing most of the time, but there will be instances where the technology fails. It’s hard to determine who will be responsible when this happens. To date, no court in the United States has considered the question of liability for medical injuries caused by relying on AI-generated information. Many people will try to blame the data, the algorithm, or the user, but healthcare leaders will have to be ready to accept some responsibility and know there will be some level of risk associated with these models.

How does it manifest?

The lack of current federal and state guidance on AI liability shifts the responsibility of determining liability and regulation of AI onto healthcare leaders. AI algorithms could subject physicians, health systems, and/or algorithm designers to liability. Physicians and health systems may be liable under malpractice or other negligence theories, while developers could be subject to product liability. Models with continuous learning capabilities make it even more difficult to trace liability back to an operator or designer.

What are the potential impacts?

Manufacturers and providers may find it difficult to predict the liability risks of creating or using AI models in healthcare. This lack of clarity around who will be held liable for AI-caused harm could make health systems, physicians, and patients hesitant to use AI. Providers should be prepared to face increased liability risk when implementing AI: bound by the Hippocratic oath and the duty to protect human life, they are often held to a different standard than technology developers.

How can you act on this challenge:

  • Determine who will be accountable for varying instances of AI-related harm. Until there are comprehensive U.S. federal and state regulations on AI liability, your organization must develop guidelines that create a structure for liability. This will foster trust in AI systems and allow for legal and ethical issues with AI to be addressed effectively and efficiently.
  • Monitor AI liability cases moving through the courts and track how legislatures are approaching AI. While you need organizational systems in place to address liability, it is crucial to adapt your organization's liability structures to new federal and state guidelines. Additionally, engaging with legislative bodies will give your organization a voice in shaping future AI regulation.
  • Create organization-wide guidelines on which AI technologies your workforce can and cannot use. As mentioned in the data sharing section, open AI models should not be used with PHI or PII. It is the health system's responsibility to set these guidelines for its workforce.

Parting thoughts

While these five challenges may feel hard to address, there is no excuse to ignore them or throw your hands up in defeat. AI has the potential to provide innovative solutions to some of healthcare's greatest problems, such as clinician administrative burden and burnout. In many instances, you will be able to rely on your organization's existing structures, personnel, and resources to address these challenges.

You don’t need to overcomplicate low-risk use cases of AI, which require little to no data management, need less scrutiny around bias and reliability, and are more transparent. Starting with a low-risk, high-impact use case will allow your organization to become more familiar with AI without the challenges of more complex models. This incremental and thoughtful approach to AI and its challenges will help your organization create a solid foundation to implement more advanced AI use cases moving forward. Most importantly, it will help keep your organization competitive and relevant in our ever-changing healthcare landscape.



AFTER YOU READ THIS
  • You will understand the 5 most important AI challenges.
  • You will understand the AI conversations your organization’s leadership should have.
  • You should share this document with your board to facilitate ethical AI decision making.
