
The 3 kinds of trust necessary for health AI's success

By Eunice Jeong, John League, and Jordan Angers

February 9, 2021

    No matter how exciting an AI product may seem, it will not be successful until stakeholders fully trust it. AI drew significant attention over the past year as the Covid-19 pandemic made digital health necessary for care, and AI products offered new opportunities to advance alternative diagnostic methods, patient wellness, clinical trials, and research. Despite this potential, clinicians and other users don't fully trust AI products in health care, which may deter widespread adoption and hinder their optimal use.

    The Consumer Technology Association's 2021 CES exhibition highlighted the rising interest in health care AI, but more importantly, it focused on how to achieve trust and clarity in AI models. One of the AI-centered CES panels, titled "Trust and the Impact of AI on Health Care," offered a unique perspective on the issue. Rather than treating trust as a single, general concept, the panel broke it down into three separate categories (technical, regulatory, and human), each with its own challenges to overcome before an AI product can meet the standards of all three.

    1. Technical trust ensures the technology is intellectually sound and performing as intended

    Challenges

    One obstacle to achieving technical trust is the inexplicability of many AI models: users often don't know why a model made a specific choice. Some of that is by design. It's difficult to receive patents for AI products, so many vendors rely on trade secrecy to avoid revealing information about their products. Although that makes sense from a business standpoint, it leaves users hard-pressed to compare different AI solutions.

    Another obstacle is the inherent "black box" nature of machine learning. Algorithms can make recommendations, but they can't tell you how they arrived at them. That makes it hard for users to trust the model's output, especially given the potential for incorrect or biased models.
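
    To make this concrete, developers often turn to post-hoc explanation techniques that approximate why a model behaves the way it does. Below is a minimal sketch using scikit-learn's permutation importance; the clinical feature names and dataset are hypothetical stand-ins, not a prescription for any particular product:

        # A minimal sketch: train a classifier on a synthetic stand-in for a
        # de-identified clinical dataset, then ask which inputs it leans on.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
        feature_names = ["age", "bmi", "systolic_bp", "hba1c", "prior_admissions"]  # hypothetical

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # Shuffle one feature at a time and measure how much the held-out score
        # drops; a large drop suggests the model relies heavily on that feature.
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        for name, score in sorted(zip(feature_names, result.importances_mean),
                                  key=lambda pair: -pair[1]):
            print(f"{name}: {score:.3f}")

    Techniques like this don't open the black box, but they give users a rough, testable picture of which inputs drive a model's recommendations.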

    Solutions

    Although the black box problem isn't entirely solvable (yet), there are ways to achieve technical trust. Developers should draw on high-quality data from as wide an array of patient populations as possible. Data scientists and users from all backgrounds should be involved in the development process to ensure that the model reflects the patient population and to catch problems that a homogeneous team might miss.
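
    One practical way to check whether a model truly reflects the patient population is to report performance by subgroup rather than only in aggregate, since an overall score can hide gaps. A minimal sketch, assuming a hypothetical evaluation table of model scores, true outcomes, and a demographic column:

        # A minimal sketch: compare model discrimination (AUC) across patient
        # subgroups. The evaluation table below is hypothetical.
        import pandas as pd
        from sklearn.metrics import roc_auc_score

        eval_df = pd.DataFrame({
            "label": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
            "score": [0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3, 0.85, 0.25, 0.55, 0.35],
            "group": ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
        })

        # Aggregate performance can mask subgroup gaps, so report both.
        print("overall AUC:", roc_auc_score(eval_df["label"], eval_df["score"]))
        for group, sub in eval_df.groupby("group"):
            print(f"group {group} AUC:", roc_auc_score(sub["label"], sub["score"]))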

    2. Regulatory trust ensures the technology passes all applicable regulatory and legal requirements

    Challenges

    The lack of regulatory oversight of AI may make a product seem unreliable to end users. Not all AI products are considered medical devices, and many products fall under the "wellness" category, which is not subject to FDA regulation and does not receive FDA approval or third-party vetting. The 21st Century Cures Act also removed some types of AI products from FDA's oversight.

    Solutions

    Although many health care AI products fall outside the purview of official FDA guidelines, they should still undergo strong quality-control and risk-management procedures. Developers can borrow guidelines from other health fields, such as research and development, to shape protocols for evaluating and safely using AI products. A clear framework of standards, backed by strong regulation that establishes the rules around this technology, is essential to achieving transparency.

    3. Human user trust ensures that end users of the AI product (such as clinicians) can successfully and comfortably use the technology

    Challenges

    This kind of trust correlates with the other two types but is also distinct. A product could earn high technical and regulatory trust through strong back-end development and evaluation, yet still lose user trust if it does not also address front-end aspects of the user experience and understandability. That means building an effective interface that accounts for ease of communication, different cultures and communication styles, and personalization options.

    Additionally, continuous learning systems constantly adjust to new information, which can lead to different answers to the same question at different points in time and cause end users to question the model's accuracy.
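
    One way to keep that variation from eroding trust is to measure it directly: score each model update against the same fixed set of reference cases and report how much the answers moved before users encounter the change. A minimal sketch, with hypothetical scores and a hypothetical helper function:

        # A minimal sketch: quantify how much a continuously updated model's
        # answers changed between versions on a fixed reference panel.
        import numpy as np

        def prediction_shift(old_scores, new_scores, threshold=0.5):
            """Fraction of reference cases whose recommendation flipped between
            versions, plus the mean absolute change in risk score."""
            old, new = np.asarray(old_scores), np.asarray(new_scores)
            flipped = np.mean((old >= threshold) != (new >= threshold))
            drift = np.mean(np.abs(new - old))
            return flipped, drift

        # Hypothetical scores from last month's and this month's model versions.
        old = [0.42, 0.91, 0.55, 0.30, 0.78]
        new = [0.48, 0.88, 0.45, 0.31, 0.80]
        flipped, drift = prediction_shift(old, new)
        print(f"recommendations flipped: {flipped:.0%}, mean score change: {drift:.3f}")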

    Solutions

    Input from end users (often clinicians) is needed to explain the data's context and ensure the product's success. Data can be correct, complete, and relevant, but clinicians provide knowledge and context about how the data is sourced. There is rarely a one-size-fits-all AI product for every care context. Different algorithms will work in different contexts, and clinicians are adept at making these distinctions. Without their input, conclusions can become skewed and faulty.

    It's also important to realize that AI models will be wrong sometimes. Clinicians need to be prepared to make decisions that go against the AI's output. For clinicians to do this, a clear decision-making process must be established before deploying the model, and "tie breakers," such as another clinician, should be ready to review complicated decisions.
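
    In practice, that process can be as simple as an explicit routing rule agreed on before deployment. The sketch below shows one hypothetical way to encode it: cases where the model is uncertain, or where the clinician disagrees with the model, go to a second reviewer.

        # A minimal sketch of a "tie breaker" rule. Thresholds and names are
        # hypothetical and would be set by the deploying organization.
        def route_case(model_score, clinician_agrees, low=0.35, high=0.65):
            """Decide who owns the final call for a given case."""
            if low < model_score < high:
                return "second clinician review"  # model near its decision boundary
            if not clinician_agrees:
                return "second clinician review"  # disagreement triggers the tie breaker
            return "proceed with agreed decision"

        print(route_case(0.92, clinician_agrees=True))   # proceed with agreed decision
        print(route_case(0.50, clinician_agrees=True))   # second clinician review
        print(route_case(0.90, clinician_agrees=False))  # second clinician review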

     

    Learn more: How to combat AI bias

    Artificial intelligence (AI) can offer a significant opportunity to improve health care, but only if AI models are developed and monitored appropriately. The data we use to feed algorithms, and the humans who oversee these models, may be perpetuating biases that are compounded by intelligent automation.

    In this infographic, we outline a number of challenges to look out for when developing and using AI models, and various steps health care stakeholders can take to reduce the risk of algorithmic bias.
