The personalization enigma, and why technology still comes up short

By Andrew Rebhan

July 20, 2021

    We humans are a fussy bunch.

    We seek customization. We like recommendations from Amazon and Netflix that suit our habits and preferences. We want to be treated as individuals, and not feel like we're a cog in the machine. And we want to feel taken care of, especially when it comes to health care.

    In many ways, technology can help meet those needs, but it still encounters cultural and social barriers. What happens as technology increasingly hyper-personalizes data for us and guides us throughout our day? What happens when smart sensors and other devices start to anticipate our every need? Will we embrace it, or will we always need a human in the loop for reassurance?

    I came across three publications that raised these questions, and I wanted to dig deeper into what this means for the ongoing development of digital health, consumerism, and the patient-physician relationship.

    Exhibit A: Personalized coaching versus generic coaching

    In a recent study, researchers from New York University, Carnegie Mellon University, and the Harbin Institute of Technology wanted to evaluate whether consumer adoption of mobile health apps would lead to tangible behavior change that improves health outcomes (e.g., reduced hospital visits and medical expenses over time).

    The researchers partnered with a health app provider to run a randomized study of over 1,000 patients with chronic diabetes. The intervention group received a health app (either mobile or web-based) to track their daily activities, while the control group had no access to the app.

    Overall, the intervention group showed greater improvement across various metrics compared with the control group, including a significant reduction in blood glucose and glycated hemoglobin levels, higher levels of daily exercise, improved diets, and better sleep, among other benefits. They were also more likely to use telehealth, reducing the need for hospital visits.

    But it was the other findings of the study that piqued my interest. 

    The researchers noted that patients using the app received two text messages per week, meant to provide appropriate nudges or reminders for healthy behaviors. However, these messages were delivered in two ways: some patients received personalized reminders (i.e., advice using patient-specific health data), while others received generic reminders (i.e., messages sharing general knowledge about diabetes care). The researchers gave the following example demonstrating the difference:

    • Personalized text: "Dear Mr. XX, you did not exercise at all yesterday. Take a 45-minute walk today as it will help control your blood glucose levels."
    • Generic text: "Regular exercise at moderate intensity is very helpful for controlling blood glucose."
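
    To make that design difference concrete, here is a minimal sketch of the branching involved (hypothetical Python; the study does not publish its messaging logic, so the function and field names are my own):

        import random

        GENERIC_NUDGES = [
            "Regular exercise at moderate intensity is very helpful "
            "for controlling blood glucose.",
            "A balanced, low-sugar diet supports stable blood glucose levels.",
        ]

        def personalized_nudge(name: str, exercise_minutes_yesterday: int) -> str:
            """Build a patient-specific reminder from tracked activity data."""
            if exercise_minutes_yesterday == 0:
                return (f"Dear {name}, you did not exercise at all yesterday. "
                        "Take a 45-minute walk today as it will help control "
                        "your blood glucose levels.")
            return (f"Dear {name}, you exercised for "
                    f"{exercise_minutes_yesterday} minutes yesterday. Keep it "
                    "up, as it helps control your blood glucose levels.")

        def weekly_nudge(arm: str, name: str, exercise_minutes_yesterday: int) -> str:
            """Return the message appropriate to a patient's randomized arm."""
            if arm == "personalized":
                return personalized_nudge(name, exercise_minutes_yesterday)
            return random.choice(GENERIC_NUDGES)

    Note how the personalized branch necessarily surfaces the patient's own (possibly disappointing) data back to them; as the findings below show, that detail turns out to matter.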

    The result? Personalized mobile messages were not as effective as generic messages at driving healthy behaviors and had a smaller effect on patient app engagement and lifestyle changes. For example, generic texts were 18% more effective than personalized texts at reducing glucose levels over the course of the study.

    Patients receiving personalized texts also took 41% fewer daily steps and logged 28% less total exercise time than those receiving generic messages. However, the study also showed that personalized messages were more effective than generic texts at driving patients to use telehealth.

    The researchers gathered post-study feedback to learn more about these findings and discovered that some participants found the personalized messages intrusive and annoying (too frequent, or packed with too much information); some even felt they were being judged, which demotivated them and ultimately led to a decline in app engagement.

    Exhibit B: The AI that knows me versus the doctor who knows me

    In a separate study, researchers from Penn State University and the University of California-Santa Barbara also touched upon matters of personalization, but this time using AI.

    Roughly 300 patients were recruited to participate in two separate phases. In the first phase, each patient was randomly assigned to interact through a chat function with either a chatbot operated by a human doctor, a chatbot run entirely by AI, or a chatbot operated by an AI-assisted physician.

    In phase two, patients would interact with the same doctor/AI as the first phase, but with a twist: The doctor/AI would either call the patients by their first name and recall their last interaction, or would ask for their name again and repeat questions about their medical history. In essence, the researchers were testing the effects of interacting with a doctor/AI who either remembers you from the previous encounter or doesn't.
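
    As a rough illustration of that phase-two manipulation (hypothetical Python, not the researchers' implementation), the two conditions amount to toggling whether the agent draws on stored session data:

        from dataclasses import dataclass

        @dataclass
        class PatientRecord:
            name: str
            last_topic: str  # e.g., "trouble sleeping"

        def phase_two_greeting(record: PatientRecord, remembers: bool) -> str:
            """Open the second chat under one of the two memory conditions."""
            if remembers:
                return (f"Welcome back, {record.name}. Last time we discussed "
                        f"{record.last_topic}. How have things been since?")
            # The no-memory condition re-asks for information already provided.
            return ("Hello! Could you tell me your name and walk me through "
                    "your medical history?")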

    Those patients who interacted with a human physician responded positively to being called by their first name, expected the doctor to differentiate them from other patients, and were less likely to comply if the physician didn't remember details about their last interaction.

    However, the study also found that patients were less likely to follow the advice of an AI chatbot if the machine used the patient's first name and referred to their specific medical history during the second phase. As in the New York University study, the researchers noted that patients found the personalized AI chatbot to be "intrusive."

    Another interesting finding was that nearly 80% of patients who interacted with a human physician (that is, a human who was communicating through a chat function) believed they were interacting with an AI. The researchers hypothesized that patients may have become more accustomed to AI chatbots during the pandemic, and therefore may have expected a "richer interaction" with a human.

    Study implications

    These two studies highlight the complex interplay between technology and our social nature. We know that health apps, wearables, and tech-enabled coaching can positively impact patient behaviors compared with those receiving no intervention. We also assume that consumers generally value personalization because they want to feel like the guidance they're receiving is tailored to their specific needs (the Penn State study showed how patients negatively responded to those doctors who didn't recall their information from the first interaction).

    And we know that technology can facilitate personalization: AI can handle a much higher volume of data than a human can for predictive analysis and for providing proactive feedback based on trending data from a patient's behavior.  

    A clear takeaway from all of this is that design plays a major role in whether a digital health program succeeds.

    For example, the fact that some patients felt judged by personalized texts in the New York University study speaks to the importance of designing digital health programs that support a patient's intrinsic motivation.

    Patients want personalized guidance but need to be nudged in a way that doesn't feel like an imposed requirement. Engagement can also decline if patients perceive personalized guidance as a constant reminder of their shortcomings.

    Furthermore, the fact that patients in both studies found personalized guidance from a technology source intrusive shows that many patients still expect personalized health guidance to come primarily from a human caretaker, not from an app or an AI bot.

    Even if an AI can pull together vast streams of data, meticulously parse a patient's entire medical record, and create highly accurate diagnoses, patients are still likely to see this as an invasion of privacy—or just downright creepy.

    Exhibit C: The honest AI

    But given what we learned from the first two studies, is it possible to strike a balance? A third study examined whether patients could form a therapeutic bond with a conversational AI as strong as the bonds they form with human therapists. The study tested the effectiveness of a chatbot-based mental health platform called Woebot.

    Woebot provides brief, daily cognitive behavioral therapy (CBT) sessions through a smartphone app. For each session, the AI tailors the conversation by analyzing the user's mood, delivering empathetic statements and personalized follow-up meant to boost the user's motivation for positive behavior change.
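
    Conceptually, each session pairs a mood check-in with a mood-matched, empathetic opener before the CBT exercise begins. The sketch below is a deliberately simplified, hypothetical rendering in Python; Woebot's actual conversation models are proprietary:

        EMPATHETIC_OPENERS = {
            "low":  "I'm sorry you're feeling down; that's hard. Want to look "
                    "together at the thought behind that feeling?",
            "ok":   "Glad things feel steady. Shall we practice reframing one "
                    "unhelpful thought while the pressure is off?",
            "good": "That's great to hear! Let's note what helped, so you can "
                    "draw on it on tougher days.",
        }

        def start_session(mood_word: str) -> str:
            """Map a user's self-reported mood to an empathetic CBT opener."""
            if mood_word in {"sad", "anxious", "stressed"}:
                mood = "low"
            elif mood_word in {"happy", "great"}:
                mood = "good"
            else:
                mood = "ok"
            return EMPATHETIC_OPENERS[mood]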

    In analyzing data from over 36,000 Woebot users, the researchers found that the AI had an average bond score (determined using the Working Alliance Inventory-Short Revised test) that was comparable to face-to-face individual and group-based therapy. Woebot's bond scores were also higher than other online-based therapy programs.

    What accounts for Woebot's strong ability to form bonds with humans? The researchers attribute it to the chatbot's design: Woebot transparently presents itself as an archetypal robot and explicitly references its limitations. The researchers explain why this transparency is important by stating, "interacting with humanoid AI identities can result in individuals falling prey to the 'uncanny valley,' which is the sense of unease and 'creepiness' that is created when something that is artificial tries to appear humanlike." In other words, while this AI addresses highly personal issues with users, it also acknowledges that it's not human.

    Additional takeaways

    This last study shows how digital health programs can be designed in a way that builds human trust in the technology. In analyzing the results of the Penn State study, the researchers noted that many patients feel AI doesn't "recognize their uniqueness as a patient," and that if an AI asks a patient how she is feeling, the AI doesn't actually care; it's just searching for data. Contrast that with the Woebot study, where participants reported that they felt cared for by the AI, with one user stating, "Woebot felt like a real person that showed concern."

    So, patients are not just seeking personalized advice: they want to be heard and to feel like the entity they're engaging with cares about them. Devices can take in data and work wonders with it, but technology still struggles to connect on a human level, to serve emotional needs, and to truly understand empathy and social dynamics.

    This may even explain why, in the New York University study, the personalized messages were less effective at driving healthy behaviors yet more likely to drive patients to use telehealth: perhaps these patients just wanted to hear guidance from a human who could empathize with them.

    Design thinking will play an increasingly important role in boosting digital health adoption as technology becomes ever more pervasive in our lives. Health care stakeholders looking to launch digital health solutions must thoroughly evaluate how they structure interventions along three key dimensions (sketched in code after the list):

    • Format (e.g., communication methods, notifications);
    • Information detail (e.g., health data granularity and context provided); and
    • Agency (e.g., descriptive data, supplemental advice, proactive recommendations).
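
    One way to operationalize that checklist (a hypothetical sketch; the three dimensions come from the list above, but the field names are my own) is as an explicit configuration object, so each design choice is made deliberately rather than by default:

        from dataclasses import dataclass

        @dataclass
        class InterventionDesign:
            # Format: how, and how often, the program communicates.
            channel: str                 # e.g., "sms", "in_app", "email"
            notifications_per_week: int
            # Information detail: granularity and context of health data.
            uses_patient_data: bool      # personalized vs. generic content
            data_granularity: str        # e.g., "none", "daily_summary"
            # Agency: how far the program goes beyond describing data.
            gives_advice: bool
            makes_proactive_recommendations: bool

        # The diabetes study's generic-text arm, roughly, in these terms:
        generic_arm = InterventionDesign(
            channel="sms",
            notifications_per_week=2,
            uses_patient_data=False,
            data_granularity="none",
            gives_advice=True,
            makes_proactive_recommendations=False,
        )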

    In the end, it's all about balance. Digital health programs that start with a human-centered approach to care will have a higher likelihood of success.

