As artificial intelligence (AI) technology continues to evolve, there is a growing interest in its potential use in healthcare, including for mental health. However, some public health and tech professionals have voiced legal and ethical concerns about the use of AI chatbots, especially if patients are not properly informed beforehand, Bethany Biron writes for Insider.
ChatGPT is an AI chatbot that generates human-like text based on prompts entered by a user. It is a variant of GPT-3, which was created by OpenAI.
To test its use in digital mental health care, Koko, a free nonprofit mental health service, let its peer supporters use ChatGPT to help draft responses to 4,000 users who sought help through the service.
In a Twitter thread, Robert Morris, Koko's co-founder, explained the company tested a "co-pilot approach with humans supervising the AI as needed" to write responses to users who sent in messages to Koko's peer support network.
According to Morris, users rated the AI-assisted messages "significantly higher than those written by humans on their own." He also noted that response times decreased by 50% with the AI chatbot.
However, the AI tool was quickly removed from the platform because "once people learned the messages were co-created by a machine, it didn't work," Morris wrote.
"Simulated empathy feels weird, empty. Machines don't have lived, human experience so when they say 'that sounds hard' or 'I understand', it sounds inauthentic," Morris said. "A chatbot response that's generated in 3 seconds, no matter how elegant, feels cheap somehow."
Whether AI can overcome this problem in the future is unclear, but Morris said that it may be possible, "[e]specially if [machines] establish rapport with the user over time."
In response to Morris's thread, some public health and tech professionals argued that the company may have violated informed consent law by not informing users that they were receiving responses from an AI chatbot. Under this law, human subjects must provide consent before they are involved in research.
According to Arthur Caplan, a professor of bioethics at New York University's Grossman School of Medicine, it is "grossly unethical" to use AI technology without informing users.
"The ChatGPT intervention is not standard of care," Caplan said. "No psychiatric or psychological group has verified its efficacy or laid out potential risks."
However, Morris said the company was "not pairing people up to chat with GPT-3" without their knowledge. Users had to opt in to the AI feature and were aware of how it would be used during the few days it was live.
Morris also noted that Koko's AI chatbot experiment was "exempt" from the informed consent law, citing previously published research from the company that was also exempt.
"Every individual has to provide consent to use the service," he said. "If this were a university study (which it's not, it was just a product feature explored), this would fall under an 'exempt' category of research."
"This [AI chatbot feature] imposed no further risk to users, no deception, and we don't collect any personally identifiable information or personal health information (no email, phone number, ip, username, etc)," Morris added.
Overall, Morris said the intent of the experiment was to "emphasize the importance of the human in human-AI discussion" and that he hopes "that doesn't get lost here." (Biron, Insider, 1/7)