AEI software analyzes people's verbal and non-verbal communication in images, videos, and speech to derive information about their emotional or psychological state. AEI algorithms compare a video or speech sample with a curated database of facial expressions and audio recordings to classify subjects across universal emotive categories (e.g., happy, angry, sad, or surprised). This quantification of moods, attitudes, and emotions is driven by machine learning, computer vision, and other cognitive techniques.
Two major areas of focus for AEI are:
- Facial-expression analysis, which determines a patient's mood based on non-verbal cues such as facial expressions, head orientation, and in some instances eye tracking; and
- Voice analysis, which seeks to understand the emotional status of a person based on subtle voice characteristics. Vendors in this area analyze speech for signs of stress, pleasure, or other emotional states based on tone, pace, or pitch.
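The classification approach described above can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor's actual method: it assumes a voice sample has already been reduced to a few acoustic features (pitch, pace, energy) and compares them against invented reference profiles for the universal emotive categories, assigning the nearest one.

```python
# Hypothetical sketch: assigning a voice sample to an emotive category by
# comparing its acoustic features (pitch, pace, energy) against reference
# profiles. All feature names and values are illustrative assumptions,
# not taken from any vendor's system.
import math

# Reference profiles: (mean pitch in Hz, words per second, vocal energy 0-1)
# per emotion. These numbers are invented for illustration.
EMOTION_PROFILES = {
    "happy":     (220.0, 3.2, 0.8),
    "angry":     (240.0, 3.8, 0.9),
    "sad":       (180.0, 2.1, 0.3),
    "surprised": (260.0, 3.0, 0.7),
}

def classify_emotion(pitch_hz, words_per_sec, energy):
    """Return the emotion whose reference profile is nearest (Euclidean)."""
    sample = (pitch_hz, words_per_sec, energy)

    def distance(profile):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sample, profile)))

    return min(EMOTION_PROFILES, key=lambda e: distance(EMOTION_PROFILES[e]))

print(classify_emotion(182.0, 2.0, 0.35))  # low pitch, slow pace -> "sad"
```

Production systems replace the hand-set profiles with models trained on large labeled corpora and track fluctuations against a subject's own baseline, but the core idea, mapping measured features to the nearest emotive category, is the same.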
Human emotions are not always explicit and may be overlooked during a rushed patient interaction. Through real-time access to a person's mood and feelings, providers can adjust care to improve the patient experience and even potentially detect various health risks. Furthermore, AEI opens the door for smart devices that can sense and adapt to human emotions without a caretaker present.
Vendors are applying AEI to health care
Affectiva, an AEI company based in Boston, focuses on facial-expression analysis and offers several products used across multiple industries. These products include a software development kit (SDK) and cloud-based application programming interface (API) for developers to add AEI to mobile apps and other devices, and an "emotion-as-a-service" offering for analyzing images and videos on demand. Affectiva has partnered with Brain Power, a neurotechnology company, to deliver a smart-glasses system that helps individuals with autism interpret the emotions of others and improve their social and cognitive skills, and with Catalia Health to incorporate AEI into a companion robot named Mabu. This robot engages with patients to improve care management and medication adherence.
Beyond Verbal, an Israeli company, focuses on voice analysis. It offers two mobile apps and a cloud-based API for developers to add AEI to wearables, robots, or mobile apps to track emotion and wellness. This product includes a dashboard that shows dominant emotions across voice recordings and positive/negative fluctuations against a subject's baseline emotional state. Beyond Verbal has analyzed increased pitch variability and other speech abnormalities in children with autism, studied voice abnormalities in patients with Parkinson's disease, and collaborated with Mayo Clinic to determine if its algorithms could detect coronary artery disease (CAD) among patients referred for a coronary angiogram. To further this work, the company launched its Beyond mHealth Research Platform, a collaboration between medical centers, commercial organizations, and other health care stakeholders to identify vocal biomarkers that may indicate health-related issues.
Vendors such as Affectiva and Beyond Verbal demonstrate AEI's multitude of applications for medical research and patient care. However, it is important to note that incorporating AEI into health care brings forth new ethical and privacy challenges. For now, health care organizations planning to use AEI should strive to be as transparent as possible with patients regarding what type of data will be collected and for what purpose.
Despite these concerns, there is potential for health care organizations to leverage AEI to improve patient care and outcomes. We recommend that interested organizations take advantage of partnerships with vendors for early pilots, consider Software-as-a-Service (SaaS) as an option for AEI incorporation, and explore multimodal approaches to patient analysis that combine emotional data with additional physiological or cognitive metrics.
Register for the national meeting
Join us for the 2017 Health Care IT Advisor national meeting