April 25, 2019

These scientists just turned brain signals into human speech

Daily Briefing

    Scientists on Wednesday revealed that they have developed a device that detects electrical signals in the brain and turns them into mostly understandable synthetic speech, raising hopes that brain-computer interfaces could someday help those who have lost the ability to speak.


    How it works

    For the study, published in the journal Nature, researchers at the University of California, San Francisco (UCSF) placed a 16-by-16 grid of electrodes on the brain surface of five epilepsy patients, a technique known as electrocorticography (ECoG). The patients were undergoing an unrelated brain surgery and agreed to participate in the study.

    Researchers asked the five patients to read hundreds of sentences out loud while the ECoG device recorded the brain signals that corresponded to the movements of their lips, jaw, tongue, and larynx during normal speech, according to Josh Chartier, a graduate student at UCSF who co-authored the study.

    After that, the researchers used an artificial intelligence (AI) system to map the signals to a database of muscle movements. The AI could then match each muscle configuration to a corresponding sound to generate synthetic speech.
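
    To make this two-stage pipeline concrete, here is a minimal Python sketch. It is purely illustrative: the study's actual decoder used neural networks trained on the patients' recordings, and every dimension, weight, and variable name below is invented.

        import numpy as np

        # Toy sketch of the two-stage decoding idea: stage 1 maps neural
        # signals to vocal tract ("muscle") movements, stage 2 maps those
        # movements to sounds. All weights are random stand-ins for what
        # the study learned from the patients' read-aloud recordings.
        rng = np.random.default_rng(0)

        n_electrodes = 256   # a 16-by-16 ECoG grid
        n_movements = 30     # lip/jaw/tongue/larynx trajectories (invented count)
        n_acoustic = 32      # spectral features for a vocoder (invented count)
        n_frames = 100       # time steps in one spoken sentence

        # Stage 0: neural activity recorded from the brain's surface.
        ecog = rng.normal(size=(n_frames, n_electrodes))

        # Stage 1: decode estimated vocal tract movements from neural activity.
        W1 = rng.normal(size=(n_electrodes, n_movements))
        movements = ecog @ W1

        # Stage 2: decode acoustic features from the estimated movements; a
        # vocoder would then render these frames as audible synthetic speech.
        W2 = rng.normal(size=(n_movements, n_acoustic))
        acoustics = movements @ W2

        print(acoustics.shape)  # (100, 32): one acoustic frame per time step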

    Next, the researchers asked hundreds of volunteers to listen to 100 sentences of synthetic speech to see how well they could decipher it. Listeners were given a list of either 25 or 50 words that the sentences could include.

    Among those who were given a 25-word list, nearly 70% of the words were understood, while those with a 50-word list understood just 47% of the words. According to co-author Gopala Anumanchipalli, in a real-world setting without a word list, "it would definitely be harder" to understand the synthetic speech. Listeners had the most difficulty identifying words with plosive sounds such as "b" and "p," and more easily detected drawn-out sounds like "shhh."

    However, Chartier said that even with the errors, "[i]t was able to work reasonably well." He added, "We found that in many cases the gist of the sentence was understood." For example, listeners mistook "Those thieves stole 30 jewels" for "30 thieves stole 30 jewels."
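
    As a rough illustration of how such a closed-vocabulary listening test can be scored, the short Python sketch below computes per-sentence word accuracy on the article's own example; the function and the spelled-out sentences are invented for illustration, not taken from the study.

        def word_accuracy(reference, transcript):
            """Fraction of reference words reproduced at the same position."""
            ref = reference.lower().split()
            hyp = transcript.lower().split()
            return sum(r == h for r, h in zip(ref, hyp)) / len(ref)

        # The article's example: one misheard word, but the gist survives.
        print(word_accuracy("those thieves stole thirty jewels",
                            "thirty thieves stole thirty jewels"))  # 0.8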

    Implications

    Nima Mesgarani, an electrical engineer at Columbia University who was not involved in the study, said that, even with the errors in understanding the speech, this study "represents an important step toward a speech neuroprosthesis."

    Other studies have been able to turn brain waves into speech, according to Chethan Pandarinath, a biomedical engineer at Emory University and the Georgia Institute of Technology, but those studies mostly "decoded single words, and those were mostly monosyllabic." In this study, the UCSF researchers were able "to reconstruct complete sentences, [with] results somewhat intelligible to human listeners."

    Pandarinath added that the study's jump from monosyllabic words to sentences "is technically quite challenging and is one of the things that makes it so impressive."

    Pandarinath and Yahia Ali, a biomedical engineer also at Emory and the Georgia Institute of Technology, wrote an accompanying commentary to the study, in which they said that, "[w]ith continued progress, we can hope that individuals with speech impairments will regain the ability to freely speak their minds and reconnect with the world around them."

    Edward Chang, a neurosurgeon at UCSF who led the research, has similar hopes. He explained that being able to produce "entire spoken sentences based on an individual's brain activity" means that "we should be able to build a device that is clinically viable in patients with speech loss."

    However, Chang added that it's unclear whether this would work in someone who has never spoken, such as a patient with cerebral palsy. If there aren't any brain signals encoding vocal tract movements, it wouldn't be possible to train the virtual vocal tract to create speech from ECoG signals, Chang said. According to STAT News, the approach is also extremely invasive, requiring surgery to place electrodes directly on the brain's surface.

    That said, many experts agreed that the findings are encouraging. "This is not the final step, but there is … hope on the horizon," said Kristina Simonyan, who studies speech disorders and the neural mechanisms of human speech at Harvard Medical School (Begley, STAT News, 4/24; Carey, New York Times, 4/24; Hotz, Wall Street Journal, 4/24; Regalado, MIT Technology Review, 4/24).
