
posted by martyb on Thursday January 31 2019, @01:40PM
from the NOW-you're-talking! dept.

Engineers translate brain signals directly into speech

In a scientific first, Columbia [University] neuroengineers have created a system that translates thought into intelligible, recognizable speech. By monitoring someone's brain activity, the technology can reconstruct the words a person hears with unprecedented clarity. This breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain. It also lays the groundwork for helping people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from stroke, regain their ability to communicate with the outside world.

[...] Decades of research have shown that when people speak -- or even imagine speaking -- telltale patterns of activity appear in their brain. Distinct (but recognizable) patterns of signals also emerge when we listen to someone speak, or imagine listening. Experts, trying to record and decode these patterns, see a future in which thoughts need not remain hidden inside the brain -- but instead could be translated into verbal speech at will.

[...] [The] researchers asked [epilepsy] patients to listen to speakers reciting digits from 0 to 9, while recording brain signals that could then be run through the vocoder. The sound produced by the vocoder in response to those signals was analyzed and cleaned up by neural networks, a type of artificial intelligence that mimics the structure of neurons in the biological brain.
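The study itself does not publish its model code, but the pipeline described above -- recorded cortical activity mapped by a neural network onto vocoder parameters, which are then rendered as audio -- can be sketched roughly as follows. Everything here (feature and parameter counts, the network shape, the toy data) is an assumption for illustration, not the authors' published model.

    # Hypothetical sketch: brain-signal features -> neural network -> vocoder parameters.
    # All dimensions and names are assumptions, not the study's actual architecture.
    import torch
    import torch.nn as nn

    N_NEURAL_FEATURES = 128   # e.g. band-power features from implanted electrodes (assumed)
    N_VOCODER_PARAMS = 32     # e.g. spectral-envelope and pitch parameters per frame (assumed)

    class NeuralToVocoder(nn.Module):
        """Maps one window of brain-signal features to one frame of vocoder parameters."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_NEURAL_FEATURES, 256),
                nn.ReLU(),
                nn.Linear(256, 256),
                nn.ReLU(),
                nn.Linear(256, N_VOCODER_PARAMS),
            )

        def forward(self, x):
            return self.net(x)

    model = NeuralToVocoder()
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Toy training step on random stand-in data. Real inputs would be features
    # extracted from the patients' recordings while they listened to spoken digits,
    # paired with vocoder parameters of the audio they actually heard.
    neural_frames = torch.randn(64, N_NEURAL_FEATURES)
    target_params = torch.randn(64, N_VOCODER_PARAMS)

    loss = loss_fn(model(neural_frames), target_params)
    loss.backward()
    optimizer.step()
    print(f"training loss on toy batch: {loss.item():.4f}")

At inference time, the predicted parameter frames would be fed to the vocoder to synthesize the audible, "robotic-sounding" speech the article describes.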

The end result was a robotic-sounding voice reciting a sequence of numbers. To test the accuracy of the recording, Dr. [Nima] Mesgarani and his team asked individuals to listen to the recording and report what they heard. "We found that people could understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts," said Dr. Mesgarani. The improvement in intelligibility was especially evident when comparing the new recordings to the earlier, spectrogram-based attempts. "The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy."
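The 75% figure is an intelligibility score: the fraction of reconstructed digits that listeners could correctly repeat back. A minimal sketch of that kind of scoring, using made-up stand-in data rather than the study's results:

    # Hypothetical intelligibility scoring for the listening test described above.
    # The digit sequences are invented examples, not data from the paper.
    def intelligibility(presented, repeated):
        """Fraction of digits the listener reported correctly, position by position."""
        correct = sum(p == r for p, r in zip(presented, repeated))
        return correct / len(presented)

    presented = [3, 1, 4, 1, 5, 9, 2, 6]   # digits reconstructed from brain signals
    repeated  = [3, 1, 4, 7, 5, 9, 2, 0]   # what one listener reported hearing
    print(f"intelligibility: {intelligibility(presented, repeated):.0%}")  # -> 75%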

Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer's thoughts directly into words.

Towards reconstructing intelligible speech from the human auditory cortex (open, DOI: 10.1038/s41598-018-37359-z) (DX)


Original Submission

 