
SoylentNews is people

posted by Fnord666 on Friday April 26 2019, @02:40AM   Printer-friendly
from the I-think-I-can-I-say dept.

Scientists Take a Step Toward Decoding Thoughts

Stroke, amyotrophic lateral sclerosis and other medical conditions can rob people of their ability to speak. Their communication is limited to the speed at which they can move a cursor with their eyes (just eight to 10 words per minute), in contrast with the natural spoken pace of 120 to 150 words per minute. Now, although still a long way from restoring natural speech, researchers at the University of California, San Francisco, have generated intelligible sentences from the thoughts of people without speech difficulties.

The work provides a proof of principle that it should one day be possible to turn imagined words into understandable, real-time speech, circumventing the vocal machinery, Edward Chang, a neurosurgeon at U.C.S.F. and co-author of the study published Wednesday in Nature [DOI: 10.1038/s41586-019-1119-1] [DX], said Tuesday in a news conference. "Very few of us have any real idea of what's going on in our mouth when we speak," he said. "The brain translates those thoughts of what you want to say into movements of the vocal tract, and that's what we want to decode."

But Chang cautions that the technology, which has only been tested on people with typical speech, might be much harder to make work in those who cannot speak—and particularly in people who have never been able to speak because of a movement disorder such as cerebral palsy.

YouTube video (48s) comparing a person speaking a sentence to the synthesized audio created from brain wave patterns.

Also at UCSF and TechCrunch.


Original Submission

Related Stories

Facebook-Funded Study Translates Brain Activity Into Text

Team IDs Spoken Words and Phrases in Real Time from Brain's Speech Signals

UC San Francisco scientists recently showed that brain activity recorded as research participants spoke could be used to create remarkably realistic synthetic versions of that speech, raising hopes that one day such brain recordings could be used to restore voices to people who have lost the ability to speak. However, it took the researchers weeks or months to translate brain activity into speech, a far cry from the instant results that would be needed for such a technology to be clinically useful. Now, in a complementary new study, again working with volunteer study subjects, the scientists have for the first time decoded spoken words and phrases in real time from the brain signals that control speech, aided by a novel approach that involves identifying the context in which participants were speaking.

[...] In the new study, published July 30 in Nature Communications [DOI: 10.1038/s41467-019-10994-4], researchers from the Chang lab led by postdoctoral researcher David Moses, PhD, worked with three such research volunteers to develop a way to instantly identify the volunteers' spoken responses to a set of standard questions based solely on their brain activity, representing a first for the field.

To achieve this result, Moses and colleagues developed a set of machine learning algorithms equipped with refined phonological speech models, which were capable of learning to decode specific speech sounds from participants' brain activity. Brain data was recorded while volunteers listened to a set of nine simple questions (e.g. "How is your room currently?", "From 0 to 10, how comfortable are you?", or "When do you want me to check back on you?") and responded out loud with one of 24 answer choices. After some training, the machine learning algorithms learned to detect when participants were hearing a new question or beginning to respond, and to identify which of the two dozen standard responses the participant was giving with up to 61 percent accuracy as soon as they had finished speaking.
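The context-integration idea described above can be sketched, loosely, as combining a likelihood over the 24 answer choices (from the neural speech classifier) with a prior conditioned on which question was just detected. Everything in the sketch below is an illustrative stand-in: the probabilities are made up, and the `decode_answer` helper is hypothetical, not the study's actual model.

```python
import numpy as np

N_ANSWERS = 24  # the study's participants chose among 24 standard responses

def decode_answer(answer_likelihood, answer_prior_given_question):
    """Combine a neural-signal likelihood over answers with a prior
    conditioned on the detected question, then pick the best answer."""
    posterior = answer_likelihood * answer_prior_given_question
    posterior /= posterior.sum()  # normalize into a probability distribution
    return int(np.argmax(posterior)), posterior

# Simulated inputs: the speech classifier is mildly confident in answer 7...
likelihood = np.full(N_ANSWERS, 1.0)
likelihood[7] = 3.0

# ...and the detected question makes answers 5-9 plausible a priori.
prior = np.full(N_ANSWERS, 0.01)
prior[5:10] = 0.2

best, posterior = decode_answer(likelihood, prior)
print(best)  # the context prior and the likelihood agree on answer 7
```

The point of the sketch is that question context sharpens the decision: an answer the classifier only weakly favors can still win (or lose) once the detected question reweights the choices.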

[...] Moses's new study was funded through a multi-institution sponsored academic research agreement with Facebook Reality Labs (FRL), a research division within Facebook focused on developing augmented- and virtual-reality technologies. As FRL has described, the goal for their collaboration with the Chang lab, called Project Steno, is to assess the feasibility of developing a non-invasive, wearable BCI device that could allow people to type by imagining themselves talking.

See also: Facebook gets closer to letting you type with your mind
Brain-computer interfaces are developing faster than the policy debate around them

Previously: Brain Implant Translates Thoughts Into Synthesized Speech


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Friday April 26 2019, @02:48AM (3 children)

    This should work for those stricken with liberalism!!

    • (Score: 0) by Anonymous Coward on Friday April 26 2019, @03:21AM

      This should work for authoritarian governments!!

    • (Score: 0) by Anonymous Coward on Friday April 26 2019, @12:43PM

      I don't think it would work for liberalism. It doesn't fix stupid. See flat-earthers.

    • (Score: 0) by Anonymous Coward on Friday April 26 2019, @11:39PM

      Can we invent the reverse to silence rude loud bigoted rightos?

  • (Score: 1, Funny) by Anonymous Coward on Friday April 26 2019, @06:01AM

    Dear aunt, let's set so double the killer delete select all

  • (Score: 0) by Anonymous Coward on Friday April 26 2019, @12:07PM

    "Oh yes, boss, I'll happily do that. Idiot. Damn, that wasn't meant to be spoken!"

  • (Score: 1, Insightful) by Anonymous Coward on Friday April 26 2019, @02:08PM

    "We're from the government, we're here to help!"

    What's not to love?

  • (Score: 0) by Anonymous Coward on Saturday April 27 2019, @12:36AM (1 child)

    150 words per minute, as advertised, but 135 of those had to do with desire for more food. Also, turns out that dogs really don't like Moslems.

    • (Score: 0) by Anonymous Coward on Saturday April 27 2019, @05:09AM

      Also, turns out that dogs really don't like Moslems.

      It is too early to jump to conclusions... Maybe they are not being cooked or seasoned properly.

      If all else fails then you could try wrapping them in bacon.
