

posted by chromas on Saturday August 25 2018, @09:09AM   Printer-friendly
from the Worms,-Roxanne!-Worms! dept.

Submitted via IRC for SoyCow4408

Our brains have an "auto-correct" feature that we deploy when re-interpreting ambiguous sounds, a team of scientists has discovered. The findings, which appear in the Journal of Neuroscience, point to new ways we use information and context to aid speech comprehension.

"What a person thinks they hear does not always match the actual signals that reach the ear," explains Laura Gwilliams, a doctoral candidate in NYU's Department of Psychology, a researcher at the Neuroscience of Language Lab at NYU Abu Dhabi, and the paper's lead author. "This is because, our results suggest, the brain re-evaluates the interpretation of a speech sound at the moment that each subsequent speech sound is heard in order to update interpretations as necessary.

It's well known that the perception of a speech sound is determined by its surrounding context -- in the form of words, sentences, and other speech sounds. In many instances, this contextual information is heard later than the initial sensory input.

This plays out in everyday life -- when we talk, the actual speech we produce is often ambiguous. For example, when a friend says she has a "dent" in her car, you may hear "tent." Although this kind of ambiguity happens regularly, we, as listeners, are hardly aware of it.

"This is because the brain automatically resolves the ambiguity for us -- it picks an interpretation and that's what we perceive to hear," explains Gwilliams. "The way the brain does this is by using the surrounding context to narrow down the possibilities of what the speaker may mean."

In the Journal of Neuroscience study, the researchers sought to understand how the brain uses this subsequent information to modify our perception of what we initially heard.

To do this, they conducted a series of experiments in which the subjects listened to isolated syllables and similar-sounding words (e.g., barricade, parakeet). In order to gauge the subjects' brain activity, the scientists deployed magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the electrical currents produced by the brain.

Their results yielded three primary findings:

  • The brain's primary auditory cortex is sensitive to how ambiguous a speech sound is at just 50 milliseconds after the sound's onset.
  • The brain "re-plays" previous speech sounds while interpreting subsequent ones, suggesting re-evaluation as the rest of the word unfolds
  • The brain makes commitments to its "best guess" of how to interpret the signal after about half a second.

Source: https://www.sciencedaily.com/releases/2018/08/180822082637.htm


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by AthanasiusKircher on Saturday August 25 2018, @12:56PM (3 children)

    by AthanasiusKircher (5291) on Saturday August 25 2018, @12:56PM (#726218) Journal

    There are much larger collections of these sorts of things. I had an audio CD a couple decades ago with probably 50 different types of audio illusions.

    Weird thing about them, though? I don't hear maybe half of them. I have quite a bit of musical training (and have had it since a fairly young age). I was sitting in a psychology class a while back with another friend who had significant musical background, and the professor started playing a bunch of these "illusions." After several of them, my friend and I realized we were hearing different things from the rest of the class. For example, many such illusions depend on "fusion" of sounds into a whole. The more musical training you have, the more you tend to be able to separate components. At an extreme, old-school piano tuners are trained to separate out the individual harmonics of a single sound source (to compare beating patterns in harmonics between two pitches for tuning). If you are sensitive to individual harmonics and the composition of a sound, many of these "illusions" don't work.
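The beating comparison mentioned above is easy to quantify. As an illustration (equal-temperament frequencies derived from A4 = 440 Hz; these are standard tuning numbers, not from the parent study), a tuner checking the fifth C4-G4 listens for the slow beats between C4's 3rd harmonic and G4's 2nd harmonic:

```python
# Beat rate a piano tuner listens for when tuning an equal-tempered
# fifth: C4's 3rd harmonic against G4's 2nd harmonic. Frequencies
# follow standard equal temperament from A4 = 440 Hz.

C4 = 440.0 * 2 ** (-9 / 12)   # ~261.63 Hz (9 semitones below A4)
G4 = 440.0 * 2 ** (-2 / 12)   # ~392.00 Hz (2 semitones below A4)

harmonic_c = 3 * C4           # ~784.9 Hz
harmonic_g = 2 * G4           # ~784.0 Hz
beat_rate = abs(harmonic_c - harmonic_g)

print(round(beat_rate, 2))    # slow beating, a bit under 1 Hz
```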

    Anyhow, larger point is that a lot of auditory processing is trained. And a lot of things where your brain does "corrections" can be influenced or untrained. For example, some theories of perfect pitch say that's a similar thing: relative pitch (what most people have) is actually a more complex auditory processing skill. Perfect (or absolute) pitch is something various animals exhibit. Some psychoacoustics experts postulate that the reason it's much easier for kids to acquire perfect pitch is because they haven't solidified that advanced auditory processing yet. Having perfect pitch is basically about keeping a more "raw" processing capability to the sound, essentially a more "primitive" brain function.

  • (Score: 5, Interesting) by AthanasiusKircher on Saturday August 25 2018, @01:05PM (1 child)

    by AthanasiusKircher (5291) on Saturday August 25 2018, @01:05PM (#726220) Journal

    Also, more relevant to the present article: it doesn't surprise me at all that the initial milliseconds of a speech sound are processed and reinterpreted with context. Sound transients and envelope are incredibly important to how we perceive sound. I can't find a video on YouTube or whatever as a demonstration right now, but if you cut up the sound of an instrument by removing its attack (transient), lots of very disparate instruments can be confused. And just hearing that initial attack can often be ambiguous too. Our brains clearly piece together information over larger time spans, so this speech thing is unsurprising (and frankly, I'm pretty sure this has been well-known before in speech too, though I assume this study finds further nuance).
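For anyone who wants to try the effect described above, here is a minimal sketch of chopping off an attack -- the tone is synthetic and every parameter (decay rate, 50 ms attack window) is invented for illustration:

```python
import numpy as np

# Synthesize a plucked-string-like tone (sharp onset, exponential
# decay), then remove the first 50 ms. Without the attack transient,
# the remaining steady portion is much harder to identify by ear.

SR = 44100  # sample rate in Hz

def pluck(freq, seconds=1.0):
    t = np.arange(int(SR * seconds)) / SR
    envelope = np.exp(-4.0 * t)          # fast decay after a sharp onset
    return envelope * np.sin(2 * np.pi * freq * t)

tone = pluck(440.0)
attack_samples = int(0.050 * SR)         # first 50 ms carry the transient
steady = tone[attack_samples:]           # the ambiguous remainder

print(len(tone), len(steady))
```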

    • (Score: 1) by Ethanol-fueled on Saturday August 25 2018, @05:52PM

      by Ethanol-fueled (2792) on Saturday August 25 2018, @05:52PM (#726276) Homepage

      Serious training helps a lot, but even shitty musicians can decompose things if they know how they work. A good example is the Shepard tone that somebody posted above. I think a good analogy is being able to focus on conversing with a single person, or selectively with multiple people, in a room full of conversation. I noticed that I started getting good at it when I would listen to the Grateful Dead harmonizing and "lock on" to each of their voices and tune out the others to figure out who was singing what.

      This is why the humble Contrapunctus I is still one of my favorite songs - newbies can first hear the 4 voices individually, staggered to give them an idea of what is going where, then the song slowly starts to blend into full-blown harmony.
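A Shepard tone is straightforward to build: octave-spaced sine components whose loudness follows a fixed bell curve over log-frequency, so components fade out at the top as new ones fade in at the bottom and the ear hears an endless rise. A sketch of one step (all parameters invented for illustration):

```python
import numpy as np

# One step of a Shepard tone: octave-spaced sines under a fixed
# Gaussian loudness envelope over the octave index, so no single
# component dominates and pitch height stays ambiguous.

SR = 44100  # sample rate in Hz

def shepard_step(base_freq, seconds=0.5, octaves=8, center=4.0, width=1.5):
    t = np.arange(int(SR * seconds)) / SR
    out = np.zeros_like(t)
    for k in range(octaves):
        freq = base_freq * 2 ** k
        # amplitude depends only on position under the fixed bell curve
        amp = np.exp(-((k - center) ** 2) / (2 * width ** 2))
        out += amp * np.sin(2 * np.pi * freq * t)
    return out / np.max(np.abs(out))      # normalize to [-1, 1]

step = shepard_step(27.5)                 # start from a low A
print(step.shape)
```

Shifting `base_freq` up a semitone per step and looping gives the familiar ever-ascending scale.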

  • (Score: 2) by Reziac on Sunday August 26 2018, @03:28AM

    by Reziac (2489) on Sunday August 26 2018, @03:28AM (#726439) Homepage

    Very interesting. I don't have musical training, but in 3rd grade we were all tested, and I had a "perfect ear." Anyway, after a couple of iterations I start hearing the loop point, then the gaps between tones, and the illusion falls apart.

    --
    And there is no Alkibiades to come back and save us from ourselves.