
posted by Fnord666 on Saturday December 01 2018, @09:36PM
from the music-to-code-by dept.

To Predict the Future, the Brain Uses Two Clocks:

That moment when you step on the gas pedal a split second before the light changes, or when you tap your toes even before the first piano note of Camila Cabello's "Havana" is struck. That's anticipatory timing.

One type relies on memories from past experiences. The other on rhythm. Both are critical to our ability to navigate and enjoy the world.

New University of California, Berkeley, research shows the neural networks supporting each of these timekeepers are split between two different parts of the brain, depending on the task at hand.

"Whether it's sports, music, speech or even allocating attention, our study suggests that timing is not a unified process, but that there are two distinct ways in which we make temporal predictions and these depend on different parts of the brain," said study lead author Assaf Breska, a postdoctoral researcher in neuroscience at UC Berkeley.

The findings, published online in the Proceedings of the National Academy of Sciences journal, offer a new perspective on how humans calculate when to make a move.

"Together, these brain systems allow us to not just exist in the moment, but to also actively anticipate the future," said study senior author Richard Ivry, a UC Berkeley neuroscientist.

[...] Both groups viewed sequences of red, white and green squares as they flashed by at varying speeds on a computer screen, and pushed a button the moment they saw the green square. The white squares alerted them that the green square was coming up.

In one sequence, the red, white and green squares followed a steady rhythm, and the cerebellar degeneration patients responded well to these rhythmic cues.

In another, the colored squares followed a more complex pattern, with differing intervals between the red and green squares. This sequence was easier for the Parkinson's patients to follow and to succeed at.

"We show that patients with cerebellar degeneration are impaired in using non-rhythmic temporal cues while patients with basal ganglia degeneration associated with Parkinson's disease are impaired in using rhythmic cues," Ivry said.

How about that? Background music can be helpful for concentration.


Original Submission

 
  • (Score: 3, Interesting) by ledow on Sunday December 02 2018, @02:39AM (1 child)

    We don't have a single "AI" system (taking all the things you just mentioned together) that's capable of any inference whatsoever.

    They are all statistical models driven by human-created statistics: "sufficiently advanced technology" that may be "indistinguishable from magic" for a small range of tasks, but essentially untrainable, certainly not RETRAINABLE (you just have to start from scratch every time), and nowhere near capable of even the most basic inference.

    As such, expecting any such system to adapt to new patterns generally results in failure.

    To simplify: You can teach a machine to recognise a banana in an image with 90% accuracy. To get 95% accuracy becomes a thousand times harder. And it plateaus even more after that. To then take THAT machine and re-train it to recognise apples (i.e. "unlearn" everything it "learned") would take decades of effort.
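
    As a toy illustration of that retraining problem (my sketch, not the poster's; the "banana" and "apple" tasks here are just two synthetic blob datasets), naively fine-tuning a small network on a second task typically wrecks its accuracy on the first, an effect usually called catastrophic forgetting:

        # Assumes PyTorch is installed; data and architecture are arbitrary toys.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)

        def make_task(center):
            """Two Gaussian blobs at +center / -center: a toy binary task."""
            x = torch.cat([torch.randn(200, 2) + torch.tensor(center),
                           torch.randn(200, 2) - torch.tensor(center)])
            y = torch.cat([torch.zeros(200, dtype=torch.long),
                           torch.ones(200, dtype=torch.long)])
            return x, y

        def train(model, x, y, steps=200):
            opt = torch.optim.Adam(model.parameters(), lr=0.01)
            loss_fn = nn.CrossEntropyLoss()
            for _ in range(steps):
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

        def accuracy(model, x, y):
            with torch.no_grad():
                return (model(x).argmax(dim=1) == y).float().mean().item()

        model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
        bananas = make_task([3.0, 0.0])  # task A: classes separated along x
        apples = make_task([0.0, 3.0])   # task B: classes separated along y

        train(model, *bananas)
        print("bananas after task A:", accuracy(model, *bananas))  # ~1.0
        train(model, *apples)            # naive fine-tuning, task B data only
        print("apples after task B:", accuracy(model, *apples))    # ~1.0
        print("bananas after task B:", accuracy(model, *bananas))  # usually falls sharply

    Mitigations exist (replaying old data, penalising changes to weights that mattered for the first task), but the naive retrain-from-where-you-left-off path really does behave the way described above.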

    Which is why you'll always see these things as PhD projects. You start from scratch, get some accuracy, get some "increasing" accuracy, and then abandon it quickly when you realise that it isn't getting much better no matter what you do. And then if someone wants to use your work for something else, they pretty much have to start from scratch with their own training data all over again or somehow "untrain" everything you did.

    We don't have a damn clue how our mind works, and we certainly don't have anything even approaching a digital analogue of it.

    If it were just a case of training and brute force, then Google's AI wouldn't just be playing Go, it would be running the world. The fact is, AlphaGo can play a perfectly describable, logical game better than humans. IBM's Watson could look up facts and word an answer from keywords faster than a Jeopardy player. But neither could do both with the same program, or any kind of meld of the two, and neither improved once it started to plateau. IBM's Watson has been the subject of a number of articles along the lines of "fine, what do we do with it now"... neither IBM nor anyone else has found a real, proper use for it.

  • (Score: 2) by darkfeline on Tuesday December 04 2018, @08:17PM

    Humans don't adapt to new patterns very well either. Most humans aren't capable of much inference, and they aren't very trainable, for that matter.

    We don't have a damn clue how our mind works, and we don't have a damn clue how artificial neural networks work either.

    I find that a lot of smart people can't shake the subconscious feeling that there's something "special" about human intelligence, as if there were a kind of "soul" behind it.

    Spoiler: there's not. It's just a matter of building a big enough neural network and training it with the right environment and interfaces, plus some natural/artificial selection, and time. It took evolution millions of years; I'm sure we can replicate it within a millennium, if we don't destroy ourselves first.
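
    For what it's worth, the selection half of that claim is easy to demo in miniature (my toy sketch, nothing the poster wrote): a mutate-and-select loop, with no gradients and no hand-written rules, can evolve the weights of a tiny network until it computes XOR:

        import math
        import random

        random.seed(0)

        def forward(w, x1, x2):
            """A 2-2-1 tanh network; w is a flat list of 9 weights."""
            h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
            h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
            return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

        CASES = [(0, 0, -1), (0, 1, 1), (1, 0, 1), (1, 1, -1)]  # XOR as -1/+1

        def fitness(w):
            return -sum((forward(w, x1, x2) - y) ** 2 for x1, x2, y in CASES)

        parent = [random.uniform(-1, 1) for _ in range(9)]
        for _ in range(20000):
            child = [wi + random.gauss(0, 0.1) for wi in parent]  # variation
            if fitness(child) >= fitness(parent):                 # selection
                parent = child

        for x1, x2, y in CASES:
            print(x1, x2, "->", round(forward(parent, x1, x2), 2), "target", y)

    It usually nails XOR within a few thousand generations; scaling that idea up to anything like a mind is, of course, the part that took evolution those millions of years.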

    Intelligence is in the eye of the beholder. We observe some inputs and outputs, and we call some of those intelligence and some of those not.

    If a robot only says "Welcome to Corneria", that's not intelligence. But if an AI tasked with surviving a game as long as possible figures out that it can just pause the game, that looks an awful lot like what a young boy would do: experimentation, followed by an "aha!" moment and a "technically, I completed the task".

    "But an AI trained to do X can't do Y!" Most humans can't either.

    --
    Join the SDF Public Access UNIX System today!