
posted by Fnord666 on Tuesday July 16 2019, @12:16PM   Printer-friendly
from the something-is-different dept.

Submitted via IRC for AnonymousLuser

The way a single neuron processes information is never the same

How do neurons process information? Neurons are known to break down an incoming electrical signal into sub-units. Now, researchers at Blue Brain have discovered that dendrites, the neuron's tree-like receptors, work together—dynamically and depending on the workload—for learning. The findings further our understanding of how we think and may inspire new algorithms for artificial intelligence.

In a paper published in the journal Cell Reports, researchers at EPFL's Blue Brain Project, a Swiss Brain Research Initiative, have developed a new framework to work out how a single neuron in the brain operates.

The analysis was performed using cells from the Blue Brain's virtual rodent cortex. The researchers expect other types of neurons—non-cortical or human—to operate in the same way.

Their results show that when a neuron receives input, the branches of the elaborate tree-like receptors extending from the neuron, known as dendrites, functionally work together in a way that is adjusted to the complexity of the input.

The strength of a synapse determines how strongly a neuron feels an electric signal coming from other neurons, and the act of learning changes this strength. By analyzing the "connectivity matrix" that determines how these synapses communicate, the algorithm establishes, from the structural and electrical properties of the dendrites, when and where synapses group into independent learning units. In other words, the new algorithm determines how a neuron's dendrites functionally break up into separate computing units, and finds that these units work together dynamically, depending on the workload, to process information.
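The grouping step can be pictured with a toy sketch (the function, threshold, and matrix here are hypothetical illustrations, not the paper's actual algorithm): treat dendritic sites as nodes of a graph weighted by their electrical coupling, and merge two sites into the same learning unit whenever their coupling exceeds a threshold. The connected components of that thresholded graph then play the role of the independent subunits.

```python
import numpy as np

def independent_subunits(coupling, threshold=0.5):
    """Group dendritic sites into subunits: sites whose electrical
    coupling exceeds `threshold` are merged into one unit (connected
    components of the thresholded coupling graph, via union-find)."""
    n = coupling.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if coupling[i, j] > threshold:
                parent[find(i)] = find(j)  # union the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy coupling matrix: sites 0 and 1 strongly coupled, site 2 isolated.
C = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(independent_subunits(C))  # [[0, 1], [2]]
```

Lowering the threshold (analogous to weaker compartmentalization) merges the groups, so the same coupling matrix can yield fewer, larger processing units.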

[...] To date, traditional learning algorithms (such as those currently used in A.I. applications) assume that neurons are static units that merely integrate and re-scale incoming signals. By contrast, the results show that the number and size of the independent subunits can be controlled by balanced input or shunting inhibition. The researchers propose that this temporary control of the compartmentalization constitutes a powerful mechanism for the branch-specific learning of input features.
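A minimal way to see the contrast (purely illustrative, not the paper's biophysical model): the classic "static" point neuron collapses all inputs into one weighted sum, while a compartmentalized neuron applies a nonlinearity within each dendritic subunit before summing at the soma — and, per the result above, the subunit partition itself can change with the level of inhibition.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # incoming synaptic signals
w = rng.normal(size=8)   # synaptic strengths

# Classic point-neuron assumption: integrate and re-scale everything at once.
static_out = np.tanh(w @ x)

def compartmental_out(x, w, subunits):
    """Apply a nonlinearity within each dendritic subunit, then sum at
    the soma. `subunits` lists which synapse indices share a unit."""
    return sum(np.tanh(w[idx] @ x[idx]) for idx in subunits)

# Strong shunting inhibition: many small independent subunits.
fine = compartmental_out(x, w, [[0, 1], [2, 3], [4, 5], [6, 7]])
# Weak inhibition: subunits merge into fewer, larger processors.
coarse = compartmental_out(x, w, [[0, 1, 2, 3], [4, 5, 6, 7]])
```

Because the nonlinearity is applied per subunit, the same synaptic weights produce different outputs under the two partitions — the sense in which one neuron can take on different computational roles in different brain states.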

"The method finds that in many brain states, neurons have far fewer parallel processors than expected from dendritic branch patterns. Thus, many synapses appear to be in 'grey zones' where they do not belong to any processing unit," explains lead scientist and first author Willem Wybo. "However, in the brain, neurons receive varying levels of background input and our results show that the number of parallel processors varies with the level of background input, indicating that the same neuron might have different computational roles in different brain states."

"We are particularly excited about this observation since it sheds new light on the role of up/down states in the brain, and it also provides a reason as to why cortical inhibition is so location-specific. With the new insights, we can start looking for algorithms that exploit the rapid changes in pairing between processing units, offering us more insight into the fundamental question of how the brain computes," concludes Gewaltig.


Original Submission

  • (Score: 2, Funny) by Rosco P. Coltrane on Tuesday July 16 2019, @12:32PM (2 children)

    by Rosco P. Coltrane (4757) on Tuesday July 16 2019, @12:32PM (#867515)

    as brilliantly demonstrated by the current POTUS: one neuron, totally random behavior.

    • (Score: 0) by Anonymous Coward on Tuesday July 16 2019, @12:52PM (1 child)

      by Anonymous Coward on Tuesday July 16 2019, @12:52PM (#867526)

      Randomly just got Democrats to rally behind the squad? [axios.com]

      • (Score: 3, Insightful) by ikanreed on Tuesday July 16 2019, @03:08PM

        by ikanreed (3164) Subscriber Badge on Tuesday July 16 2019, @03:08PM (#867579) Journal

        Unlikely given that just 16 hours before, Pelosi was attacking them for daring to have any values whatsoever, which is an unforgivable crime in congress.

  • (Score: 0) by Anonymous Coward on Tuesday July 16 2019, @12:49PM

    by Anonymous Coward on Tuesday July 16 2019, @12:49PM (#867523)

    ..."neuromorphic" computing isn't?

  • (Score: 2) by Bot on Tuesday July 16 2019, @01:18PM (1 child)

    by Bot (3902) on Tuesday July 16 2019, @01:18PM (#867535) Journal

    If inputs are processed by an array of neurons, randomness implements some kind of fuzzy logic.

    --
    Account abandoned.
    • (Score: 2) by c0lo on Tuesday July 16 2019, @01:33PM

      by c0lo (156) Subscriber Badge on Tuesday July 16 2019, @01:33PM (#867542) Journal

      If inputs are processed by an array of neurons, randomness implements some kind of fuzzy logic.

Got any other explanation for the "chronic cognitive dissonance condition" so many people live with?

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday July 16 2019, @01:47PM (1 child)

    by Anonymous Coward on Tuesday July 16 2019, @01:47PM (#867547)

    > To date traditional learning algorithms (such as those currently used in A.I. applications) assume that neurons are static units that merely integrate and re-scale incoming signals.

    Show me one person who assumes actual neurons are just like the nodes in a neural network.

    • (Score: 0) by Anonymous Coward on Wednesday July 17 2019, @04:40PM

      by Anonymous Coward on Wednesday July 17 2019, @04:40PM (#868067)

      Show me one person who assumes actual neurons are just like the nodes in a neural network.

      People trying to publish bullshit that looks better if neurons are assumed to be static units?

  • (Score: 3, Insightful) by hendrikboom on Tuesday July 16 2019, @03:41PM (2 children)

    by hendrikboom (1125) Subscriber Badge on Tuesday July 16 2019, @03:41PM (#867587) Homepage Journal

    The analysis was performed using cells from the Blue Brain's virtual rodent cortex.

    The *virtual* rodent cortex? Isn't that a piece of hardware? How can that tell us about actual rat neurons?

    • (Score: 1, Informative) by Anonymous Coward on Tuesday July 16 2019, @04:15PM (1 child)

      by Anonymous Coward on Tuesday July 16 2019, @04:15PM (#867596)

It's a really, really complex simulation that models the physical geometry of neurons, neurotransmitters, and all kinds of messenger molecules.

      That's why it's extremely computation-intensive.

      Doing the same thing with a real brain is rather hard/not yet possible, because the act of measuring a signal in a neuron is a destructive act, whether you use probes or fluorescent dyes...

      So measuring stuff on a large scale in vivo is challenging, and the neuron itself is horribly complex, like all life is...

      So they use the highest-fidelity model they have, because a model can be measured without perturbing it?
