
posted by Fnord666 on Tuesday July 16 2019, @12:16PM
from the something-is-different dept.

Submitted via IRC for AnonymousLuser

The way a single neuron processes information is never the same

How do neurons process information? Neurons are known to break down an incoming electrical signal into sub-units. Now, researchers at Blue Brain have discovered that dendrites, the neuron's tree-like receptors, work together—dynamically and depending on the workload—for learning. The findings further our understanding of how we think and may inspire new algorithms for artificial intelligence.

In a paper published in the journal Cell Reports, researchers at EPFL's Blue Brain Project, a Swiss Brain Research Initiative, have developed a new framework to work out how a single neuron in the brain operates.

The analysis was performed using cells from the Blue Brain's virtual rodent cortex. The researchers expect other types of neurons—non-cortical or human—to operate in the same way.

Their results show that when a neuron receives input, the branches of the elaborate tree-like receptors extending from the neuron, known as dendrites, functionally work together in a way that is adjusted to the complexity of the input.

The strength of a synapse determines how strongly a neuron feels an electric signal coming from other neurons, and the act of learning changes this strength. By analyzing the "connectivity matrix" that determines how these synapses communicate with each other, the algorithm establishes when and where synapses group into independent learning units from the structural and electrical properties of dendrites. In other words, the new algorithm determines how the dendrites of neurons functionally break up into separate computing units and finds that they work together dynamically, depending on the workload, to process information.
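To make the grouping idea concrete, here is a minimal sketch (not the authors' published code) of one way dendritic sites could be clustered into independent subunits: threshold a matrix of normalized electrical couplings between sites and take connected components. The matrix values and the 0.5 threshold are invented for illustration.

    # Hypothetical sketch, not the paper's exact method: cluster dendritic
    # sites into independent subunits by thresholding an electrical-coupling
    # matrix and taking connected components. All numbers are invented.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    # coupling[i, j]: normalized electrical coupling between dendritic sites
    # i and j (1.0 = same compartment, 0.0 = electrically independent).
    coupling = np.array([
        [1.00, 0.80, 0.10, 0.05],
        [0.80, 1.00, 0.10, 0.05],
        [0.10, 0.10, 1.00, 0.70],
        [0.05, 0.05, 0.70, 1.00],
    ])

    def independent_subunits(coupling, threshold=0.5):
        """Sites whose mutual coupling exceeds the threshold merge into one
        computational subunit; weakly coupled sites form separate units."""
        adjacency = csr_matrix(coupling >= threshold)
        n_units, labels = connected_components(adjacency, directed=False)
        return n_units, labels

    n_units, labels = independent_subunits(coupling)
    print(n_units, labels)   # 2 subunits: sites {0, 1} and {2, 3}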

[...] To date, traditional learning algorithms (such as those currently used in A.I. applications) assume that neurons are static units that merely integrate and re-scale incoming signals. By contrast, the results show that the number and size of the independent subunits can be controlled by balanced input or shunting inhibition. The researchers propose that this temporary control of the compartmentalization constitutes a powerful mechanism for the branch-specific learning of input features.
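Continuing the hypothetical sketch above (reusing coupling and independent_subunits), shunting inhibition can be modelled crudely as an attenuation factor on the off-diagonal couplings: the stronger the attenuation, the more independent subunits the same dendritic tree breaks into. The attenuation values below are invented.

    # Crude illustration: shunting inhibition attenuates coupling between
    # dendritic sites, so the same tree splits into more independent subunits.
    # Attenuation factors are invented; 1.0 means no inhibition.
    def apply_shunt(coupling, attenuation):
        shunted = coupling * attenuation
        np.fill_diagonal(shunted, 1.0)   # a site is always coupled to itself
        return shunted

    for attenuation in (1.0, 0.7, 0.4):
        n_units, _ = independent_subunits(apply_shunt(coupling, attenuation))
        print(attenuation, n_units)      # 1.0 -> 2, 0.7 -> 3, 0.4 -> 4 subunits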

"The method finds that in many brain states, neurons have far fewer parallel processors than expected from dendritic branch patterns. Thus, many synapses appear to be in 'grey zones' where they do not belong to any processing unit," explains lead scientist and first author Willem Wybo. "However, in the brain, neurons receive varying levels of background input and our results show that the number of parallel processors varies with the level of background input, indicating that the same neuron might have different computational roles in different brain states."

"We are particularly excited about this observation since it sheds a new light on the role of up/down states in the brain and it also provides a reason as to why cortical inhibition is so location-specific. With the new insights, we can start looking for algorithms that exploit the rapid changes in pairing between processing units, offering us more insight into the fundamental question of how the brain computes," concludes Gewaltig.


Original Submission

 
  • (Score: 3, Insightful) by hendrikboom (1125) on Tuesday July 16 2019, @03:41PM (#867587) (2 children)

    The analysis was performed using cells from the Blue Brain's virtual rodent cortex.

    The *virtual* rodent cortex? Isn't that a piece of hardware? How can that tell us about actual rat neurons?

  • (Score: 1, Informative) by Anonymous Coward on Tuesday July 16 2019, @04:15PM (#867596) (1 child)

    It's a really, really complex simulation that models the physical geometry of neurons, neurotransmitters and all kinds of messenger molecules.

    That's why it's extremely computation-intensive.

    Doing the same thing with a real brain is rather hard/not yet possible, because measuring a signal inside a neuron is itself a destructive act, whether you use probes or fluorescent dyes...

    So measuring things at large scale in vivo is challenging, and the neuron itself is horribly complex, like all life is...

    So they use the highest-fidelity model they have, because a model can be measured without perturbing it.
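
    For a rough sense of the cost, here's a toy sketch (invented round numbers, nothing from Blue Brain's actual models) of a single passive multi-compartment neuron stepped with forward Euler: it already needs tens of millions of state updates per simulated second, before any ion channels, synapses or plasticity are added.

        # Toy passive multi-compartment neuron (forward Euler on a 1-D cable).
        # All parameters are invented round numbers; real morphologically
        # detailed models add active channels, synapses and plasticity on top.
        import numpy as np

        n_comp  = 1000            # dendritic compartments in one model neuron
        dt      = 25e-6           # time step: 25 microseconds
        steps   = int(1.0 / dt)   # one second of biological time
        e_rest  = -65.0           # resting potential (mV)
        g_leak  = 1.0             # leak conductance (arbitrary units)
        g_axial = 10.0            # coupling between neighbouring compartments

        v = np.full(n_comp, e_rest)
        i_ext = np.zeros(n_comp)
        i_ext[0] = 5.0            # constant current injected at one end

        for _ in range(steps):
            axial = np.zeros_like(v)
            axial[1:]  += g_axial * (v[:-1] - v[1:])   # from left neighbour
            axial[:-1] += g_axial * (v[1:] - v[:-1])   # from right neighbour
            v += (-g_leak * (v - e_rest) + axial + i_ext) * dt

        print(n_comp * steps)     # 40,000,000 updates for one neuron-second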