
posted by martyb on Saturday March 24 2018, @10:55PM
from the I-call-dibs...-Scarecrow dept.

A new version of the NEST algorithm could dramatically reduce the amount of memory required to run a whole human brain simulation, while increasing simulation speed on current supercomputers:

During the simulation, a neuron's action potentials (short electric pulses) first need to be sent to all 100,000 or so small computers, called nodes, each equipped with a number of processors doing the actual calculations. Each node then checks which of all these pulses are relevant for the virtual neurons that exist on this node.
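In pseudocode terms, that is a broadcast-and-filter pattern. Here is a minimal Python sketch of the idea; the function and variable names are illustrative, not NEST's actual API:

    # Sketch of the exchange scheme described above: every node receives
    # every spike, then keeps only the ones relevant to its own neurons.
    # Names are illustrative; this is not NEST's actual code.

    def filter_relevant_spikes(all_spikes, local_target_sources):
        """all_spikes: list of (source_neuron_id, spike_time) pulses
        broadcast by every node. local_target_sources: set of source
        neuron ids that project onto neurons hosted on this node."""
        return [(src, t) for (src, t) in all_spikes
                if src in local_target_sources]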

That process requires one bit of information per processor for every neuron in the whole network. For a network of one billion neurons, a large part of the memory in each node is consumed by this single bit of information per neuron. Of course, the amount of computer memory required per processor for these extra bits per neuron increases with the size of the neuronal network. To go beyond the 1 percent and simulate the entire human brain would require the memory available to each processor to be 100 times larger than in today's supercomputers.
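The arithmetic behind that claim is easy to check. A back-of-the-envelope sketch (the 100-billion-neuron figure for a whole brain is a common estimate, not taken from the article):

    # One bit per neuron in the whole network, held on every processor.
    neurons = 1_000_000_000                  # 1 billion neurons (~1% of a brain)
    bytes_per_processor = neurons / 8        # 1 bit per neuron
    print(f"{bytes_per_processor / 1e6:.0f} MB per processor")  # -> 125 MB

    # A whole brain has roughly 100 billion neurons, so the same scheme
    # needs ~100x the memory per processor, matching the article's claim.
    print(f"{100 * bytes_per_processor / 1e9:.1f} GB per processor")  # -> 12.5 GB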

In future exascale computers, such as the post-K computer planned in Kobe and JUWELS at Jülich in Germany, the number of processors per compute node will increase, but the memory per processor and the number of compute nodes will stay the same.

Achieving whole-brain simulation on future exascale supercomputers

That's where the next-generation NEST algorithm comes in. At the beginning of the simulation, the new NEST algorithm will allow the nodes to exchange information about which data on neuronal activity needs to be sent, and where. Once this knowledge is available, the exchange of data between nodes can be organized such that a given node only receives the information it actually requires. That will eliminate the need for the additional bit for each neuron in the network.
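A minimal sketch of that setup-time exchange, assuming a simple subscription table keyed by node (all names here are hypothetical, not NEST's actual data structures):

    # Sketch of the next-generation idea: during setup, each node learns
    # which of its source neurons other nodes actually need to hear from,
    # so spikes can later be routed point-to-point instead of broadcast.
    from collections import defaultdict

    def build_send_table(local_sources, subscriptions):
        """local_sources: iterable of neuron ids hosted on this node.
        subscriptions: {node_id: set of source ids that node requires}.
        Returns {node_id: [local ids to forward there]}; a node with no
        interest in a neuron never sees its spikes, so the per-neuron
        bit on every processor is no longer needed."""
        table = defaultdict(list)
        local = set(local_sources)
        for node_id, wanted in subscriptions.items():
            hits = sorted(local & wanted)
            if hits:
                table[node_id] = hits
        return dict(table)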

With memory consumption under control, simulation speed will then become the main focus. For example, a large simulation of 0.52 billion neurons connected by 5.8 trillion synapses running on the supercomputer JUQUEEN in Jülich previously required 28.5 minutes to compute one second of biological time. With the improved algorithm, the time will be reduced to just 5.2 minutes, the researchers calculate.
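For scale, the quoted timings work out to roughly a 5.5x speedup, though still far from real time. A quick check:

    # Speedup implied by the quoted timings (28.5 min -> 5.2 min for one
    # second of biological time on JUQUEEN).
    old_minutes, new_minutes = 28.5, 5.2
    print(f"speedup: {old_minutes / new_minutes:.1f}x")            # -> 5.5x
    print(f"still {new_minutes * 60:.0f}x slower than real time")  # -> 312x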

Also at the Human Brain Project.

Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers (open, DOI: 10.3389/fninf.2018.00002) (DX)

Previously: Largest neuronal network simulation achieved using K computer (2013)


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1) by tftp (806) on Sunday March 25 2018, @02:09AM (#657751) Homepage
    One bit per neuron might be too conservative an estimate. It's more like one byte per synapse, since synaptic interfaces through the cell membrane are adjustable.