The Second Coming of Neuromorphic Computing

posted by CoolHand on Friday February 12 2016, @01:54AM
from the making-smart-computers dept.

The Next Platform has an article about the waning of interest in brain-inspired neuromorphic computing since 2013 (the field has not yet delivered a "revolution in computing") and some of the developments in the field since then:

There have been a couple of noteworthy investments that have fed existing research into neuromorphic architectures. The DARPA SyNAPSE program was one such effort which, beginning in 2008, eventually yielded IBM's "True North" chip: a 4096-core device whose cores each carry 256 programmable "neurons" that act much like synapses in the brain, resulting in a highly energy-efficient architecture that, while fascinating, demands an entire rethink of programming approaches. Since that time, other funding from scientific sources, including the Human Brain Project, has pushed the area further, leading to the creation of the SpiNNaker neuromorphic device, although the field still lacks a single architecture that appears best for neuromorphic computing in general.

The problem is really that there is as yet no "general" purpose for such devices and no widely accepted device or programming approach. Much of this stems from the fact that many of the existing projects are built around specific goals that vary widely. For starters, there are projects in broader neuromorphic engineering that are centered more on robotics than on large-scale computing applications (and vice versa). One of several computing-oriented approaches, taken by Stanford University's Neurogrid project (presented in hardware in 2009 and still an ongoing research endeavor), was to simulate the human brain, so both the programming approach and the hardware design are modeled as closely on the brain as possible. Others are oriented more toward solving computer-science challenges of power consumption and computational capability using the same concepts; these include a 2011 effort at MIT, work at HP on memristors as a key to neuromorphic device creation, and various other smaller projects, including one spin-off of the True North architecture we described here.

[...] "Neuromorphic computing is still in its beginning stages," Dr. Catherine Schuman, a researcher working on such architectures at Oak Ridge National Laboratory tells The Next Platform. "We haven't nailed down a particular architecture that we are going to run with. True North is an important one, but there are other projects looking at different ways to model a neuron or synapse. And there are also a lot of questions about how to actually use these devices as well, so the programming side of things is just as important."

The programming approach varies from device to device, as Schuman explains. "With True North, for example, the best results come from training a deep learning network offline and moving that program onto the chip. Other, more biologically inspired implementations like Neurogrid, for instance, are based on spike-timing-dependent plasticity."
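
For readers unfamiliar with the term, here is a minimal sketch of the classic pair-based spike-timing-dependent plasticity rule in Python. This is a textbook illustration, not code for Neurogrid or True North; the amplitudes and time constants are invented for the example.

    import math

    # Pair-based STDP: strengthen a synapse when the presynaptic spike
    # precedes the postsynaptic spike (causal pairing), weaken it when
    # the order is reversed. Constants below are arbitrary.
    A_PLUS, A_MINUS = 0.01, 0.012     # potentiation/depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # exponential decay constants (ms)

    def stdp_delta_w(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:    # pre fired first: potentiate
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        if dt < 0:    # post fired first: depress
            return -A_MINUS * math.exp(dt / TAU_MINUS)
        return 0.0

    # A presynaptic spike at 10 ms followed by a postsynaptic spike at
    # 15 ms yields a small positive (strengthening) weight change.
    print(stdp_delta_w(10.0, 15.0))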

The approach Schuman's team is working on at Oak Ridge and the University of Tennessee is based on a neuromorphic architecture called NIDA, short for Neuroscience Inspired Dynamic Architecture, which was implemented on an FPGA in 2014 and now has a full SDK and tooling around it. The hardware implementation, called the Dynamic Adaptive Neural Network Array (DANNA), differs from other approaches to neuromorphic computing in that it allows for programmability of structure and is trained using an evolutionary optimization approach; again, modeled as closely as possible on what we know (and still don't know) about the way our brains work.
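
The DANNA toolchain itself is not shown here, but the evolutionary idea is easy to sketch: keep a population of candidate networks, score them, and breed mutated copies of the best. The generic Python sketch below is our own illustration; the names, rates, and toy fitness function are all invented, and DANNA's real genome also encodes network structure, not just a parameter vector.

    import random

    def evolve(fitness, n_params=16, pop_size=50, generations=100,
               mutation_rate=0.1, sigma=0.5):
        """Generic evolutionary optimization over a parameter vector."""
        pop = [[random.uniform(-1, 1) for _ in range(n_params)]
               for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            elite = ranked[:pop_size // 5]  # keep the top 20%
            children = []
            while len(elite) + len(children) < pop_size:
                parent = random.choice(elite)
                # Gaussian mutation applied gene-by-gene.
                children.append([g + random.gauss(0, sigma)
                                 if random.random() < mutation_rate else g
                                 for g in parent])
            pop = elite + children
        return max(pop, key=fitness)

    # Toy fitness: prefer parameter vectors close to all ones.
    best = evolve(lambda p: -sum((x - 1.0) ** 2 for x in p))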

Schuman stresses the exploratory nature of existing neuromorphic computing efforts, including those at the lab, but does see a host of new opportunities for them on the horizon, presuming the programming models can be developed to suit both domain scientists and computer scientists. There are, she notes, two routes for neuromorphic devices in the next several years. First, as embedded processors on sensors and other devices, given their low power consumption and high-performance processing capability. Second, and perhaps more important for a research center like Oak Ridge National Lab, neuromorphic devices could act "as co-processors on large-scale supercomputers like Titan today where the neuromorphic processor would sit alongside the traditional CPUs and GPU accelerators." Where they tend to shine most, and where her team is focusing its effort, is the role they might play in real-time data analysis.

Also mentioned: Qualcomm's Zeroth cognitive computing platform and the company's support for the efforts of Brain Corporation.


Original Submission

Related Stories

Novel Synaptic Architecture for Brain Inspired Computing 5 comments

Submitted via IRC for Fnord666

[...] In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

"In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms," Nandakumar says. "The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity."

Source: Novel synaptic architecture for brain inspired computing

Related: New Type of Memristors Used to Create a Limited Neural Net
The Second Coming of Neuromorphic Computing


Original Submission

IBM's Latest Attempt at a Brain-Inspired Computer 1 comment

A new brain-inspired architecture could improve how computers handle data and advance AI

IBM researchers are developing a new computer architecture, better equipped to handle increased data loads from artificial intelligence. Their designs draw on concepts from the human brain and significantly outperform conventional computers in comparative studies. They report on their recent findings in the Journal of Applied Physics, from AIP Publishing.

[...] The IBM team drew on three different levels of inspiration from the brain. The first level exploits a memory device's state dynamics to perform computational tasks in the memory itself, similar to how the brain's memory and processing are co-located. The second level draws on the brain's synaptic network structures as inspiration for arrays of phase change memory (PCM) devices to accelerate training for deep neural networks. Lastly, the dynamic and stochastic nature of neurons and synapses inspired the team to create a powerful computational substrate for spiking neural networks.
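
The first of those levels (computing inside the memory itself) is commonly illustrated with an analog crossbar: weights stored as conductances perform a matrix-vector multiply in place, since currents obey I = G·V and sum along each row. A toy numeric sketch with invented values:

    # Conductances G[i][j] encode the weight matrix; applying voltages
    # V[j] to the columns produces row currents I[i] = sum_j G[i][j]*V[j],
    # i.e., the multiply happens where the data is stored.
    G = [[0.5, 1.0, 0.2],
         [0.1, 0.3, 0.9]]   # conductances, arbitrary units
    V = [1.0, 0.5, 2.0]     # input voltages

    I = [sum(g * v for g, v in zip(row, V)) for row in G]
    print(I)  # [1.4, 2.05] -- G @ V without moving the weights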

[...] Last year, they ran an unsupervised machine learning algorithm on a conventional computer and a prototype computational memory platform based on phase change memory devices. "We could achieve 200 times faster performance in the phase change memory computing systems as opposed to conventional computing systems," Sebastian said. "We always knew they would be efficient, but we didn't expect them to outperform by this much." The team continues to build prototype chips and systems based on brain-inspired concepts.

Biosensor response from target molecules with inhomogeneous charge localization (DOI: 10.1063/1.5036538) (DX)

Previously: IBM Chip Processes Data Similar to the Way Your Brain Does
IBM Builds New Form of Memory that Could Advance Brain-Inspired Computers
Simulating Neuromorphic Supercomputing Designs
The Second Coming of Neuromorphic Computing
Novel Synaptic Architecture for Brain Inspired Computing


Original Submission

SpiNNaker Neuromorphic Supercomputer Reaches 1 Million Cores 19 comments

'Human brain' supercomputer with 1 million processors switched on for first time

The world's largest neuromorphic supercomputer, designed and built to work in the same way a human brain does, has been fitted with its landmark one-millionth processor core and is being switched on for the first time.

[...] To reach this point has taken £15 million in funding, 20 years in conception, and over 10 years in construction, with the initial build starting back in 2006. The project was initially funded by the EPSRC and is now supported by the European Human Brain Project. It is being switched on for the first time on Friday, 2 November.

[...] The SpiNNaker machine, which was designed and built in The University of Manchester's School of Computer Science, can model more biological neurons in real time than any other machine on the planet.

SpiNNaker.

Also at CNN.

Related: Simulating Neuromorphic Supercomputing Designs
The Second Coming of Neuromorphic Computing
IBM's Latest Attempt at a Brain-Inspired Computer


Original Submission

An Optimized Structure of Memristive Device for Neuromorphic Computing Systems 7 comments

An optimized structure of memristive device for neuromorphic computing systems:

Lobachevsky University scientists have implemented a new variant of the metal-oxide memristive device, which holds promise for use in RRAM (resistive random access memory) and novel computing systems, including neuromorphic ones.

Variability (lack of reproducibility) of resistive switching parameters is the key challenge on the way to new applications of memristive devices. This variability of parameters in "metal-oxide-metal" device structures is determined by the stochastic nature of the migration of oxygen ions and/or oxygen vacancies responsible for the oxidation and reduction of conductive channels (filaments) near the metal/oxide interface. It is further compounded by the degradation of device parameters in the case of uncontrolled oxygen exchange.

Traditional approaches to controlling the memristive effect include the use of special electrical field concentrators and the engineering of materials/interfaces in the memristive device structure, which typically require a more complex technological process for fabricating memristive devices.
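
To make "variability" concrete, here is a toy Monte Carlo sketch in Python: each switching cycle the filament forms slightly differently, so a switching parameter such as the SET voltage behaves like a random draw rather than a fixed device constant. The distribution and the numbers are invented for illustration.

    import random, statistics

    # Cycle-to-cycle variability: sample the SET voltage over many cycles.
    set_voltages = [random.gauss(1.2, 0.15) for _ in range(1000)]  # volts
    print(statistics.mean(set_voltages))   # ~1.2 V nominal
    print(statistics.stdev(set_voltages))  # ~0.15 V spread; device engineering
    # like that described above aims to shrink this spread.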

This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Friday February 12 2016, @04:00AM (#303078)

    Think about the massive computing power of the universe for a moment.

    All of life as we know it evolved over several billion years in the three-dimensional space of our planet, acting on particles smaller than a grain of sand.
    Thousands of square kilometers of "processing power" times 3+ billion years. That's just one planet. Billions of other planets have had the same opportunity to come up with some form of life but don't have the right ingredients. It goes to show: if you don't have the right environment, life won't happen and you'll still have a dead planet.

    I think about this when I want to evolve my own artificial life inside a computer. It's a project that has intrigued me for years. But even with a massively parallel computer, we have only a teeny tiny amount of processing power - it's almost insignificant. I wonder what kind of processing it _would_ take to evolve complex, intelligent ALife inside a computer from scratch. And what would the right environment be (to avoid a 'dead planet')? I'm sure someone out there is working on this very problem - links welcome!

    • (Score: 4, Insightful) by Non Sequor (1005) on Friday February 12 2016, @05:28AM (#303096) Journal

      A dead planet lacks chemical reaction pathways to other chemical states. Over time, life on Earth has been essentially moving towards saturation of all means of gathering ambient energy and marshaling it to some purpose.

      It's generally expected that a perfect simulation of a chunk of physical real estate is always going to require a computer physically larger than that real estate. You can try to simulate a simpler planet, but that means paring down the chemical reaction pathways. Whatever life you can simulate in your simpler planet has fewer options to work with. Life on Earth may very well have needed all of the options it had. There have been multiple massive changes in how life gets its energy. These occurred once evolution had built up a pool of traits that could be combined to make use of a resource that was inaccessible before.

      --
      Write your congressman. Tell him he sucks.
  • (Score: 2) by hemocyanin (186) on Friday February 12 2016, @07:56AM (#303124) Journal

    This story was rejected [soylentnews.org] -- maybe my headline was too flip -- but researchers at the Salk Institute recently discovered that the brain can store about a petabyte of data: each synapse can encode about 4.7 bits, because synapses (which fall into small, medium, and large categories) can also dynamically and periodically change their size by up to 8%. Anyway, it is sort of interesting in this context, where it sounds like, based on content in the first link, neuromorphic researchers are looking beyond a binary world.

    http://www.salk.edu/news-release/memory-capacity-of-brain-is-10-times-more-than-previously-thought/ [salk.edu]
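
    (For what it's worth, the 4.7-bit figure falls out of the study's roughly 26 distinguishable synapse-size categories: log2(26) ≈ 4.7 bits per synapse.)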

    Also see trinary or ternary computer: https://en.wikipedia.org/wiki/Ternary_computer [wikipedia.org] (saves power)

    Anyway, from the Salk article:

    “The implications of what we found are far-reaching,” adds Sejnowski. “Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”

    The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultraprecise, but energy-efficient, computers, particularly ones that employ “deep learning” and artificial neural nets—techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.

    “This trick of the brain absolutely points to a way to design better computers,” says Sejnowski. “Using probabilistic transmission turns out to be as accurate and to require much less energy for both computers and brains.”