A new way of creating a neural network using specially formulated memristors has been described by a team of researchers from Stony Brook University and the University of California Santa Barbara. The process has the potential to place an entire neural network on a single chip:
The system produced by the authors here involved only a 12-by-12 grid of memristors, so it's pretty limited in capacity. But Robert Legenstein, from Austria's Graz University of Technology, writes in an accompanying perspective that "If this design can be scaled up to large network sizes, it will affect the future of computing."
That's because there are still many tasks where a neural network can easily outperform traditional computing hardware—and do so at a fraction of the energy cost. Even on a 30 nm process, it would be possible to place 25 million cells in a square centimeter, with 10,000 synapses on each cell. And all that would dissipate about a watt.
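A quick back-of-the-envelope check of those figures (the per-synapse power is derived here, not stated in the article):

```python
# Numbers quoted in the article; the per-synapse result is a derived estimate.
cells_per_cm2 = 25_000_000      # 25 million cells per square centimeter
synapses_per_cell = 10_000      # 10,000 memristive synapses per cell
total_power_w = 1.0             # roughly a watt for the whole array

synapses_per_cm2 = cells_per_cm2 * synapses_per_cell
power_per_synapse_w = total_power_w / synapses_per_cm2

print(f"{synapses_per_cm2:.1e} synapses/cm^2")     # 2.5e+11
print(f"{power_per_synapse_w:.1e} W per synapse")  # 4.0e-12, i.e. ~4 picowatts
```

Four picowatts per synapse is what makes the energy argument compelling next to conventional digital hardware.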
Training and operation of an integrated neuromorphic network based on metal-oxide memristors [abstract]
(Score: 3, Interesting) by Immerman on Thursday May 07 2015, @11:32PM
Seems like it should be possible, so long as you don't care about perfect accuracy. You'd probably have to include some auxiliary circuitry, though, to use the crossbar wiring to measure the resistance of each memristor and then export it to an external medium (probably digital) at the desired level of precision. You could then imprint the same state on a new chip (or the same one, if it stopped operating properly). Of course, since you're dealing with analog components, there would always be some "noise" inherent in the reading/imprinting process, so you'd never get *exactly* the same state you started with, but you could probably get close enough to retain substantially the same functionality.
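The read/digitize/reimprint idea above can be sketched as a toy simulation. Everything here is hypothetical (the noise levels, conductance range, and 8-bit storage precision are illustrative assumptions, not from the paper), but it shows why the copy is close rather than exact:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained 12x12 crossbar: memristor conductances in siemens.
trained = rng.uniform(1e-5, 1e-4, size=(12, 12))

def read_state(grid, read_noise=0.01):
    """Measure each memristor through the crossbar; analog reads carry noise."""
    return grid * (1 + read_noise * rng.standard_normal(grid.shape))

def quantize(grid, bits=8):
    """Digitize to a chosen precision for storage on an external medium."""
    lo, hi = grid.min(), grid.max()
    levels = 2**bits - 1
    codes = np.round((grid - lo) / (hi - lo) * levels)
    return codes / levels * (hi - lo) + lo

def imprint(stored, write_noise=0.01):
    """Program the stored state onto a fresh chip; writes are also noisy."""
    return stored * (1 + write_noise * rng.standard_normal(stored.shape))

copied = imprint(quantize(read_state(trained)))
rel_err = np.max(np.abs(copied - trained) / trained)
print(f"worst-case relative error: {rel_err:.1%}")  # small but nonzero
```

The copied state lands within a few percent of the original everywhere, which for a trained neural network is typically close enough to preserve its behavior.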
On the plus side, you could also potentially use the same data to manufacture "hardwired" neural nets, using standard resistors to mimic the state of an optimally trained network.
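For the "hardwired" variant, each exported resistance would get snapped to the nearest commercially available part. A minimal sketch, assuming the standard E24 resistor series (the `nearest_e24` helper is hypothetical):

```python
import math

# E24 series: the standard 5%-tolerance resistor values per decade.
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def nearest_e24(resistance_ohms):
    """Return the closest E24 standard value to the target resistance."""
    decade = math.floor(math.log10(resistance_ohms))
    candidates = [m * 10**d for d in (decade, decade + 1) for m in E24]
    return min(candidates, key=lambda c: abs(c - resistance_ohms))

print(f"{nearest_e24(4650):.0f} ohms")  # 4700 ohms (4.7k)
```

Quantizing to E24 introduces up to a few percent of error per weight, comparable to the read/write noise of copying between memristor chips.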