Arthur T Knackerbracket has found the following story:
Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets' power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.
Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.
That's not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user's control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.
Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip's memory and processors.
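A quick back-of-envelope calculation shows why cutting memory traffic matters so much. The figures below are illustrative only (a hypothetical 256×256 fully connected layer, not the MIT chip's actual dimensions), but they make the general point: for a dense layer, fetching weights from memory dominates the data movement, so keeping computation next to the stored weights eliminates most of the traffic.

```python
# Illustrative only: count memory accesses for one hypothetical
# 256-input, 256-output fully connected layer.
inputs, outputs = 256, 256

weights = inputs * outputs       # one weight read per multiply: 65,536
activations = inputs + outputs   # input reads plus output writes: 512

# Fraction of all data movement that is weight traffic.
weight_fraction = weights / (weights + activations)
print(f"{weight_fraction:.2%} of memory traffic is weights")
```

If the weights stay put and the arithmetic happens where they are stored, nearly all of that shuttling disappears, which is the intuition behind the reported power savings.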
-- submitted from IRC
(Score: 2) by FatPhil on Wednesday February 28 2018, @08:52AM
Read any book on the theoretical model of neural nets, back to Minsky and beyond, and at no point are they imagined to be compute nodes which have to fetch and store data from an external memory. They are described as having their own internal state, which they communicate only, and directly, to other compute nodes. (While running, they also have a constant set of parameters, but passing those in is "programming" the net, not data that needs to flow more than once, and it flows in only one direction.)
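The textbook model described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: each node owns its parameters and internal state, parameters are written once when the net is "programmed", and values flow directly node-to-node rather than through a shared external memory.

```python
# Minimal sketch of the classical neural-net node model:
# parameters and state live inside the node, not in external RAM.
class Node:
    def __init__(self, weights, bias):
        self.weights = weights   # set once: this is "programming" the net
        self.bias = bias
        self.state = 0.0         # internal state, local to the node
        self.downstream = []     # direct connections to other nodes

    def receive(self, inputs):
        # Activation computed from locally held parameters only;
        # the result is passed directly onward, never stored externally.
        s = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        self.state = max(0.0, s)  # ReLU, for concreteness
        return self.state
```

For example, a node programmed with weights `[0.5, -1.0]` and bias `0.1`, fed the inputs `[2.0, 1.0]`, computes `0.5*2.0 - 1.0*1.0 + 0.1 = 0.1` without ever touching a memory bus.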
The fact that previous neural net processors, and all common processors, have had crappy implementations is no reason to herald this as novel. Those who care about the true cost of computation (taking area into account) have been ranting about the wastefulness of RAM for decades.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves