A new way to solve the ‘hardest of the hard’ computer problems:
A relatively new type of computing that mimics the way the human brain works is transforming how scientists tackle some of the most difficult information-processing problems.
Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed.
In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer.
With current state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.
[...] Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the "hardest of the hard" computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said.
[...] In this study, Gauthier and his colleagues investigated that question and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time. They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect. Their next-generation reservoir computing was a clear winner over today's state of the art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model.
[...] But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said.
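For readers unfamiliar with the benchmark, the "weather system developed by Edward Lorenz" is the Lorenz-63 system of three coupled ODEs. A minimal sketch of generating its chaotic trajectory follows; the forward-Euler integrator and step size are simplifying assumptions on my part (the paper would use a proper ODE solver), but the standard parameters (sigma=10, rho=28, beta=8/3) are the ones Lorenz used.

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, state=(1.0, 1.0, 1.0),
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 equations with forward Euler.

    dx/dt = sigma * (y - x)
    dy/dt = x * (rho - z) - y
    dz/dt = x * y - beta * z
    """
    traj = np.empty((n_steps, 3))
    x, y, z = state
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj
```

Forecasting where this trajectory goes next, given only past observations, is the task on which the new method was benchmarked.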
Is this new wine in an old bottle, or old wine in a new bottle?
Journal Reference:
Daniel J. Gauthier, Erik Bollt, Aaron Griffith, et al. Next generation reservoir computing [open], Nature Communications (2021). DOI: 10.1038/s41467-021-25801-2
Reservoir Computing on Wikipedia.
(Score: 5, Interesting) by Anonymous Coward on Tuesday September 21 2021, @05:59PM (7 children)
It's old wine - but distilled into brandy.
It's an old bottle - but shined up and relabeled.
The short version: they use a neural network (with some specific aspects to its training and parameters, hence the specific name of reservoir computing) to model a complex problem space with reasonable (useful) accuracy. By proving mathematically that the neural network is equivalent to a different type of mathematical approach (a nonlinear vector autoregression machine), they can save a lot of time and money by solving that in a known way.
Summary: they mathed hard and found a simpler implementation algorithm to achieve the same result.
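To make the "known way" concrete: a nonlinear vector autoregression builds features directly from delayed copies of the observed state plus polynomial products of them, then fits a linear readout by ridge regression (a single linear solve, instead of training a reservoir). A minimal sketch, assuming quadratic features and a one-step-ahead target; the function names and the toy demo are mine, not from the paper.

```python
import numpy as np
from itertools import combinations_with_replacement

def nvar_features(X, k=2):
    """Feature vector per time step: a constant, the last k states
    (delay embedding), and all quadratic products of those entries."""
    n, d = X.shape
    rows = []
    for t in range(k - 1, n):
        lin = X[t - k + 1:t + 1].ravel()  # current + delayed states
        quad = [lin[i] * lin[j]
                for i, j in combinations_with_replacement(range(lin.size), 2)]
        rows.append(np.concatenate(([1.0], lin, quad)))
    return np.asarray(rows)

def fit_nvar(X, k=2, ridge=1e-6):
    """Fit a one-step-ahead predictor with ridge regression:
    solve (Phi^T Phi + ridge*I) W = Phi^T Y for the readout W."""
    Phi = nvar_features(X[:-1], k)  # features built from past states
    Y = X[k:]                       # next-step targets
    W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                        Phi.T @ Y)
    return W
```

Because the nonlinearity x*(1-x) of a chaotic logistic map is exactly a quadratic, this tiny model can learn that map to near machine precision, which illustrates why the feature choice matters.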
This is basically a win for algorithmic analysis. Well done, lads and lasses. They can now do process modeling/forecasting faster than before. This doesn't guarantee correctness in detail, but broad-strokes outcome matching.
Useful follow-ups would include analyses of how much more computing prediction you can squeeze out of the same computing resources as a consequence of this, and how much more accuracy you can buy this way.
But it's still, in the big picture, the same old pattern-matching/prediction/automated model creation system as before, basically small AI.
The one real exception, and the sleeper element here, is that by producing equations rather than just a neural black box, we have an intelligible set of explanatory tools that we can use to understand the model. This is where things blow open as wide as the sky, but that is largely glossed over or mentioned in passing.
(Score: 0) by Anonymous Coward on Tuesday September 21 2021, @07:24PM (3 children)
Thank you for this quick summary. I actually clicked into the article to ask if anyone had a TLDR version to share, as the article summary was not very elucidating.
Oh good lord, I went to the journal article and in the very first sentence of the abstract they mention that reservoir computing is "best-in-class." I was fully expecting to see further down that it had also won the J.D. Power & Associates award for something (note for our non-North American readers, I hope your car commercials aren't flooded with these phrases).
(Score: 1, Informative) by Anonymous Coward on Tuesday September 21 2021, @07:36PM (2 children)
You're welcome.
Technically, it does matter that it is best-in-class, because it means that any improvement in speed and efficiency means that we're pushing the envelope out further.
Practically, the ability to ascribe explanatory power to input variables as per a set of equations is the part that changes this from a quantitative improvement, to a qualitative improvement, but the article is light on that.
(Score: 0) by Anonymous Coward on Tuesday September 21 2021, @11:36PM (1 child)
BINGO! Useful to energistically e-enable 24/365 ideas.
https://www.atrixnet.com/bs-generator.html [atrixnet.com]
(Score: 2) by Runaway1956 on Wednesday September 22 2021, @12:11AM
I generated a few phrases. They could be in trouble for plagiarism. I swear I've seen some of those phrases in corporate mission statements. Just like politician speak, they can talk for hours, and say nothing.
Abortion is the number one killer of children in the United States.
(Score: 4, Interesting) by FatPhil on Tuesday September 21 2021, @10:29PM (1 child)
This is why AI upscalers that let you "enhance" CCTV footage (or camphone UAP "evidence") are possibly more harmful than they are useful. We've found a body, so it must have a head and a face - and I know what faces look like, so let's paint one on!
So I'd be circumspect until it's shown not to be just a glorified ELIZA. Did the researchers try to *break it*, or simply give it situations that they knew it ought to be able to solve? Fortunately, in academic AI nowadays, there's a fair bit of competition, so hopefully other teams will be trying to debunk any claims that don't actually stand up.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 0) by Anonymous Coward on Tuesday September 21 2021, @11:23PM
It's actually not a bad fit for a lot of stochastic and chaotic systems. It uses some tricks to allow for noisy inputs during training, but this paper wasn't about the quality of the tool for the purpose, as opposed to how to get the same results a hell of a lot faster and cheaper.
(Score: 3, Insightful) by bradley13 on Wednesday September 22 2021, @06:30AM
That's a really nice summary. And the last point: I missed that in my skim over TFA.
IMHO the big weakness of neural nets is that we have no fricking idea what they have actually learned. Witness Teslas thinking that the moon is a yellow traffic light. If these researchers have taken the first steps towards extracting the learned information from a neural net in an understandable and useful form, that really would be their biggest contribution. Their net was very small, though, so it's not clear if this is the case...
Everyone is somebody else's weirdo.
(Score: 2, Funny) by Anonymous Coward on Tuesday September 21 2021, @06:03PM (3 children)
From Wikipedia:
Well that clears things right up.
(Score: 1, Funny) by Anonymous Coward on Tuesday September 21 2021, @06:21PM
Think of it like a re-imagination of the paradigm.
(Score: 4, Funny) by Tork on Tuesday September 21 2021, @07:11PM (1 child)
Boy it's not very often that us Star Trek Voyager fans get an opportunity to be smug!
🏳️🌈 Proud Ally 🏳️🌈
(Score: 1, Insightful) by Anonymous Coward on Wednesday September 22 2021, @03:58PM
People who don't like Voyager are just cynical a-holes who have no taste.
(Score: 0) by Anonymous Coward on Tuesday September 21 2021, @08:13PM
Long Dong Silver lives!