Intel Starts R&D Effort in Probabilistic Computing for AI
Intel announced today that it is forming a strategic research alliance to take artificial intelligence to the next level. Autonomous systems don't have good enough ways to respond to the uncertainties of the real world, nor to factor the uncertainties of their sensors into the decisions they need to make. According to Intel CTO Mike Mayberry, the answer is "probabilistic computing", which he says could be AI's next wave.
IEEE Spectrum: What motivated this new research thrust?
Mike Mayberry: We're trying to figure out what the next wave of AI is. The original wave of AI is based on logic and it's based on writing down rules; it's closest to what you'd call classical reasoning. The current wave of AI is around sensing and perception—using a convolutional neural net to scan an image and see if something of interest is there. Those two by themselves don't add up to all the things that human beings do naturally as they navigate the world.
[...] So we've been doing a certain amount of internal work and with academia, and we've decided that there's enough here that we're going to kick off a research community. The goal is to have people share what they know about it, collaborate on it, figure out how you represent probability when you write software, and how you construct computer hardware. We think this will be ... part of the third wave of AI. We don't think we're done there, we think there are other things as well, but this will be around probabilistic computing.
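Mayberry's question of "how you represent probability when you write software" can be made concrete with a small sketch. The example below is not from Intel; it is a generic Bayesian fusion of a prior belief with a noisy sensor reading (the classic Kalman-style Gaussian update), with all numbers chosen for illustration:

```python
# Minimal sketch: fuse a prior belief about an obstacle's distance with a
# noisy sensor reading, carrying the uncertainty (variance) explicitly
# instead of a single point estimate. Numbers are purely illustrative.

def fuse_gaussian(prior_mean, prior_var, meas_mean, meas_var):
    """Product of two Gaussians: the standard Kalman-style update."""
    k = prior_var / (prior_var + meas_var)        # gain: how much to trust the sensor
    mean = prior_mean + k * (meas_mean - prior_mean)
    var = (1 - k) * prior_var                     # fused estimate is less uncertain
    return mean, var

# Prior: obstacle ~10 m away, fairly uncertain (variance 4 m^2).
# Sensor: reads 12 m with variance 1 m^2 (more trusted than the prior).
mean, var = fuse_gaussian(10.0, 4.0, 12.0, 1.0)
print(mean, var)  # -> 11.6 0.8: pulled toward the sensor, uncertainty reduced
```

The point of such a representation is that downstream decisions can weigh the variance, not just the estimate.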
Intel embraces defective computing.
(Score: 0) by Anonymous Coward on Monday May 14 2018, @11:50AM (2 children)
"probabilistic computing"
Isn't this what caused all the problems with leaking CPUs, etc.? Perhaps they should focus on FIXING that before they start to add more "features".
(Score: 2) by takyon on Monday May 14 2018, @11:58AM (1 child)
The kind of customer that needs these can probably keep them walled off from the Internet.
(Score: 0) by Anonymous Coward on Monday May 14 2018, @02:53PM
Last I checked, speech and image recognition are the big AI users, and both are online.
(Score: 2) by JoeMerchant on Monday May 14 2018, @11:55AM (3 children)
it wants its fuzzy logic back.
(Score: 1) by suburbanitemediocrity on Monday May 14 2018, @12:10PM
Proven to be classical control. I read the paper.
(Score: 0) by Anonymous Coward on Monday May 14 2018, @02:53PM
Yes, fuzzy thinking has been possible for a long time [grin]... but oddly enough, I just peer reviewed a technical paper (automotive engineering) that claimed to use an improved version of fuzzy logic to control parts of a car. So it's not dead yet.
(Score: 3, Interesting) by Hyperturtle on Monday May 14 2018, @04:09PM
They might need it so their Optane storage performance numbers work as designed and the marketing as intended... as well as the overall CPU performance as a whole. (As to the Optanes, they work well as it is, but hopefully future iterations post-Spectre can work better... the 4K reads are still great, though.)
Speculative, probabilistic, cached wild guessing hoping for hits, local on-prem augmented AI with converged local off-line clouds... it's all pretty similar when the special marketing words are removed.
Being right every time is computationally slow and expensive... they would prefer to keep guessing at high speed. It works very well overall... (minus speculative spectres, of course.)
(Score: 0) by Anonymous Coward on Monday May 14 2018, @02:49PM
Let's call it "AI" because people are currently believing in it again.
(Score: 0) by Anonymous Coward on Monday May 14 2018, @03:03PM
Last summer Intel bought Mobileye to get into the self driving car game. Here's a very critical article in The Register from the other day, https://www.theregister.co.uk/2018/05/10/mobileyes_autonomous_cars/ [theregister.co.uk]
Comments are also good, in particular the last comment from "imispgh2", who claims to be a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force.
Here's one small cutting from the article,
And here is a quote from my selected comment (the last one currently posted at The Register),
(Score: 3, Interesting) by acid andy on Monday May 14 2018, @05:07PM (1 child)
When a neural net is constructed and trained for the sole purpose of identifying a set of features in another set of images, of course it can't do "all the things that human beings do naturally". The human brain has lots of specialized regions that have evolved to perform certain tasks. So why not have lots of neural networks? One for image recognition, one for sound, one for long term memory, one for forward planning, etc., and connect them all together?
I don't see any inherent limitation in the design of neural networks that prevents them being "probabilistic". Aren't the weights they apply to their inputs a way of representing probabilities? If it's non-determinism they're after, random noise can always be introduced. That's been done before (do a search on Stephen Thaler).
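The two mechanisms this comment mentions can both be shown in a few lines. The sketch below is generic, not anything Intel has described: a softmax turns a network's raw scores into values that behave like class probabilities, and adding random noise to the scores (in the spirit of the Thaler work the comment alludes to) makes the output non-deterministic. The scores are made up for illustration.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # raw output scores from some hypothetical classifier
probs = softmax(logits)     # probability-like: non-negative, sums to 1.0

# Non-determinism via injected noise: perturb the scores before the softmax,
# so repeated calls give different (but related) distributions.
noisy = softmax([s + random.gauss(0, 0.5) for s in logits])
```

Whether such outputs are *calibrated* probabilities is a separate question, which is arguably where "probabilistic computing" goes beyond what a plain softmax gives you.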
"Probabilistic Computing" smells like a buzzword to generate hype and investment to me.
(Score: 3, Informative) by crafoo on Monday May 14 2018, @10:37PM
It's hard to tell from the article. It might mean that they would like some measure of how likely false-positives are based on some measure of the real-time data coming in from the sensors? I don't know. It sounds interesting and it might be fun to look into it a bit further. There is probably a lot more to the subject than is easy to convey in a pop-tech article. They mention NN systems being overconfident in the answers derived from the NN. I take that to mean that yes, they want a reasonable and useful approach to gauging confidence in the NN answer based on the quality of the input test sample and maybe data from other types of sensors (weather conditions, speed, lighting conditions, whatever).
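One well-known (and generic, not Intel-specific) illustration of the overconfidence problem is temperature scaling: dividing the raw scores by a temperature T > 1 flattens the softmax, so the reported confidence is less extreme. In practice T would be fitted on held-out data; the value and scores below are illustrative only.

```python
import math

def softmax(scores):
    """Convert raw scores into a distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def calibrated(logits, temperature=2.0):
    """Temperature-scaled softmax: T > 1 softens overconfident outputs."""
    return softmax([s / temperature for s in logits])

logits = [6.0, 1.0, 0.5]          # one score dominates: near-certain prediction
raw_conf = max(softmax(logits))    # overconfident top-class probability
soft_conf = max(calibrated(logits))  # same prediction, tempered confidence
```

The prediction (argmax) is unchanged; only the stated confidence drops, which is exactly the kind of "gauging confidence in the NN answer" the comment is describing.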