Intel Starts R&D Effort in Probabilistic Computing for AI

posted by Fnord666 on Monday May 14 2018, @11:41AM
from the maybe? dept.

Intel announced today that it is forming a strategic research alliance to take artificial intelligence to the next level. Autonomous systems don't have good enough ways to respond to the uncertainties of the real world, and they don't have a good enough way to understand how the uncertainties of their sensors should factor into the decisions they need to make. According to Intel CTO Mike Mayberry, the answer is "probabilistic computing", which he says could be AI's next wave.

IEEE Spectrum: What motivated this new research thrust?

Mike Mayberry: We're trying to figure out what the next wave of AI is. The original wave of AI is based on logic and it's based on writing down rules; it's closest to what you'd call classical reasoning. The current wave of AI is around sensing and perception—using a convolutional neural net to scan an image and see if something of interest is there. Those two by themselves don't add up to all the things that human beings do naturally as they navigate the world.

[...] So we've been doing a certain amount of internal work and with academia, and we've decided that there's enough here that we're going to kick off a research community. The goal is to have people share what they know about it, collaborate on it, figure out how you represent probability when you write software, and how you construct computer hardware. We think this will be ... part of the third wave of AI. We don't think we're done there, we think there are other things as well, but this will be around probabilistic computing.
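One way to picture "how you represent probability when you write software" is the textbook Gaussian sensor-fusion update sketched below. This is illustrative only; the function, variable names, and numbers are invented for the example and are not from Intel or the interview.

```python
# Illustrative sketch (not from the article): represent a belief as a mean
# plus a variance, and update it with a noisy sensor reading using the
# standard 1-D Gaussian (Kalman-style) fusion rule.

def fuse(prior_mean, prior_var, measurement, measurement_var):
    """Combine a Gaussian prior belief with a Gaussian measurement."""
    gain = prior_var / (prior_var + measurement_var)  # how much to trust the sensor
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Example: we believe an obstacle is about 10 m away but are fairly unsure
# (variance 4.0); a lidar reading of 8.5 m arrives with known noise (variance 1.0).
mean, var = fuse(prior_mean=10.0, prior_var=4.0, measurement=8.5, measurement_var=1.0)
print(f"estimate: {mean:.2f} m, remaining uncertainty (variance): {var:.2f}")
# The decision layer gets both a best guess and how much to trust it,
# rather than a bare number.
```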

Intel embraces defective computing.


Original Submission

  • (Score: 0) by Anonymous Coward on Monday May 14 2018, @11:50AM (2 children)

    by Anonymous Coward on Monday May 14 2018, @11:50AM (#679531)

    "probabilistic computing"
Isn't this what caused all the problems with leaking CPUs etc.? Perhaps they should focus on FIXING that before they start to add more "features".

    • (Score: 2) by takyon on Monday May 14 2018, @11:58AM (1 child)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 14 2018, @11:58AM (#679536) Journal

      The kind of customer that needs these can probably keep them walled off from the Internet.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Monday May 14 2018, @02:53PM

        by Anonymous Coward on Monday May 14 2018, @02:53PM (#679582)

        Last I checked speech and image recognition are the big AI users and both are online.

  • (Score: 2) by JoeMerchant on Monday May 14 2018, @11:55AM (3 children)

    by JoeMerchant (3937) on Monday May 14 2018, @11:55AM (#679534)

    it wants its fuzzy logic back.

    --
    🌻🌻 [google.com]
    • (Score: 1) by suburbanitemediocrity on Monday May 14 2018, @12:10PM

      by suburbanitemediocrity (6844) on Monday May 14 2018, @12:10PM (#679539)

Fuzzy logic was proven to be equivalent to classical control. I read the paper.

    • (Score: 0) by Anonymous Coward on Monday May 14 2018, @02:53PM

      by Anonymous Coward on Monday May 14 2018, @02:53PM (#679583)

Yes, fuzzy thinking has been possible for a long time [grin]... but oddly enough, I just peer-reviewed a technical paper (automotive engineering) that claimed to use an improved version of fuzzy logic to control parts of a car. So it's not dead yet.
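For readers who haven't met the technique the thread is riffing on: a fuzzy controller maps crisp inputs through overlapping membership functions and defuzzifies the rule outputs into a single command. The toy example below is purely illustrative; it is not taken from the paper mentioned above, and all the shapes and constants are made up.

```python
# Toy fuzzy controller (illustrative only).
# Input: temperature error in degrees. Output: fan speed in the range 0..1.

def mu_cold(err):   # membership in "too cold" (negative error)
    return min(max(-err / 10.0, 0.0), 1.0)

def mu_ok(err):     # membership in "about right"
    return max(1.0 - abs(err) / 10.0, 0.0)

def mu_hot(err):    # membership in "too hot" (positive error)
    return min(max(err / 10.0, 0.0), 1.0)

def fan_speed(err):
    # Rule outputs: cold -> fan off (0.0), ok -> low (0.3), hot -> high (1.0).
    # Defuzzify with a weighted average of rule strengths.
    w_cold, w_ok, w_hot = mu_cold(err), mu_ok(err), mu_hot(err)
    total = w_cold + w_ok + w_hot
    if total == 0.0:
        return 0.0
    return (w_cold * 0.0 + w_ok * 0.3 + w_hot * 1.0) / total

print(fan_speed(6.0))   # mostly "hot"  -> fast fan (0.72)
print(fan_speed(-4.0))  # mostly "cold" -> slow fan (0.18)
```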

    • (Score: 3, Interesting) by Hyperturtle on Monday May 14 2018, @04:09PM

      by Hyperturtle (2824) on Monday May 14 2018, @04:09PM (#679614)

They might need it so that their Optane storage performance numbers work as designed and the marketing works as intended... as well as overall CPU performance as a whole (as for the Optanes, they work well as it is, but hopefully future iterations post-Spectre can work better... the 4K reads are still great though).

Speculative, probabilistic, cached wild guessing hoping for hits, local on-prem augmented AI with converged local off-line clouds... it's all pretty similar when the special marketing words are removed.

      Being right every time is computationally slow and expensive... they would prefer to keep guessing at high speed. It works very well overall... (minus speculative spectres, of course.)

  • (Score: 0) by Anonymous Coward on Monday May 14 2018, @02:49PM

    by Anonymous Coward on Monday May 14 2018, @02:49PM (#679580)

Let's call it "AI" because people currently believe in it again.

  • (Score: 0) by Anonymous Coward on Monday May 14 2018, @03:03PM

    by Anonymous Coward on Monday May 14 2018, @03:03PM (#679589)

Last summer Intel bought Mobileye to get into the self-driving car game. Here's a very critical article in The Register from the other day: https://www.theregister.co.uk/2018/05/10/mobileyes_autonomous_cars/ [theregister.co.uk]

The comments are also good, in particular the last one from "imispgh2", who claims to be a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force.

    Here's one small cutting from the article,

    As it turns out, strawman arguments are Shashua's preferred way of responding to any form of criticism, implied or otherwise. But more on that later.

    The upshot of that first argument was that safety requirements should not be data driven. A company shouldn't have to prove it has driven x number of miles before it gets permission to sell an autonomous car. Even though that is precisely what autonomous car companies are effectively doing right now; it's just that it is their millions of miles that they are driving in order to test systems.

    More importantly – and worryingly – the argument that data and safety shouldn't be matched suggests that it doesn't matter if autonomous cars are involved in lots of accidents in the coming years – even if they are statistically higher than human-driven vehicles - so long as the car did not cause the accident, according to the autonomous car owner's definition of what "cause" actually means.

    That strikes us as a dangerously myopic approach to take.

    And here is a quote from my selected comment (the last one currently posted at The Register),

    It is a myth that the use of public shadow driving to develop autonomous vehicles will ever come close to actually creating one. You can never drive the one trillion miles, spend over $300B or harm as many people as this process will harm trying to do so. What happens when you move from benign and hyped scenarios to running thousands of accident scenarios thousands of times each? The answer is to leverage FAA practices and use aerospace/DoD level simulation.

  • (Score: 3, Interesting) by acid andy on Monday May 14 2018, @05:07PM (1 child)

    by acid andy (1683) on Monday May 14 2018, @05:07PM (#679644) Homepage Journal

    The original wave of AI is based on logic and it's based on writing down rules; it's closest to what you'd call classical reasoning. The current wave of AI is around sensing and perception—using a convolutional neural net to scan an image and see if something of interest is there. Those two by themselves don't add up to all the things that human beings do naturally as they navigate the world.

    When a neural net is constructed and trained for the sole purpose of identifying a set of features in another set of images, of course it can't do "all the things that human beings do naturally". The human brain has lots of specialized regions that have evolved to perform certain tasks. So why not have lots of neural networks? One for image recognition, one for sound, one for long term memory, one for forward planning, etc., and connect them all together?

    I don't see any inherent limitation in the design of neural networks that prevents them being "probabilistic". Aren't the weights they apply to their inputs a way of representing probabilities? If it's non-determinism they're after, random noise can always be introduced. That's been done before (do a search on Stephen Thaler).
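A small pure-Python sketch of the two points above: a softmax layer's outputs already form a probability distribution over classes, and injecting random noise and re-running the prediction gives a spread that can be read as a rough confidence estimate (a crude stand-in for techniques like Monte Carlo dropout). The class names and numbers here are invented for illustration.

```python
import math, random

def softmax(logits):
    """Turn raw scores into a probability distribution (sums to 1)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def noisy_predictions(logits, runs=100, sigma=0.5):
    """Re-run the 'network' with Gaussian noise added to its scores and collect
    the winning-class probability each time; the spread hints at confidence."""
    probs = []
    for _ in range(runs):
        jittered = [x + random.gauss(0.0, sigma) for x in logits]
        probs.append(max(softmax(jittered)))
    mean = sum(probs) / runs
    var = sum((p - mean) ** 2 for p in probs) / runs
    return mean, var

# Illustrative scores for classes ("car", "truck", "bicycle") from some classifier.
logits = [2.0, 1.5, 0.1]
print(softmax(logits))            # point estimate: a probability per class
print(noisy_predictions(logits))  # mean confidence and its spread under noise
```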

    "Probabilistic Computing" smells like a buzzword to generate hype and investment to me.

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
    • (Score: 3, Informative) by crafoo on Monday May 14 2018, @10:37PM

      by crafoo (6639) on Monday May 14 2018, @10:37PM (#679805)

It's hard to tell from the article. It might mean that they would like some measure of how likely false positives are, based on the real-time data coming in from the sensors? I don't know. It sounds interesting and it might be fun to look into it a bit further. There is probably a lot more to the subject than is easy to convey in a pop-tech article. They mention NN systems being overconfident in their answers. I take that to mean that yes, they want a reasonable and useful approach to gauging confidence in the NN answer based on the quality of the input sample and maybe data from other types of sensors (weather conditions, speed, lighting conditions, whatever).
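Read that way, the decision layer would scale the network's raw confidence by how trustworthy the input conditions were before acting on it. The sketch below is only a guess at what that could look like; every factor, function name, and threshold in it is made up.

```python
# Hypothetical sketch: down-weight a detection when lighting, sensor state,
# or speed degrade the data, then gate the decision on the adjusted value.

def adjusted_confidence(raw_prob, visibility=1.0, sensor_health=1.0, speed_factor=1.0):
    """Scale the classifier's raw probability by condition factors in 0..1."""
    return raw_prob * visibility * sensor_health * speed_factor

def should_brake(raw_prob, **conditions):
    conf = adjusted_confidence(raw_prob, **conditions)
    if conf >= 0.8:
        return "brake"              # confident detection under good conditions
    if conf >= 0.4:
        return "slow and re-check"  # uncertain: gather more data rather than act hard
    return "ignore"

print(should_brake(0.95, visibility=0.9, sensor_health=1.0, speed_factor=0.95))  # brake
print(should_brake(0.95, visibility=0.6, sensor_health=0.8, speed_factor=0.9))   # slow and re-check
```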
