posted by Fnord666 on Monday May 14 2018, @11:41AM   Printer-friendly
from the maybe? dept.

Intel Starts R&D Effort in Probabilistic Computing for AI

Intel announced today that it is forming a strategic research alliance to take artificial intelligence to the next level. Autonomous systems don't have good enough ways to respond to the uncertainties of the real world, and they don't have a good enough way to understand how the uncertainties of their sensors should factor into the decisions they need to make. According to Intel CTO Mike Mayberry, the answer is "probabilistic computing", which he says could be AI's next wave.

IEEE Spectrum: What motivated this new research thrust?

Mike Mayberry: We're trying to figure out what the next wave of AI is. The original wave of AI is based on logic and it's based on writing down rules; it's closest to what you'd call classical reasoning. The current wave of AI is around sensing and perception—using a convolutional neural net to scan an image and see if something of interest is there. Those two by themselves don't add up to all the things that human beings do naturally as they navigate the world.

[...] So we've been doing a certain amount of internal work and with academia, and we've decided that there's enough here that we're going to kick off a research community. The goal is to have people share what they know about it, collaborate on it, figure out how you represent probability when you write software, and how you construct computer hardware. We think this will be ... part of the third wave of AI. We don't think we're done there, we think there are other things as well, but this will be around probabilistic computing.
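Mayberry's point about figuring out "how you represent probability when you write software" can be made concrete with a small sketch. This is not Intel's method, just one minimal illustration of the idea: instead of a sensor reading being treated as ground truth, the software keeps a probability (a belief) and updates it with Bayes' rule as noisy evidence arrives. All the numbers below are hypothetical.

```python
# Illustrative sketch (not Intel's approach): representing sensor
# uncertainty explicitly as a probability and updating it with Bayes' rule.
# All sensor characteristics and numbers here are hypothetical.

def bayes_update(prior: float, p_detect: float, p_false_alarm: float) -> float:
    """Posterior probability an obstacle exists, given one positive reading.

    prior         -- prior probability the obstacle is really there
    p_detect      -- P(sensor fires | obstacle present)
    p_false_alarm -- P(sensor fires | no obstacle)
    """
    evidence = p_detect * prior + p_false_alarm * (1 - prior)
    return (p_detect * prior) / evidence

# A hypothetical noisy sensor: 90% detection rate, 20% false-alarm rate.
belief = 0.1  # weak prior that an obstacle is ahead
for _ in range(3):  # three consecutive positive readings
    belief = bayes_update(belief, p_detect=0.9, p_false_alarm=0.2)

print(round(belief, 3))  # → 0.91
```

A single noisy reading only lifts the belief from 0.1 to about 0.33; it takes repeated agreeing evidence to reach high confidence. That accumulation of uncertain evidence into a calibrated belief, rather than a hard yes/no per reading, is the flavor of reasoning the "probabilistic computing" push is about.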

Intel embraces defective computing.


Original Submission

  • (Score: 0) by Anonymous Coward on Monday May 14 2018, @03:03PM (#679589)

    Last summer Intel bought Mobileye to get into the self driving car game. Here's a very critical article in The Register from the other day, https://www.theregister.co.uk/2018/05/10/mobileyes_autonomous_cars/ [theregister.co.uk]

    The comments are also good, in particular the last one from "imispgh2", who claims to be a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force.

    Here's one small cutting from the article,

    As it turns out, strawman arguments are Shashua's preferred way of responding to any form of criticism, implied or otherwise. But more on that later.

    The upshot of that first argument was that safety requirements should not be data driven: a company shouldn't have to prove it has driven x number of miles before it gets permission to sell an autonomous car. Even though that is precisely what autonomous car companies are effectively doing right now; it's just that they are driving their millions of miles in order to test their systems.

    More importantly – and worryingly – the argument that data and safety shouldn't be matched suggests that it doesn't matter if autonomous cars are involved in lots of accidents in the coming years – even if their accident rates are statistically higher than those of human-driven vehicles – so long as the car did not cause the accident, according to the autonomous car maker's definition of what "cause" actually means.

    That strikes us as a dangerously myopic approach to take.

    And here is a quote from my selected comment (the last one currently posted at The Register),

    It is a myth that the use of public shadow driving to develop autonomous vehicles will ever come close to actually creating one. You can never drive the one trillion miles, spend over $300B, or harm as many people as this process will harm trying to do so. What happens when you move from benign and hyped scenarios to running thousands of accident scenarios thousands of times each? The answer is to leverage FAA practices and use aerospace/DoD-level simulation.