The Dark Secret at the Heart of AI
posted by Fnord666 on Tuesday April 11 2017, @09:11PM
from the do-you-know-how-you-think? dept.

Will Knight writes:

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

[...] The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

[...] At some stage we may have to simply trust AI's judgement or do without it. Likewise, that judgement will have to incorporate social intelligence. Just as society is built upon a contract of expected behaviour, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgements.

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
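
For readers wondering what "taught itself to drive by watching a human do it" means in practice, the technique described is usually called behavioral cloning. Below is a minimal, hypothetical PyTorch sketch (not Nvidia's actual code; the architecture, layer sizes, and names are illustrative): a small convolutional network that maps a camera frame to a steering angle and is trained only to imitate recorded human steering.

    import torch
    import torch.nn as nn

    # Hypothetical end-to-end driving net: camera pixels in, steering angle out.
    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                # The convolutions learn whatever road features they need on
                # their own; no engineer writes a lane-keeping rule anywhere.
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Flatten(),
                nn.LazyLinear(100), nn.ReLU(),
                nn.Linear(100, 1),  # single output: the steering command
            )

        def forward(self, frame):
            return self.net(frame)

    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # Training step: `frames` are recorded camera images and `human_angles`
    # the steering angles the human driver used at those moments. The network
    # is simply penalized for steering differently than the human did.
    def train_step(frames, human_angles):
        optimizer.zero_grad()
        loss = loss_fn(model(frames), human_angles)
        loss.backward()
        optimizer.step()
        return loss.item()

After training, the driving "knowledge" lives entirely in millions of numeric weights; there is no line of code to point at when asking why the car chose a particular angle, which is exactly the explainability problem the article describes.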

What do you think: would you trust such an AI even if you couldn't parse its methods? Is deep learning inherently unknowable?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2, Interesting) by Anonymous Coward on Wednesday April 12 2017, @05:30AM (#492628)

    AI is already achieving things that classical, human-developed systems could only dream of: image recognition of all sorts, driving systems, diagnostic systems, and even crushing the world's strongest human at Go.

    The whole reason so many things are suddenly accelerating so rapidly is precisely because of deep learning systems. And the current aim in AI is to develop deep learning systems capable of designing their own measurement criteria, which would take us even further down the hole of 'I have no idea what it's "thinking"' but would also likely produce even more incredible results. So whether one is willing to accept these systems or not is rather a nonstarter of a question: they do things we have been unable to do otherwise, and so they will be utilized.

    I mean, who ever thought we would develop AI that would eventually begin to compete with human intelligence while we still had a good idea of what's going on under the hood? Perhaps the jumping-off point came earlier than some might have expected, but come on. 'Human-developed' (for lack of a better phrase) systems still cannot even pass the Turing test, which, while touted as a test of machine intelligence, will undoubtedly be just another goalpost to shift a bit further, much as a machine beating a human at chess was once considered a test of 'intelligence.' As an aside, this is why developing AI systems designed for murder and destruction is already a very bad idea at this point. These systems are complex enough that nobody really understands, in precise terms, how they arrive at a conclusion, yet they are still too rudimentary to 'sanity check' their results in ways other than deterministic criteria put there by humans, who will not be able to anticipate every possible form of potentially abhorrent behavior.
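
    To make that last point concrete: today a 'sanity check' amounts to wrapping the opaque model in hand-written deterministic guards, something like this hypothetical sketch (the limits and names are made up):

        # Hypothetical deterministic guard around an opaque learned policy.
        # It only catches failure modes a human thought to write down in
        # advance; anything outside these guards passes through unexamined.
        def sane_steering_command(model_angle_deg, speed_mps):
            MAX_ANGLE = 25.0         # hard physical limit, in degrees
            MAX_ANGLE_HIGHWAY = 5.0  # max allowed above 20 m/s

            # Guard 1: clamp to the physically plausible range.
            angle = max(-MAX_ANGLE, min(MAX_ANGLE, model_angle_deg))

            # Guard 2: forbid sharp turns at highway speed.
            if speed_mps > 20.0:
                angle = max(-MAX_ANGLE_HIGHWAY, min(MAX_ANGLE_HIGHWAY, angle))

            return angle

    Every guard is legible, but the set of guards is only as complete as the engineers' imagination, and that is precisely the problem.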
