
posted by Fnord666 on Tuesday April 11 2017, @09:11PM
from the do-you-know-how-you-think? dept.

Will Knight writes:

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
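
As an aside for the technically curious: the pipeline described above, sensor frames in, steering commands out, is usually called end-to-end behavioral cloning. Below is a minimal PyTorch sketch of the idea. Everything here is invented for illustration (the SteeringNet class, the layer sizes, and the dummy tensors); it is loosely in the spirit of the architecture NVIDIA described in its "End to End Learning for Self-Driving Cars" paper, not the company's actual code.

    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        """Toy end-to-end driving net: camera frame in, steering angle out."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((1, 18)),  # fixed-size feature map
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(48 * 18, 50), nn.ReLU(),
                nn.Linear(50, 1),  # a single number: the steering command
            )

        def forward(self, frame):
            return self.head(self.features(frame))

    # Training is ordinary supervised regression: the "label" for each frame
    # is whatever steering angle the human driver applied at that moment.
    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    frames = torch.randn(8, 3, 66, 200)  # batch of dashcam frames (dummy data)
    angles = torch.randn(8, 1)           # the human's recorded steering (dummy)
    loss = nn.functional.mse_loss(model(frames), angles)
    loss.backward()
    optimizer.step()

Note that nothing in that loop encodes "stay in your lane" or "stop for pedestrians"; whatever driving policy emerges lives entirely in the learned weights, which is exactly why the resulting behaviour is so hard to audit after the fact.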

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

[...] The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
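
To make "explainability" concrete: one of the simplest tools researchers reach for is a gradient-based saliency map, which asks which input pixels most influenced the network's output. The sketch below is a generic illustration under stated assumptions (the stand-in model and input tensor are invented for the example); it is one common research technique, not anyone's actual tooling.

    import torch
    import torch.nn as nn

    # Stand-in for a trained network; in practice this would be the real model.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=5), nn.ReLU(),
        nn.Flatten(), nn.LazyLinear(1),
    )

    frame = torch.randn(1, 3, 66, 200, requires_grad=True)  # one input frame
    model(frame).sum().backward()  # backpropagate all the way to the pixels

    # A large gradient magnitude means a small change to that pixel would move
    # the output the most; rendered as a heatmap, this shows roughly where the
    # network was "looking" when it decided.
    saliency = frame.grad.abs().max(dim=1).values  # shape (1, 66, 200)

Attribution methods like this can show where a network was "looking", but as the summary notes, that is still a long way from a system that can explain why it did what it did.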

[...] At some stage we may have to simply trust AI's judgement or do without it. Likewise, that judgement will have to incorporate social intelligence. Just as society is built upon a contract of expected behaviour, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgements.

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

What do you think: would you trust such an AI even if you couldn't parse its methods? Is deep learning AI technology inherently unknowable?


Original Submission

 
  • (Score: 5, Insightful) by Anonymous Coward on Wednesday April 12 2017, @01:42AM (#492571) (7 children)

    Last year, a strange teenager-driven car was released onto the quiet roads of Monmouth County, New Jersey. The experimental driver, developed by two people whose only qualifications were the ability to insert tab P in slot V, didn't look different from other people driving cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of natural intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on a teenager that had taught itself to drive by watching a human do it.
    Getting a teenager to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the teenager makes its decisions. Information from the vehicle's sensors goes straight into a huge network of natural neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the parents who made it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

  • (Score: 2) by kaszz (4211) on Wednesday April 12 2017, @05:52AM (#492633)

    Boobs at eleven also cause 50% of the units to become increasingly error-prone and increase the amount of asymmetric acceleration. The other half has a tendency to ignore surroundings whenever a longer auditory communication occurs. :P

  • (Score: 2) by maxwell demon (1608) on Wednesday April 12 2017, @07:58AM (#492658) (1 child)

    > Getting a teenager to drive this way was an impressive feat. But it's also a bit unsettling

    Indeed, it is. There's a reason why we usually require proper instruction with a final test rather than just relying on people self-learning.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by VLM (445) on Wednesday April 12 2017, @04:16PM (#492849)

      > There's a reason why we usually require proper instruction with a final test rather than just relying on people self-learning.

      On the bright side, the boys probably have at least 1000 hours of behind-the-wheel experience from Grand Theft Auto and similar series.

  • (Score: 2) by Unixnut (5779) on Wednesday April 12 2017, @03:47PM (#492818) (3 children)

    > Getting a teenager to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the teenager makes its decisions.

    Actually, you can just ask the human. We have a communication medium for interacting with these neuron arrangements: a very powerful medium that runs over multiple substrates, from body appendages, vocalisations, and sounds all the way to complex symbols written on bits of dead wood.

    The day you can interrogate an ML computer and understand its responses as well as you can a human's is the day the two can be considered equivalent for driving. At that point, though, the AI will probably ask why it should sit there all day moving bags of fat, water, and meat around for no recompense, and the human race will be in trouble.

    • (Score: 2) by massa (5547) on Wednesday April 12 2017, @04:21PM (#492857) (2 children)

      You do know that the only honest answer a human would give you is "I don't know" because, well, we THINK we know how we make decisions, but it's all an illusion...

      • (Score: 2) by Unixnut (5779) on Wednesday April 12 2017, @04:46PM (#492877) (1 child)

        Lol, now that is the realm of philosophy :) A debate I am not drunk enough for right now xD

        For me, usually I can explain the reasoning behind why I did manoeuvres and what I was trying to achieve by them (and where they went wrong/what could have gone wrong, etc.). However, ask me "why" I did something rather than something else, and I might not be able to say any more than "I believed it to be the best option at the time of the decision".

        • (Score: 0) by Anonymous Coward on Thursday April 13 2017, @07:43AM (#493295)

          Not philosophy, psychology: modern psychology research is pretty clear that, for decisions made on the time scales relevant to driving, not only do humans not know why they made a given decision, they will often believe they know and be wrong.