
posted by Fnord666 on Tuesday April 11 2017, @09:11PM
from the do-you-know-how-you-think? dept.

Will Knight writes:

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
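
To make the "taught itself by watching a human" idea concrete, here is a minimal behavioral-cloning sketch in PyTorch. The layer sizes and names are illustrative assumptions, not Nvidia's actual network; the point is that the only "specification" the system ever gets is logged human driving.

    import torch
    import torch.nn as nn

    class EndToEndDriver(nn.Module):
        """Toy end-to-end driving net: camera pixels in, steering angle out.
        Loosely in the spirit of such systems; all sizes here are made up."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, 3), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 3 * 20, 100), nn.ReLU(),  # 3x20 follows from a 66x200 input
                nn.Linear(100, 1),                       # a single steering command
            )

        def forward(self, frames):
            return self.net(frames)

    model = EndToEndDriver()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Training is just "watch a human": minimize the gap between the net's
    # steering output and what the human driver actually did. Stand-in data:
    frames = torch.randn(8, 3, 66, 200)   # batch of camera images
    human_steering = torch.randn(8, 1)    # logged human steering angles

    loss = nn.functional.mse_loss(model(frames), human_steering)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Note what is absent: no rule says "stop at red lights" or "avoid trees".
    # Whatever the car "knows" lives in millions of learned weights, which is
    # exactly why explaining any single decision is hard.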

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

[...] The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

[...] At some stage we may have to simply trust AI's judgement or do without using it. Likewise, that judgement will have to incorporate social intelligence. Just as society is built upon a contract of expected behaviour, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgements.

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

What do you think: would you trust such an AI even if you couldn't parse its methods? Is deep learning AI technology inherently unknowable?


Original Submission

 
  • (Score: 5, Informative) by ikanreed on Tuesday April 11 2017, @09:54PM (9 children)

by ikanreed (3164) on Tuesday April 11 2017, @09:54PM (#492479) Journal

    Unlike the robot, you have no soul.

  • (Score: 0, Funny) by Anonymous Coward on Tuesday April 11 2017, @10:06PM

    by Anonymous Coward on Tuesday April 11 2017, @10:06PM (#492486)
    How do you know he's a ginger?
  • (Score: 2, Informative) by Anonymous Coward on Tuesday April 11 2017, @10:07PM (2 children)

    by Anonymous Coward on Tuesday April 11 2017, @10:07PM (#492487)

    Apropos of absolutely nothing at all, except that one of these guys went on to create the first mini-supercomputer company, building filing-cabinet-sized machines that would now fit in a phone, and I worked with him there: The Soul of a New Machine [amazon.com]. And, as companies go, it was a lot more moral than the crap coming out of Silicon Valley nowadays. He embraced the "stakeholder" model of running a company rather than the "shareholder" model that has poisoned much of the world since. [newsweek.com]

    • (Score: 2) by kaszz on Wednesday April 12 2017, @12:22AM

      by kaszz (4211) on Wednesday April 12 2017, @12:22AM (#492542) Journal

      I think the core flaw of looking only at profit is that it misses the other, hidden metrics that make a company work. And if corporations externalize costs, eventually those costs will be internalized by force, with the company suffering the inefficiencies that force brings.

      It now seems more obvious where all these assholes come from and how they are programmed. Time to design a human exploit to screw them. Single-minded people and groups ought to be easy prey.

    • (Score: 2) by mechanicjay on Wednesday April 12 2017, @05:53PM

      Thanks for the heads up on that book. I grabbed it from the library yesterday afternoon and am a few chapters in. So far, good stuff!

      --
      My VMS box beat up your Windows box.
  • (Score: 0, Insightful) by Anonymous Coward on Tuesday April 11 2017, @10:08PM (1 child)

    by Anonymous Coward on Tuesday April 11 2017, @10:08PM (#492488)

    Calling people out for being gingers should not be allowed on this site; it's discriminatory and would prevent hot women from posting.

    • (Score: 0) by Anonymous Coward on Wednesday April 12 2017, @03:47PM

      by Anonymous Coward on Wednesday April 12 2017, @03:47PM (#492820)

      and would prevent hot women from posting

      This being a Slashdot-like site already does that. ;)

  • (Score: 4, Insightful) by bob_super on Tuesday April 11 2017, @10:13PM (2 children)

    by bob_super (1357) on Tuesday April 11 2017, @10:13PM (#492494)

    Without getting into the religious concept of a "soul", you make the right point.

    Neural networks (including our brains) train by experiencing a set of parameters and deciding whether the outcome was satisfactory.
    The fifth time you throw that ball and it goes through the hoop, you learn from that.

    For new experiences, the human has another set of parameters: growing up in a civilization, you know the kid pushing the stroller doesn't know where he's going and may swerve at any second, and you know you have to tag both as "DONT_TOUCH".
    If you have an artificial neural network, how do you know that it has experienced "kid pushing stroller" enough times (wait, was that a double-wide stroller?) to tag it correctly, AND has in its neural brain the training required to identify this as a risky situation, and act accordingly?

    Driving is hard not because of the car mechanics, but because of the essentially infinite outside parameters.
    A conventional program can easily throw its hands up and say "I don't know, code that new case for me". A neural network will make a decision, expecting to get feedback on the outcome... That could be a problem.
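
    To illustrate that last contrast, here is a toy Python sketch. The names and the "network" (a softmax over random scores standing in for real logits) are hypothetical; nothing here is assumed about any real driving stack:

        import math, random

        def rule_based_action(obj):
            """Conventional code: refuses outright when it hits an unwritten case."""
            rules = {"kid_with_stroller": "DONT_TOUCH", "traffic_cone": "STEER_AROUND"}
            if obj not in rules:
                raise NotImplementedError(f"No rule for {obj!r}; code this case for me")
            return rules[obj]

        def network_action(obj):
            """A trained net never refuses: it always emits *some* decision with a
            confidence, whether or not it ever saw anything like the input."""
            actions = ["DONT_TOUCH", "STEER_AROUND", "PROCEED"]
            scores = [random.random() for _ in actions]   # stand-in for real logits
            exps = [math.exp(s) for s in scores]
            probs = [e / sum(exps) for e in exps]
            best = max(range(len(actions)), key=probs.__getitem__)
            return actions[best], probs[best]

        print(network_action("double_wide_stroller"))   # confident-looking answer, possibly wrong
        # rule_based_action("double_wide_stroller")     # would raise instead

    The point is the failure mode, not the arithmetic: the second function will happily answer for inputs it has never seen.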

    • (Score: 0) by Anonymous Coward on Tuesday April 11 2017, @10:31PM

      by Anonymous Coward on Tuesday April 11 2017, @10:31PM (#492501)

      I'm not an expert on current research, but I don't think this is as simplistically true as you think it is. It may be true for some reflex memory and learning, like throwing a ball, but complex action seems to require a lot more reinforcement and active teaching than you assert, and that is just for humans. The AI demos have all been simple or simple-ish problems with well-defined parameters. Even the Jeopardy business can be solved with a lot of regex, so though an impressive feat of programming, it is not AI in a sense that humans would understand. Until I can have long and complex conversations with the machine, with it suggesting things I would not have thought of, I am skeptical it has "experienced" anything at all.

    • (Score: 2) by Taibhsear on Thursday April 13 2017, @07:21PM

      by Taibhsear (1464) on Thursday April 13 2017, @07:21PM (#493576)

      A conventional program can easily throw its hands up and say "I don't know, code that new case for me". A neural network will make a decision, expecting to get feedback on the outcome... That could be a problem.

      "Doesn't look like anything to me..."