
SoylentNews is people

posted by Fnord666 on Tuesday April 11 2017, @09:11PM
from the do-you-know-how-you-think? dept.

Will Knight writes:

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
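The pipeline described above, raw sensor input mapped directly to control commands by a model trained on human demonstrations, is known as behavioral cloning. Here is a minimal, purely illustrative sketch of the idea in Python (numpy only): the "sensors", the hidden human policy, and a single linear layer stand in for Nvidia's actual camera feeds and deep network, and every name here is an assumption for demonstration, not their system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": sensor readings paired with the human's
# steering response. The (hidden) human policy is a fixed linear map
# plus noise -- a stand-in for real recorded driving data.
true_w = np.array([0.8, -0.5, 0.3])
sensors = rng.normal(size=(500, 3))          # e.g. camera-derived features
steering = sensors @ true_w + rng.normal(scale=0.05, size=500)

# Behavioral cloning: fit a policy to imitate the demonstrations.
# A single linear layer trained by gradient descent on squared error.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    pred = sensors @ w
    grad = sensors.T @ (pred - steering) / len(steering)
    w -= lr * grad

# The learned policy now maps a fresh sensor reading to a steering
# command, with no hand-written driving rules anywhere in the loop.
new_obs = np.array([1.0, 0.0, -1.0])
command = new_obs @ w
```

The interpretability worry in the article is visible even in this toy: here the three weights in `w` can be read off directly, but in a deep network the same role is played by millions of weights spread across many layers, and no single one of them "explains" a given steering decision.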

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

[...] The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

[...] At some stage we may have to simply trust AI's judgement or do without using it. Likewise, that judgement will have to incorporate social intelligence. Just as society is built upon a contract of expected behaviour, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgements.

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

What do you think: would you trust such an AI even if you couldn't parse its methods? Is deep-learning AI technology inherently unknowable?


Original Submission

 
  • (Score: 2) by massa on Wednesday April 12 2017, @01:14PM (3 children)

    by massa (5547) on Wednesday April 12 2017, @01:14PM (#492726)

Until an AI proves itself as capable as a human by those tests, it is still a lab curiosity, not fit for the real world (except to act under the direction of humans, as a sort of "mental crutch" to help with specific thinking problems, which is what these DNNs are actually good for).

    AFAICT self-driving vehicles pass those specific (driving) tests with flying colors, just like fresh 16 year-olds (or 18 down here in Brasil). I would trust my life on the highway to one of those AIs much sooner than to one of the college freshmen I see driving on the street.

  • (Score: 2) by Unixnut on Wednesday April 12 2017, @02:34PM (2 children)

    by Unixnut (5779) on Wednesday April 12 2017, @02:34PM (#492771)

    > AFAICT self-driving vehicles pass those specific (driving) tests with flying colors, just like fresh 16 year-olds (or 18 down here in Brasil). I would trust my life on the highway to one of those AIs much sooner than to one of the college freshmen I see driving on the street.

Based on that statement, all I can say is you should have harder driving tests. Not being flippant, and I don't know about Brazil, but I do know Europe: in some countries you can basically get a driving licence by looking pretty, paying a bribe, or a "service exchange" with the examiner, while others take it seriously and have tests that most people would fail, regardless of age (e.g. Finland).

Needless to say, the EU road-accident stats correlate with the country where the driver passed their "test". I am personally in a country whose test isn't as hard as Finland's, but is still pretty hard, so we have a higher standard of driving on the roads as a result. I see no reason why it should be easy to get a licence. You are operating a ton or more of metal carrying a lot of kinetic energy; driving is a privilege, not a right.

    • (Score: 2) by massa on Wednesday April 12 2017, @03:45PM (1 child)

      by massa (5547) on Wednesday April 12 2017, @03:45PM (#492816)

I think you missed the point. The point is that however difficult the test is in whatever country, it really only tests whether the fresh driver can drive the car around some city blocks safely. And THAT the driving AIs already do far better than fresh drivers. Even in Finland (or wherever), the driving test is not a crash-avoidance test, nor a radical defensive-driving test, nor a Kobayashi Maru-style "what to do in a no-win situation" test. And the problems we (as human drivers) encounter (besides the fatigue, alcohol, and Twitter problems, which the AIs already solve) arise in exactly those situations that are NEVER tested in a driving-licence test.

      • (Score: 3, Informative) by Unixnut on Wednesday April 12 2017, @04:16PM

        by Unixnut (5779) on Wednesday April 12 2017, @04:16PM (#492851)

And I think you missed my point. Driving is hard, really hard. Not the technical skill required to go round the block a few times (hell, I did that as a 7-year-old in my dad's car), but the ability to anticipate events, to read what other drivers are doing and understand their intent, to silently agree on ways to deal with obstacles and negotiate rights of way (even when it goes against the road rules), to infer events from things as subtle as reflections off another car indicating one is approaching a blind corner, and to hear engine sounds and infer whether the vehicle you can't see is accelerating, decelerating, or holding steady.

Then you get into things like bad weather, worn-out (or non-existent) road markings, non-functioning traffic lights, potholes, temporary redirections due to road works, water on roads, etc. The world is not a perfectly clean and well-maintained ideal.

Yes, superior sensors can mitigate some of the above, but not enough to meet the standard that a human (even a "fresh" teenage one, if properly trained) can achieve. Humans are very adaptable, and we have millions of years of evolutionary training behind us. No AI can match that (at this point in time), so no, they can't do far better than even fresh drivers (unless the drivers were trained poorly, which is the fault of the training, not the human; hence my point about better driving tests).

That an AI can beat a poorly trained human in a really restricted set of circumstances doesn't surprise me. I am sure we could program an automated car directly (without ML) and it would also beat the human in that test. However, that doesn't mean the car can "drive" better than humans can, or that we are close to some technological singularity where everyone will have a robot chauffeur (unless, of course, you just want to go round your block endlessly).

        We are still a long way off from an AI that can be considered a fit replacement for a human driver.