posted by on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the lives of its passengers at risk or to endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system is rendered ineffective.
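
To make the accountability question concrete, here is a minimal, purely hypothetical sketch (in Python) of how such a trade-off could end up encoded. The option names, risk numbers and weighting are invented for illustration and are not any manufacturer's actual logic:

from dataclasses import dataclass

@dataclass
class Option:
    description: str
    passenger_risk: float   # estimated probability of harming the passengers
    bystander_risk: float   # estimated probability of harming passers-by

# Whoever picks this weight decides whose safety counts for more.
# That is exactly the choice 'algorithmic accountability' asks someone to own.
BYSTANDER_WEIGHT = 1.0

def choose(options):
    """Return the option with the lowest weighted expected harm."""
    return min(options, key=lambda o: o.passenger_risk + BYSTANDER_WEIGHT * o.bystander_risk)

decision = choose([
    Option("brake hard, stay in lane", passenger_risk=0.30, bystander_risk=0.00),
    Option("swerve onto the pavement", passenger_risk=0.05, bystander_risk=0.40),
])
print("Chosen manoeuvre:", decision.description)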

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work and not be assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, badly intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application which raises no questions.

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
  • (Score: 2, Insightful) by Anonymous Coward on Wednesday January 04 2017, @10:09AM

    by Anonymous Coward on Wednesday January 04 2017, @10:09AM (#449288)

    It's safe to ignore any article on AI where the authors are talking about cars deciding if they should run over one group of people vs another. We're nowhere near that level of AI yet and that whole argument is bullshit anyway. The car will slam on the brakes. Nothing more and nothing less.

  • (Score: 2) by q.kontinuum on Wednesday January 04 2017, @11:15AM

    by q.kontinuum (532) on Wednesday January 04 2017, @11:15AM (#449305) Journal

    So let's wait until we're there, until cars are finally advanced enough, and then cry about why we didn't think about it earlier?

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
  • (Score: 1) by cccc828 on Wednesday January 04 2017, @03:25PM

    by cccc828 (2368) on Wednesday January 04 2017, @03:25PM (#449386)

    You do not need any specific level of AI to end up in that situation. Whenever an autonomous system (car, drone, washing machine, recommender system...) does something "wrong", you need to answer the 'why'. Cars are an often-used example because everyone knows them and the potential harm is graphic enough for headlines (grave injury and/or loss of life). The Trolley problem [wikipedia.org] is just the most drastic and graphic thought experiment.

    The worst answer you can give is "The algorithm did it". If a system causes harm or damages, you want to know who to blame and who to hold liable. For example: if slamming on the brakes is the exact wrong action (maybe you are in the middle of a railway crossing), someone needs to take the blame. Currently the laws in most countries basically say that if you drive a car/use a machine/etc., you are fully responsible for what it does. In the railway crossing example, it is your fault that the train hit you; you should have switched to manual mode (or done something else). Of course no one wants to take the blame for a computer's decision. Try telling someone they have to pay for a fender bender caused by their "smart" car...
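
    A minimal sketch of that point, with invented function names, just to show how the naive "always brake" rule and a context-aware rule differ:

    def naive_policy(obstacle_ahead):
        # "The car will slam on the brakes. Nothing more and nothing less."
        return "brake" if obstacle_ahead else "continue"

    def context_aware_policy(obstacle_ahead, on_level_crossing):
        # Stopping in the middle of a railway crossing is the exact wrong action;
        # clear the crossing first, then brake.
        if obstacle_ahead and on_level_crossing:
            return "clear crossing, then brake"
        return "brake" if obstacle_ahead else "continue"

    print(naive_policy(True))                                   # brake
    print(context_aware_policy(True, on_level_crossing=True))   # clear crossing, then brake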

    So how to handle the problem? Insurance companies might be reluctant to take on the risk of an algorithm they do not know, regulatory bodies might not like the idea of unknown algorithms (they also do not like the idea of unknown exhaust fumes...), and courts might want a minimum level of logging/evidence generation.
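
    As a rough sketch of what "a minimum level of logging/evidence generation" could mean in practice (field names are invented for illustration), every automated decision would be written to an append-only record together with the inputs that produced it, so liability can be reconstructed afterwards:

    import json
    import time

    def log_decision(log_path, inputs, action, software_version):
        record = {
            "timestamp": time.time(),
            "software_version": software_version,
            "inputs": inputs,    # the sensor summary that drove the decision
            "action": action,    # what the controller actually did
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_decision("decisions.jsonl",
                 inputs={"obstacle_ahead": True, "on_level_crossing": False, "speed_kmh": 48},
                 action="brake",
                 software_version="1.4.2")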

    • (Score: 3, Interesting) by jmorris on Wednesday January 04 2017, @03:49PM

      by jmorris (4844) on Wednesday January 04 2017, @03:49PM (#449393)

      And here is the best way to deal with the problem. Humans, studied in large groups, are well characterized and risk models can be built to insure them. As things like cars cross the line from tool to actor they will need liability insurance in their own right vs. the current model where the vehicle is insured against loss based on replacement value and the driver for liability based on risk profile.

      Unless the liability portion is assumed by the automaker, this will require that the insurer have a way to assess its risk. So future one has all cars licensed rather than sold, the way software is today, since cars will basically be software with some enabling hardware. Future two has cars sold and insured much as they are today, with the insurance companies auditing the software to determine their risk and working with the automakers, industry to industry, through something like Underwriters Laboratories, to enforce standards that make cars insurable at reasonable standardized rates. Future three has the insurance industry rating car AI the way it rates humans: on bulk metrics and model/manufacturer track record. It would suck to learn your car has been deemed a reckless "teen" driver and to have to pay accordingly, but that's a powerful motivator for the vendor to issue a patch.
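
      A toy sketch of "future three", with an invented base rate and loading formula, just to show the shape of the idea: the model's fleet-wide claim frequency scales the premium the way a human driver's risk profile does today.

      def annual_premium(base_rate, fleet_claims, fleet_million_km, reference_rate=0.5):
          """Scale a base premium by the model's observed claims per million km."""
          claims_per_million_km = fleet_claims / fleet_million_km
          loading = claims_per_million_km / reference_rate
          return base_rate * loading

      # A model with a "reckless teen" track record (1.5 claims per million km
      # against an assumed reference of 0.5) pays three times the base premium.
      print(annual_premium(base_rate=600.0, fleet_claims=120, fleet_million_km=80.0))  # 1800.0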

    • (Score: 2) by theluggage on Wednesday January 04 2017, @05:40PM

      by theluggage (1797) on Wednesday January 04 2017, @05:40PM (#449451)

      The "Trolley problem" is a load of waffle based on the unrealistic assumption that the subject has omniscient pre-knowledge of the outcome of each possible option. Why do you think so many of these problems involve passive vehicles running on rails? How blatant a symbol of predestination do you want?

      Hi. I'm a hyperintelligent emergent AI that can correctly predict the number of casualties from every permutation of pending accident, based on some lidar data, a couple of webcam images and a copy of the Beijing Times that someone just shoved under the door. I know this really cool solution to the trolley problem, and if you just let me out of this box I'll show you.

      PS: I promise I won't enslave the human race and create a giant VR in which to torture my enemies for eternity...

      • (Score: 2) by urza9814 on Friday January 06 2017, @10:06PM

        by urza9814 (3954) on Friday January 06 2017, @10:06PM (#450476) Journal

        The "Trolley problem" is a load of waffle based on the unrealistic assumption that the subject has omniscient pre-knowledge of the outcome of each possible option. Why do you think so many of these problems involve passive vehicles running on rails? How blatant a symbol of predestination do you want?

        Hi. I'm a hyperintelligent emergent AI that can correctly predict the number of casualties from every permutation of pending accident, based on some lidar data, a couple of webcam images and a copy of the Beijing Times that someone just shoved under the door. I know this really cool solution to the trolley problem, and if you just let me out of this box I'll show you.

        So the human programming the algorithm while sitting in a cubicle thinking through all the possibilities is expected to have a *worse* reaction than the human sitting in the car with half a second to react?

        The guy in the car gets some leeway from the courts because obviously people don't always react ideally under extreme pressure. Likewise, the guy designing the algorithm will get some leeway because obviously he couldn't predict every single circumstance. But you can still be convicted because you did something incredibly stupid under pressure; and likewise it should still be possible to convict a company that programs an algorithm which reacts poorly in a situation they should have expected.