posted on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the life of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system would be rendered ineffective.

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, and not be assumed to have bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, ill-intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application that raises no questions.

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Dunbal on Wednesday January 04 2017, @01:33PM

    by Dunbal (3515) on Wednesday January 04 2017, @01:33PM (#449346)

    A man can kill another man - with intent, in very special circumstances: as a soldier in a war, in self-defense, as a state-appointed executioner. The act of killing with intent is not necessarily against the law, even if the law is pretty clear that murder is a no-no. Intent is not sufficient.

  • (Score: 2) by q.kontinuum on Wednesday January 04 2017, @04:58PM

    by q.kontinuum (532) on Wednesday January 04 2017, @04:58PM (#449433) Journal

    True. I was just arguing against the analogy between algorithm regulation and punishment for thought-crime.

    I think everyone agrees the algorithm shouldn't kill people without cause, but where it gets tricky is when the algorithm needs to determine whom to kill, how to assess risks, and so on. In this case, though, it can be determined in advance whether the action the algorithm will take in a given scenario is justified or not (see the sketch below).

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
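
    A minimal sketch of that idea (all names, numbers and the "justification" rule here are hypothetical, not any real vendor's or regulator's code): because the decision logic is just a program, its choices can be replayed against predefined scenarios before deployment and checked against whatever rule we consider justified.

        # Hypothetical pre-deployment audit of a toy decision policy.
        def choose_action(risks: dict[str, float]) -> str:
            # Toy policy: pick the action with the lowest estimated fatality risk.
            return min(risks, key=risks.get)

        def is_justified(chosen: str, risks: dict[str, float]) -> bool:
            # Example audit rule: the chosen action must carry no more risk
            # than any alternative the policy rejected.
            return risks[chosen] == min(risks.values())

        # Made-up scenarios a regulator (or the vendor) could replay in advance.
        scenarios = {
            "obstacle_ahead": {"emergency_brake": 0.10, "swerve_onto_pavement": 0.60},
            "blocked_lane":   {"emergency_brake": 0.40, "swerve_into_oncoming": 0.70},
        }

        for name, risks in scenarios.items():
            action = choose_action(risks)
            print(f"{name}: '{action}' -> justified: {is_justified(action, risks)}")
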
  • (Score: 2) by sjames on Thursday January 05 2017, @02:02AM

    by sjames (2882) on Thursday January 05 2017, @02:02AM (#449613) Journal

    However, a new category of manslaughter is actually a trickier problem and one the algorithms are more likely to encounter: the case where you truly don't want to kill at all, but no matter what you do, someone is likely to die. Our laws don't really cover that. What is the legally correct decision? Take no action and let whoever dies die, aim for the smallest number of deaths, or aim for the oldest people to die and avoid children?

    Another thing our courts can't handle is making a decision before an action is taken. There isn't even a way to ask the court, "If I were to do X, would it be a crime?" A lawyer can answer the obvious ones, but given the slightest ambiguity you'll get answers involving "probably". Either way, the answer is non-binding.

    With human drivers, we sidestep the issue somewhat by declaring that it happened too fast for a human to make a decision so we assign a lower culpability than if they had time to carefully consider all the angles and then took an action. Automated cars will force us to actually make a decision on what is the right thing to do.

    • (Score: 2) by urza9814 on Friday January 06 2017, @10:21PM

      by urza9814 (3954) on Friday January 06 2017, @10:21PM (#450481) Journal

      However, a new category of manslaughter is actually a trickier problem and one the algorithms are more likely to encounter: the case where you truly don't want to kill at all, but no matter what you do, someone is likely to die. Our laws don't really cover that. What is the legally correct decision? Take no action and let whoever dies die, aim for the smallest number of deaths, or aim for the oldest people to die and avoid children?

      Human beings make these kinds of decisions every day; algorithms shouldn't be judged any differently. Sometimes there really is no right answer, and it's up to the courts to essentially decide whether you did as well as could be expected. The same goes for the designers of the algorithms: a judge will decide whether the architects should have reasonably expected the situation that occurred, and if so, the manufacturer is liable.

      Another thing our courts can't handle is making a decision before an action is taken. There isn't even a way to ask the court, "If I were to do X, would it be a crime?" A lawyer can answer the obvious ones, but given the slightest ambiguity you'll get answers involving "probably". Either way, the answer is non-binding.

      It's kinda rare and mostly used in IP disputes, but you CAN actually do that. It's called a declaratory judgement:
      https://en.wikipedia.org/wiki/Declaratory_judgment [wikipedia.org]

      With human drivers, we sidestep the issue somewhat by declaring that it happened too fast for a human to make a decision so we assign a lower culpability than if they had time to carefully consider all the angles and then took an action. Automated cars will force us to actually make a decision on what is the right thing to do.

      With automated cars, the team that designed the algorithm had plenty of time to make a decision. But we will probably sidestep the issue somewhat by declaring that they didn't have full knowledge of what would happen, and bear lower culpability for that reason. So it's a similar situation. They'll be liable for stuff they could have easily predicted; they won't be liable for stuff they couldn't.

      • (Score: 2) by sjames on Saturday January 07 2017, @04:09AM

        by sjames (2882) on Saturday January 07 2017, @04:09AM (#450599) Journal

        Human beings make these kinds of decisions every day;

        They really don't. People may be faced with the situation every day (not the same people each day, of course), but the situation generally precludes actually making a decision. At best they take an action with no time to consider and hope for the best. For example, "Oh shit! We're out of control!!! There's a kid! TURN AWAY!", not "I estimate a 79% chance of fatality if we strike the child, but only a 50% chance if we sideswipe the tour bus, so we should turn 35 degrees to the left." The latter is actually within the realm of possibility for an autopilot, while the human didn't even notice the tour bus and isn't expected to under the circumstances.

        That's the crux of it. The autopilot can actually consider all factors and actually decide who to hit, not just who not to hit.
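
        To illustrate what that kind of deliberate choice could look like (a toy sketch only, reusing the made-up 79%/50% figures above; the maneuver names and steering convention are hypothetical), the autopilot could simply rank candidate maneuvers by estimated fatality probability:

            # Toy illustration: rank candidate maneuvers by estimated fatality risk.
            candidate_maneuvers = [
                # (description, steering change in degrees, estimated fatality probability)
                ("brake and strike the child",   0.0, 0.79),
                ("sideswipe the tour bus",     -35.0, 0.50),
            ]

            # Choose the maneuver with the lowest estimated probability of a fatality.
            best = min(candidate_maneuvers, key=lambda m: m[2])
            description, steering_deg, p_fatal = best
            print(f"Chosen maneuver: {description} "
                  f"(steer {steering_deg:+.0f} deg, est. fatality risk {p_fatal:.0%})")
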