
posted on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

Current progress in big-data processing and machine learning is not always to our advantage. Some algorithms already put people at a disadvantage today, and they will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the lives of its passengers at risk or to endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It is up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'); otherwise our legal system is rendered ineffective.
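To make this concrete, here is a deliberately simplified sketch in Python of the kind of weighted trade-off such a control algorithm might compute. Every name, number, and weight below is invented for illustration; the point is that the weights encode a value judgement that someone must be accountable for.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and its estimated harms (illustrative only)."""
    name: str
    passenger_risk: float   # estimated probability of serious harm to passengers
    bystander_risk: float   # estimated probability of serious harm to bystanders

def choose_manoeuvre(outcomes: list[Outcome],
                     passenger_weight: float = 1.0,
                     bystander_weight: float = 1.0) -> Outcome:
    """Pick the manoeuvre with the lowest weighted expected harm.

    The weights are not engineering constants but a value judgement about
    who bears the risk; that judgement is what accountability rules must
    attach to a responsible party.
    """
    return min(outcomes,
               key=lambda o: passenger_weight * o.passenger_risk
                             + bystander_weight * o.bystander_risk)

# Swerving protects bystanders but endangers passengers; which option
# "wins" depends entirely on the chosen weights.
options = [
    Outcome("brake_straight", passenger_risk=0.05, bystander_risk=0.30),
    Outcome("swerve_left", passenger_risk=0.40, bystander_risk=0.02),
]
print(choose_manoeuvre(options).name)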

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered: it is generally not the computer program that is at fault but the way it is used.

It is the seemingly mysterious, badly intentioned and quasi-automatic algorithms that usually get the blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application that raises no questions.
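One way to make that whole chain auditable, sketched here with invented names and fields purely for illustration, is to log every automated decision together with its inputs, the model version, and the operator who deployed it:

import json
import time

def log_decision(logfile, *, model_version: str, operator: str,
                 inputs: dict, decision: str) -> None:
    """Append one auditable record per automated decision.

    Recording who deployed which model version on which inputs lets an
    auditor trace responsibility along the whole chain of production
    instead of stopping at "the algorithm".
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "operator": operator,
        "inputs": inputs,
        "decision": decision,
    }
    logfile.write(json.dumps(record) + "\n")

# Hypothetical usage: an insurer's pricing model logs each decision.
with open("decisions.log", "a") as f:
    log_decision(f, model_version="risk-model-1.3", operator="acme-insurance",
                 inputs={"age": 34, "prior_claims": 2}, decision="premium_up")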

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday January 04 2017, @03:00PM (#449372)

    The answer to "who pays" for autonomous car faults is very simple: the owner of the vehicle pays for all damages incurred from use of the vehicle. As today, vehicle owners will have liability insurance to cover such damages.

  • (Score: 3, Insightful) by q.kontinuum (532) on Wednesday January 04 2017, @05:00PM (#449435) Journal

    No. If a defective part of the car causes the accident, the manufacturer might very well be liable instead of the car owner. The same should hold true for defective algorithms. But in that case, we need rules to determine whether an algorithm is defective or not.

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 0) by Anonymous Coward on Wednesday January 04 2017, @08:13PM (#449498)

      If a defective part causes the accident, the vehicle owner (i.e., their insurance) will pay damages to the injured party.

      In turn, the vehicle owner (i.e., their insurance provider) will attempt to collect reimbursement from the manufacturer (who probably also has liability insurance). Then the manufacturer, in turn, can point fingers at their suppliers and try to collect from them.

      The manufacturer may not even exist by the time damages are incurred. The owner has to have the ultimate responsibility.

      • (Score: 2) by q.kontinuum (532) on Wednesday January 04 2017, @09:07PM (#449530) Journal

        I would assume the process you describe implies that the ultimate responsibility lies with the vendor, while the immediate responsibility lies with the vehicle owner. But let's not split hairs. The process you describe still relies on an accepted definition of what the algorithm is supposed to implement; only then can we judge whether it was faulty. An algorithm that factors in the material value of the car, or the skin color or gender of a pedestrian, might be considered inherently flawed in some countries and rational in others, even if its decisions stay within the allowed margin of judgement.
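        As a minimal sketch, assuming a jurisdiction publishes a list of forbidden input features (all names and rules below are invented for illustration), a defect test could simply flag any algorithm whose decision reads a banned feature:

# Hypothetical defect test: an algorithm counts as defective in a given
# jurisdiction if its decision reads any input feature that jurisdiction
# forbids. The rule sets below are invented for illustration.
FORBIDDEN_FEATURES = {
    "COUNTRY_A": {"skin_color", "gender"},
    "COUNTRY_B": {"skin_color"},
}

def is_defective(used_features: set[str], jurisdiction: str) -> bool:
    """Flag the algorithm if it uses any feature banned where it operates."""
    return bool(used_features & FORBIDDEN_FEATURES.get(jurisdiction, set()))

# The same inputs can be defective in one country and acceptable in another:
features = {"vehicle_value", "gender", "pedestrian_distance"}
print(is_defective(features, "COUNTRY_A"))  # True  -> inherently flawed there
print(is_defective(features, "COUNTRY_B"))  # False -> permissible there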

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum