posted by on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether it will put the lives of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer makes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system is rendered ineffective.
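One way to picture what 'algorithmic accountability' demands in practice is that every automated decision is recorded together with its inputs, so responsibility can be traced afterwards. The sketch below is purely illustrative; the decision rule, field names, and logging scheme are invented for the example, not taken from any real vehicle system.

```python
import json
import time

def decide(obstacle_distance_m, speed_kmh):
    """Toy decision rule: brake hard if the obstacle is close relative
    to current speed, otherwise attempt to swerve.

    Alongside the action, emit an audit record capturing the inputs,
    the chosen action, and the software version, so that an automated
    decision can later be attributed and reviewed.
    """
    action = "brake" if obstacle_distance_m < speed_kmh * 0.5 else "swerve"
    record = {
        "timestamp": time.time(),
        "inputs": {
            "obstacle_distance_m": obstacle_distance_m,
            "speed_kmh": speed_kmh,
        },
        "action": action,
        "software_version": "0.1-demo",
    }
    # In a real system this record would go to a tamper-evident log.
    return action, json.dumps(record)

action, log_entry = decide(obstacle_distance_m=12.0, speed_kmh=50.0)
```

The point of the audit record is not the toy braking rule but the traceability: without a log of inputs and outcomes, no one can answer the accountability question the article raises.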

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assumed to have bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered: it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, badly intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application which raises no questions.

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
  • (Score: 2) by theluggage (1797) on Wednesday January 04 2017, @06:10PM (#449464)

    Should any future self-driving cars (and other safety-critical systems) have to undergo statutory testing and meet the appropriate standards, just like plain old non-autonomous cars already do in most civilised countries? Will the existing standards need to be revised to accommodate self-driving cars?

    Seriously, does anybody not on the board of Uber think that shouldn't happen? Will any insurance company in the universe cover people to drive/sell/own them if they aren't regulated?

    I'm certainly not getting into the hot seat of a self-driving car until the maker tells me I'm indemnified against any death, injury or damage it causes. Meanwhile, I think people are going to be really frustrated while their autonomous car drives around at 5mph below the posted speed limit, waits forever at busy junctions where it's never 100.00% safe to pull out, and gets stuck behind pushbikes. Then you suffer whiplash when your car mistakes a leaf for a child crossing the road and slams on the brakes, and the apoplectic Audi driver that was following 2cm behind slams into your rear.

    However, it's not the algorithms that are the problem: it's the people who (to pick a random example) rename their cruise control and lane-keeping assistant "autopilot" and then wonder why drivers take their hands off the wheel... or use an algorithm to filter out possibly fraudulent benefit applications (a good idea) but then automatically penalise people without a human checking for false positives... (see also: automated DMCA takedowns).
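The false-positive point can be made concrete: the algorithm may flag applications, but flagged cases should land in a human review queue rather than trigger automatic penalties. The sketch below is a hypothetical illustration; the scoring rule, thresholds, and field names are all invented for the example.

```python
def risk_score(application):
    """Toy scoring rule: large claims filed unusually often look risky.
    A real system would use a trained model; the numbers here are arbitrary."""
    score = 0.0
    if application["amount"] > 10_000:
        score += 0.5
    if application["claims_this_year"] > 3:
        score += 0.5
    return score

def triage(applications, threshold=0.8):
    """Split applications into an approved list and a human review queue.
    Crucially, nothing is auto-rejected: the algorithm only prioritises
    which cases a person should look at first."""
    approved, review_queue = [], []
    for app in applications:
        if risk_score(app) >= threshold:
            review_queue.append(app)
        else:
            approved.append(app)
    return approved, review_queue

apps = [
    {"id": 1, "amount": 500, "claims_this_year": 1},
    {"id": 2, "amount": 25_000, "claims_this_year": 5},
]
approved, review = triage(apps)
```

The design choice being illustrated is exactly the comment's point: the filter narrows attention, and a human makes the consequential call, so a false positive costs a reviewer's time rather than an innocent applicant's benefits.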

    Don't regulate algorithms - regulate the people who implement them.
