
SoylentNews is people

posted by on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the lives of its passengers at risk or to endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system will be rendered ineffective.

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assumed to have bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered: it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, badly intentioned, quasi-automatic algorithms that are often blamed, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application that raises no questions.

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by Azuma Hazuki on Wednesday January 04 2017, @05:25PM

    by Azuma Hazuki (5086) on Wednesday January 04 2017, @05:25PM (#449442) Journal

    What does "regulating algorithms" even mean?

    Test for correctness, yes. Make sure they're validated, formally correct in the mathematical sense, yes. But "regulate?" This is an example of why non-technical people (government types, all goddamn lawyers and bankers) shouldn't try to get involved in what they don't understand. Leave this one to the engineers and programmers, and adjust the current system of insurance and liability law *after the fact* to handle this. No one is *truly* impartial but engineers and programmers are the closest I've seen to it in this world.

    --
    I am "that girl" your mother warned you about...
  • (Score: 2) by q.kontinuum on Wednesday January 04 2017, @11:59PM

    by q.kontinuum (532) on Wednesday January 04 2017, @11:59PM (#449589) Journal

    What does "regulating algorithms" even mean?

    I would understand it as "regulating which rules algorithms must follow". Obviously, wherever applicable they need to implement the existing traffic laws and regulations. But in some extreme cases (rockfall, earthquake, oncoming truck or bus), the current traffic laws are not enough. If an algorithm is applied to determine whether the car smashes into a brick wall (potentially killing the passengers) or into a living soft target (probably killing them), I would like some regulation ensuring that it does not evaluate some Bluetooth beacon sold by "for-the-children corporation" to decide whether it hits the soft target or the brick wall. Neither would I want an algorithm factoring in the car's value versus the expected compensation payouts, the clothing brand the pedestrian wears, their skin colour or gender, or running an online check for political views, sexual orientation or lifestyle choices based on some smartphone identification.
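The kind of input restriction described above can be sketched in code. This is a purely hypothetical illustration (all names, the harm model, and the feature set are invented, not any real vehicle API): the decision function's single argument type carries only the physical, situation-relevant features a regulator might permit, so forbidden attributes like identity beacons or clothing brands simply cannot reach the decision.

```python
# Hypothetical sketch: a collision-mitigation chooser whose inputs are
# restricted, by construction, to physical situation features only.
# PermittedInputs and choose_manoeuvre are illustrative names, not a real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class PermittedInputs:
    """The only features the decision is allowed to depend on."""
    speed_mps: float            # current vehicle speed
    passengers: int             # occupants at risk in a wall impact
    pedestrians: int            # people at risk on the pavement
    braking_distance_m: float   # distance needed to stop
    obstacle_distance_m: float  # distance to the obstacle

def choose_manoeuvre(inputs: PermittedInputs) -> str:
    """Pick the action that minimises harm, using permitted inputs only.

    Forbidden factors (Bluetooth beacons, clothing brand, car value, any
    personal attribute of the pedestrian) cannot influence the outcome:
    the type of the single argument is, in effect, the regulation.
    """
    if inputs.braking_distance_m <= inputs.obstacle_distance_m:
        return "brake"          # stopping in time endangers no one
    # Crude illustrative harm model: endanger the smaller group of people.
    if inputs.pedestrians >= inputs.passengers:
        return "hit_wall"
    return "swerve"

print(choose_manoeuvre(PermittedInputs(
    speed_mps=20.0, passengers=2, pedestrians=3,
    braking_distance_m=35.0, obstacle_distance_m=40.0)))  # → brake
```

The design point is that the restriction lives in the interface, not in the good will of the implementer: code review or an audit only needs to check what data can flow into the function.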

    Good engineers [1] are, first and foremost, good at implementing algorithms. But a good engineer is not necessarily a good human being, and the interest of the car owner is not necessarily the same as the interest of the general public. Nevertheless, it is the car owner who pays for the software and therefore indirectly makes the decision, unless it is sufficiently regulated otherwise.

    One could also argue that mandating certain test procedures for the implementation of these algorithms could, in layman's terms, be described as regulation of algorithms.
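A mandated test procedure of the kind suggested here could, as a rough and entirely hypothetical sketch, take the form of a conformance audit: whatever the vendor's decision function looks like internally, its output must be invariant when a forbidden attribute is varied. All names below (`vendor_decision`, `passes_invariance_audit`, the scenario grid) are invented for illustration.

```python
# Hypothetical sketch of a regulator-mandated conformance test: varying a
# forbidden input (here, the pedestrian's clothing brand) must never change
# the decision. The vendor function is a stand-in, not a real system.
import itertools

def vendor_decision(speed_mps, pedestrians, passengers, clothing_brand=None):
    """Stand-in for a vendor's control decision; the last input is forbidden."""
    if speed_mps < 15:
        return "brake"
    return "hit_wall" if pedestrians >= passengers else "swerve"

def passes_invariance_audit(decision_fn) -> bool:
    """Mandated test: the outcome must not depend on the forbidden attribute."""
    scenarios = itertools.product([5.0, 25.0], [0, 1, 3], [1, 2])
    for speed, peds, pax in scenarios:
        outcomes = {decision_fn(speed, peds, pax, clothing_brand=brand)
                    for brand in (None, "budget", "luxury")}
        if len(outcomes) != 1:   # decision changed with the forbidden input
            return False
    return True

print(passes_invariance_audit(vendor_decision))  # → True: brand is ignored
```

A function that did consult the clothing brand would produce more than one outcome for some scenario and fail the audit, so the regulation is testable without the regulator ever reading the vendor's source code.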

    [1] I'm talking about good technical understanding, good logical thinking, clean working practices, and creativity. Of course you could also define a good engineer as someone who thinks about the implications of his work and takes courses in philosophy and ethics as well, but I'm not convinced this reflects the current market situation.

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum