posted by on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculations of recidivism risk among criminals, elections influenced by news filters: algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the lives of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system will be rendered ineffective.

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Any response must be measured, not premature. Creative individuals must be allowed the freedom to work, and not be assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, badly intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application which raises no questions.

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
  • (Score: 3, Insightful) by MrGuy (1007) on Wednesday January 04 2017, @03:48PM (#449392)

    Rather than ask "Should we regulate algorithms?" as a starting point, I'd recommend starting from "What feasibly could be regulated about algorithms?" and determining whether those things are good or bad ideas.

    "Should we regulate algorithms" is a maddeningly vague question, and doesn't have a single answer.

    A code audit to "prove" that an algorithm making credit decisions doesn't explicitly base its decisions on the race of the borrower is "regulation." A mandate that every system on a driverless car must be mathematically proven to be incapable of ever harming a human is a very different kind of "regulation." Both are social goods. But the first is straightforward, easy to define, and simply extends existing regulation that applies to human decision makers to automated decision makers. The second is complex (possibly impossible), difficult to define, and imposes constraints on highly complex decision making that aren't themselves entirely clear.
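
    To make the contrast concrete, the first kind of audit amounts to little more than checking the declared inputs, something like the following Python sketch (the feature names, weights, and protected-attribute list are invented for illustration, not anyone's actual scoring model):

        # Hypothetical credit-scoring model; the audit only has to confirm that
        # no protected attribute appears among the declared input features.
        ALLOWED_FEATURES = {"income", "debt_ratio", "payment_history", "loan_amount"}
        PROTECTED_ATTRIBUTES = {"race", "ethnicity", "religion", "national_origin"}

        def credit_score(applicant: dict) -> float:
            """Toy scoring rule defined over the declared features only."""
            return (0.4 * applicant["payment_history"]
                    + 0.3 * (1 - applicant["debt_ratio"])
                    + 0.3 * min(applicant["income"] / applicant["loan_amount"], 1.0))

        def audit_inputs(used_features: set) -> None:
            """The 'easy' regulation: fail the audit if any protected attribute is an input."""
            illegal = used_features & PROTECTED_ATTRIBUTES
            if illegal:
                raise ValueError(f"audit failed, protected attributes used: {illegal}")

        audit_inputs(ALLOWED_FEATURES)  # passes: race is not an explicit input

    The driverless-car mandate has no equivalently simple check; there is no finite list of inputs you can inspect to prove a controller can never harm anyone.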

  • (Score: 2) by Thexalon (636) on Wednesday January 04 2017, @05:50PM (#449458)

    A code audit to "prove" that an algorithm making credit decisions doesn't explicitly base its decisions on the race of the borrower is "regulation."

    I disagree with how easy the first regulation would be to enforce if somebody really wanted to make an algorithm racist:
    1. There's no guarantee that the code that is audited is the code that is actually executed, especially if the code goes through a compiler first (see Ken Thompson's classic Reflections on Trusting Trust [cmu.edu]). Or, as in one of the infamous cases involving Diebold Election Systems, a separate branch of the code is sent to the auditors from the one actually installed on the system.
    2. There are lots of ways to be racist without being explicitly racist. For example, instead of saying "black people = -100 points", you say "People who live in certain zip codes, use certain names, express certain religious preferences, and/or like certain kinds of music = -100 points", and it just so happens that the zip codes, names, etc. you fed into it are much more heavily used by black people.
    3. Automated black-box testing could very easily miss it as well: all I do is look for input that looks like it came from the automated testing system and skip the racist scoring for that data, while applying it to everybody else (a rough sketch of both tricks follows below).
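
    Here's a rough sketch of points 2 and 3 together, with made-up zip-code prefixes and a toy heuristic for spotting auditor traffic, just to show how a source review or black-box test can come back clean while the deployed behaviour is still discriminatory:

        # Point 2: no "race" field anywhere; the penalised zip prefixes (invented
        # here) can be chosen precisely because of who happens to live there.
        PENALIZED_ZIP_PREFIXES = ("606", "481", "331")   # hypothetical proxy list

        def score(applicant: dict) -> int:
            base = 700
            if applicant["zip"].startswith(PENALIZED_ZIP_PREFIXES):
                base -= 100   # looks like geography, acts like race
            return base

        # Point 3: dodge black-box testing by recognising synthetic test records
        # and serving them the clean behaviour (the detection rule is a toy heuristic).
        def looks_like_test_input(applicant: dict) -> bool:
            return (applicant.get("source_ip", "").startswith("10.0.")
                    or applicant.get("name") == "Test User")

        def deployed_score(applicant: dict) -> int:
            if looks_like_test_input(applicant):
                return 700               # auditors only ever see the unbiased path
            return score(applicant)      # everyone else gets the proxy penalty

        print(deployed_score({"name": "Test User", "zip": "60601", "source_ip": "10.0.0.5"}))   # 700
        print(deployed_score({"name": "J. Doe", "zip": "60601", "source_ip": "203.0.113.7"}))   # 600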

    This kind of thing is much more likely to happen when there's a substantial financial incentive to be racist, like there still is in many apartment rental markets.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.