
posted on Wednesday January 04 2017, @09:13AM
from the the-computer-made-me-do-it dept.

Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

Yes, says Markus Ehrenmann of Swisscom.

The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.

For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether to put the life of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system will be rendered ineffective.
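To make that concrete, here is a minimal, purely hypothetical sketch of the kind of decision such software embodies. Every name and number below is an assumption for illustration, not any vendor's actual control code:

```python
# Hypothetical sketch (not any real vendor's code): a control algorithm
# choosing between manoeuvres by comparing estimated harm. All names,
# risk numbers and weights here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    occupant_risk: float    # estimated probability of harming occupants
    bystander_risk: float   # estimated probability of harming passers-by

def choose_manoeuvre(options, bystander_weight=1.0):
    """Pick the option with the lowest weighted expected harm."""
    def expected_harm(m):
        # bystander_weight encodes a value judgement: how much a
        # passer-by's risk counts against the occupants' risk.
        return m.occupant_risk + bystander_weight * m.bystander_risk
    return min(options, key=expected_harm)

options = [
    Manoeuvre("brake hard", occupant_risk=0.3, bystander_risk=0.0),
    Manoeuvre("swerve onto pavement", occupant_risk=0.05, bystander_risk=0.4),
]
print(choose_manoeuvre(options).name)  # -> "brake hard" with equal weighting
```

The point of the sketch is that `bystander_weight` is a value judgement someone had to write down; 'algorithmic accountability' asks who answers for that choice.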

[...]
No, says Mouloud Dey of SAS.

We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our approach must be measured, not premature. Creative individuals must be allowed the freedom to work, and not be assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program that is at fault but the way it is used.

It's the seemingly mysterious, badly intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application which raises no questions.

Two opposing viewpoints are provided in this article; please share yours.


Original Submission

 
  • (Score: 2, Interesting) by khallow (3766) Subscriber Badge on Wednesday January 04 2017, @12:06PM (#449320) Journal

    I'd go even further. There's no problem that can be solved by regulating algorithms. Notice the very first paragraph:

    Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?

    First, as the parent post noted, every one of these "algorithms" is a real-world action or event. If I write an algorithm on my computer for any one of these three things, it has absolutely no relevance, and thus no need to be regulated, until the very point it gets used. At that point the use may be regulated, though even then it can be ill-advised. Really, why would anyone think that government regulation of news filters is anything but a terrible idea?

    That brings me to the second point. There's no problem here that needs solving. Some bad outcome happens because of the choice of algorithm? Regulation already covers that.

    Third, again as the parent noted, when you regulate algorithms as opposed to actions, you cross a big line. You're now regulating thoughts, intent, and beliefs. Government shouldn't be meddling in that. Legally, it shouldn't matter why people comply with the law. We shouldn't care if a self-driving car chooses to act as it does to minimize loss of life or to minimize liability to the car manufacturer. You can't regulate virtue or correct thinking into people and it is worthless to try.

  • (Score: 3, Interesting) by Anonymous Coward on Wednesday January 04 2017, @01:21PM (#449343)

    It is being done TODAY!

    The FDA has software programs and standards that monitor, audit, and regulate software, from pacemakers and insulin pumps down to their supporting devices and code. These follow the NASA software group standards used in the Shuttle and even in the earlier Apollo software. For one thing, EVERY branch is checked to validate what happens, and if a failure occurs, the software resets and recovers itself.
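    As a rough, generic sketch of that reset-and-recover discipline (not the FDA's or NASA's actual code; all names here are invented for illustration), a safety-critical loop routes every failure into a defined safe state before restarting:

```python
# Generic sketch of a reset-and-recover control loop, in the spirit of
# the standards described above. All names are illustrative assumptions;
# real medical/aerospace code is written under far stricter rules.

def enter_safe_state():
    """Put the device into its defined fail-safe configuration."""
    print("safe state: outputs disabled")

def reinitialise():
    """Re-run self-tests and restore a known-good configuration."""
    print("reinitialised")

def control_step(reading):
    # Every branch has a defined outcome: out-of-range input is a
    # detected fault, not undefined behaviour.
    if reading is None:
        raise ValueError("sensor offline")
    if 0.0 <= reading <= 1.0:
        return reading * 100.0   # normal path
    raise ValueError("reading out of range")

def run(sensor_readings):
    for reading in sensor_readings:
        try:
            control_step(reading)
        except ValueError:
            # Failure path: reset into a safe state, then recover,
            # rather than continuing in an unknown state.
            enter_safe_state()
            reinitialise()

run([0.4, None, 0.7, 2.5])  # two faults, two recoveries
```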

  • (Score: 3, Insightful) by choose another one (515) Subscriber Badge on Wednesday January 04 2017, @02:36PM (#449366)

    > I'd go even further. There's no problem that can be solved by regulating algorithms. Notice the very first paragraph:

    Not only that, there are many problems that would be created by attempting to regulate algorithms outside the context of use.

    Should glass be regulated to be clear, untinted and laminated so it breaks safely? - clearly not, that would cause problems in many applications.
    Should glass used in car windscreens be regulated to be clear, untinted and laminated so it breaks safely? - probably, and it is, as part of regulating _cars_.

  • (Score: 2) by urza9814 (3954) on Friday January 06 2017, @09:31PM (#450452) Journal

    I would agree with you, with one slight modification: Algorithms *should* be regulated, but only using existing laws.

    I think we're already at a point where algorithms often *aren't* subject to that regulation. AT&T used a bad algorithm which allowed Weev to access other customers' data. AT&T wasn't charged; Weev was sentenced to a couple of years in prison and nearly a hundred thousand dollars in fines. Even existing law doesn't apply when "an algorithm did it." And THAT is why IT security is such a mess -- the people who find the holes are punished while the people who create them are not.
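    For context, that flaw was essentially an unauthenticated lookup keyed on a guessable identifier (the iPad's ICC-ID). A hypothetical sketch of the pattern and its obvious fix -- the names and data below are invented, not AT&T's actual code:

```python
# Hypothetical reconstruction of the pattern behind the AT&T/Weev case:
# an endpoint that returns account data for any guessable identifier,
# with no check that the requester owns it. All names are illustrative.

ACCOUNTS = {
    "89014103211118510720": "alice@example.com",  # keyed by ICC-ID-like IDs
    "89014103211118510721": "bob@example.com",
}

def lookup_email_vulnerable(icc_id):
    # The flaw: presenting an identifier is treated as proof of ownership.
    return ACCOUNTS.get(icc_id)

def lookup_email_fixed(icc_id, session_icc_id):
    # Minimal fix: require the authenticated session to match the ID.
    if icc_id != session_icc_id:
        raise PermissionError("not authorised for this account")
    return ACCOUNTS.get(icc_id)

# The "attack" amounts to counting upward through plausible IDs:
for guess in range(89014103211118510720, 89014103211118510722):
    print(lookup_email_vulnerable(str(guess)))
```

    Note how little the vulnerable and fixed versions differ: one ownership check. That gap is what "an algorithm did it" looked like in practice.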

    This gets even more important with stuff like self-driving cars. Are you going to buy the car that will always sacrifice you to save a pedestrian, or the car that will sacrifice the pedestrian to save you? Probably most people will take the car that protects its occupants; but it's really not fair that the people who made the decision to buy the car are the only ones who WON'T face that potentially deadly consequence of their decision. It might not even be a choice -- if manufacturers assume that's what people want, that's all they'll produce.

    That is, unless the car manufacturers are actually held accountable for the decisions of those algorithms. If the car makes a choice to kill the occupant, the occupant was aware of and accepted that risk. If it makes a choice to kill a pedestrian, that's manslaughter at the very least. But no auto manufacturer in the world would ever be convicted for that, because they own too many legislators who would find it too unprofitable...