Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?
Yes, says Markus Ehrenmann of Swisscom.
The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.
For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether it will put the life of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, it would render our legal system ineffective.
[...]
No, says Mouloud Dey of SAS. We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor can research be placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program that is at fault but the way it is used.
It's the seemingly mysterious, badly intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application which raises no questions.
Two opposing viewpoints are provided in this article; please share yours.
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @09:36AM
You don't make a thought or an algorithm a crime. Action is the crime. When the car hits and kills someone, that's manslaughter, and the chain of responsibility stops at some point - the car's buyer for not intervening, the algorithm writer for failing, the carmaker for not having safe parameters for distance and speed, etc., heck, maybe the sensor maker if one failed in an out-of-spec manner.
Ask any engineer what the answer is, and if they don't say this, ask them whether they think this might be possible and what odds they'd give it versus their original answer.
(Score: 2, Interesting) by khallow on Wednesday January 04 2017, @12:06PM
Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?
First, as the parent post noted, every one of these "algorithms" is a real-world action or event. If I make an algorithm on my computer for any one of these three things, it will have absolutely no relevance and thus no need to be regulated until the very point it gets used. At that point the use may be regulated, though even then it can be ill-advised. Really, why would anyone think that government regulation of news filters is anything but a terrible idea?
That brings me to the second point. There's no problem here that needs solving. Some bad outcome happens because of the choice of algorithm? Regulation already covers that.
Third, again as the parent noted, when you regulate algorithms as opposed to actions, you cross a big line. You're now regulating thoughts, intent, and beliefs. Government shouldn't be meddling in that. Legally, it shouldn't matter why people comply with the law. We shouldn't care if a self-driving car chooses to act as it does to minimize loss of life or to minimize liability to the car manufacturer. You can't regulate virtue or correct thinking into people and it is worthless to try.
(Score: 3, Interesting) by Anonymous Coward on Wednesday January 04 2017, @01:21PM
It is being done TODAY!
The FDA has software programs/standards that monitor, audit and regulate software, from pacemakers and insulin pumps to their supporting devices and code. It follows NASA software group standards used in the Shuttle and even in earlier Apollo-era software. For one thing, EVERY branch is checked to validate what happens, and if a failure occurs, the software resets and recovers itself.
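A minimal sketch of that fail-safe discipline, using an invented insulin-pump example (the names and thresholds here are illustrative, not from any real device standard): every branch of the control logic is explicitly handled, and any unexpected state triggers a reset to a known-safe configuration instead of a crash.

```python
# Hypothetical illustration of branch-complete, fail-safe control logic.
# All names and numeric thresholds are invented for this sketch.

SAFE_RATE = 0.0  # units/hour: known-safe default (deliver nothing)

def compute_rate(glucose_mg_dl):
    """Return an insulin rate, with every branch handled explicitly."""
    if glucose_mg_dl is None:          # sensor-fault branch
        raise ValueError("sensor read failed")
    if glucose_mg_dl < 70:             # hypoglycemia branch: never dose
        return 0.0
    if glucose_mg_dl <= 180:           # in-range branch
        return 0.5
    return 1.0                         # hyperglycemia branch

def control_step(glucose_mg_dl):
    """One control cycle: on any failure, recover to the safe state."""
    try:
        return compute_rate(glucose_mg_dl)
    except Exception:
        return SAFE_RATE               # reset rather than propagate
```

The point of the audit standard described above is that a reviewer can enumerate every branch (`None`, `<70`, `<=180`, `>180`, exception) and verify each one lands somewhere safe.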
(Score: 3, Insightful) by choose another one on Wednesday January 04 2017, @02:36PM
> I'd go even further. There's no problem that can be solved by regulating algorithms. Notice the very first paragraph:
Not only that, there are many problems that would be created by attempting to regulate algorithms outside the context of use.
Should glass be regulated to be clear, untinted and laminated so it breaks safely? - clearly not, that would cause problems in many applications.
Should glass used in car windscreens be regulated to be clear, untinted and laminated so it breaks safely? - probably, and it is, as part of regulating _cars_.
(Score: 2) by urza9814 on Friday January 06 2017, @09:31PM
I would agree with you, with one slight modification: Algorithms *should* be regulated, but only using existing laws.
I think we're already at a point where algorithms often *aren't* subject to that regulation. AT&T used a bad algorithm which allowed Weev to access data of other customers. AT&T wasn't charged, Weev was sentenced to a couple years in prison and nearly a hundred thousand in fines. Even existing law doesn't apply when "an algorithm did it." And THAT is why IT security is such a mess -- the people who find the holes are punished while the people who create them are not.
This gets even more important with stuff like self-driving cars. Are you going to buy the car that will always sacrifice you to save a pedestrian, or the car that will sacrifice the pedestrian to save you? Probably most people will take the car that protects its occupants; but it's really not fair that the people who made the decision to buy the car are the only ones who WON'T face that potentially deadly consequence of their decision. It might not even be a choice -- if manufacturers assume that's what people want, that's all they'll produce. Unless the car manufacturers are actually held accountable for the decisions of those algorithms. If the car makes a choice to kill the occupant, the occupant was aware of and accepted that risk. If it makes a choice to kill a pedestrian, that's manslaughter at the very least. But no auto manufacturer in the world would ever be convicted for that, because they own too many legislators who would find it too unprofitable...
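The occupant-versus-pedestrian choice above boils down to a single tunable parameter. Here is a toy sketch (entirely invented, not any real vehicle's logic) showing how a manufacturer-chosen weight alone decides who bears the risk, even with identical physics:

```python
# Toy illustration: one manufacturer-chosen weight flips the outcome.
# All names and numbers are invented for this sketch.

def choose_maneuver(occupant_risk, pedestrian_risk, occupant_weight=1.0):
    """Pick the maneuver minimizing weighted expected harm.

    occupant_weight > 1 favors the occupant; < 1 favors the pedestrian.
    Both risk arguments map maneuver name -> probability of serious harm.
    """
    def cost(maneuver):
        return (occupant_weight * occupant_risk[maneuver]
                + pedestrian_risk[maneuver])
    return min(occupant_risk, key=cost)

# Same physics in both cases:
risks_occ = {"swerve": 0.9, "brake": 0.1}   # swerving endangers occupant
risks_ped = {"swerve": 0.1, "brake": 0.9}   # braking endangers pedestrian
```

With `occupant_weight=2.0` the car brakes and the pedestrian bears the risk; with `occupant_weight=0.5` it swerves and the occupant does. Which weight ships is exactly the decision the comment argues manufacturers should be held accountable for.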
(Score: 2) by cubancigar11 on Wednesday January 04 2017, @06:01PM
Why do we need regulation in the first place? Not a rhetorical question, a serious one. Because you seem to think there is always a clear cause and a clear effect.
See, if you die on the road because some chump wrote an algorithm for a guided missile, he will be convicted. But if a missile drops on you because a drone malfunctioned, who do you think will be punished? No one.
Now replace the president with a driver and the drone with a self-driving car. You can't pick and choose to reach a favoured conclusion; logically, the only difference is one of power, and that doesn't make a good law.
This is too complicated and I think people would prefer to outsource it to a regulatory body.
(Score: 2) by urza9814 on Friday January 06 2017, @09:40PM
Are you implying that events occur which have no cause?
If a drone malfunctions and drops a missile, there is certainly a cause. It would surely get complicated, because there would probably be many parties involved, but I'm certain you could successfully sue SOMEONE for that. Was the drone not properly maintained? Was the maintenance crew being overworked? Was there a flaw in the design of the drone or the bomb? What idiot decided it was a good idea to fly live munitions over a populated area? Who programmed the control software? Or the bomb itself? Is it a smart bomb? Is it smart enough to know it was released unintentionally? Could the drone have been made to notice and deactivate it before impact? That's one hell of a negligence case...
Of course, they could just say "IT'S A MATTER OF NATIONAL SECURITY" and cover the whole thing up. They could do that if they shot someone too, but that doesn't mean murder is legal or shouldn't be regulated, nor does that mean the shooting didn't have a cause. It just means they're corrupt.
(Score: 2) by cubancigar11 on Saturday January 07 2017, @03:15PM
What I am saying is that there are normally so many causes to any event that it is usually impossible to pin it down to one. The reason we have a complicated system for catching a culprit is as much about finding psychological closure/revenge as it is about trying to fix the actual cause. And I think treating "being able to find someone to sue" as the goal is just wrong; the actual goal of the judicial system is to create deterrents that address the actual cause.
That said, let me bring the analogy closer to a motor vehicle. Look at a bus - it has a big potential to cause mayhem - and it is very regulated. Not only do you need a different license to drive it, almost all buses run on predefined routes. That is, in a way, a regulation of an algorithm. Hence I am quite sure similar regulation is the future.