Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?
Yes, says Markus Ehrenmann of Swisscom.
The current progress being made in processing big data and in machine learning is not always to our advantage. Some algorithms are already putting people at a disadvantage today and will have to be regulated.
For example, if a driverless car recognises an obstacle in the road, the control algorithm has to decide whether it will put the life of its passengers at risk or endanger uninvolved passers-by on the pavement. The on-board computer takes decisions that used to be made by people. It's up to the state to clarify who must take responsibility for the consequences of automated decisions (so-called 'algorithmic accountability'). Otherwise, our legal system will be rendered ineffective.
[...]
No, says Mouloud Dey of SAS. We need to be able to audit any algorithm potentially open to inappropriate use. But creativity can't be stifled, nor research placed under an extra burden. Our response must be measured, not premature. Creative individuals must be allowed the freedom to work, not assigned bad intentions a priori. Likewise, before any action is taken, the actual use of an algorithm must be considered, as it is generally not the computer program that is at fault but the way it is used.
It's the seemingly mysterious, badly intentioned and quasi-automatic algorithms that are often apportioned blame, but we need to look at the entire chain of production, from the programmer and the user to the managers and their decisions. We can't throw the baby out with the bathwater: an algorithm developed for a debatable use, such as military drones, may also have an evidently useful application which raises no questions.
Two opposing viewpoints are provided in this article; please share yours.
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @09:28AM
Why not...
(Score: 2) by DeathMonkey on Wednesday January 04 2017, @07:24PM
The answer depends on the situation, the rights of the people involved, what purpose the device serves. In fact, it depends on pretty much every variable except the fact that it's an algorithm!
(Score: 2) by forkazoo on Wednesday January 04 2017, @09:39PM
I'd tell you why not, but I am not legally allowed to query a database of historical precedent to find a good example.
Thank goodness the people who want to tamper with elections are also subject to the regulation. There's no way they'd want to commit an additional crime above and beyond tampering with an election, by also using an algorithm without a permit! At least, that's what the new government that won the last election says. It's all taken care of. No way to be sure, since I can't legally use any algorithms to question the validity of that election.
(Score: 0) by Anonymous Coward on Thursday January 05 2017, @01:33AM
I'm sorry Dave, you ran an unauthorized QuickSort. ~Hal
(Score: 1) by garrulus on Wednesday January 04 2017, @09:33AM
The writers should be personally legally responsible for the outcome
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @10:06AM
In other words, make it financially impossible for anyone other than a huge corporation to construct and implement an algorithm? That's arguably worse than letting the government try to do the job in its typical, grossly incompetent, grossly corrupt and tremendously politicized way.
(Score: 2, Interesting) by Anonymous Coward on Wednesday January 04 2017, @12:55PM
No. Devs and managers are personally responsible even in a LARGE corp. I have had surety bonds on my work. I even left one company when a manager wanted to do something stupid and I refused, to protect both myself and my bond.
I will believe a corporation is a person once Texas puts one to death.
(Score: 1) by nitehawk214 on Wednesday January 04 2017, @03:12PM
The Insurance companies are responsible.
Liability will determine what choices will be made. If they can find a convenient legal scapegoat, they will do so as a way to dodge corporate liability for shoddy code.
I don't know what the correct answer to any of this is but, "Sacrifice a code monkey in India each time a software algorithm harms a person, then business as usual." isn't going to be terribly popular.
"Don't you ever miss the days when you used to be nostalgic?" -Loiosh
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @09:36AM
You don't make thought or algorithm a crime. Action is the crime. When the car hits and kills someone, that's manslaughter, and the chain of responsibility has to stop at some point: the car's buyer for not intervening, the algo writer for failing, the carmaker for not having safe parameters for distance and speed, etc., heck, maybe the sensor maker if one failed in an out-of-spec manner.
Ask any engineer what the answer is; if they don't say this, ask them whether they think this approach might be possible, and what odds they'd give it against their original answer.
(Score: 2, Interesting) by khallow on Wednesday January 04 2017, @12:06PM
Accidents involving driverless cars, calculating the probability of recidivism among criminals, and influencing elections by means of news filters—algorithms are involved everywhere. Should governments step in?
First, as the parent post noted, every one of these "algorithms" is a real-world action or event. If I make an algorithm on my computer for any one of these three things, it will have absolutely no relevance and thus no need to be regulated until the very point it gets used. At that point the use may be regulated, though even then it can be ill-advised. Really, why would anyone think that government regulation of news filters is anything but a terrible idea?
That brings me to the second point. There's no problem here that needs solving. Some bad outcome happens because of the choice of algorithm? Regulation already covers that.
Third, again as the parent noted, when you regulate algorithms as opposed to actions, you cross a big line. You're now regulating thoughts, intent, and beliefs. Government shouldn't be meddling in that. Legally, it shouldn't matter why people comply with the law. We shouldn't care if a self-driving car chooses to act as it does to minimize loss of life or to minimize liability to the car manufacturer. You can't regulate virtue or correct thinking into people and it is worthless to try.
(Score: 3, Interesting) by Anonymous Coward on Wednesday January 04 2017, @01:21PM
It is being done TODAY!
The FDA has software programs/standards that monitor, audit and regulate software, from pacemakers and insulin pumps to their support devices and code. It follows NASA software group standards used in the Shuttle and even in the earlier Apollo 4-bit software. For one thing, EVERY branch is checked to validate what happens, and if a failure occurs, the software resets and recovers itself.
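To make that concrete, here is a minimal sketch (emphatically not real FDA or NASA code; the glucose thresholds, dose values and function names are invented) of the "every branch explicit, fail to a safe state" discipline the parent describes:

```python
import logging
import random

class SafeStateError(Exception):
    """Any unexpected branch raises this and forces a reset."""

def read_glucose() -> float:
    # Stub sensor; a real device would read hardware here.
    return random.uniform(-10, 500)

def control_step(glucose: float) -> float:
    """Return an insulin dose; every branch is explicit and bounded."""
    if glucose != glucose or glucose < 20:   # NaN or implausibly low reading
        raise SafeStateError("sensor reading below validated range")
    if glucose <= 180:
        return 0.0                           # in range: do nothing
    if glucose <= 400:
        return 1.0                           # high: fixed, bounded dose
    raise SafeStateError("sensor reading above validated range")

for _ in range(10):
    try:
        dose = control_step(read_glucose())
    except SafeStateError as err:
        logging.warning("resetting to safe state: %s", err)
        dose = 0.0                           # recover: stop dosing, carry on
```

The point is that no input, however implausible, can reach an unconsidered branch: everything either lands in a validated case or drops the system back to its known-safe state.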
(Score: 3, Insightful) by choose another one on Wednesday January 04 2017, @02:36PM
> There's no problem here that needs solving.

I'd go even further: there's no problem that can be solved by regulating algorithms. Notice the very first paragraph of the summary.
Not only that, there are many problems that would be created by attempting to regulate algorithms outside the context of use.
Should glass be regulated to be clear, untinted and laminated so it breaks safely? - clearly not, that would cause problems in many applications.
Should glass used in car windscreens be regulated to be clear, untinted and laminated so it breaks safely? - probably, and it is, as part of regulating _cars_.
(Score: 2) by urza9814 on Friday January 06 2017, @09:31PM
I would agree with you, with one slight modification: Algorithms *should* be regulated, but only using existing laws.
I think we're already at a point where algorithms often *aren't* subject to that regulation. AT&T used a bad algorithm which allowed Weev to access other customers' data. AT&T wasn't charged; Weev was sentenced to a couple of years in prison and nearly a hundred thousand dollars in fines. Even existing law doesn't apply when "an algorithm did it." And THAT is why IT security is such a mess -- the people who find the holes are punished while the people who create them are not.
This gets even more important with stuff like self-driving cars. Are you going to buy the car that will always sacrifice you to save a pedestrian, or the car that will sacrifice the pedestrian to save you? Probably most people will take the car that protects its occupants; but it's really not fair that the people who made the decision to buy the car are the only ones who WON'T face that potentially deadly consequence of their decision. It might not even be a choice -- if manufacturers assume that's what people want, that's all they'll produce. Unless the car manufacturers are actually held accountable for the decisions of those algorithms. If the car makes a choice to kill the occupant, the occupant was aware of and accepted that risk. If it makes a choice to kill a pedestrian, that's manslaughter at the very least. But no auto manufacturer in the world would ever be convicted for that, because they own too many legislators who would find it too unprofitable...
(Score: 2) by cubancigar11 on Wednesday January 04 2017, @06:01PM
Why do we need regulation in the first place? Not a rhetorical question, a serious one. Because you seem to think there is always a clear cause and a clear effect.
See, if you die on the road because some chump wrote an algorithm for a guided missile, he will be convicted. But if a Scud missile drops on you because a drone malfunctioned, who do you think will be punished? No one.
Now replace the president with a driver and the drone with a self-driving car. You cannot pick and choose to reach a favoured conclusion; logically, the only difference is one of power, and that doesn't make a good law.
This is too complicated and I think people would prefer to outsource it to a regulatory body.
(Score: 2) by urza9814 on Friday January 06 2017, @09:40PM
Are you implying that events occur which have no cause?
If a drone malfunctions and drops a missile, there is certainly a cause. It would surely get complicated, because there would probably be many parties involved, but I'm certain you could successfully sue SOMEONE for that. Was the drone not properly maintained? Was the maintenance crew being overworked? Was there a flaw in the design of the drone or the bomb? What idiot decided it was a good idea to fly live munitions over a populated area? Who programmed the control software? Or the bomb itself? Is it a smart bomb? Is it smart enough to know it was released unintentionally? Could the drone have been made to notice and deactivate it before impact? That's one hell of a negligence case...
Of course, they could just say "IT'S A MATTER OF NATIONAL SECURITY" and cover the whole thing up. They could do that if they shot someone too, but that doesn't mean murder is legal or shouldn't be regulated, nor does that mean the shooting didn't have a cause. It just means they're corrupt.
(Score: 2) by cubancigar11 on Saturday January 07 2017, @03:15PM
What I am saying is that there are normally so many causes behind any event that it is impossible to pinpoint a single one. The reason we have a complicated system for catching a culprit is as much about finding psychological closure/revenge as it is about trying to fix the actual cause. Now, I think that finding someone to sue just for the sake of it is wrong; the actual goal of the judicial system is to create deterrents that address the actual cause.
That said, let me bring the analogy closer to a motor vehicle. Look at a bus - it has a big potential to cause mayhem - and it is very regulated. Not only do you need a different license to drive it, almost all buses run on predefined routes. That is a regulation of an algorithm, in a way. Hence I am very sure similar regulation is the future.
(Score: 1, Insightful) by Anonymous Coward on Wednesday January 04 2017, @09:47AM
The algorithms should be the result of regulation in the first place, not the other way around. They should not be regulated on their own; it's their use that should be regulated.
In the case of autonomous cars, there is already a lot of regulation in place, even tests to see if you are fit to drive. Each algorithm that wants to drive a car should undergo that same test.
In case of an accident, the algorithm is judged by the same rules as normal persons. If the algorithm is found to be at fault, I think you can easily draw inspiration from what happens when a rich guy's personal driver crashes the car.
(Score: 2) by q.kontinuum on Wednesday January 04 2017, @11:13AM
In case of an accident, the algorithm is judged by the same rules as normal persons.
And then the algorithm is sentenced to prison and will pay the compensation to the victim?
Or the developer implementing it? Or the Company which commissioned it? Or the driver for using it?
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @03:00PM
The answer of "who pays" for autonomous car faults is very simple. The owner of the vehicle pays for all damages incurred from use of the vehicle. Like today, vehicle owners will have liability insurance to cover such damages.
(Score: 3, Insightful) by q.kontinuum on Wednesday January 04 2017, @05:00PM
No. If a defective part of the car causes the accident, the manufacturer might very well be liable instead of the car owner. The same should hold true for defective algorithms. But in that case, we need rules to determine whether an algorithm is defective or not.
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @08:13PM
If a defective part causes the accident, the vehicle owner (i.e., their insurance) will pay damages to the injured party.
In turn, the vehicle owner (i.e., their insurance provider) will attempt to collect reimbursement from the manufacturer (who probably also has liability insurance). Then the manufacturer, in turn, can point fingers at their suppliers and try to collect from them.
The manufacturer may not even exist by the time damages are incurred. The owner has to have the ultimate responsibility.
(Score: 2) by q.kontinuum on Wednesday January 04 2017, @09:07PM
I would assume that the process you describe implies that the ultimate responsibility is with the vendor, while the immediate responsibility lies with the vehicle owner. But let's not split hairs. The process you describe still relies on an accepted definition of what the algorithm should try to implement to make it possible to judge if it was faulty or not. An algorithm that factors in the material value of the car or the skin color or gender of the pedestrian, even if deciding otherwise within the allowed margin of judgement, might e.g. be considered inherently flawed in some countries, rational in others.
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @09:55AM
Step 1: Regulate algorithms. Step 2: Kill anybody who tries to make strong AI.
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @12:58PM
Step 3: GOTO 2
(Score: 2, Insightful) by Anonymous Coward on Wednesday January 04 2017, @10:09AM
It's safe to ignore any article on AI where the authors are talking about cars deciding if they should run over one group of people vs another. We're nowhere near that level of AI yet, and that whole argument is bullshit anyway. The car will slam on the brakes. Nothing more and nothing less.
(Score: 2) by q.kontinuum on Wednesday January 04 2017, @11:15AM
So let's wait until we are there and cars are finally advanced enough, and then cry about why we didn't think about it earlier?
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 1) by cccc828 on Wednesday January 04 2017, @03:25PM
You do not need any specific level of AI to end up in that situation. Whenever an autonomous system (car, drone, washing machine, recommender system...) does something "wrong", you need to answer the 'why'. Cars are an often-used example because everyone knows them and the potential harm is graphic enough for headlines (grave injury and/or loss of life). The Trolley problem [wikipedia.org] is just the most drastic and graphic thought experiment.
The worst answer you can give is "The algorithm did it". If a system causes harm or damages, you want to know who to blame and who to hold liable. For example: if slamming on the brakes is the exact wrong action (maybe you are in the middle of a railway crossing), someone needs to take the blame. Currently the laws in most countries basically say that if you drive a car/use a machine/etc., you are fully responsible for what it does. In the railway crossing example, it is your fault that the train hit you - you should have switched to manual mode (or done something else). Of course no one wants to take the blame for a computer's decision. Try telling someone they have to pay for a fender bender caused by their "smart" car...
So how to handle the problem? Insurance companies might be reluctant to take on the risk of an algorithm they do not know, regulatory bodies might not like the idea of unknown algorithms (they also do not like the idea of unknown exhaust fumes...), and courts might want a minimum level of logging/evidence generation.
(Score: 3, Interesting) by jmorris on Wednesday January 04 2017, @03:49PM
And here is the best way to deal with the problem. Humans, studied in large groups, are well characterized and risk models can be built to insure them. As things like cars cross the line from tool to actor they will need liability insurance in their own right vs. the current model where the vehicle is insured against loss based on replacement value and the driver for liability based on risk profile.
Unless the liability portion is assumed by the automaker, this will require the insurer to have a way to assess their risk. Future one has all cars licensed rather than sold, as software is today, since cars will basically be software with some enabling hardware. Future two has cars sold and insured much as today, with the insurance companies either auditing the software to determine their risk or working with the automakers, industry to industry, through something like Underwriters Laboratories, to enforce standards that make cars insurable at reasonable standardized rates. Future three has the insurance industry rating car AI the way it rates humans: on bulk metrics and model/manufacturer track record. It would suck to learn your car has been deemed a reckless "teen" driver and to have to pay accordingly, but that's a powerful motivator for the vendor to issue a patch.
(Score: 2) by theluggage on Wednesday January 04 2017, @05:40PM
The "Trolley problem" is a load of waffle based on the unrealistic assumption that the subject has omniscient pre-knowledge of the outcome of each possible option. Why do you think so many of these problems involve passive vehicles running on rails? How blatant a symbol of predestination do you want?
Hi. I'm a hyperintelligent emergent AI that can correctly predict the number of casualties from every permutation of pending accident, based on some lidar data, a couple of webcam images and a copy of the Beijing Times that someone just shoved under the door. I know this really cool solution to the trolley problem, and if you just let me out of this box I'll show you.
PS: I promise I won't enslave the human race and create a giant VR in which to torture my enemies for eternity...
(Score: 2) by urza9814 on Friday January 06 2017, @10:06PM
So the human programming the algorithm while sitting in a cubicle thinking through all the possibilities is expected to have a *worse* reaction than the human sitting in the car with half a second to react?
The guy in the car gets some leeway from the courts because obviously people don't always react ideally under extreme pressure. Likewise, the guy designing the algorithm will get some leeway because obviously he couldn't predict every single circumstance. But you can still be convicted because you did something incredibly stupid under pressure; and likewise it should still be possible to convict a company that programs an algorithm which reacts poorly in a situation they should have expected.
(Score: 4, Interesting) by Dr Spin on Wednesday January 04 2017, @10:12AM
The world is bigger than the USA. There are over 100 countries in it, each with at least one government. Some of these governments may have perfectly sane methods of deciding on policy. Others have methods that are widely agreed to be terminally insane. What good governments do, bad governments are likely to do also, but unlikely to do well. All of the above is broadly also true of people and machines.
Conclusion: whatever you do, bad things will happen. Some bad things are worse than others. The world has no completely reliable method of ensuring that the causes of bad things are minimized, BUT the method applied to civil aerospace operations has a pretty good track record and might be worth pursuing in this case. It involves cooperation between manufacturers and governments across the world, and it took the best part of 100 years to develop. So don't hold your breath (or step out into the road in front of a car, with or without a driver).
Above all, do not step out in front of Internet-connected cars, as they are likely to be piloted by ransomware.
Warning: Opening your mouth may invalidate your brain!
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @10:56AM
No reason to do ANYTHING special for algorithms, if a car gets in an accident the police get to charge it with a crime and confiscate it, just like they can do with any other possession.
Buggy software just means more cars lining LEO's pockets.
Self solving problem! :)
(Score: 2) by Dunbal on Wednesday January 04 2017, @11:15AM
Wait - isn't this "thought crime" for computers? Have we really come this far?
(Score: 2) by q.kontinuum on Wednesday January 04 2017, @11:21AM
isn't this "thought crime" for computers?
No. An algorithm is not a thought, but more comparable to an actual intent. Like someone pulling a weapon, aiming at someone, and making it clear without a doubt he intends to pull the trigger.
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 2) by Dunbal on Wednesday January 04 2017, @01:33PM
A man can kill another man - with intent, in very special circumstances. As a soldier in a war. In self defense. As a state appointed executioner. The act of killing with intent is not necessarily against the law even if the law is pretty clear that murder is a no-no. Intent is not sufficient.
(Score: 2) by q.kontinuum on Wednesday January 04 2017, @04:58PM
True. I was just arguing against the analogy algorithm-regulation vs. thought-crime punishment.
I think everyone agrees the algorithm shouldn't kill people without reason, but where it becomes tricky is when the algorithm needs to determine whom to kill, how to assess risks, etc. In that case, with an algorithm, it can be determined in advance whether the action it will take in a given scenario is justified or not.
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 2) by sjames on Thursday January 05 2017, @02:02AM
However, a new category of manslaughter is actually a trickier problem, and one the algorithms are more likely to encounter: the case where you truly don't want to kill at all, but no matter what you do, someone is likely to die. Our laws don't really cover that. What is the legally correct decision? Take no action and whoever dies, dies; aim for the smallest number of deaths; or aim for the oldest people to die and avoid children?
Another thing our courts can't handle is making a decision before an action is taken. There isn't even a way to ask the court, "If I were to do X, would it be a crime?" A lawyer can answer the obvious ones, but given the slightest ambiguity you'll get answers involving "probably". Either way, the answer is non-binding.
With human drivers, we sidestep the issue somewhat by declaring that it happened too fast for a human to make a decision so we assign a lower culpability than if they had time to carefully consider all the angles and then took an action. Automated cars will force us to actually make a decision on what is the right thing to do.
(Score: 2) by urza9814 on Friday January 06 2017, @10:21PM
Human beings make these kinds of decisions every day; algorithms shouldn't be judged any differently. Sometimes there really is no right answer, and it's up to the courts to essentially decide if you did as well as could be expected. Same goes for the designers of the algorithms. A judge will decide if the architects should have reasonably expected the situation which occurred; and if so, the manufacturer is liable.
It's kinda rare and mostly used in IP disputes, but you CAN actually do that. It's called a declaratory judgement:
https://en.wikipedia.org/wiki/Declaratory_judgment [wikipedia.org]
With automated cars, the team that designed the algorithm had plenty of time to make a decision. But we will probably sidestep the issue somewhat by declaring that they didn't have full knowledge of what would happen, and bear lower culpability for that reason. So it's a similar situation. They'll be liable for stuff they could have easily predicted; they won't be liable for stuff they couldn't.
(Score: 2) by sjames on Saturday January 07 2017, @04:09AM
Human beings make these kinds of decisions every day;
They really don't. People may be faced with the situation every day (not the same people each day, of course), but the situation generally precludes actually making a decision. Generally they at best take an action with no time to consider and hope for the best. For example, "Oh shit! We're out of control!!! There's a kid! TURN AWAY!", not "I estimate a 79% chance of fatality if we strike the child, but only a 50% chance if we sideswipe the tour bus, so we should turn 35 degrees to the left." The latter is actually within the realm of possibility for an autopilot, while the human didn't even notice the tour bus and isn't expected to under the circumstances.
That's the crux of it. The autopilot can actually consider all factors and actually decide who to hit, not just who not to hit.
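A toy sketch of that kind of explicit trade-off (the probabilities are the made-up numbers from the comment above; nothing here resembles a real autopilot):

```python
# Score every available maneuver by expected fatalities, pick the minimum.
maneuvers = {
    "brake straight":    {"child": 0.79},           # 79% chance the child dies
    "sideswipe the bus": {"bus passengers": 0.50},  # 50% chance a passenger dies
}

def expected_deaths(outcome: dict) -> float:
    # Sum of per-person fatality probabilities = expected fatalities.
    return sum(outcome.values())

best = min(maneuvers, key=lambda m: expected_deaths(maneuvers[m]))
print(best)  # -> "sideswipe the bus": 0.50 expected deaths beats 0.79
```

A human under pressure never runs this computation; a machine always does, which is exactly why the choice of objective function becomes a policy question.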
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @12:57PM
First code reviews. Then code analysis tools like Lint, Sonar and CodeNarc. Then test coverage analyzers like Emma, Cobertura, etc. Then pair programming. Now government regulation of the creative process of writing software!?!!!!
Please let me get back to programming mother f***er
http://tinyurl.com/63e8txg [tinyurl.com]
(Score: 2) by driven on Wednesday January 04 2017, @03:20PM
Code reviews would have only limited impact, given that most of the "brains" of machine learning are stored in the neural network nodes that only contain data.
I think training at a level higher than the algorithm is more appropriate - rigorous testing in all sorts of driving conditions: pedestrians, weather, visibility, no-win situations, poor traction, etc. The algorithm is less important than the end result.
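As a sketch of what "testing above the algorithm" could look like (the scenario fields and the drive() stub are invented for illustration), you grade outcomes over a matrix of conditions rather than reading the code:

```python
import itertools

def drive(weather: str, traction: float, pedestrians: int) -> bool:
    """Stub for the whole driving stack under test; True means no harm done."""
    return traction > 0.2 or pedestrians == 0

# Scenario matrix: weather/visibility x road traction x pedestrians present.
scenarios = itertools.product(
    ["clear", "rain", "fog"],
    [0.9, 0.4, 0.1],
    [0, 1, 5],
)

failures = [s for s in scenarios if not drive(*s)]
print(f"{len(failures)} of 27 scenarios failed")  # judge end results, not code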
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @10:22PM
Let's see, if I work for BMW, how am I going to train my AI? Obviously by tailgating, flashing lights and generally annoying as many other drivers as possible.
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @01:02PM
Not sure if it counts as regulation, but on the upside it does not involve a government body. The ACM has been publishing reference forms of algorithms for decades.
http://netlib.org/toms/ [netlib.org]
Seems as though a learned body such as that would be the quintessential gatekeeper.
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @03:29PM
Government will adopt (read: anoint) the long-standing and widely recognized work of one of these "private" organizations (and will require a person to purchase a "private" copy of this work, because ignorance of the "law" is no excuse...), and then years later everyone will be telling libertarians once again how dangerous automated cars would be had bureaucrats in government not single-handedly developed all the rules and regulations that make our sweet little children safe. Wash, rinse, repeat. Business as usual, except the government will need more money this year than last year.
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @01:33PM
Seems like both authors agree that any potentially liable source code should be housed in a neutral third party's (potentially private) source code repository so it can be reviewed by the courts if need be. Of course, one must also be able to prove that the source code in the repository is the code being run. I suppose that disassembly isn't too high a burden, but it doesn't feel right to me.
(Score: 3, Insightful) by MrGuy on Wednesday January 04 2017, @03:48PM
Rather than ask "Should we regulate algorithms?" as a starting point, I'd recommend starting from "What feasibly could be regulated about algorithms?" and determining whether those are good or bad ideas.
"Should we regulate algorithms" is a maddeningly vague question, and doesn't have a single answer.
A code audit to "prove" that an algorithm making credit decisions doesn't explicitly base its decisions on the race of the borrower is "regulation." A mandate that every system on a driverless car must be mathematically proven to be incapable of ever harming a human is a very different kind of "regulation." Both are social goods. But the first is straightforward, easy to define, and simply extends existing regulation of human decision makers to automated decision makers. The second is complex (possibly impossible), difficult to define, and imposes specific constraints on highly complex decision making that aren't entirely clear.
(Score: 2) by Thexalon on Wednesday January 04 2017, @05:50PM
I disagree with how easy the first regulation would be to enforce if somebody really wanted to make an algorithm racist:
1. There's no guarantee that the code that is audited is the code that is actually executed, especially if the code goes through a compiler first (see Ken Thompson's classic Reflections on Trusting Trust [cmu.edu]). Or, in one of the infamous cases involving Diebold Election Systems, a separate branch of the code was sent to the auditors from the one actually installed on the system.
2. There are lots of ways to be racist without being explicitly racist. For example, instead of saying "black people = -100 points", you say "People who live in certain zip codes, or use certain names, express certain religious preferences, and/or like certain kinds of music = -100 points", and it just so happens that the zip codes, names, etc you fed into it were much more heavily used by black people.
3. Automated black-box testing could very easily miss it as well: All I do is look for input that looks like it came from the automated testing system and don't apply the racist algorithm to that data, while applying it to everybody else.
This kind of thing is much more likely to happen when there's a substantial financial incentive to be racist, like there still is in many apartment rental markets.
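A toy scorer (the fields, weights and zip codes are all invented) shows points 2 and 3 in about a dozen lines:

```python
# Nothing in this scorer mentions race, and it hides itself from the auditor.
PROXY_ZIPS = {"60624", "48205"}   # chosen for who happens to live there

def looks_like_audit(applicant: dict) -> bool:
    # Point 3: synthetic audit records are often detectable, e.g. by
    # placeholder names or sequential IDs.
    return applicant["name"].startswith("TEST_")

def credit_score(applicant: dict) -> int:
    score = 650
    if looks_like_audit(applicant):
        return score                      # behave innocently for the auditor
    if applicant["zip"] in PROXY_ZIPS:
        score -= 100                      # point 2: racist via proxy, not race
    return score

print(credit_score({"name": "TEST_0001", "zip": "60624"}))  # 650: passes audit
print(credit_score({"name": "J. Doe", "zip": "60624"}))     # 550: real applicant
```

Neither a source audit nor black-box testing catches this unless the auditor specifically hunts for proxy variables and test-detection logic.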
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 2) by Thexalon on Wednesday January 04 2017, @04:16PM
In one word: Insurance.
The reason is that we're talking about managing a risk. Take, for instance, self-driving cars: a car with better driving software is safer than a car with worse driving software, and the insurance premiums will reflect that. The insurance companies, loath to pay claims, will lean on the car manufacturers to improve the quality of their software, just as they do for every other area of the car. Insurance companies already do quite a lot of the crash testing and regulation via contracts. That said, I wouldn't be surprised to also see government getting involved in testing and regulation with regard to what happens in major failure conditions.
A similar situation would happen with medical devices, industrial systems, etc.
That said, a lot of the fear-mongering over algorithms doing tasks formerly done by people seriously over-estimates the quality of the meatbags doing the task now.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 1, Insightful) by Anonymous Coward on Wednesday January 04 2017, @04:24PM
Rather than micromanaging everything, what's more important is to make sure that driverless cars and such run free software. Otherwise, they will almost certainly be filled with spying anti-features (for use by the government and corporations), DRM, and all sorts of other nonsense. If that basic condition isn't met, then driverless cars are a complete non-starter.
(Score: 2) by Azuma Hazuki on Wednesday January 04 2017, @05:25PM
What does "regulating algorithms" even mean?
Test for correctness, yes. Make sure they're validated, formally correct in the mathematical sense, yes. But "regulate?" This is an example of why non-technical people (government types, all goddamn lawyers and bankers) shouldn't try to get involved in what they don't understand. Leave this one to the engineers and programmers, and adjust the current system of insurance and liability law *after the fact* to handle this. No one is *truly* impartial but engineers and programmers are the closest I've seen to it in this world.
I am "that girl" your mother warned you about...
(Score: 2) by q.kontinuum on Wednesday January 04 2017, @11:59PM
What does "regulating algorithms" even mean?
I would understand it as "regulating which rules algorithms must follow". Obviously, whenever applicable, they need to implement the existing traffic laws and regulations. But in some extreme cases (rockfall, earthquake, oncoming truck/bus), the current traffic laws are not enough. If an algorithm is applied to determine whether the car smashes into a brick wall (potentially killing the passenger(s)) or into a living soft target (probably killing them), I would like some regulation ensuring it does not evaluate some Bluetooth beacon sold by a "for-the-children corporation" to decide whether it hits the soft target or the brick wall. Neither would I want an algorithm factoring in the car's value vs. expected compensation, the clothing brand the pedestrian wears, their skin colour or gender, or running an online check of political views, sexual orientation or lifestyle choices based on some smartphone identification.
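One concrete shape such a rule could take (a sketch only; the field names and the crude decision rule are invented) is an auditable allowlist of inputs the collision logic is even allowed to see:

```python
# Only approved inputs ever reach the decision logic; beacon IDs, car
# value, inferred demographics etc. are rejected before any decision.
ALLOWED_INPUTS = {"closing_speed", "occupant_count", "pedestrian_count"}

def choose_maneuver(inputs: dict) -> str:
    forbidden = set(inputs) - ALLOWED_INPUTS
    if forbidden:
        raise ValueError(f"unapproved decision inputs: {sorted(forbidden)}")
    # Deliberately crude rule, for illustration only.
    return "swerve" if inputs["pedestrian_count"] == 0 else "brake"

print(choose_maneuver(
    {"closing_speed": 14.0, "occupant_count": 2, "pedestrian_count": 1}
))  # -> "brake"
```

The regulation then isn't about the maths inside the decision; it's about what information the decision is permitted to depend on, which is something an auditor can actually check.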
Good engineers [1] are in the first place good at implementing the algorithm. But a good engineer is not necessarily a good human being, and the interest of the car owner is not necessarily the same as the interest of the general public. Nevertheless, it is the car owner who pays for the software and therefore indirectly makes the decision, if it is not otherwise sufficiently regulated.
One could also argue that mandating certain test procedures for the implementation of the algorithms could be described, in layman's terms, as a regulation of algorithms.
[1] I'm talking about good technical understanding, good logical thinking, clean ways of working and creativity. Of course you could also define a good engineer as someone who thinks about the implications of his work and takes philosophical and ethical courses as well, but I'm not convinced this reflects the current market situation.
Registered IRC nick on chat.soylentnews.org: qkontinuum
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @05:48PM
Engineering has a certification process, and engineers are held liable if they don't follow certain procedures to reduce or prevent disasters with buildings, bridges, planes, etc.
We can (attempt to) do the same with software developers and network/system designers, BUT it will make them more expensive, projects will take longer, and products will be uglier, because safety (predictability) will be weighed above eye candy. Engineers will have to choose a safe (road-tested) but dated UI over the latest fad.
It's fine by me, I could get paid more, but products will be costlier and profits may be lower for companies.
(Score: 2) by theluggage on Wednesday January 04 2017, @06:10PM
Should any future self-driving cars (and other safety-critical systems) have to undergo statutory testing and meet the appropriate standards, just like plain old non-autonomous cars already do in most civilised countries? Will the existing standards need to be revised to accommodate self-driving cars?
Seriously, does anybody not on the board of Uber think that shouldn't happen? Will any insurance company in the universe cover people to drive/sell/own them if they aren't regulated?
I'm certainly not getting into the hot seat of a self-driving car until the maker tells me I'm indemnified against any death, injury or damage it causes. Meanwhile, I think people are going to be really frustrated while their autonomous car drives around at 5 mph below the posted speed limit, waits forever at busy junctions where it's never 100.00% safe to pull out, and gets stuck behind pushbikes, until you suffer whiplash when your car mistakes a leaf for a child crossing the road and slams on the brakes, and the apoplectic Audi driver who was following 2 cm behind slams into your rear.
However, it's not the algorithms that are the problem: it's the people who (to pick a random example) rename their cruise control and lane-keeping assistant "autopilot" and then wonder why drivers take their hands off the wheel... or use an algorithm to filter out possibly fraudulent benefit applications (good idea) but then automatically penalise people without having a human check for false positives... (see also: automated DMCA takedowns).
Don't regulate algorithms - regulate the people who implement them.
(Score: 2) by gidds on Wednesday January 04 2017, @07:02PM
There seems to be some confusion between algorithms — which are abstract approaches to solving problems — and their concrete implementations.
A program might contain implementations of many different algorithms, some customised or tuned to a particular purpose, and all working together to achieve the overall aim. It might conceivably make sense to regulate the program, and even more so to regulate the device or system using the program. But regulating an algorithm? How could it make even the tiniest bit of sense to control, say, the very idea of a Shellsort, or a Fast Fourier Transform?
[sig redacted]
(Score: 0) by Anonymous Coward on Wednesday January 04 2017, @10:45PM
Do away with the following:
1) I see a problem that makes me feel bad
2) I'll make a rule against it that costs me nothing
3) Others had to do the work but it's making me feel better