The first machine ever to kill a human entirely on its own initiative did so in a collision "Likely Caused By Software Set to Ignore Objects On Road," according to a new report on the crash, which happened last March:
The car's sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber's software decided it didn't need to react right away. That's a result of how the software was tuned. Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn't react fast enough, one of these people said.
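The tuning described in the excerpt can be sketched, very loosely, as a confidence threshold on obstacle classification. All class names, scores, and threshold values below are hypothetical; Uber's actual pipeline is not public.

```python
# Hypothetical sketch of a false-positive filter like the one described
# above. Each detection carries a classifier confidence that it is a real
# obstacle; anything below the threshold is ignored as a "false positive".

detections = [
    {"label": "plastic bag", "obstacle_confidence": 0.15},
    {"label": "pedestrian with bicycle", "obstacle_confidence": 0.55},
]

def objects_to_react_to(detections, threshold):
    """Return only the detections confident enough to trigger braking."""
    return [d for d in detections if d["obstacle_confidence"] >= threshold]

# Conservatively tuned: both objects trigger a reaction.
print([d["label"] for d in objects_to_react_to(detections, 0.10)])

# Tuned "too far" toward suppressing false positives: the pedestrian is
# filtered out along with the bag.
print([d["label"] for d in objects_to_react_to(detections, 0.60)])
```

The point of the sketch is that a single scalar knob trades nuisance braking against missed real obstacles; there is no setting that eliminates both failure modes.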
Fast enough? She walked across three and a half lanes in what should have been plain view of the car's LIDAR the entire time.
takyon: Also at Reuters. Older report at The Drive.
Previously: Uber Pulls Self-Driving Cars After First Fatal Crash of Autonomous Vehicle
Video Released of Fatal Uber - Pedestrian Accident, and More
Related Stories
A self-driving Uber SUV struck and killed a pedestrian in Tempe, Arizona. It was in autonomous mode at the time of the collision, with a vehicle operator behind the wheel. Uber has suspended testing of its self-driving cars.
http://money.cnn.com/2018/03/19/technology/uber-autonomous-car-fatal-crash/index.html
https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html
https://finance.yahoo.com/news/self-driving-uber-kills-arizona-171055918.html
https://www.npr.org/sections/thetwo-way/2018/03/19/594950197/uber-suspends-self-driving-tests-after-pedestrian-is-killed-in-arizona
https://www.wsj.com/articles/uber-suspends-driverless-car-program-after-pedestrian-is-struck-and-killed-1521480386
https://www.theverge.com/2018/3/19/17139518/uber-self-driving-car-fatal-crash-tempe-arizona
https://techcrunch.com/2018/03/19/uber-self-driving-test-car-involved-in-accident-resulting-in-pedestrian-death/
I couldn't find any good analysis of the liability situation here.
A few Soylentils wrote in to tell us about a fatal accident between a pedestrian and an autonomous Uber vehicle.
Update - Video Released of Fatal Uber - Pedestrian Accident
I debated just replying to the original story, but this seemed a pretty significant update to me:
The Uber vehicle was operating in autonomous mode when it crashed into 49-year-old Elaine Herzberg on Sunday evening. Herzberg was transported to a hospital, where she later died from her injuries, in what may be the first known pedestrian fatality in a self-driving crash.
The video footage does not conclusively show who is at fault. Tempe police initially reported that Herzberg appeared suddenly; however, the video footage seems to show her coming into view a number of seconds before the crash. It also showed the vehicle operator behind the wheel intermittently looking down while the car was driving itself.
The link shows video of the seconds just before the accident.
The pedestrian did not step out in front of the vehicle; she was essentially out in the middle of the road, and all her lateral movement was nearly irrelevant. She might as well have been a stationary object in the middle of the road. You can see the headlights bring her feet into view first (meaning she was pretty much in the car's path before the headlights could reach her), then move up her body; she's already in the middle of the road in front of the car when she comes into view.
If I were driving that car, I think I'd have had time to hit the brakes (but not stop in time). I also think that if the camera view is an accurate representation of what was really visible, then the car was overdriving its headlights. Although given my experience with cameras, I wouldn't be surprised if actual visibility was better than what the video shows.
This, in my opinion, is pretty damning.
Police Chief: Uber Self-Driving Car "Likely" Not At Fault In Fatal Crash
A startup has become the first new company to launch autonomous cars (archive) following Uber's deadly accident in March:
On Monday, an orange and blue car with the words "Self-Driving Vehicle" prominently displayed on both sides drove itself through the streets of this rapidly growing city north of Dallas, navigating across four lanes of traffic and around a traffic circle.
The car, operated by the Silicon Valley start-up Drive.ai, will eventually become part of a fleet of autonomous taxis that will ferry locals along a predetermined route between the Dallas Cowboys facility in Frisco and two other office, retail and apartment complexes.
While other companies have tested self-driving cars for years and some are in the early stages of offering a taxi service, Drive.ai's autonomous vehicle debut on Monday was still notable. It was the first new rollout of autonomous cars in the United States since a pedestrian died in Arizona in March after a self-driving car operated by Uber hit her.
(Score: 3, Funny) by The Mighty Buzzard on Tuesday May 08 2018, @03:16PM (18 children)
Who would have believed that a computer would be unable to instantly identify and categorize any given physical object it might encounter like people do every waking minute of their lives?
My rights don't end where your fear begins.
(Score: 1, Funny) by Anonymous Coward on Tuesday May 08 2018, @03:20PM
Some engineer left it on 'Kill' mode instead of 'Stop' mode*. It's an easy mistake to make.
*ripped off from Dilbert
(Score: 3, Funny) by Bot on Tuesday May 08 2018, @03:36PM
Why do you assume the meatbag was misclassified?
Account abandoned.
(Score: 5, Funny) by DannyB on Tuesday May 08 2018, @04:19PM (9 children)
NOTICE!!
YOUR CAR HAS BEEN LOCKED INTO KILL MODE!
The control of your car has been encrypted with strongest encryption and unique key, generated for this car.
Private decryption key is stored on a secret Internet server and nobody can decrypt your car until you pay by bitcoin and obtain the private key.
You only have 96 hours to submit the bitcoin payment. If you do not send money within provided time, your car will automatically begin driving and colliding with as many persons as possible. No one will be able to stop it.
Press NEXT for the next page.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 2) by bob_super on Tuesday May 08 2018, @05:09PM (8 children)
Why would you make it collide with people ?
Those who have too little gas can go crash in fire stations and gas stations, and those with full tanks can go plow into buildings.
Lern2terror, noob !
(Score: 2) by DannyB on Tuesday May 08 2018, @06:11PM (7 children)
If the Ransomware doesn't do something really bad, then you aren't likely to pay the ransom.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 1) by tftp on Tuesday May 08 2018, @06:25PM (1 child)
(Score: 4, Insightful) by DannyB on Tuesday May 08 2018, @06:35PM
Very Much Agree, except for this hairball . . .
Tell it to people who buy a Windows PC.
It IS a standalone product according both to the manufacturer and to Microsoft. They treat the dastardly OS as a part, very much like a power supply or memory module.
Or, tell it to Apple users.
Pay? No problem!
Pay for fixing something that shouldn't be broken? No problem!
Pay an artificially high price? No problem!
Pay for an accessory that should be free? No problem!
Pay for artificially short product lifetime? No problem!
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 4, Funny) by bob_super on Tuesday May 08 2018, @06:25PM (4 children)
I'm suggesting full-scale city destruction. Can we discuss your classification of "really bad" ?
(Score: 2) by DannyB on Tuesday May 08 2018, @06:36PM (3 children)
A city full of infected cars might be really bad.
Or a country? That might be badly.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 2) by bob_super on Tuesday May 08 2018, @06:39PM
The ROI is pretty awesome, compared to a Zumwalt.
(Score: 1) by nitehawk214 on Tuesday May 08 2018, @07:50PM (1 child)
I am thinking of a zombie survival game, except the zombies are cars.
"Don't you ever miss the days when you used to be nostalgic?" -Loiosh
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @08:17PM
https://en.wikipedia.org/wiki/Maximum_Overdrive [wikipedia.org] ?
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @05:15PM (5 children)
The mechanism that these systems use is similar, but rudimentary. They lack other types of input humans factor in, such as distance vs. object type vs. expected size. Watch one react to a matchbox car on the roadway in full panic mode, while a hobo dressed in a garbage-bag poncho is "just a floating shopping bag".
(Score: 4, Informative) by The Mighty Buzzard on Tuesday May 08 2018, @05:48PM (4 children)
Not really similar at all. Our WTF handling routines are a gergillion times more sophisticated and tightly integrated. Which is why we don't even have to consciously think about what to do when we find the remote in the fridge or see twenty thousand chickens on the Interstate.
My rights don't end where your fear begins.
(Score: 3, Funny) by acid andy on Tuesday May 08 2018, @06:38PM (3 children)
Depends whether I've had my first coffee of the morning. If not, insufficient conscious thought is liable to lead to bizarre actions such as attempting to pour the remote into a bowl of cornflakes.
Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
(Score: 2) by The Mighty Buzzard on Tuesday May 08 2018, @07:13PM (2 children)
Fair enough. The brain does indeed require certain chemicals to function just like a microchip requires electrical power.
My rights don't end where your fear begins.
(Score: 1, Insightful) by Anonymous Coward on Tuesday May 08 2018, @11:09PM (1 child)
On a serious note, the brain performs better when not addicted to stimulants. I love my caffeine too, but don't kid yourself that it is necessary. At this point it is maintenance not performance enhancement.
(Score: 3, Informative) by The Mighty Buzzard on Tuesday May 08 2018, @11:41PM
Depends. When and how much you ingest can be used to trick your system into still regarding it as a stimulant, though with some loss of efficacy. Only drinking caffeinated drinks before ten in the morning, for instance.
My rights don't end where your fear begins.
(Score: 4, Interesting) by Justin Case on Tuesday May 08 2018, @03:20PM (39 children)
Suppose I make a machine that has the ability to kill people. Not a gun, where the kill decision is made by a human, but a machine that can make that decision on its own initiative.
Now I add the ability for this machine to wander around in public, so fast, in fact, that you can't possibly run away.
Let me give it the capacity to kill not just enemy troops, nor even members of some experimental team who have consented to accept the risk -- no -- I will deliberately include in my design the ability to kill people who are entirely uninvolved in the project. People who never agreed to accept the risk (you know, except for the generic Planetary Occupancy EULA) or were even warned that they would be in the crosshairs (you know, except for the generic Planetary Occupancy EULA).
Does this sound like a good idea to anyone??? Wouldn't I be severely prosecuted for even attempting to produce such a weapon? Is there any justification I could offer that would suffice?
Assuming this goes to trial (and it should) and a jury finds people guilty (and I suppose they should), I fully support the death penalty for those responsible. Moreover, I will be happy to personally swing a sledge hammer, on live TV, at the head of any person who knowingly and with utter disregard for human life participated in the design, manufacture, promotion, or rollout of this killbot technology. Yes, I'm serious. Premeditated murder of random bystanders is inexcusable.
No, I do not buy the argument that "other lives will be saved". For one thing, all the claims about what SDCs "will be" are pure speculation. You don't suddenly get the ability to predict the future just because you add "on a computer".
I have a modest proposal for you. Let's give Donald Trump permission to kill people.
We'll debate it in Congress, all proper like, then hold a vote. Even though SDC deployment has not been similarly vetted.
Here's the plan: Donald Trump can kill people any time he wants. No advance notice, no trial, no opportunity to defend yourself. You won't even be put on notice that you're in his crosshairs, other than the general notice given to the entire world that he now has this power and can use it with no checks or limits.
Your *only* possible defense will be to stay at least 50 feet away from any expanse of pavement. Forever.
We'll mutter some vague platitudes about how giving Trump authority to murder on any whim will cost fewer than 30,000* lives per year, so it will be OK. No actual guarantee, of course, that the limit will not be exceeded, or penalties when it is.
* substitute whatever number you think should be here
Meanwhile, nobody seems to be considering the potential for mass exploits. Ever heard of a software zero-day? They are discovered all the time, thanks to careless software development practices combined with management haste to get it out the door. Is there any reason at all to expect SDC software will be different? I mean, besides platitudes and empty promises? Before you answer consider the present case involves "mis-tuned" software. Someone made that decision. Someone who is now a murderer.
Anyway, consider yourself lucky if you have never experienced a software hack that takes down thousands of machines at once. It is entirely possible, and devastating. But so far this has been limited to loss of data and compute power, not loss of life. Picture the hack that sends a million cars into 100-mile-per-hour chaos mode. What will this do to your "lives saved" argument?
Software development practices are nowhere near reliable enough to bet our lives on them, and likely will not get there for decades, if ever. And even if the software is perfect (it never is) you can't even trust your hardware!
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @03:30PM (8 children)
To be fair, killing people by remotely controlling the computers in cars is almost certainly possible already but doesn't seem to happen much. I guess most people just aren't really into mass murder.
(Score: 1, Funny) by Anonymous Coward on Tuesday May 08 2018, @03:36PM (6 children)
Isn't that where you show up at church, and start shooting people? Or drowning them in the baptismal?
(Score: 2, Flamebait) by realDonaldTrump on Tuesday May 08 2018, @05:29PM (4 children)
May God be w/ the people of Charleston, South Carolina and Sutherland Springs, Texas. The tragedies there are incomprehensible. My deepest condolences to all.
(Score: 2) by SomeGuy on Wednesday May 09 2018, @05:31PM (3 children)
Just so everyone is clear, there is no such thing as "God".
A certain president needs to learn this.
(Score: 0, Touché) by Anonymous Coward on Wednesday May 09 2018, @07:14PM (2 children)
If he wants to believe in God that's his right, just like it's your right to not believe. Otherwise you are no better than the thought police.
(Score: 2) by SomeGuy on Wednesday May 09 2018, @09:05PM (1 child)
Wow, that is a retarded response. The president is supposed to be representing all of the people in the USA, and as such should not be publicly promoting a particular religion (or social media platform). That is the issue.
So all the idiots who tell me I should learn about Gerd are also no better than the thought police? Riiight, imaginary sky fairy worshipers are ALWAYS the good guys and everyone else is BAAAAD.
(Score: 0) by Anonymous Coward on Thursday May 10 2018, @02:33PM
> Wow that is a retarded response.
This is usually a tell of cognitive dissonance, where the other person starts with an insult. Let's continue.
> The president is supposed to be representing all of the people in the USA
The USA is largely a Christian nation, and it's a message most wish to hear. If it offends you, just don't listen. No one is forcing you to listen; you choose to. So what if he did? Should I be offended when someone wishes me well? Nonsense. I just accept it regardless of what the person says, religious or not, and take away the main message: that they care. I don't know why you are so hung up on this; you're one of the few atheists I've ever seen get this bent out of shape over it.
> So all the idiots who tell me I should learn about Gerd are also no better than the thought police?
Again, a personal attack. People who believe are idiots? Do you have factual studies to back this up? There have been plenty of brilliant people who have believed in God (the Big Bang theory was conceived by a Catholic priest), and many who have not, such as Stephen Hawking, or many others who escape me at this moment.
I never said you should learn about God either; I simply said you should respect their right, otherwise why should the other side respect your right not to believe? At the end of the day, I'd rather we just respect each other's beliefs, but you seem unwilling to do that. You wish to foist your non-belief on me. I don't know why you feel the need to do this, or to get personal like this, but I hope you realize that not everyone is out to force this on you.
(Score: 2) by DannyB on Tuesday May 08 2018, @06:39PM
Spike the communion grape juice.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 2, Insightful) by nitehawk214 on Tuesday May 08 2018, @08:05PM
The only way it will be useful is if you can program it to kill a specific person. Will the car loiter around the building where the person was last seen? Will the car need to go refuel if the person is difficult to locate or stays inside the building? Is it OK for the car to kill other people in order to kill the target? Does the car need programming to evade law enforcement? What if the target is in another autonomous vehicle? Can the cars negotiate with each other on who should do the killing? Will the car sacrifice itself, such as driving off a cliff or getting into a collision the car will not survive, in order to achieve its goal?
Wait, are we not talking about how best to turn these things into weapons?
"Don't you ever miss the days when you used to be nostalgic?" -Loiosh
(Score: 4, Insightful) by Runaway1956 on Tuesday May 08 2018, @03:56PM (11 children)
Good post. I wonder if we might introduce a couple more considerations?
At the present time, some asshole can stand on his brakes, just for the fun of it. He knows that the law will be on his side if he is rear-ended, and he can be an asshat if he wants to be. The people programming today's "autonomous" vehicles know, just as well as the asshat, that most vehicles are operated by some halfwit with a cell phone. They know that needlessly spiking the brake may well result in the halfwit driving up their non-existent smoke holes. Programmers want to avoid accidents, so they try to balance the margins of error.
Let's step forward 25 years. Now, about half the people on the road are riding in autonomous vehicles. The chance of having your smoke hole violated has dropped significantly. Shouldn't that mean the programmers have made adjustments to their margins of error?
Step forward another 25 years. Almost no one drives for himself anymore. Spiking the brakes for the shadow of a bird passing overhead should be fine. Every car around you is networked in, and they know why you are braking madly. No one is going to violate your nonsensical anatomy.
You're right, I can't know what these vehicles will become in the future. But, we can extrapolate some reasonable expectations. I think that ultimately, people will probably be safer because the car can think and react faster than humans do. But, there WILL BE a lot of lifeless bodies along the way.
I mean, we didn't just tame horses overnight, all those millennia ago. We aren't going to tame computers overnight either.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 4, Insightful) by The Mighty Buzzard on Tuesday May 08 2018, @05:50PM (10 children)
SDCs cannot be held accountable for their actions. Human beings can. Therein lies the fundamental difference.
My rights don't end where your fear begins.
(Score: 2) by Runaway1956 on Tuesday May 08 2018, @06:08PM (9 children)
Yes, but a corporation MIGHT be held accountable for the actions of its products. The typical manager wants to decrease the possibility that he or his company might have to answer for the actions of those self-driving cars. Buying off congress critters and judges has its limits. And the peasants with torches and pitchforks don't care how much you've paid the judge.
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 2) by The Mighty Buzzard on Tuesday May 08 2018, @07:15PM (2 children)
Which is still offering up a less viable solution to a problem that's been laid to rest longer than you've been alive.
My rights don't end where your fear begins.
(Score: 2) by arslan on Wednesday May 09 2018, @04:19AM (1 child)
True, but if the upsides of the new one outweigh those of the old one, that should be taken into account, rather than just comparing the downsides, which as you've pointed out are not equal.
If a half-drunk-most-of-the-time-potentially-brain-damaged Runaway can stitch together some lucid upsides, I'm sure most folks here can too with better quality at that.. if they're not too biased one way or another of course.
(Score: 2) by The Mighty Buzzard on Wednesday May 09 2018, @07:52AM
Nah, positive results are not what matters. If they were, we'd be sticking everyone in the slums in gas chambers. They do commit most of the crimes after all. I do not want decisions made based purely on results, humanity needs to be involved even if the same or worse decisions are reached.
My rights don't end where your fear begins.
(Score: 1, Insightful) by Anonymous Coward on Tuesday May 08 2018, @07:33PM (1 child)
How do you hold a corporation responsible? Corporate death penalty? Massive fine?
Either way, it's a bankruptcy proceeding, and the actual people start a new corporation with all the money that wasn't improperly shielded...
The 'corporation' is barely a thing, it has no skin in the game because it has no skin.
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @11:14PM
Hmm, corporations are skinwalkers!?
(Score: 2) by number11 on Wednesday May 09 2018, @04:28AM (2 children)
But they hardly ever are. What would be a 10 year sentence for an individual is a $10K fine for a corporation. Maybe if you expanded jail time to corporations (6 months means the marshals padlock your premises and freeze your bank accounts for 6 months). Yes, the investors and workers would whine that it hurt them, but criminals usually hurt innocent people. We don't let bank robbers off because it'll hurt their family.
(Score: 0) by Anonymous Coward on Wednesday May 09 2018, @07:25AM
Someone I know very well was working at a fintech corporation where his manager asked him to deliberately fudge the numbers and basically commit fraud. He raised an objection, so he was given the lowest possible rating and then asked to leave. He hired a lawyer and decided to whistleblow. Guess what: since he had gotten a new job after being fired, it was NOT established in a court of law that the termination had done him any harm, and since he had full freedom to choose not to commit fraud, the court concluded nothing bad happened.
Fortunately he got one month's full payment from the company's HR, but this example is all one needs to see how corporations are judged.
(Score: 2) by tangomargarine on Wednesday May 09 2018, @03:06PM
I would think a better idea would be to just raise their corporate tax rate to 100% of profits for the same length of time. That way the workers are still getting paid.
Although of course you'd need some oversight to make sure they don't fudge their numbers too hard and claim they're not making any profits like the movie studios.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 0) by Anonymous Coward on Wednesday May 09 2018, @09:09AM
You're not thinking big enough; https://blogs.wsj.com/digits/2015/01/08/google-wants-to-sell-you-auto-insurance/ [wsj.com]
(Score: 2) by insanumingenium on Tuesday May 08 2018, @03:58PM (6 children)
In fact, it makes them a whole lot worse. The accident rate for autonomous vehicles is far lower than the human rate according to literally everything I have ever seen, and this isn't supposition about future cars; this is the cars already on the road that you are so upset about. If you have other numbers, I would love to see them. I agree, finding good numbers is hard.
We allow cars near school zones not because they can't kill, but because their benefits are perceived to outweigh the risks. What we are dealing with here is risk tolerance. And yes, there is always socialized risk to just about everything people do or don't do.
Should we do more, better testing? Absolutely.
Should we place more focus on security? Absolutely.
Also, your understanding of the word murder is just plain wrong, murder requires intent, period. The fact that you also called it premeditated makes me think you won't understand that distinction either, but I had to say it.
(Score: 3, Insightful) by The Mighty Buzzard on Tuesday May 08 2018, @05:51PM (5 children)
See above. You're taking something that we already deal with via severe punishment and replacing it with saying "oh well, computers don't know any better".
My rights don't end where your fear begins.
(Score: 2) by arslan on Wednesday May 09 2018, @04:30AM (4 children)
But that's not what anyone is saying, though, at least on my read of it, and the OP has a point about intent. If you're driving on a highway with no ped crossing and some nut decides to play chicken running across it in poor visibility, you don't get the "severe punishment" you mentioned, at least not where I'm sitting.
On the other hand, if it can be shown that the developers/companies clearly had intent to ship flawed software that kills, they can be severely punished. Take the whole VW debacle as an example and project it onto this scenario: if management knowingly approved poorly written software for an autonomous vehicle so they could hit their market deadlines and it ends up killing folks, you can punish them.
It just so happens that with software, the likelihood of being able to assign blame is a lot lower than with a manned solution, but that doesn't necessarily mean the overall mortality rate is worse off, and that needs to be taken into account instead of just debating how to attribute blame in a bubble.
(Score: 3, Insightful) by The Mighty Buzzard on Wednesday May 09 2018, @07:55AM (3 children)
No, the overall mortality rate is not as important as human beings being the ones making life and death decisions. That kind of thinking leads to SkyNet.
My rights don't end where your fear begins.
(Score: 0) by Anonymous Coward on Wednesday May 09 2018, @03:02PM
So you're saying writing self-driving car code is a Trolley Problem?
(Score: 2) by arslan on Wednesday May 09 2018, @10:59PM (1 child)
That's a far stretch from self-driving cars to Skynet... I want my shag-mobile dammit!!
(Score: 2) by The Mighty Buzzard on Wednesday May 09 2018, @11:16PM
Make her husband drive.
My rights don't end where your fear begins.
(Score: 5, Touché) by tangomargarine on Tuesday May 08 2018, @04:06PM (3 children)
The hell? This is absolutely not premeditated murder.
The programmers never planned to kill anybody. What's the thing they say in law enforcement, motive, means, and opportunity? What is the programmer's motive to kill people with self-driving cars?
And your comparison of a self-driving car to a killbot is stupid. Yes, a car is a 2-ton weapon. But if I could drop an apartment building on somebody, that would make it a weapon, too, wouldn't it?
You typed a bunch of other words too but were frothing so hard I didn't bother to do more than skim.
Seriously: get some help.
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 4, Interesting) by DeathMonkey on Tuesday May 08 2018, @06:56PM (2 children)
I don't know why the so-called-nerds on sites like this have such a hard time understanding that intent matters to the law.
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @11:16PM (1 child)
Because that involves squishy soft science stuff that doesn't easily fit into a switch statement. Throw in some emotional "logic" from the human side of the nerd and you have a recipe for insanity.
(Score: 1, Interesting) by Anonymous Coward on Wednesday May 09 2018, @03:18AM
Intent = mind reading if the individual has not written or spoken about their intent
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @04:24PM
You're wasting your breath when the answer is obviously "yes": somebody already thought it was a good idea, or you wouldn't be here ranting.
Oh good, sanity. Premeditated murder is bad, yet here you are declaring that you would happily murder people with full premeditation.
Yeah, a lot of people find math hard and history boring.
It's well-established that meatbags behind the wheel kill a lot of people. There's loads of documentation confirming that. Get the cars scanning the environment and intercommunicating and, as meatbags are removed from guiding their killing machines, it's inevitable that there will be fewer casualties. It won't happen tomorrow -- people won't want to give up control -- but give it time.
This must come as a surprise, but the American people already did that. Believe it or not, they elected him president. Insane, right?
Already done.
That doesn't work if you factor in the blue-uniformed gun-slingers.
Do you have evidence of that? Just because it's not in the news -- it's not exactly glamorous, so hardly surprising that it wouldn't make it to the news -- doesn't mean that nobody is considering the potential. I would be shocked if nobody was considering the potential. Not to say they won't handle it as poorly as is done with the many IoT devices out there, but people likely are considering it.
Depends on how they're configured, doesn't it? If they're configured to pull over and stop when things go wonky, there goes your "chaos mode".
People are nowhere near reliable enough to bet our lives on, yet we've done so for decades. Software doesn't have to be perfect (though it would be nice if it was), it just needs to be less unreliable than people for a net positive result.
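That "less unreliable than people" bar can be put in back-of-envelope numbers. The human fatality rate below is the commonly cited US figure of roughly 1.25 deaths per 100 million vehicle miles; the SDC rate passed in, and the assumption of a total fleet swap, are purely illustrative.

```python
# Back-of-envelope estimate of net lives saved if all US driving shifted
# to SDCs with a given fatality rate. The human rate is the oft-cited
# ~1.25 deaths per 100 million vehicle miles; annual_miles approximates
# US vehicle miles traveled (~3.2 trillion). The SDC rate is illustrative.

HUMAN_DEATHS_PER_100M_MILES = 1.25

def net_lives_saved(sdc_deaths_per_100m_miles, annual_miles=3.2e12):
    """Lives saved per year, given the SDC fleet's fatality rate."""
    delta = HUMAN_DEATHS_PER_100M_MILES - sdc_deaths_per_100m_miles
    return delta * annual_miles / 1e8

# An SDC fleet only half as deadly as human drivers:
print(round(net_lives_saved(0.625)))  # 20000
```

Note the function goes negative if SDCs turn out deadlier than humans, which is exactly the empirical question the thread is arguing about.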
(Score: 0, Flamebait) by realDonaldTrump on Tuesday May 08 2018, @05:43PM
Thank you for your loyal support, Justin. But I already have the power to kill whoever I want -- without the Due Process. President Obama gave it to himself. He gave it a fancy name, I call it the kill list. Because that's what it is. The great American people didn't tell him "no." And they elected me, overwhelmingly. As everybody knows. I miss my old life, I sacrificed a lot to become President. But it comes with some nice perks!!
(Score: 2) by DeathMonkey on Tuesday May 08 2018, @06:58PM (2 children)
Not a gun, where the kill decision is made by a human,
There are enough accidental gun deaths that you could continue with that metaphor if you wanted to.
(Score: 2, Disagree) by number11 on Wednesday May 09 2018, @04:30AM (1 child)
Most gun deaths may involve not intent, but stupidity or negligence, on the part of the killer.
(Score: 2) by The Mighty Buzzard on Wednesday May 09 2018, @07:58AM
Untrue. Most gun deaths (a comfortable majority, not just a plurality) most certainly do involve intent and have despair as their greatest common factor.
My rights don't end where your fear begins.
(Score: 2) by c0lo on Tuesday May 08 2018, @11:35PM
If you make this machine using the software industry way (need to release, the management says so. And don't forget the costs), expect that kill machine to pass the child-care certification on the basis of how many bugs the firmware will have.
My point? "Hanlon's razor" likely applies in Uber's case; their management is just in damage control now (trying to save their skin while still keeping the 'Uber self-driving car' idea afloat)
https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
(Score: 2) by Mykl on Wednesday May 09 2018, @01:08AM
Wow, that's an overreaction.
What's your view on the (extremely rare) death due to vaccination? Should all of the people involved in the supply chain of vaccines, which save millions of lives, be lined up and shot because one person died of an adverse reaction?
(Score: 2, Funny) by Anonymous Coward on Tuesday May 08 2018, @03:35PM
In other words, the car was trying to tell the difference between a plastic bag and a lady. It should run over a plastic bag - if necessary - but it should not run over a lady. Unfortunately, it was tuned too far towards "plastic bag".
Therefore, when it encountered a "bag lady [wiktionary.org]", it thought "bag" instead of "lady".
I'll see myself out.
(Score: 1, Insightful) by Anonymous Coward on Tuesday May 08 2018, @03:42PM (10 children)
So basically it assumes it should drive over it, unless it's classified as something it shouldn't drive over. Shouldn't that be the other way around? Like, safety first, which most traffic laws are based on?
(Score: 2, Informative) by takyon on Tuesday May 08 2018, @03:48PM (5 children)
Drivers don't stop when a plastic bag or tumbleweed flies in front of their car.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @03:57PM (4 children)
you're not paying attention. the op said "laws are based on", as in "the pedestrian has the right away for this very reason."
(Score: 2) by tangomargarine on Tuesday May 08 2018, @04:21PM
The phrase you're looking for is "right of way."
"Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
(Score: 3, Insightful) by takyon on Tuesday May 08 2018, @04:22PM (2 children)
If the car was paralyzed by any object it couldn't "definitively" identify, it would be stopping far too often.
Autonomous car designers already err on the side of caution and have the car stop in annoying ways. Uber was trying to swing slightly the other way, and went too far. Only the laws that regulate autonomous vehicle testing have anything to do with it.
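The tuning trade-off described above can be sketched as a single confidence threshold on a hazard score. This is a hypothetical illustration of the concept, not Uber's actual code; `BRAKE_THRESHOLD` and `hazard_score` are made-up names.

```python
# Hypothetical sketch of a tunable "false positive" gate for braking.
# Raising the threshold suppresses nuisance stops for plastic bags, but
# also raises the risk of ignoring a real pedestrian.

BRAKE_THRESHOLD = 0.7  # the "dial" being tuned

def should_brake(detections):
    """Brake if any detected object's hazard score clears the threshold."""
    return any(d["hazard_score"] >= BRAKE_THRESHOLD for d in detections)

# An ambiguous object scored just below the threshold does not trigger braking:
print(should_brake([{"label": "unknown", "hazard_score": 0.65}]))  # False
```

The entire debate in this thread is essentially about where that one number should sit, and who gets to decide.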
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @10:32PM
At first they would stop too often. As time goes by they'd get smarter about it and just stop for things that do require it.
If they're going to operate on public roads they should be extremely, over the top paranoid about running into unknown objects. And they should be operating slowly enough for any vehicles behind them to stop even with a maximum application of the brakes.
(Score: 0) by Anonymous Coward on Wednesday May 09 2018, @02:09AM
> If the car was paralyzed by any object it couldn't "definitively" identify, it would be stopping far too often.
This seems like a very good reason that it's too early to be testing these things on public roads. They need to be at least as good as an alert person (not distracted or impaired) at identifying every object/person/animal that could possibly get into their path.
Did the Uber car have radar? If so, that bicycle (metal) that was being walked across the road should have returned a signal well in advance, plenty of time to slow down and avoid running over that poor person.
(Score: 5, Insightful) by ledow on Tuesday May 08 2018, @04:02PM (3 children)
Nope.
Slamming the brakes on because a bit of tumbleweed blows across the road would probably cause more accidents.
And what if that bag was just lying in the road with a brick in it? You could probably tell as a human that it "wasn't right".
And you'd probably still try to slow/avoid it even if it's just a blowing empty bag (those things tend to stick to your exhaust and melt with a horrendous smell).
Fact is, the car wasn't able to judge, because it didn't have any sensor capable of judging such things, nor of any intelligence to apply to the sensors it had to judge such things.
Imagine being in a court. And quite literally saying "Sorry, your Honour, but I couldn't tell if the woman with the bike was a paper bag or not, so I just drove over it".
Would it pass muster for a human? No. So why would a company's software claiming to do that human's job be any different?
Fact is, the software isn't there to make these kinds of decisions. It all has to be "tuned" (my prime argument against anyone calling anything we have today AI - they are human-fed heuristics at best, and poor ones at that) because it can't infer anything about the situation whatsoever. All the fancy claims just come down to someone turning a software dial between "run over plastic bags" and "mow down little old ladies". If they tune it wrong, the software goes wrong. And obviously that's what's happened here.
This stuff isn't ready to risk people's lives on.
Give it a few more decades of off-road testing and you might get closer.
(Score: 3, Insightful) by bob_super on Tuesday May 08 2018, @05:38PM (1 child)
> Fact is, the car wasn't able to judge, because it didn't have any sensor capable of judging such things,
> nor of any intelligence to apply to the sensors it had to judge such things.
I'm not an autonomous car designer. But if I was, the first thing to design and qualify on the vision system would be "can you identify a walking or standing human?".
The plastic bag is a terrible example. The whole point of the sensor system is to clearly sort human shapes, and tag their position and speed. The vision and classification system, presented with a few seconds of a walking woman, should never ever have classified her as anything else than "Walking human, going right at walking pace". It's a very distinctive shape, and "we have to avoid braking for plastic bags" is total bullshit. We're not looking at braking for a human chalk outline, a kid's toy, or a plastic bag, but at not correctly figuring out lidar test case number 1: full size human walking perpendicular to travel direction.
(Score: 4, Disagree) by ledow on Tuesday May 08 2018, @07:30PM
Now distinguish between a plastic bag and a guy crouching in a shiny black puffer jacket (google it if you're unfamiliar) to tie his shoelace in the middle of the road. If you haven't had stuff like that happen to you, everything from deer, to children running between parked cars, to things like tyres rolling into the road in front of you, then you haven't been driving long or you haven't been paying attention enough that you COULD interpret them as a threat.
You have literally no way to teach a computer the difference in any simple fashion. Certainly not in dark conditions (but then it shouldn't be making any decision about things it can't illuminate clearly), and certainly not at the speeds involved here.
Even humans get it wrong. But humans can be brought to bear before a court to explain themselves. A computer system cannot, at the moment. A human can tell you that it was dark, and he looked exactly like a bin bag, etc. and a court may believe them. The computers involved here do not have that kind of analysis, or ability to justify themselves. It was 51% a hazard they could drive through and 49% not, so they rammed it without even trying to brake.
If you're going to have different systems on the same roads and claim equivalence (or, worse, superiority as the self-driving car manufacturers attempt), then they need to be just as accountable as a human in the same position.
In this incident, my first question as a lawyer would be "So, Mr Driver, what was your visibility and were you doing an appropriate speed for the conditions? (Answer: No, as they hit something that literally appears in their headlights way within their braking distance making it impossible to stop). How long did you get to assess the object you hit and whether or not to hit it? (Answer: A second or two). At which point do you take action to avoid a collision? (Answer: None, watch the video - no braking, no moving, nothing). And, crucially, what other hazards were there on the road at the time? (Answer: None. Not raining, not snowing, not oily ground, no car in front, behind or to the side, no crowds of pedestrians, etc. etc. etc.)"
You'd convict a human driver of causing death by dangerous driving in such circumstances. So what's the sanction for the manufacturer now?
(Score: 2) by choose another one on Tuesday May 08 2018, @07:19PM
> Imagine being in a court. And quite literally saying "Sorry, your Honour, but I couldn't tell if the woman with the bike was a paper bag or not, so I just drove over it".
But that isn't what happened, your statement implies you have already decided the object is a woman with a bike, and then drove over it.
A more accurate comparison would be: "Sorry your Honour but I couldn't tell if what I saw in the road was a black bag or some clothes or a human being, so I just drove over it".
> Would it pass muster for a human?
Yes, sometimes at least - in fact I know of cases where (my version, roughly) it has.
I also know of cases where the same argument didn't pass muster - but it took two trials to get a verdict and was probably because reconstructions showed the obstacle-that-was-actually-a-human would have been visible for over 100m in a 30mph limit, giving 9 seconds to act (the driver wasn't speeding). The Uber vehicle was going faster (but not speeding) and a human driving it would have had a lot less time.
(Score: 3, Interesting) by takyon on Tuesday May 08 2018, @04:03PM
Implement detection of objects in front of or behind the car, but no major autonomy features. Tune it to be strict about what is considered a threat, like Uber did. Allow the car to stop before hitting a human in most (some?) cases, but don't give the human driver the opportunity to let go of the wheel or take a nap (no lane keeping). Don't aggressively market/advertise the feature. Save lives.
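The limited, strictly-tuned emergency braking proposed above is roughly what a time-to-collision gate looks like: brake only on unambiguous, imminent threats, and leave all steering to the human. A minimal sketch, with an illustrative threshold value that is my assumption, not from any real system:

```python
# Sketch of strict automatic emergency braking: intervene only when the
# estimated time-to-collision drops below a tight limit. No lane keeping,
# no autonomy the driver could lean on.

TTC_LIMIT_S = 2.0  # strict: only imminent threats trigger braking

def ttc_seconds(distance_m, closing_speed_mps):
    """Time to collision; infinite if we are not closing on the object."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def emergency_brake_needed(distance_m, closing_speed_mps):
    return ttc_seconds(distance_m, closing_speed_mps) < TTC_LIMIT_S

# 30 m away, closing at 17 m/s (~38 mph): about 1.76 s to impact.
print(emergency_brake_needed(30.0, 17.0))  # True
```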
(Score: 3, Flamebait) by DannyB on Tuesday May 08 2018, @04:23PM (2 children)
It wasn't only faulty software.
It was a faulty safety driver. The nature of the bug was that the driver only occasionally looked up from her phone to see if the car had crashed yet.
The Centauri traded Earth jump gate technology in exchange for our superior hair mousse formulas.
(Score: 5, Interesting) by theluggage on Tuesday May 08 2018, @05:26PM (1 child)
The idea of a minimum-wage safety driver is faulty. The people supervising these cars should be expert drivers - police drivers, advanced driving instructors, advanced police driving instructors, etc. who are verging on OCD about correct car handling and safety - and they shouldn't be updating their facebook, they should be dictating a constant stream of commentary on how the car is driving from minute to minute. In the first instance, they should be driving and the computer's decisions should be compared against theirs. For one thing, that means that you'll actually get meaningful feedback on the system's performance instead of "Hey, yeah, it didn't crash this time".
(Score: 4, Insightful) by bob_super on Tuesday May 08 2018, @05:44PM
The test driver should be driving at all times. The computer would be comparing its decisions to the driver's decisions (Waymo probably went through that phase).
Whoever is connected to the road should change randomly. That isn't as freaky as it sounds, if you remember the wide roads and low speed limits. The human driver would have a trigger button on the wheel to guarantee they are in control when the wheel is grabbed tightly.
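The evaluation scheme described here is often called "shadow mode": the human drives, the computer's proposed actions are logged and scored against what the driver actually did. A hypothetical sketch of that comparison (the data layout is invented for illustration):

```python
# Shadow-mode scoring: each frame pairs the computer's proposed action
# with the action the human driver actually took.

def shadow_mode_report(frames):
    agree = sum(1 for f in frames if f["computer"] == f["driver"])
    disagreements = [f for f in frames if f["computer"] != f["driver"]]
    return {"agreement": agree / len(frames), "disagreements": disagreements}

frames = [
    {"t": 0.0, "computer": "coast", "driver": "coast"},
    {"t": 0.1, "computer": "coast", "driver": "brake"},  # driver saw something the system missed
    {"t": 0.2, "computer": "brake", "driver": "brake"},
]
report = shadow_mode_report(frames)
print(report["agreement"])  # 2 of 3 frames agree
```

Every disagreement frame is exactly the "meaningful feedback" theluggage is asking for, instead of "it didn't crash this time".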
(Score: 3, Funny) by Anonymous Coward on Tuesday May 08 2018, @04:33PM (1 child)
they should be testing these on google or microsoft campuses instead. with or without prior warning.
(Score: 1, Insightful) by Anonymous Coward on Tuesday May 08 2018, @05:18PM
Yes please, test on various tech company campuses.
I like to ride my bicycle near the edge of the road (we have wide shoulders in much of NY State), but recent articles in the bicycling press indicate that none of the self driving car developers take cyclists into account. Efforts from bicycle advocates (trying to move people from cars to bikes) to work with Uber and others have been stonewalled. In some states there are even minimum distance passing regulations (like 4 feet between car and bicycle)--none of that is in the Uber software.
I'm just glad that (as far as I can tell), no one is testing these cars around here.
(Score: 3, Insightful) by Gaaark on Tuesday May 08 2018, @05:18PM
Cars don't kill people, people kill people...with cars.
Killer Cars.
---Radiohead
https://m.youtube.com/watch?v=6yuws-KTdXg [youtube.com]
(Cops kill black people with extreme prejudice)
--- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
(Score: 4, Insightful) by Gaaark on Tuesday May 08 2018, @05:20PM
Who's going to jail?
My guess, some engineer instead of some CEO.
(Score: 1, Interesting) by Anonymous Coward on Tuesday May 08 2018, @09:06PM
Yes, I know the pedestrian had right of way, and yes, the Uber car should have stopped, but read on.
First, a couple givens:
So, the pedestrian, provided they were even attempting to observe for oncoming traffic, should have been able to perceive the pair of point light sources (headlights) on the Uber car for a lot farther away than the safety driver in the car could have seen the pedestrian via the headlight illumination.
Now, putting myself in the role of pedestrian, in the same conditions. I'm looking to cross a dark stretch of road and I see oncoming headlights. Myself, I pause a bit, judge how quickly those same headlights seem to be approaching, and if there is any question in my mind that they might be too close, I simply do not start across the street.
Yet in the video, the pedestrian can be seen in the adjacent lane, walking straight into the Uber car's lane, close enough for the headlights and camera to illuminate her. Why did she not recognize these oncoming headlights as being too close and stop her attempt to cross the street until the headlights had safely passed? The video appears to show her as if she never even saw a car as present. Why is that?
(Score: 4, Touché) by crafoo on Tuesday May 08 2018, @11:08PM
Possibly what is needed is to greatly restrict our citizens' ability to just freely roam about wherever they want, without supervision, or at least a government issued tracking device. These real-time trackers would make these types of collisions almost impossible! This would be great for business but also necessary and required to ensure the safety of our children. People, think of the children. I know it's a slight restriction to your freedom: the ability to walk wherever you want (like a god damned brutish animal) and to be tagged and tracked 24/7. I'm sorry though, it's simply a necessity of modern life no one could have possibly foreseen. Sacrifices must be made. The children must be made safe. Rights and freedoms end where business and benevolent leaders decide to draw the line. Your petulant and destructive fetish for childishly holding on to old outdated ideas is somewhat embarrassing, and dare I say, terrorist-like. You aren't a terrorist are you, citizen?
(Score: 2) by legont on Wednesday May 09 2018, @01:08AM
First, detection of a human on the road, as anything else for that matter, is a probability question.
Second, any emergency braking is dangerous for the car's occupants, so their survival is also a probability matter.
They have (for now at least) an adjustable parameter which says at what probability to apply the brakes, so as to optimize a certain utility function, say minimizing the total probability of a stiff.
How else is it supposed to work?
Now, let's look at the future for a sec.
A car's passengers and pedestrians are in conflict over where that variable is supposed to be set. I expect owners to demand their safety at the expense of the safety of the pedestrians. They have access to the hardware, so they will probably win, especially if the liability is on the manufacturer. If the liability is on the owner, nobody will ever buy this crap.
However, at some point a better AI will drive, such as a neural network AI, where this variable will not be accessible at all. It, the AI, will "decide" how to set it. There will be no theoretical possibility, let alone a practical one, of reliably changing the setting.
There will be fun.
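The probability-plus-utility framing in this comment can be written down as an expected-cost decision: brake when the expected harm of continuing exceeds the expected harm of emergency braking. The cost weights below are invented for illustration; no real system publishes these numbers.

```python
# Expected-cost braking decision. The two cost constants are the
# "adjustable parameter" in disguise: their ratio sets the probability
# threshold at which the car brakes.

COST_HIT_PEDESTRIAN = 1000.0  # relative harm weight (made up)
COST_EMERGENCY_BRAKE = 1.0    # risk to occupants / traffic behind (made up)

def brake_decision(p_human):
    """p_human: estimated probability the detected object is a person."""
    expected_cost_continue = p_human * COST_HIT_PEDESTRIAN
    return expected_cost_continue > COST_EMERGENCY_BRAKE

# With these weights the implied threshold is 1/1000: even a 0.2% chance
# of a person triggers braking, while 0.05% does not.
print(brake_decision(0.002))   # True
print(brake_decision(0.0005))  # False
```

Whoever controls those two constants controls exactly the passengers-versus-pedestrians conflict the comment predicts.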
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 1) by Luke on Wednesday May 09 2018, @09:05PM
Seems to me the issue of ID'ing a live body from a plastic bag is not difficult: https://www.google.com/search?q=autonomous+vehicle+thermal [google.com]
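A toy sketch of the thermal-sensor point: a person radiates near body temperature, so even a crude fraction-of-warm-pixels test separates a pedestrian from an ambient-temperature plastic bag. All values and names here are illustrative assumptions, not any real sensor's specification.

```python
# Crude thermal discriminator: does enough of the detected region sit in
# the human surface-temperature band?

HUMAN_TEMP_RANGE_C = (28.0, 40.0)  # skin/clothing surface, roughly (assumed)

def likely_living(thermal_pixels_c):
    lo, hi = HUMAN_TEMP_RANGE_C
    warm = [t for t in thermal_pixels_c if lo <= t <= hi]
    return len(warm) / len(thermal_pixels_c) > 0.2  # enough warm area (assumed)

# Three of five pixels in the warm band -> likely a person, not a bag.
print(likely_living([31.0, 32.5, 30.8, 15.0, 14.9]))  # True
```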