The first fatality caused by a machine acting entirely on its own initiative was "Likely Caused By Software Set to Ignore Objects On Road," according to a new report on the collision, which happened last March:
The car's sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber's software decided it didn't need to react right away. That's a result of how the software was tuned. Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn't react fast enough, one of these people said.
Fast enough? She walked across three and a half lanes in what should have been plain view of the car's LIDAR the entire time.
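The "false positive" tuning the report describes is, in essence, a confidence threshold applied to perception output. A minimal sketch of how an over-aggressive threshold can suppress a real obstacle (all names and numbers here are hypothetical, not Uber's actual code):

```python
# Hypothetical sketch of confidence-threshold tuning in a perception
# pipeline. Names and numbers are illustrative only.

def should_brake(detections, threshold):
    """Return True if any detection at or above the confidence
    threshold lies in the vehicle's path."""
    return any(d["confidence"] >= threshold and d["in_path"]
               for d in detections)

# One frame: a pedestrian the sensors saw, but classified with only
# moderate confidence, alongside a genuine false positive.
frame = [
    {"label": "unknown_object", "confidence": 0.55, "in_path": True},
    {"label": "plastic_bag",    "confidence": 0.20, "in_path": True},
]

# A conservative tuning reacts; an aggressive one filters both out.
print(should_brake(frame, threshold=0.50))  # True  -> vehicle brakes
print(should_brake(frame, threshold=0.70))  # False -> obstacle ignored
```

Raising the threshold does cut spurious braking on floating bags and the like, but as the sketch shows, it indiscriminately discards any real obstacle whose classifier confidence falls below the cutoff.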
takyon: Also at Reuters. Older report at The Drive.
Previously: Uber Pulls Self-Driving Cars After First Fatal Crash of Autonomous Vehicle
Video Released of Fatal Uber-Pedestrian Accident, and More
(Score: 0) by Anonymous Coward on Tuesday May 08 2018, @04:24PM
You're wasting your breath when the answer is obviously "yes": somebody already thought it was a good idea, or you wouldn't be here ranting.
Oh good, sanity. Premeditated murder is bad, yet here you are declaring that you would happily murder people with full premeditation.
Yeah, a lot of people find math hard and history boring.
It's well-established that meatbags behind the wheel kill a lot of people. There's loads of documentation confirming that. Get the cars scanning the environment and intercommunicating and, as meatbags are removed from guiding their killing machines, it's inevitable that there will be fewer casualties. It won't happen tomorrow -- people won't want to give up control -- but give it time.
This must come as a surprise, but the American people already did that. Believe it or not, they elected him president. Insane, right?
Already done.
That doesn't work if you factor in the blue-uniformed gun-slingers.
Do you have evidence of that? Just because it's not in the news -- it's not exactly glamorous, so hardly surprising that it wouldn't make it to the news -- doesn't mean that nobody is considering the potential. I would be shocked if nobody was considering the potential. Not to say they won't handle it as poorly as is done with the many IoT devices out there, but people likely are considering it.
Depends on how they're configured, doesn't it? If they're configured to pull over and stop when things go wonky, there goes your "chaos mode".
People are nowhere near reliable enough to bet our lives on, yet we've done so for decades. Software doesn't have to be perfect (though it would be nice if it were), it just needs to be less unreliable than people to yield a net positive result.
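The "less unreliable than people" bar can be made concrete with back-of-the-envelope arithmetic. Assuming roughly 1.2 human-driver fatalities per 100 million vehicle miles (a commonly cited US figure; treat it as an assumption here) and a purely hypothetical autonomous rate 25% lower:

```python
# Back-of-the-envelope net-safety comparison. The human rate is an
# assumed figure (~1.2 deaths per 100M vehicle miles, roughly the US
# average); the autonomous rate is purely hypothetical.

HUMAN_RATE = 1.2e-8        # fatalities per vehicle mile (assumed)
AUTONOMOUS_RATE = 0.9e-8   # hypothetical: 25% lower than human

miles_per_year = 3.2e12    # approx. annual US vehicle miles (assumed)

human_deaths = HUMAN_RATE * miles_per_year
auto_deaths = AUTONOMOUS_RATE * miles_per_year

print(f"human drivers:   {human_deaths:,.0f} deaths/year")   # 38,400
print(f"autonomous:      {auto_deaths:,.0f} deaths/year")    # 28,800
print(f"net lives saved: {human_deaths - auto_deaths:,.0f}") # 9,600
```

Under these assumed rates, even a modestly better-than-human system prevents thousands of deaths per year, which is the net-positive argument being made above; the dispute in this thread is over whether the software is anywhere near that bar yet.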