
SoylentNews is people

posted by takyon on Tuesday May 08 2018, @03:11PM   Printer-friendly
from the false-negative dept.

The first machine to kill a human entirely on its own initiative was "Likely Caused By Software Set to Ignore Objects On Road," according to a new report on the collision, which happened last March:

The car's sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber's software decided it didn't need to react right away. That's a result of how the software was tuned. Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn't react fast enough, one of these people said.

Fast enough? She walked across three and a half lanes in what should have been plain view of the car's LIDAR the entire time.
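The tuning described in the report amounts to a confidence threshold: score each detection, and ignore anything below the cutoff as a false positive. A minimal sketch of that idea (the function name, fields, and numbers are all illustrative assumptions, not Uber's actual code):

```python
# Hypothetical sketch of threshold-based false-positive filtering: each
# detected object carries a "threat" confidence score, and objects below
# a tunable cutoff are dismissed as noise.

IGNORE_THRESHOLD = 0.7  # tuned higher -> more objects dismissed as noise

def should_brake(detections, threshold=IGNORE_THRESHOLD):
    """Return True if any detected object is treated as a real obstacle."""
    for obj in detections:
        if obj["threat_score"] >= threshold:
            return True  # real obstacle: react
    return False  # everything dismissed as a false positive

# A plastic bag scores low, so it is correctly ignored...
print(should_brake([{"label": "plastic_bag", "threat_score": 0.1}]))  # False
# ...but an over-aggressive cutoff can also dismiss a pedestrian,
print(should_brake([{"label": "pedestrian", "threat_score": 0.6}]))   # False
# whereas a lower cutoff would have triggered braking.
print(should_brake([{"label": "pedestrian", "threat_score": 0.6}], threshold=0.5))  # True
```

Raising the threshold suppresses more plastic bags, but, as the report suggests, it also raises the odds of dismissing a real pedestrian.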

takyon: Also at Reuters. Older report at The Drive.

Previously: Uber Pulls Self-Driving Cars After First Fatal Crash of Autonomous Vehicle
Video Released of Fatal Uber - Pedestrian Accident, and More


Original Submission #1
Original Submission #2

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Tuesday May 08 2018, @04:24PM (#677079)

    Does this sound like a good idea to anyone???

    You're wasting your breath when the answer is obviously "yes": somebody already thought it was a good idea, or you wouldn't be here ranting.

    Moreover, I will be happy to personally swing a sledgehammer, on live TV, into the head of any person who knowingly and with utter disregard for human life participated in the design, manufacture, promotion, or rollout of this killbot technology. Yes, I'm serious. Premeditated murder of random bystanders is inexcusable.

    Oh good, sanity. Premeditated murder is bad, yet here you are declaring that you would happily murder people with full premeditation.

    No, I do not buy the argument that "other lives will be saved".

    Yeah, a lot of people find math hard and history boring.

    It's well-established that meatbags behind the wheel kill a lot of people. There's loads of documentation confirming that. Get the cars scanning the environment and intercommunicating and, as meatbags are removed from guiding their killing machines, it's inevitable that there will be fewer casualties. It won't happen tomorrow -- people won't want to give up control -- but give it time.

    I have a modest proposal for you. Let's give Donald Trump permission to kill people.

    This may come as a surprise, but the American people already did that. Believe it or not, they elected him president. Insane, right?

    Here's the plan: Donald Trump can kill people any time he wants. No advance notice, no trial, no opportunity to defend yourself. You won't even be put on notice that you're in his crosshairs, other than the general notice given to the entire world that he now has this power and can use it with no checks or limits.

    Already done.

    Your *only* possible defense will be to stay at least 50 feet away from any expanse of pavement. Forever.

    That doesn't work if you factor in the blue-uniformed gun-slingers.

    Meanwhile, nobody seems to be considering the potential for mass exploits. Ever heard of a software zero-day? They are discovered all the time, thanks to careless software development practices combined with management haste to get it out the door.

    Do you have evidence of that? Just because it's not in the news -- it's not exactly glamorous, so hardly surprising that it wouldn't make it to the news -- doesn't mean that nobody is considering the potential. I would be shocked if nobody was considering the potential. Not to say they won't handle it as poorly as is done with the many IoT devices out there, but people likely are considering it.

    Picture the hack that sends a million cars into 100-mile-per-hour chaos mode. What will this do to your "lives saved" argument?

    Depends on how they're configured, doesn't it? If they're configured to pull over and stop when things go wonky, there goes your "chaos mode".
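    That "pull over and stop when things go wonky" configuration can be pictured as a tiny fail-safe state machine. A sketch under the same assumption, with purely illustrative state names:

    ```python
    # Fail-safe transition rule: any detected anomaly (sensor dropout,
    # implausible commands, failed self-check) forces a pull-over, and the
    # vehicle never leaves the safe state on its own.

    def next_mode(current_mode, anomaly_detected):
        """Compute the next driving mode; the safe state is latched."""
        if current_mode == "PULL_OVER_AND_STOP":
            return "PULL_OVER_AND_STOP"  # latched until manual reset
        if anomaly_detected:
            return "PULL_OVER_AND_STOP"
        return current_mode

    print(next_mode("NORMAL_DRIVING", anomaly_detected=True))  # PULL_OVER_AND_STOP
    ```

    The point of latching the safe state is that a compromised or glitching system cannot talk itself back into 100-mile-per-hour "chaos mode".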

    Software development practices are nowhere near reliable enough to bet our lives on them,

    People are nowhere near reliable enough to bet our lives on, yet we've done so for decades. Software doesn't have to be perfect (though it would be nice if it was), it just needs to be less unreliable than people for a net positive result.
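    The "net positive" claim is just arithmetic over fatality rates. A back-of-envelope sketch: the human-driver rate of roughly 1.1 deaths per 100 million vehicle miles is a widely cited US (NHTSA) figure, while the automated rate here is a made-up assumption purely for illustration:

    ```python
    # Compare expected deaths at two fatality rates over the same mileage.
    MILES = 3_000_000_000_000        # ~annual US vehicle miles travelled
    HUMAN_RATE = 1.1 / 100_000_000   # deaths per mile, human drivers (NHTSA-style figure)
    ROBOT_RATE = 0.8 / 100_000_000   # hypothetical automated rate, assumed for illustration

    human_deaths = MILES * HUMAN_RATE
    robot_deaths = MILES * ROBOT_RATE
    print(round(human_deaths - robot_deaths))  # net lives saved per year at these rates
    ```

    Even a modestly lower rate compounds into thousands of lives per year at that mileage, which is the whole argument: the software doesn't have to be perfect, only measurably better.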