Submitted via IRC for SoyCow1337
In review of fatal Arizona crash, U.S. agency says Uber software had flaws
WASHINGTON (Reuters) - An Uber self-driving test vehicle that struck and killed an Arizona woman in 2018 had software flaws, the National Transportation Safety Board said Tuesday as it disclosed the company’s autonomous test vehicles were involved in 37 crashes over the prior 18 months.
The NTSB may use the findings from the first fatal self-driving car crash to make recommendations to the industry on how it addresses self-driving software issues, or to regulators on how to oversee the industry.
The board will meet Nov. 19 to determine the probable cause of the March 2018 accident in Tempe, Arizona, that killed 49-year-old Elaine Herzberg as she was walking a bicycle across a street at night.
In a report released ahead of the meeting, the NTSB said the Uber Technologies Inc vehicle had failed to properly identify her as a pedestrian crossing a street.
That accident prompted significant safety concerns about the nascent self-driving car industry, which is working to get vehicles into commercial use.
In the aftermath of the crash, Uber suspended all testing and did not resume until December in Pennsylvania, with revised software and significant new restrictions and safeguards.
A spokeswoman for Uber's self-driving car effort, Sarah Abboud, said the company regretted the crash that killed Herzberg and noted it has “adopted critical program improvements to further prioritize safety. We deeply value the thoroughness of the NTSB's investigation into the crash and look forward to reviewing their recommendations.”
The NTSB reported at least two prior crashes in which Uber test vehicles may not have identified roadway hazards. The NTSB said that between September 2016 and March 2018 there were 37 crashes of Uber vehicles in autonomous mode, including 33 in which another vehicle struck the test vehicle.
(Score: 3, Interesting) by PiMuNu on Friday November 08 2019, @03:33PM (1 child)
Quotes from theregister:
https://www.theregister.co.uk/2019/11/06/uber_self_driving_car_death/
the code couldn't recognize her as a pedestrian, because she was not at an obvious designated crossing.
“The system design did not include a consideration for jaywalking pedestrians,” the watchdog stated.
> First, an actual optimization would be to not drive at all.
I think you misunderstand my use of "optimisation". The algorithms in question are what I would consider minimisation or optimisation routines: from a set of input variables, they seek an optimised or minimised set of parameters that achieves some programmer-defined goal. For example, based on a set of pixels, they seek the "best fit" or "optimal" identification of what the object is, e.g. "bicycle", "car" or "fish", and of its path, e.g. "stationary", "about to hit the car", etc.
If the programmer tells the algorithm to only treat objects as potentially crossing when they are near a pedestrian crossing, then that would seem negligent, but IANAL. This is what The Register article is implying.
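To make that concrete, here is a minimal Python sketch of the kind of classification-gated rule being described. The labels, scores, and function names are all invented for illustration; this is not Uber's actual code.

```python
# Hypothetical illustration of a classification-gated hazard check.
# Labels, scores, and the gating rule are invented for this example.

def classify(scores):
    """Pick the 'best fit' label: the class with the highest score."""
    return max(scores, key=scores.get)

def is_crossing_hazard(scores, near_crosswalk):
    """The problematic design the article implies: only an object
    recognised as a pedestrian near a designated crossing is treated
    as potentially crossing the road."""
    return classify(scores) == "pedestrian" and near_crosswalk

# A person pushing a bicycle mid-block: the scores are ambiguous and
# there is no crosswalk nearby, so the hazard check never fires.
scores = {"pedestrian": 0.40, "bicycle": 0.45, "other": 0.15}
print(classify(scores))                                   # bicycle
print(is_crossing_hazard(scores, near_crosswalk=False))   # False
```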
(Score: 1) by khallow on Saturday November 09 2019, @02:14AM
That has nothing to do with optimization. I think we can agree that the software was doing it wrong (just from your sentence: "recognize as a pedestrian", when it shouldn't matter what she gets recognized as; "designated crossing" being part of the decision process for spotting objects in the road; etc.), but that doesn't mean the software wasn't looking. Getting rid of such bugs is, after all, the point of this test driving in the first place.
You should have a factor of safety here. These decisions shouldn't be anywhere close to a bare minimization/optimization, in case some unforeseen system or environmental issue pushes the scenario into a dangerous failure mode.
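By "factor of safety" I mean something like the rough Python sketch below; the names, the deceleration figure, and the 2x margin are made up for illustration, not anyone's real control code. The point is that braking doesn't hinge on getting the classification exactly right.

```python
# Hypothetical sketch of a class-agnostic check with a safety margin.
# The 4 m/s^2 deceleration and the 2x factor are invented values.

SAFETY_FACTOR = 2.0  # demand headroom well beyond the bare minimum

def stopping_distance_m(speed_mps, decel_mps2=4.0):
    """Distance needed to stop from the current speed at a steady decel."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def should_brake(object_distance_m, object_in_path, speed_mps):
    """Brake for anything predicted to be in the vehicle's path,
    regardless of what the classifier thinks it is."""
    if not object_in_path:
        return False
    return object_distance_m < SAFETY_FACTOR * stopping_distance_m(speed_mps)

# Unidentified object 60 m ahead, vehicle at ~17 m/s (about 38 mph):
# bare stopping distance is ~36 m, so with the 2x margin we brake now.
print(should_brake(object_distance_m=60, object_in_path=True, speed_mps=17.0))  # True
```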