
posted by takyon on Tuesday May 08 2018, @03:11PM
from the false-negative dept.

The first machine to kill a human entirely on its own initiative was "Likely Caused By Software Set to Ignore Objects On Road" according to a new report on the collision which happened last March:

The car's sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber's software decided it didn't need to react right away. That's a result of how the software was tuned. Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn't react fast enough, one of these people said.

Fast enough? She walked across three and a half lanes in what should have been plain view of the car's LIDAR the entire time.
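For readers unfamiliar with what "tuned to ignore false positives" can mean in practice, here is a minimal, purely illustrative sketch (hypothetical names and numbers, not Uber's actual software): detections below a confidence threshold are dismissed, and raising that threshold to stop phantom braking also delays reaction to real obstacles.

    from dataclasses import dataclass

    # Hypothetical tuning knob: raise it and the car second-guesses more objects.
    HAZARD_CONFIDENCE_THRESHOLD = 0.90

    @dataclass
    class Detection:
        label: str                # e.g. "pedestrian", "plastic_bag", "unknown"
        confidence: float         # how sure the classifier is this is a real obstacle
        seconds_to_impact: float  # time until the object crosses the car's path

    def should_brake(detections):
        """Brake only for detections judged confident, imminent obstacles."""
        return any(
            d.confidence >= HAZARD_CONFIDENCE_THRESHOLD and d.seconds_to_impact < 3.0
            for d in detections
        )

    # A pedestrian tracked at 0.85 confidence is filtered out exactly like a
    # drifting plastic bag would be: no reaction until far too late.
    print(should_brake([Detection("pedestrian", 0.85, 2.0)]))   # False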

takyon: Also at Reuters. Older report at The Drive.

Previously: Uber Pulls Self-Driving Cars After First Fatal Crash of Autonomous Vehicle
Video Released of Fatal Uber - Pedestrian Accident, and More


Original Submission #1 | Original Submission #2

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by bob_super on Tuesday May 08 2018, @05:38PM (1 child)

    by bob_super (1357) on Tuesday May 08 2018, @05:38PM (#677105)

    > Fact is, the car wasn't able to judge, because it didn't have any sensor capable of judging such things,
    > nor of any intelligence to apply to the sensors it had to judge such things.

    I'm not an autonomous car designer. But if I were, the first thing to design and qualify on the vision system would be "can you identify a walking or standing human?".

    The plastic bag is a terrible example. The whole point of the sensor system is to clearly sort human shapes, and tag their position and speed. The vision and classification system, presented with a few seconds of a walking woman, should never ever have classified her as anything other than "Walking human, going right at walking pace". It's a very distinctive shape, and "we have to avoid braking for plastic bags" is total bullshit. We're not looking at braking for a human chalk outline, a kid's toy, or a plastic bag, but at not correctly figuring out lidar test case number 1: full-size human walking perpendicular to travel direction.
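    A rough sketch of the test case described above (hypothetical structures, not any vendor's real pipeline): once a tracked object carries a class label, a position, and a velocity, "walking human crossing my path" becomes a trivial rule to act on.

        from dataclasses import dataclass

        @dataclass
        class Track:
            label: str            # "human", "plastic_bag", ...
            lateral_m: float      # metres from the car's centreline (signed)
            lateral_speed: float  # m/s across the lane, positive = toward our path
            range_m: float        # distance ahead of the car

        def crossing_human(track, lane_half_width=1.8):
            """True if a human is in, or walking into, the car's travel lane."""
            in_path = abs(track.lateral_m) <= lane_half_width
            walking_in = track.lateral_speed > 0 and abs(track.lateral_m) < 5.0
            return track.label == "human" and (in_path or walking_in)

        # Full-size human walking perpendicular to travel direction, 40 m ahead:
        print(crossing_human(Track("human", -3.0, 1.4, 40.0)))   # True -> brake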

  • (Score: 4, Disagree) by ledow on Tuesday May 08 2018, @07:30PM

    by ledow (5567) on Tuesday May 08 2018, @07:30PM (#677149) Homepage

    Now distinguish between a plastic bag and a guy crouching in a shiny black puffer jacket (google it if you're unfamiliar) to tie his shoelace in the middle of the road. If you haven't had stuff like that happen to you, everything from deer, to children running between parked cars, to tyres rolling into the road in front of you, then you haven't been driving long, or you haven't been paying enough attention to realise you COULD have interpreted them as a threat.

    You have literally no way to teach a computer the difference in any simple fashion. Certainly not in dark conditions (but then it shouldn't be making any decision about things it can't illuminate clearly), and certainly not at the speeds involved here.

    Even humans get it wrong. But humans can be brought before a court to explain themselves. A computer system cannot, at the moment. A human can tell you that it was dark, and he looked exactly like a bin bag, etc., and a court may believe them. The computers involved here do not have that kind of analysis, or ability to justify themselves. It was 51% a hazard they could drive through and 49% not, so they rammed it without even trying to brake.
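    To put numbers on that 51%/49% point (made-up costs, purely illustrative): because a needless brake costs almost nothing and hitting a person costs everything, the rational threshold for braking sits nowhere near 50% confidence.

        # Illustrative only: the asymmetry is the point, not the exact figures.
        COST_UNNEEDED_BRAKE = 1.0          # a jolt, an annoyed passenger
        COST_HIT_PEDESTRIAN = 1_000_000.0  # stands in for an unacceptable outcome

        def expected_cost(p_hazard, brake):
            """Expected cost of braking vs. not braking, given hazard probability."""
            if brake:
                return (1.0 - p_hazard) * COST_UNNEEDED_BRAKE
            return p_hazard * COST_HIT_PEDESTRIAN

        p = 0.49  # "49% a hazard, 51% something we could drive through"
        print(expected_cost(p, brake=False))  # 490000.0
        print(expected_cost(p, brake=True))   # 0.51
        # With these costs, not braking only wins once the hazard probability
        # drops below roughly one in a million.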

    If you're going to have different systems on the same roads and claim equivalence (or, worse, superiority as the self-driving car manufacturers attempt), then they need to be just as accountable as a human in the same position.

    In this incident, my first question as a lawyer would be "So, Mr Driver, what was your visibility, and were you doing an appropriate speed for the conditions? (Answer: No, as they hit something that literally appears in their headlights well within their braking distance, making it impossible to stop). How long did you get to assess the object you hit and whether or not to hit it? (Answer: A second or two). At which point did you take action to avoid a collision? (Answer: None, watch the video - no braking, no moving, nothing). And, crucially, what other hazards were there on the road at the time? (Answer: None. Not raining, not snowing, not oily ground, no car in front, behind or to the side, no crowds of pedestrians, etc. etc. etc.)"

    You'd convict a human driver of causing death by dangerous driving in such circumstances. So what's the sanction for the manufacturer now?