
posted by Fnord666 on Tuesday February 21 2017, @06:48PM
from the not-the-time-for-napping dept.

An Anonymous Coward writes:

As predicted by many (including posts here on SN), extensive testing now shows that if a driver's workload is reduced to near zero, they are in no position to intervene should the autonomous system get into trouble.

The Detroit-based company has tried many ways to keep its engineers alert during autonomous test runs, employing everything from alarm bells and lights to putting a second engineer in the vehicle to monitor the first. "No matter — the smooth ride was just too lulling and engineers struggled to maintain 'situational awareness,'" said Ford product development chief Raj Nair.

Ford's strategy of eventually removing the steering wheel and pedals from its self-driving cars has ignited a debate among automakers over how to approach Level 3 self-driving vehicles, and whether Level 3 should exist at all.

BMW, Mercedes-Benz, and Audi will introduce semi-autonomous Level 3 vehicles next year that require human intervention within 10 seconds; otherwise the vehicle slows to a stop in its lane. Other automakers, such as Nissan and Honda, have upcoming systems that give the driver 30 seconds to prepare and re-engage before the vehicle pulls to the side of the road.
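
To make the difference between the two policies concrete, here is a minimal sketch of a takeover-request timer; the class, method names, and state strings are illustrative assumptions, not any manufacturer's actual logic:

    from enum import Enum, auto

    class Fallback(Enum):
        SLOW_IN_LANE = auto()      # e.g. the 10-second systems described above
        PULL_TO_ROADSIDE = auto()  # e.g. the 30-second systems described above

    class TakeoverMonitor:
        """Hypothetical Level 3 takeover-request timer (illustration only)."""

        def __init__(self, grace_seconds: float, fallback: Fallback):
            self.grace_seconds = grace_seconds  # 10 or 30, per the article
            self.fallback = fallback
            self.elapsed = 0.0

        def tick(self, dt: float, driver_engaged: bool) -> str:
            """Advance the timer by dt seconds and report the vehicle's state."""
            if driver_engaged:
                return "manual control resumed"
            self.elapsed += dt
            if self.elapsed < self.grace_seconds:
                return "waiting for driver"
            if self.fallback is Fallback.SLOW_IN_LANE:
                return "slowing to a stop in lane"
            return "pulling to the side of the road"

    # Example: a driver who never re-engages a 10-second system.
    monitor = TakeoverMonitor(grace_seconds=10.0, fallback=Fallback.SLOW_IN_LANE)
    for _ in range(11):
        state = monitor.tick(dt=1.0, driver_engaged=False)
    print(state)  # -> "slowing to a stop in lane"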

The article continues with quotes from other manufacturers and the US DOT. As a reminder, SAE has defined automation levels from 0 (no automation) through 5 (full automation). Level 3 is "conditional automation", and it's starting to look like this level is not such a good idea.
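
For reference, the SAE J3016 levels map onto a simple enumeration; a quick sketch, with descriptions paraphrased:

    from enum import IntEnum

    class SAELevel(IntEnum):
        NO_AUTOMATION = 0           # human performs the entire driving task
        DRIVER_ASSISTANCE = 1       # steering OR speed assistance
        PARTIAL_AUTOMATION = 2      # steering AND speed; human monitors constantly
        CONDITIONAL_AUTOMATION = 3  # system drives; human is the on-call fallback
        HIGH_AUTOMATION = 4         # no human fallback within a limited domain
        FULL_AUTOMATION = 5         # no human fallback, anywhere

    # The article's worry, restated: Level 3 assumes an alert human fallback,
    # which Ford's testing suggests is not a safe assumption.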


Original Submission

 
  • (Score: 2) by ledow (5567) on Wednesday February 22 2017, @08:05AM (#470049)

    Prima facie evidence that humans shouldn't be trusted, shouldn't be responsible, thus shouldn't be *able* to take control of these vehicles except on an emergency override basis.

    And if we're heading that route, you have a much tougher sell ahead of you, car manufacturers, because it's no longer "our" car.

    But one day this will come up in court: "Look, the manufacturer's own engineers couldn't keep focused when they had no direct control, so how do you expect a man on the street to? Therefore I shouldn't be held responsible for the car swerving into that playground and taking out that little girl." And you're going to have a hard time fighting that lawsuit, because your own engineers couldn't stay awake doing the same thing under test conditions.

    The only logical outcome of automating a human's actions is removal of the human from those actions entirely. Anything else is a nonsense. Sadly, the technology isn't up to driving on its own either, so by pressing ahead with it, all you've done is open yourselves up to liability for your "AI"'s actions and now it's going to get messy.

    - Want to take control from the human.
    - Don't want to take responsibility for that control yourselves.
    - Provably can't trust the human any more when their control is taken away.

    One of those is going to have to give.

    As I tell my boss, in every workplace I can have the responsibility AND the control, or neither.

  • (Score: 0) by Anonymous Coward on Wednesday February 22 2017, @03:16PM (#470211)

    One possible scenario -- you program a destination when you get in the car. The car checks the route and decides whether it can drive it autonomously. Due to incomplete maps, bad weather (snow over the lane markings...), cell-network gaps, recent road works, etc., the autonomous system may refuse to engage, and the human has to drive manually.

    In this case, there is no handoff mid-trip. At least this solves the inattention problem -- if the car "knows" the chosen route, the driver can safely snooze (well, as safe as possible when moving in a box at high ground speed).

    If the driver wanted to go autonomous where available, they could program several waypoints and the car might agree to drive some of them but not others. Again, the handoff would happen at a known safe place to stop.
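
    A toy sketch of that route-vetting idea; all names and data sources here are invented for illustration:

        from dataclasses import dataclass

        @dataclass
        class Leg:
            """One waypoint-to-waypoint segment of a planned trip."""
            start: str
            end: str
            maps_complete: bool
            lane_markings_visible: bool
            cell_coverage_ok: bool
            no_recent_roadworks: bool

        def leg_is_drivable(leg: Leg) -> bool:
            """Would the autonomous system agree to drive this leg?"""
            return (leg.maps_complete and leg.lane_markings_visible
                    and leg.cell_coverage_ok and leg.no_recent_roadworks)

        def plan_trip(legs: list[Leg]) -> list[tuple[str, str, str]]:
            """Assign each leg a mode before departure. Handoffs occur only
            at waypoints, i.e. at known safe places to stop -- never mid-leg."""
            return [(leg.start, leg.end,
                     "autonomous" if leg_is_drivable(leg) else "manual")
                    for leg in legs]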