
posted by Fnord666 on Friday February 03 2017, @01:51PM
from the calling-off-the-wedding dept.

Google's self-driving vehicles are "disengaging" less frequently on California roads:

California's Department of Motor Vehicles released its annual autonomous vehicle disengagement report today, in which all the companies that are actively testing self-driving cars on public roads in the Golden State disclose the number of times that human drivers were forced to take control of their driverless vehicles. The biggest news to come out of this report is from Waymo, Google's new self-driving car company, which reported a huge drop in disengagements in 2016 despite an almost equally huge increase in the number of miles driven. In other words, Waymo's self-driving cars are failing at a much lower rate, even as they are driving a whole lot more miles. The company says that since 2015, its rate of safety-related disengages has fallen from 0.8 per thousand miles to 0.2 per thousand miles in 2016. So while Waymo increased its driving by 50 percent in the state — racking up a total of 635,868 miles — the company's total number of reportable disengages fell from 341 in 2015 to 124.
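
As a sanity check, the quoted figures are internally consistent. A quick back-of-the-envelope calculation in Python (the 2015 mileage is not stated directly, so it is inferred here from the claimed "50 percent" increase):

    # Back-of-the-envelope check of the Waymo numbers quoted above.
    miles_2016 = 635_868
    miles_2015 = miles_2016 / 1.5          # ~423,912 miles (inferred, not reported)

    rate_2015 = 341 / (miles_2015 / 1000)  # disengages per 1,000 miles
    rate_2016 = 124 / (miles_2016 / 1000)

    print(f"2015: {rate_2015:.2f} per 1,000 miles")      # 0.80
    print(f"2016: {rate_2016:.2f} per 1,000 miles")      # 0.20
    print(f"improvement: {rate_2015 / rate_2016:.1f}x")  # 4.1x, i.e. roughly four-fold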

"This four-fold improvement reflects the significant work we've been doing to make our software and hardware more capable and mature," Dmitri Dolgov, head of self-driving technology for Waymo, wrote in a blog post. "And because we're creating a self-driving car that can take you from door to door, almost all our time has been spent on complex urban or suburban streets. This has given us valuable experience sharing the road safely with pedestrians and cyclists, and practicing advanced maneuvers such as making unprotected left turns and traversing multi-lane intersections."

The majority of Waymo's disengagements were the result of "software glitches," the company says. "Unwanted maneuvers," "perception discrepancies," and "recklessly behaving road user" also accounted for dozens of disengagements. There were no reports of crashes or accidents.


Original Submission

  • (Score: 0) by Anonymous Coward on Friday February 03 2017, @02:57PM (#462391)

    ...in California. [sfexaminer.com]

  • (Score: 4, Interesting) by maxwell demon (1608) on Friday February 03 2017, @03:19PM (#462408) Journal

    "Unwanted maneuvers," "perception discrepancies," and "recklessly behaving road user" also accounted for dozens of disengagements. There were no reports of crashes or accidents.

    What exactly do those terms mean here?

    "Unwanted maneuvers" — Did Google's car do an unwanted maneuver? And if so, what is the difference to a "software glitch"? And otherwise, what does qualify as "unwanted maneuver" of other people? "We'd prefer that they wouldn't do it, as it is hard for our software to handle"?

    "perception discrepancies" — What is this supposed to mean? That the car was right, but the human was wrong? Because otherwise, wouldn't it be, again, a software glitch?

    "recklessly behaving road user" — I hope that one doesn't refer to the autopilot … although I guess that would find customers, too ;-)

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday February 03 2017, @03:34PM (#462425) Journal

      "recklessly behaving road user" — I hope that one doesn't refer to the autopilot … although I guess that would find customers, too ;-)

      An automated ride and a blowjob? Sex bots that transform into cars? [youtube.com]

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by ikanreed (3164) Subscriber Badge on Friday February 03 2017, @04:40PM (#462466) Journal

      I would think the meanings are pretty clear:

      Unwanted maneuvers refers to the car doing something the human monitor thinks it ought not to: crossing lanes, turning unexpectedly, running red lights, speeding, whatever.
      Perception discrepancies are when the camera goes "Oh shit, there's something there," the lidar goes "I don't see it," and the car hands control over to the human (see the sketch after this list).
      Recklessly behaving road user definitely refers to people doing dangerous shit on the road, where the car doesn't know how to predict their future behavior well enough to avoid running into them.
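
      That middle case is easy to sketch: if two independent sensors are both confident but disagree about whether an obstacle exists, the conservative move is to hand control back to the human. A minimal illustrative sketch in Python (all names invented here; this is not Waymo's actual stack, which would fuse many sensors probabilistically):

          from dataclasses import dataclass

          @dataclass
          class Detection:
              obstacle_seen: bool
              confidence: float  # 0.0 .. 1.0

          def should_disengage(camera: Detection, lidar: Detection,
                               threshold: float = 0.7) -> bool:
              # Hand over when two confident sensors disagree about an obstacle.
              both_sure = (camera.confidence >= threshold
                           and lidar.confidence >= threshold)
              return both_sure and camera.obstacle_seen != lidar.obstacle_seen

          # Camera is sure it sees something; lidar is sure it doesn't.
          print(should_disengage(Detection(True, 0.9), Detection(False, 0.9)))  # True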

      • (Score: 2) by Arik (4543) on Friday February 03 2017, @05:06PM (#462479) Journal
        "Unwanted maneuvers refers to the car doing something the human monitor thinks it ought not to: crossing lanes, turning unexpectedly, running red lights, speeding, whatever."

        Except that would already be accounted for under software glitch.
        --
        If laughter is the best medicine, who are the best doctors?
        • (Score: 2) by ikanreed (3164) Subscriber Badge on Friday February 03 2017, @05:33PM (#462490) Journal

          Only if we define "Glitch" in a way that is identical to "Something other than what the driver wants".

          Which is a perfectly acceptable definition, but to a programmer, a glitch is usually more akin to an actual programming error, rather than a disconnect between requirements and what the user actually wants.

        • (Score: 2) by fishybell (3156) on Friday February 03 2017, @05:52PM (#462501)

          If we include everything under "software glitch," what exactly would we learn from these reports?

          Unless people can fall asleep drunk and get to their final destination nearly 100% of the time, people won't trust self-driving cars.

          Learning about specific types of problems helps ease tensions, but if they didn't group them into somewhat vague categories, you'd have 1 category per incident. Instead of 15 "the car didn't do what I expected" you'd have 15 different "the car did X when I expected Y" reports.

          I'm guessing "software glitch" refers to the software crashing or freezing. To me, that's something I would never be okay with.
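
          If "software glitch" really does mean a crash or hang, the usual defense is a watchdog: the driving process has to check in on a heartbeat, and a missed heartbeat triggers an immediate handover. A rough sketch of the idea (names hypothetical, not from any real autonomy stack):

              import time

              class Watchdog:
                  """Trips if the planner stops checking in, e.g. after a hang or crash."""

                  def __init__(self, timeout_s: float = 0.2):
                      self.timeout_s = timeout_s
                      self.last_beat = time.monotonic()

                  def heartbeat(self):
                      # The planning loop calls this on every successful cycle.
                      self.last_beat = time.monotonic()

                  def planner_is_alive(self) -> bool:
                      return time.monotonic() - self.last_beat < self.timeout_s

              wd = Watchdog()
              wd.heartbeat()
              time.sleep(0.3)  # simulate a frozen planning loop
              if not wd.planner_is_alive():
                  print("disengage: hand control to the safety driver")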

          • (Score: 1) by Scruffy Beard 2 (6030) on Friday February 03 2017, @06:31PM (#462522)

            Hey, I freeze myself in traffic sometimes. Do it too long, you get honked at.

    • (Score: 2) by DeathMonkey (1380) on Friday February 03 2017, @06:37PM (#462525) Journal

      What exactly do those terms mean here?
      "Unwanted maneuvers" — Did Google's car do an unwanted maneuver?

      Yes. If you look at the report, the cause listed is actually "disengage for unwanted maneuver of the vehicle," so the causes in the summary are abbreviated a bit.

      I'd copy/paste them here but it's a scanned report...

  • (Score: 2) by goodie (1877) on Friday February 03 2017, @05:14PM (#462483) Journal

    I'm not sure at all how this works, but my understanding is that driving behavior is governed by machine learning algorithms. I'm sure manual adjustments are made, but I'd also expect that the more data is available, the better the algorithms should perform.

    Which brings me to my point: right now, I think it's a shame that each company is implementing its own way of doing this. If the algorithms are patented or proprietary, that's one thing, but the large amounts of data accumulated to train those algorithms would, in my opinion, be much more valuable if they were available publicly in a standard format, so that every company could both contribute data and use it to implement its own self-driving mechanisms. But yeah, not sure that'd ever happen ;-).
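
    As a thought experiment, such a shared format wouldn't need to be exotic. Even a minimal per-frame record like the following (field names invented purely for illustration; no such standard exists) would let companies pool labeled driving data:

        import json
        from dataclasses import dataclass, asdict, field

        @dataclass
        class DrivingFrame:
            """One hypothetical record in a shared, vendor-neutral training set."""
            timestamp_us: int   # capture time, microseconds since epoch
            lat: float
            lon: float
            speed_mps: float
            weather: str        # e.g. "clear", "rain", "fog"
            labels: list = field(default_factory=list)  # e.g. ["pedestrian"]

        frame = DrivingFrame(1486130000000000, 37.42, -122.08, 11.2, "clear",
                             ["pedestrian"])
        print(json.dumps(asdict(frame)))  # plain JSON as the interchange layer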

  • (Score: 0) by Anonymous Coward on Friday February 03 2017, @07:32PM (#462548)

    Drivers could become too complacent with fewer disengagements. The reduction in disengagements does not imply an increase in safety.

  • (Score: 2) by AthanasiusKircher (5291) on Friday February 03 2017, @07:40PM (#462554) Journal

    I'm sure there have been significant improvements in this technology. But I recall from looking in detail at some of Google's past reports that they don't provide data on when the drivers simply choose to drive themselves, other than perhaps a mileage summary. (And I recall that these cars were driven a rather large number of miles manually.)

    That opens this sort of data to reporting bias. Is it raining today, or are there other unfavorable visibility or road conditions? Maybe the driver chooses to drive manually. Does the driver know there's an ongoing construction zone with unclear signs? Okay -- the driver just takes over for most of the trip.

    What we get are reports only of cases where a driver was forced to take over due to an imminent AI problem. We don't get detailed reports on when drivers decide conditions aren't great for the AI and don't even engage it in the first place.

    Again, that's not to say the AI in these vehicles isn't improving. But the problem with many of Google's stats is that if you look closely at their reports, you realize there are a lot of "manual" miles driven in these cars. (For example, buried in this report [googleusercontent.com], Google says that over 15 months the AI cars drove about 424k miles in autonomous mode but 100k miles manually. How many of those 100k miles were driven manually to avoid AI problems in the first place is not reported.)
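
    To put those two figures in proportion (using only the approximate numbers quoted above):

        # Share of reported mileage driven manually in the cited 15-month report.
        auto_miles = 424_000    # "about 424k miles in autonomous mode"
        manual_miles = 100_000  # "100k miles manually"

        manual_share = manual_miles / (auto_miles + manual_miles)
        print(f"{manual_share:.0%} of reported miles were manual")  # 19%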

    So, it's a bit disingenuous to tout stats like "X million miles driven with no accidents" or whatever, when not only do drivers frequently have to disengage the AI because it isn't responding correctly, but there could also be plenty of places or occasions where these cars aren't actually tested in conditions that real drivers have to deal with (and where accidents may be more frequent, like inclement weather or unusual/unclear traffic patterns). It would be sort of equivalent to taking a new driver out with a learner's permit, but stopping the car and switching seats when it's time to go into heavy traffic. Obviously that's the cautious thing to do, and I'm not arguing they should be testing in non-ideal conditions yet -- but I'm not sure how much we'd make of stats about "miles driven without an accident" in a situation like that either.

    Perhaps someone can find a recent report that addresses these issues. Back when I tried sorting through Google's data out of curiosity (several months ago at least), this was all left rather ambiguous.

    • (Score: 2) by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday February 04 2017, @12:55AM (#462681) Journal

      How many of those 100k miles were driven manually to avoid AI problems in the first place is not reported.

      If they aren't providing data about when and where the cars were driven, which would let us determine the weather and traffic conditions, then that figure won't be too helpful anyway.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]