
posted by mrpg on Thursday December 06 2018, @07:10AM   Printer-friendly
from the that's-not-driving dept.

Waymo has announced a driverless taxi service called Waymo One, but it will only be usable by around 400 preapproved "early riders" in the Phoenix metro area, rather than the general public. While self-driving Chrysler Pacifica hybrid minivans will be used, they will retain a safety driver behind the wheel.

Waymo's "new" service could be described as a launch in name only:

The banner Waymo is unfurling, though, is tattered by caveats. Waymo One will only be available to the 400 or so people already enrolled in Waymo's early rider program, which has been running in the calm, sunny Phoenix suburb of Chandler for about 18 months. (They can bring guests with them and have been freed from non-disclosure agreements that kept them from publicly discussing their experiences.) More glaringly, the cars will have a human behind the wheel, there to take control in case the car does something it shouldn't.

So no, this is not the anyone-can-ride, let-the-robot-drive experience Waymo and its competitors have been promising for years. Building a reliably safe system has proven far harder than just about everyone anticipated and its cars aren't ready to drive without human oversight. But Waymo promised to launch a commercial service sometime in 2018, it didn't want to miss its deadline and risk its reputation as the leader of the industry it essentially created, and not even the might of Waymo parent company Alphabet can delay the end of the calendar year.

So Waymo is pushing out a software update, tweaking its branding, and calling it a launch.

Also at Reuters, Gizmodo, The Atlantic, and Ars Technica.

See also: Waymo's driverless cars on the road: Cautious, clunky, impressive

Previously: Google/Waymo Self-Driving Minivan Tested with the Public in Phoenix AZ
Waymo Orders Thousands More Chrysler Pacifica Minivans for Driverless Fleet
Walmart and Waymo to Trial Driverless Shuttle Service in Phoenix for Grocery Pickups
Google's Waymo Plans to Launch a Self-Driving Car Service in December (the service falls short of what is described in this November article)


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @07:52AM (8 children)

    by Anonymous Coward on Thursday December 06 2018, @07:52AM (#770548)

    Just so long as it's fewer than are killed by human drivers.

  • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @08:12AM (7 children)

    by Anonymous Coward on Thursday December 06 2018, @08:12AM (#770551)
    So far humans lead by one due to Uber.
    • (Score: 2, Insightful) by Anonymous Coward on Thursday December 06 2018, @08:23AM (6 children)

      by Anonymous Coward on Thursday December 06 2018, @08:23AM (#770554)

      More than one. In fact, try one every 25 seconds.

      Besides, we need to talk about deaths per mile rather than absolute figures, but I'd be shocked if these aren't safer than humans. They may be scarier because we can't predict/understand the crashes, but that doesn't make them less safe, just more unsettling.

      Several people were fatally injured in the time you were reading this. Driving down the rate of deaths per mile driven, and the number of miles driven, is important, more so than people's peace of mind.
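      A back-of-the-envelope check of the figures above, using assumed, roughly 2018-era numbers (about 1.25 million road deaths worldwide per year, about 36,500 in the US, and about 3.2 trillion US vehicle-miles per year):

      ```python
      # Sanity-check of the "one every 25 seconds" and deaths-per-mile
      # framing. All inputs are assumed round numbers, not official stats.
      world_deaths_per_year = 1.25e6   # assumed global road deaths / year
      us_deaths_per_year = 36_500      # assumed US road deaths / year
      us_miles_per_year = 3.2e12       # assumed US vehicle-miles / year

      seconds_per_year = 365 * 24 * 3600

      # Seconds between road deaths worldwide:
      print(seconds_per_year / world_deaths_per_year)        # ~25 seconds

      # US fatality rate per 100 million vehicle-miles:
      print(us_deaths_per_year / (us_miles_per_year / 1e8))  # ~1.14
      ```

      The per-mile rate is the denominator that matters for comparing robot and human drivers; absolute counts can't be compared until the fleets log comparable mileage.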

      • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @08:34AM (3 children)

        by Anonymous Coward on Thursday December 06 2018, @08:34AM (#770563)
        Actually, inability to correct the problem (say, misread lane markings, like Tesla's car did in Mountain View, killing the driver) is very scary. It means the error will recur again and again. Remember the experiments where scientists added innocuous marks to road signs and the neural networks suddenly went bananas?
        • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @08:58AM (2 children)

          by Anonymous Coward on Thursday December 06 2018, @08:58AM (#770578)

          Incomprehensibility doesn't imply unimprovability; off-policy reinforcement learning will allow a failure to be prevented without our understanding it, and without requiring that it happen again for each new software version.

          Furthermore, if it's already safer than humans could be made to be, then it doesn't need to improve further.

          As for your last point, I don't see how that's relevant except as being unsettling. If your claim is that people will set out to cause crashes if it becomes harder to detect them having done so, then you're going to have to explain why people don't 'spill' water onto roads in winter. That's similarly hard to detect and far easier to get away with.
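          The off-policy point above can be made concrete with a toy Q-learning sketch: the value estimate is updated from a single logged transition (one recorded failure), without the agent ever repeating the failure. The states, actions, and rewards here are all invented for illustration.

          ```python
          # Toy off-policy Q-learning: learn from a logged failure once,
          # rather than re-experiencing it. All MDP details are made up.
          Q = {}                      # Q[(state, action)] -> estimated value
          alpha, gamma = 0.5, 0.9
          actions = ["brake", "steer", "continue"]

          def q(s, a):
              return Q.get((s, a), 0.0)

          def update(s, a, r, s_next):
              # Off-policy target: max over next actions, regardless of
              # which action the logged (behavior) policy actually took.
              target = r + gamma * max(q(s_next, b) for b in actions)
              Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))

          # One logged transition: the car 'continued' at an unclear sign
          # and crashed. Replaying the log drives the estimate down.
          logged = [("unclear_sign", "continue", -10.0, "crash")]
          for s, a, r, s2 in logged * 20:
              update(s, a, r, s2)

          # 'continue' is now valued well below the untouched alternatives:
          print(q("unclear_sign", "continue") < q("unclear_sign", "brake"))
          ```

          The same replayed log can retrain each new software version, which is the sense in which the failure need not "happen again".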

          • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @09:23AM (1 child)

            by Anonymous Coward on Thursday December 06 2018, @09:23AM (#770582)
            Road signs are easily and commonly damaged by weather, dirt, graffiti. No ill intent here - the sign is perfectly readable to a human. A machine is not so resilient to input noise - that's all.
            • (Score: 1, Informative) by Anonymous Coward on Thursday December 06 2018, @09:56AM

              by Anonymous Coward on Thursday December 06 2018, @09:56AM (#770586)

              The specific reasons for crashes don't matter in the slightest if they kill fewer people per mile. This only matters if you think (a) we won't be able to beat human safety levels without robust recognition (plausible) and (b) we won't be able to achieve robust recognition (doubtful). Let's consider (b) further, since I don't take issue with (a).

              Neural networks are universal function approximators: either they can do robust-as-human recognition, or the physics ruling human brains is uncomputable. The question isn't whether it's possible, but whether we can practically train them to be robust. Showing that networks not trained for robustness aren't robust doesn't mean they can't be practically trained to be; it just means robustness isn't automatic with current training techniques and we need to design for it. I don't see any reason to believe it's impractical to train a network to recognize the class "unclear signage" once that's a design goal and sufficient training data has been accrued.

              If these cars are being rolled out then I expect they're robust enough for the test data (though they probably didn't test in neighborhoods covered in graffiti/snow/mud/&c). This particular implementation may not be sufficiently robust, but sooner or later someone will collect enough data and find the right techniques. This is a difficulty, but not a fundamental limitation.
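              The fragility being debated here is easy to demonstrate on a toy model: for a linear classifier, the worst-case bounded perturbation (the idea behind FGSM-style adversarial examples) can flip the decision even though each input feature barely moves. All numbers below are invented for illustration.

              ```python
              import numpy as np

              # Toy adversarial example against a linear classifier.
              w = np.array([1.0, -2.0])      # "trained" weights (assumed)
              x = np.array([0.3, -0.2])      # input: score w.x = 0.7 > 0

              print(w @ x > 0)               # True: classified as a clear sign

              # The gradient of the score w.r.t. x is just w, so the
              # worst-case max-norm perturbation of size eps is -eps*sign(w).
              eps = 0.4
              x_adv = x - eps * np.sign(w)   # each feature moves by <= 0.4

              print(w @ x_adv > 0)           # False: decision flipped
              ```

              Adversarial training, in this framing, means including such worst-case perturbed inputs in the training set so the learned boundary leaves a margin around them; that is the "design for robustness" step, not something current pipelines get for free.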

      • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @10:22AM (1 child)

        by Anonymous Coward on Thursday December 06 2018, @10:22AM (#770588)

        > ...deaths per mile ...

        I'm happy to talk about deaths per mile for my demographic. My father taught me to drive at age 5 (off the road); I survived my hormone-driven years and am well into middle age. I don't drive drunk or high, don't carry a cell phone for distraction, have been through a couple of advanced driver training schools (including an intro 3-day racing class), give other (erratic) drivers plenty of room, drive well-maintained cars, and haven't had more than a fender bender in all my years of driving. I realize that my number might come up at any time, so I try to avoid developing a false sense of security.

        My guess is that my demographic has about a tenth of the deaths per mile of the system average. When these robots get that good I'll start to think about it. Otherwise I'm happy with my odds on the highway.

        • (Score: 0) by Anonymous Coward on Thursday December 06 2018, @10:48AM

          by Anonymous Coward on Thursday December 06 2018, @10:48AM (#770594)

          Be careful not to underestimate the reduction in risk from better reaction times and 360° vision, though. I'm more excited about my car protecting me from others than from myself, if only because there are more of them.