
posted by martyb on Friday October 21 2016, @01:48AM   Printer-friendly
from the No-steering-wheel?-No-brakes?-No-problem! dept.

The AP via U.S. News & World Report reported last month that California's Department of Motor Vehicles has proposed regulations that would allow fully autonomous cars, without a human driver, steering wheel or control pedals. On Wednesday the state held a workshop to hear public comment on the proposal. In January, Google had stated that it would not offer such cars in California if the state required a human driver to be present. According to KCRA-TV,

DMV hopes to have a finalized draft of regulations in the coming months. There will then be a 45-day public comment period before the draft is approved.

Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by fido_dogstoyevsky on Friday October 21 2016, @01:54AM

    by fido_dogstoyevsky (131) <axehandleNO@SPAMgmail.com> on Friday October 21 2016, @01:54AM (#417033)

    No driver, no steering wheel, no pedals - no thank you.

    --
    It's NOT a conspiracy... it's a plot.
  • (Score: 0) by Anonymous Coward on Friday October 21 2016, @03:05AM

    by Anonymous Coward on Friday October 21 2016, @03:05AM (#417068)

    Yeah, it will be quite a while before I trust a computer to adapt to situations as well as most humans can.

    • (Score: 2) by Arik on Friday October 21 2016, @05:02AM

      by Arik (4543) on Friday October 21 2016, @05:02AM (#417110) Journal
      Personally, I'd be happy to trust a computer vs a human driver on the road, with current technology... *if we as a whole weren't absolute idiots with computers.*

      In my ideal world there's no reason these things shouldn't be safer than human drivers, with a handful of caveats. (They should absolutely refuse to drive you outside areas they are truly certain they know, among others.) But the fact is I've seen how programming has devolved over the past decades, when it should have been evolving, and I'd rather let the retarded 9 year old down the street drive the car than any program that's going to be put out by any of these companies today. We used to joke about fuzzy logic but when this shit is coming out through multiple levels of unexamined abstraction it's no longer a joke, it's a nightmare.
      --
      If laughter is the best medicine, who are the best doctors?
    • (Score: 1, Touché) by Anonymous Coward on Friday October 21 2016, @09:15AM

      by Anonymous Coward on Friday October 21 2016, @09:15AM (#417167)

      Trusting the computer is one thing. Trusting whoever has control over the computer (hint: that won't be you) is another thing.

  • (Score: 4, Insightful) by AthanasiusKircher on Friday October 21 2016, @02:28PM

    by AthanasiusKircher (5291) on Friday October 21 2016, @02:28PM (#417262) Journal

    No driver, no steering wheel, no pedals - no thank you.

    Here's the thing -- until such a car is ready, tested, and safe to ride, I don't think ANY "autonomous" cars should be allowed on the roads outside of testing with trained drivers.

    There's this bizarre idea some people seem to have that an AI car should be able to "sound an alarm" or whatever and transfer control to a human driver when it doesn't know what to do. Anyone who thinks about that for more than 10 seconds will realize how completely idiotic that idea is for general consumers who might "drive" these things. Already we have a severe problem with people being distracted while they drive, glancing over at cell phones, texting, etc. Now imagine you're in a car that's "relatively autonomous" and you can ride for 30 minutes at a time without paying attention to the road at all. People will be reading the newspaper, eating breakfast, watching a DVD...

    Does anyone seriously think that a human in those circumstances could respond to an alert from the car and take over the controls in time to make split-second decisions??

    That's absolutely the worst-case scenario for road safety, probably even worse than the all-human drivers we have now, who get into a lot of accidents. At least now human drivers have some chance of paying attention. But once they fall into the lull of an AI car, this mythical idea that humans could take over when the AI can't figure out what to do is exactly that -- a myth.

    I think there will likely still be a place to have manual controls to take over at the discretion of the driver. But until AI cars have been proven safe enough that there is no need for manual controls (i.e., meaning they can handle ALL circumstances, and at least exit them safely 99.99% of the time), I won't ride in one.

    • (Score: 2) by gidds on Saturday October 22 2016, @04:50PM

      by gidds (589) on Saturday October 22 2016, @04:50PM (#417601)

      until such a car is ready, tested, and safe to ride, I don't think ANY "autonomous" cars should be allowed on the roads outside of testing with trained drivers.

      How do you define "ready, tested, and safe"?

      After all, Google cars have done I-can't-remember-how-many tens of thousands of hours on public roads, with no accidents that were its fault; enough to show it's at least as safe as most human drivers.  (Though not enough to show how much safer; given the relative rarity of accidents, it would be prohibitive to pin that down exactly.)

      Is that safe enough?  If not, what are your criteria?

      (Honest question.  I'm undecided on the issue.)
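
      For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The baseline rate and the fleet mileage below are illustrative assumptions, not Google's actual figures; the point is only how the "rule of three" bounds a rare-event rate when you've observed zero events.

        # Rule of three: if zero events are observed over n miles of exposure,
        # the ~95% upper confidence bound on the event rate is about 3 / n.

        # Assumed baseline: roughly 1 fatal crash per 100 million miles for
        # human drivers (an approximate US figure, used only for illustration).
        human_fatal_rate = 1 / 100_000_000

        # Incident-free miles needed to show, at ~95% confidence, that a fleet
        # is at least as safe as that baseline:
        required_miles = 3 / human_fatal_rate
        print(f"~{required_miles:,.0f} incident-free miles needed")      # ~300,000,000

        # "Tens of thousands of hours" at an assumed average of 30 mph:
        assumed_fleet_miles = 50_000 * 30
        print(f"fleet mileage on the order of {assumed_fleet_miles:,}")  # 1,500,000

      So for the rarest (and most serious) events, the gap between "miles driven so far" and "miles needed to pin the rate down" is enormous, which is what I meant by prohibitive.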

      --
      [sig redacted]
      • (Score: 2) by AthanasiusKircher on Sunday October 23 2016, @03:32PM

        by AthanasiusKircher (5291) on Sunday October 23 2016, @03:32PM (#417867) Journal

        After all, Google cars have done I-can't-remember-how-many tens of thousands of hours on public roads, with no accidents that were its fault; enough to show it's at least as safe as most human drivers.

        First, I think I'll wait for independent confirmation of such testing. Also, my understanding is that the vast majority of these miles were accumulated on well-mapped routes with few "surprises," and that the human drivers could choose to take over if any situation arose that seemed to require their attention. And these were mostly obvious "tests" with alert drivers at the helm.

        Lots of accidents happen when "unusual" things happen on the roads. Poor weather or poor visibility, unusual signals or signs that change normal traffic patterns, etc. As far as I understand the issue, Google has avoided testing under these conditions so far (or has only done limited testing). Combining this with the operator option to "take over" controls at any point, I don't yet put much stock in statistics Google has released.

        When an independent testing agency can certify that Google's cars can go millions of miles under a variety of common road conditions (or, alternatively, have a COMPLETELY reliable failsafe that pulls the car over safely in 100% of the cases where it determines the roads aren't "safe"), or in places with changing traffic patterns and signals that can't be pre-mapped, etc. without an operator EVER taking over, THEN I'll start to consider your claim that "it's at least as safe as human drivers."

        (And even then, I personally am not remotely interested in a car that's "at least as safe as a human driver" on average. I certainly drive much less aggressively -- and therefore likely safer -- than most people on the road. Most of my family does. A while back when Google first started making claims about hundreds of thousands of miles without an accident, I polled my extended family over a holiday dinner. I think collectively we have driven at least a few million miles, with only two accidents -- both of which were minor and clearly the fault of other drivers. Meanwhile, Google's cars have been involved in over a dozen collisions -- mostly minor -- with less mileage than that. Yes, Google claims that in almost all of them, their car wasn't at fault, but clearly those cars seem to end up in more fender benders than my conservatively-driving family... probably because Google cars tend to make unusual decisions that other drivers aren't expecting. So, yeah, my standard is going to be MUCH higher before I allow a self-driving car to take me around.)

        • (Score: 2) by AthanasiusKircher on Sunday October 23 2016, @03:47PM

          by AthanasiusKircher (5291) on Sunday October 23 2016, @03:47PM (#417870) Journal

          By the way, here's [google.com] what Google's FAQ says about this:

          When or why would the test drivers take control of the vehicle?

          All test drivers are professionally trained to take control of the vehicle at any time. They're also trained to be conservative in deciding when they should take over—if they notice that the car is not being as smooth or safe as it could be, they'll take over driving immediately. Our engineering team is then able to replay the situation on our computers and see what the car would’ve done had the test driver not taken over. We can then make improvements to the software based on this information and ensure that we will be able to smoothly navigate similar situations in future.

          Obviously, this is a prudent course of action and necessary to the process of "training" the cars and working the bugs out. But to my knowledge, Google has never released any statistics about how often such interventions happen, under what circumstances, etc. Thus, it's impossible to evaluate any safety claims about the "millions of miles" they claim their "autonomous" cars have driven, because when anything out of the ordinary happens, the driver can (and apparently frequently does) take over.

          • (Score: 2) by AthanasiusKircher on Sunday October 23 2016, @04:25PM

            by AthanasiusKircher (5291) on Sunday October 23 2016, @04:25PM (#417882) Journal

            Sorry for another self-reply, but I was wrong about what Google has reported, and I'll freely admit it when I've made an error. Google DID release stats last December on when and how often drivers take over [googleusercontent.com], though I don't remember seeing this report linked in media coverage, so I wasn't aware of it before.

            Anyhow, over a 15-month period there were 341 cases where either the AI forced the driver to take over or the driver chose to disengage it for safety reasons, across roughly 424,000 miles driven "autonomously." The report also says that drivers drove those cars manually roughly 100,000 miles during the same period, so presumably the report wouldn't take into account situations where the driver never engaged the AI to begin with because of road conditions, known hazards, an unknown route, etc.

            Again, this is clearly prudent and cautious behavior, and the report notes that many of the disengagements were unlikely to have resulted in an accident even if the AI had been left on. But since we don't know such things for certain, and we don't know how many times drivers deliberately chose to stay in manual mode for whatever reason (or what common conditions/situations the Google folks have deliberately avoided), we can't really estimate whether Google cars would actually be safer than the average human driver if left completely to drive themselves.
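
            For what it's worth, taking those reported figures at face value, the raw arithmetic works out to roughly one safety-related disengagement every ~1,200 autonomous miles, with about four-fifths of the fleet's mileage driven autonomously. A quick sketch in Python, using only the numbers quoted above:

              # Figures from the December disengagement report (15-month window),
              # taken at face value.
              disengagements = 341        # safety-related disengagements
              autonomous_miles = 424_000  # miles driven in autonomous mode
              manual_miles = 100_000      # approximate miles driven manually

              miles_per_disengagement = autonomous_miles / disengagements
              autonomous_share = autonomous_miles / (autonomous_miles + manual_miles)

              print(f"~{miles_per_disengagement:,.0f} autonomous miles per disengagement")  # ~1,243
              print(f"~{autonomous_share:.0%} of total miles driven autonomously")          # ~81%

            Of course, that rate says nothing about how many of those 341 events would actually have ended in a crash, which is exactly the uncertainty described above.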