More bad news for Uber: one of the ride-hailing giant's self-driving Volvo SUVs has been involved in a crash in Arizona — apparently leaving the vehicle flipped onto its side, and with damage to at least two other human-driven cars in the vicinity.
The aftermath of the accident is pictured in photos and a video posted to Twitter by a user of Fresco News (@FrescoNews), a service for selling content to news outlets. According to the company's tweets, the collision happened in Tempe, Arizona, and no injuries have yet been reported.
Uber has also confirmed the accident and the veracity of the photos to Bloomberg. We've reached out to the company with questions and will update this story with any response. Update: Uber has now provided us with the following statement: "We are continuing to look into this incident and can confirm we had no backseat passengers in the vehicle."
TechCrunch understands Uber's self-driving fleet in Arizona has been grounded following the incident while an investigation is undertaken. The company has confirmed the vehicle involved was in self-driving mode. We're told no one was seriously injured.
Local newspaper reports suggest another car failed to yield to Uber's SUV, hitting it and flipping the autonomous vehicle onto its side. Presumably the Uber safety driver was unable to take over the controls in time to prevent the accident.
Source: TechCrunch
(Score: 5, Insightful) by AthanasiusKircher on Monday March 27 2017, @03:14PM (2 children)
Exactly.
I feel like one of the biggest misconceptions about self-driving cars is that it's reasonable for them to be "partially autonomous," with the expectation that some sort of alarm could ring in a bad situation and the human driver could take over to handle things. To my mind, that's why people seem disturbed when propositions for autonomous cars without steering wheels, etc. are mentioned... everyone thinks, "Well, **I** could take over and avoid an accident."
But it's just not reasonable to expect humans to take over in a split second and avoid an accident. Google's drivers (from what I understand) have had lots of practice and basically know in advance the kinds of situations where it's useful to disengage the self-driving mechanism (e.g., unusual traffic patterns with ad hoc changes like novel construction zones, police directing traffic, bad weather, etc.). If you look at Google's reports, you'll see that the supposed "self-driving" cars actually spend roughly 1/3 of their mileage in manual mode with a human driver.
This may or may not have anything to do with this specific situation, but the reality is that expecting a human driver -- even one tasked with paying attention -- to take over controls suddenly and intervene to prevent a split-second collision is just not feasible in many situations.
(Score: 2) by Spamalope on Monday March 27 2017, @03:40PM
"Partially autonomous" is reasonable, but framed a bit differently: fully able to drive on a well-lit freeway during the day, but only providing driver aids (lane change warnings, assisted braking, collision warnings, etc.) in the rain at night on a country lane, with the range of full-driving conditions expanding each generation. That's a reasonable path if the tech becomes safe for some conditions first.
Automated parking would be spectacular too. Robo Valet is perfect! It'd be especially wonderful in the rain.
(Score: 4, Interesting) by bradley13 on Monday March 27 2017, @05:17PM
I'd just like to add that driver attention span is a problem. If you are actually driving, you are involved in the process, and paying attention is easy. If you are not driving but just observing, it is psychologically much more difficult to keep your attention focused on the (entirely potential) task. This is well known from aviation: aircraft designers know not to expect split-second reactions from pilots monitoring an autopilot.
Everyone is somebody else's weirdo.