El Reg reports
[January 23] a Tesla Model S slammed into a stationary firetruck at around 65mph on Interstate 405 in Culver City, California. The car ended up wedged under the fire engine, although the driver was able to walk away from the crash uninjured and refused an offer of medical treatment.
The motorist claimed the Model S was driving with Autopilot enabled when it crammed itself under the truck. Autopilot is Tesla's super-cruise-control system. It's not a fully autonomous driving system.
[...] The fire truck was parked in the carpool lane of the road with its lights flashing. None of the fire crew were hurt, although Powell noted that if his team had been in their usual position at the back of the truck then there "probably would not have been a very good outcome."
Tesla will no doubt be going over the car's computer logs to determine exactly what happened, something the California Highway Patrol will also be interested in. If this was a case of the driver sticking on Autopilot and forgetting their responsibility to watch the road ahead, it wouldn't be the first time.
In 2016, a driver was killed after both he and the Tesla systems missed a lorry pulling across the highway. A subsequent investigation by the US National Transportation Safety Board found the driver was speeding and had been warned by the car six times to keep his hands on the wheel.
Tesla has since beefed up the alerts the car will give a driver if it feels they aren't paying full attention to the road. The safety board did note in its report that the introduction of Tesla's Autosteer software had cut collisions by 40 per cent.
Previous: Tesla's Semiautonomous System Contributed to Fatal Crash
(Score: 2) by AthanasiusKircher on Thursday January 25 2018, @03:12PM (2 children)
Before I reply, note first that I mostly agree with your general point -- the Tesla "autopilot" stuff seems like it probably does a lot more good than bad. However, I'd argue about the standards you're using a bit...
That shouldn't be the relevant standard for "good." One should also consider whether or not the system causes accidents that would not have occurred in the first place. A "good" system might cause a few new incidents in unexpected scenarios, but usually our standards shouldn't just be about overall accident rate.
To put this in a different context, say you had a daily supplement that "cuts fatal heart attacks by 40%." Sounds great, right? As a pure stat, it certainly sounds promising. But say I told you that in a given population, there were generally 1000 fatal heart attacks. And this supplement seemed to prevent all 1000 of those high-risk folks from having a heart attack. But it also CAUSED 600 fatal heart attacks to happen in otherwise relatively low-risk folks. Overall, it cut heart attack incidence in the population by 40%, but I don't think we'd call this a "good" drug... it's killing large numbers of people, even while saving others. The side effects may not be worth the benefit.
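The arithmetic in that hypothetical can be sketched in a few lines. All the numbers here are the invented ones from the analogy above, not real data:

```python
# Hypothetical numbers from the supplement analogy (invented for illustration).
baseline_deaths = 1000   # fatal heart attacks in the population without the supplement
prevented = 1000         # deaths the supplement prevents among high-risk folks
caused = 600             # NEW deaths it causes among otherwise low-risk folks

with_supplement = baseline_deaths - prevented + caused
reduction = (baseline_deaths - with_supplement) / baseline_deaths

print(with_supplement)     # 600 deaths under the supplement
print(f"{reduction:.0%}")  # 40% headline reduction
```

The headline "40% fewer deaths" is true, yet 600 of the remaining deaths are people the intervention itself killed -- which is exactly why the summary stat alone can't settle whether the intervention is "good."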
To be clear, I'm NOT saying that's true here with Tesla. But it's an important reason why we should pay attention to cases that appear like they might be a failure of the Autopilot system to have reasonable behavior.
I'll also agree with you if you argue that Tesla gets a lot of negative press for such incidents. But they also invite it. Musk tries to get all the media attention he can, and that means he's also going to get negative stuff when something bad happens. He also tends to get really defensive at any criticism. AND (perhaps most importantly), Tesla steadfastly refuses to alter the name of their "Autopilot" feature, despite the fact that huge numbers of people who hear that name clearly fail to realize that it's mostly just enhanced cruise control. So, you can argue about the idiots who abuse it, but I think fewer idiots would abuse it if it had a different name. But that's Tesla's marketing decision -- they obviously think they'll get more attention and sell more cars with "Autopilot," so they have to suck it up when negative press comes along because idiots misunderstand that name.
Again, statistical comparisons should be done with care. It doesn't make much sense to compare an enhanced cruise control feature that's likely most used in open-highway situations to a general driving stat for humans (which includes high-density traffic situations where most crashes occur).
Perhaps a more apt comparison would be to look at the number of crashes with "Autopilot" vs. the number of crashes in cars with humans using standard cruise control. That would probably be a more like-for-like comparison. I suspect "Autopilot" would do significantly better there too, because normal cruise control (like "Autopilot") tends to lead people to be more distracted while driving... but normal cruise control has no ability to respond, whereas "Autopilot" has more enhanced safety features.
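A like-for-like comparison of that sort would normalize crashes by exposure within the same driving context, rather than comparing against an all-conditions human average. A minimal sketch, with entirely made-up numbers standing in for real crash and mileage data:

```python
# Hypothetical like-for-like comparison (all figures invented for illustration).
# Both groups are measured in the SAME context: open-highway miles only.
def crashes_per_million_miles(crashes, miles):
    """Exposure-normalized crash rate: crashes per one million miles driven."""
    return crashes / (miles / 1_000_000)

# Invented figures for two groups driving comparable highway mileage.
cruise_control_rate = crashes_per_million_miles(crashes=90, miles=50_000_000)
autopilot_rate = crashes_per_million_miles(crashes=60, miles=50_000_000)

print(cruise_control_rate)  # 1.8 crashes per million highway miles
print(autopilot_rate)       # 1.2 crashes per million highway miles
```

The point of the design is that both denominators cover the same kind of driving, so the comparison isn't skewed by Autopilot being used mostly on open highways where crashes are rarer to begin with.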
(Score: 2) by GreatAuntAnesthesia on Thursday January 25 2018, @04:29PM (1 child)
Interesting points, but I don't think your heart attack analogy stands up.
In your scenario, you have a clearly defined default position, a "natural" state: everybody is at risk of a heart attack, some high risk and some low.
I'd argue that in the world of cars, there is no natural state. Human drivers are the default, but only because that technology was invented first. Everybody has a heart1, but not everybody has to have a car. Indeed, one could imagine a society with no cars at all. We choose to have cars in our society, to pay their cost in lives for all the benefits and conveniences they bring. The autopilot isn't disrupting the natural order of things in the way your heart attack drug is. Or if it is, then the human drivers are too, and the only difference between the two options is the number of deaths.
Put it this way: If we existed in some improbable alternate universe where Tesla Autopilot had been invented before manual controls, would we be sat here arguing whether putting humans behind the wheel would rightfully save some lives at the expense of many more?
1Insert obligatory snark here about Rupert Murdoch / Dick Cheney / Donald Trump.
(Score: 2) by AthanasiusKircher on Thursday January 25 2018, @07:26PM
I take your point. But in the very way you just framed that, you automatically are presuming a beneficial outcome. My point wasn't just about Tesla Autopilot (which I explicitly admitted likely prevents a lot more issues than it causes), but about judging such automated technologies in general.
For example, many people who argue about completely autonomous cars phrase it as you did in your previous post -- i.e., once the accident stats are as good as the average stats for human drivers, we should view the cars as a good alternative. But I don't think that'd be a comfort to someone who was killed by an autonomous car acting in a completely stupid manner because the bugs weren't worked out.
Bottom line is that there will always be side effects to the adoption of new technology, and some of those may be negative. All I'm saying is that it's rational to factor that into judging whether the tech is "better" than humans. Lots of accidents are caused by STUPID human error that is largely preventable (e.g., speeding, following too closely, etc.). I tend to be a much more cautious and conservative driver than average, so quoting average accident rates is not going to convince me to put my safety in the hands of some algorithm.
But even if the algorithm had the stats of a "good driver," I also want to know not only that it would successfully navigate potential accident scenarios better than I would in some cases, but that it's also not going to randomly kill me by doing something completely weird and unpredictable that I, as a driver, would never do. And if such latter scenarios were more than a freak accident -- that they actually occurred with some regularity -- are you really telling me that you'd want to put your safety in the hands of such an algorithm, just based on the promise that it "performs as good as the average human driver" or even slightly better in terms of overall accident stats?
Again, I'm not arguing that Tesla's feature isn't helpful. Only that unexpected negative outcomes should also be a serious factor to consider, along with summary stats.