Hackers can trick a Tesla into accelerating by 50 miles per hour:
This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.
Mobileye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla's automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee's Advanced Threat Research team.
The researchers stuck a tiny, nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35, and in testing, both the 2016 Tesla Model X and that year's Model S sped up by 50 miles per hour.
This is the latest in an increasing mountain of research showing how machine-learning systems can be attacked and fooled in life-threatening situations.
[...] Tesla has since moved to proprietary cameras on newer models, and Mobileye has released several new versions of its EyeQ cameras that in preliminary testing were not susceptible to this exact attack.
There are still a sizable number of Tesla cars operating with the vulnerable hardware, Povolny said. He pointed out that Teslas with the first version of hardware cannot be upgraded to newer hardware.
"What we're trying to do is we're really trying to raise awareness for both consumers and vendors of the types of flaws that are possible," Povolny said. "We are not trying to spread fear and say that if you drive this car, it will accelerate through a barrier, or to sensationalize it."
So, it seems this is not so much that a particular adversarial attack was successful (and fixed), but that it was but one instance of a potentially huge set. Obligatory xkcd.
Previously:
Protecting Smart Machines From Smart Attacks
A New Clothing Line Confuses Automated License Plate Readers
A Simple Sticker Tricked Neural Networks Into Classifying Anything as a Toaster
3D Printed Turtles Fool Google Image Classification Algorithm
Slight Street Sign Modifications Can Completely Fool Machine Learning Algorithms
(Score: 2, Informative) by Anonymous Coward on Wednesday February 19 2020, @10:30PM (6 children)
These types of attacks work because the state of the art in ML is to throw different types of neural nets at a problem. The top research is going into making those nets more efficient to train rather than coming up with better algorithms to do specific things. For example, if speed sign reading was done with OCR analyzing...
...I had to stop myself right there after double-checking the article. I had mistakenly assumed they put a static, QR-like sticker on the sign to mess with a neural net. What they actually did was extend the middle of a 3 to make it look like an 8. Not being told ahead of time and being given only a glance at that sign, most people would probably see 85 too. They're basically saying they rewrote the number on the sign and it fooled the computer. Well of course it did! If you put white tape over the left half of an 8, you could fool everyone into thinking it was a 3 as well.
If we want to continue to argue about this, then a better-designed system should have reported both a 3 and an 8 with similar probabilities, and then some other component of the car could have checked its surroundings (highway, city road, everyone else doing 40?) to determine which was more likely.
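The cross-check the comment describes could be sketched roughly as follows. This is a hypothetical illustration, not how any real Tesla or Mobileye system works; the road-type ranges, thresholds, and function names are all invented for the example:

```python
# Hypothetical sketch: instead of committing to the classifier's top guess,
# keep the runner-up reading and use road context to decide which is plausible.
# All names and thresholds here are illustrative assumptions.

def plausible_speed_limit(candidates, road_type, median_traffic_speed):
    """candidates: list of (reading_mph, probability) from the sign classifier."""
    # Rough prior ranges for speed limits by road type (assumed values).
    expected = {
        "residential": (15, 40),
        "arterial": (25, 55),
        "highway": (45, 85),
    }
    lo, hi = expected.get(road_type, (15, 85))

    def score(reading, prob):
        # Penalize readings outside the expected range for this road type,
        # or far from what surrounding traffic is actually doing.
        in_range = lo <= reading <= hi
        near_traffic = abs(reading - median_traffic_speed) <= 20
        return prob * (1.0 if in_range else 0.1) * (1.0 if near_traffic else 0.2)

    return max(candidates, key=lambda c: score(*c))[0]

# An altered "35" sign read as 85 with high confidence, 35 as the runner-up,
# on a residential street where traffic is doing about 38:
candidates = [(85, 0.90), (35, 0.08)]
print(plausible_speed_limit(candidates, "residential", 38))  # picks 35
```

The point is that the context check overrides a confident but implausible reading: 85 scores poorly on a residential street full of cars doing 38, so the lower-probability 35 wins.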
(Score: 3, Informative) by Anonymous Coward on Wednesday February 19 2020, @10:38PM (4 children)
Yes. But people aren't AI, they are real-intelligent: They will pick up that it is unreasonable for the speed limit to suddenly go from 35 to 85 on a little street, and will guess someone fucked with the sign.
(Score: 2, Insightful) by Ethanol-fueled on Wednesday February 19 2020, @10:52PM (1 child)
The same assholes driving in these cars are the same assholes watching Harry Potter with self-driving enabled and totally oblivious to the outside world. They ain't gonna notice shit until they get punched in the face with an airbag and then locked inside to die in a fiery inferno.
(Score: 3, Funny) by c0lo on Wednesday February 19 2020, @11:11PM
FTFY
I would totally pay more taxes if I knew they are gonna be used for installing public, free of charge, "asshole punching and incinerating" facilities where this can happen without endangering the rest of decent humans.
(who am I kidding, tho', the "decent human" race got extinct. Probably with Neanderthals)
https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
(Score: 1) by anubi on Thursday February 20 2020, @05:47AM
This is a good example of coding skill.
One can be quite nebulous in telling a human to do something, and still get the point across.
One can have all sorts of leadership skills when working with people, but this rarely works with state machines. If the company depends on their machines to work, a good coder and engineer is worth a helluva lot. But in reality the best often get poor reviews over people skills, as they are not a people person at heart...they deal with machines.
Machines do exactly what they are told to do...not what they think you want. One does not impress a machine by having a corner office, private jet, three piece suit, and a pad of evaluation forms.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 0) by Anonymous Coward on Friday February 21 2020, @12:33AM
And once we move to all autonomous cars, governments will simply have to publish maps in a standard format so the car can look up the correct limits without having to try to parse some marks painted on a piece of metal. Then the car will have access to better information than the human. The virtue of human intuition in knowing not to drive fast in a residential zone only arises from the fact that we are in a transition period where the information is mainly being made for human consumption.
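The map-first approach this comment describes might look something like the sketch below, where a published map is the authoritative source and the camera reading is only a fallback or consistency check. The map format, segment IDs, and function names are invented for the example:

```python
# Illustrative sketch: a sticker on a physical sign can't change the
# published map entry, so the map wins whenever it has data.

SPEED_LIMIT_MAP = {
    "segment-1001": 35,  # residential street
    "segment-2002": 65,  # highway stretch
}

def effective_limit(segment_id, camera_reading=None):
    mapped = SPEED_LIMIT_MAP.get(segment_id)
    if mapped is not None:
        # Disagreement between sign and map suggests a damaged or
        # tampered sign (or a stale map) worth flagging.
        if camera_reading is not None and camera_reading != mapped:
            print(f"warning: sign read {camera_reading}, map says {mapped}")
        return mapped
    # No map data: fall back to the (less trustworthy) sign reading.
    return camera_reading

print(effective_limit("segment-1001", camera_reading=85))  # map wins: 35
```

Under this scheme the adversarial sticker only matters on roads the map doesn't cover, which shrinks the attack surface to the transition period the comment mentions.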
(Score: 2) by c0lo on Wednesday February 19 2020, @11:19PM
No need for better designed systems. Just switch the speed limit signs to use numerals less prone to adversarial attacks (cheaper for driverless car makers too, since the changeover of the limit signs is gonna be paid for with public money; a small investment in lobbying can go a long way).
I don't know, maybe use Mandarin numerals? Because sooner or later, those are gonna be lingua franca anyway.
(large grin)