

posted by Fnord666 on Thursday April 04 2019, @11:42PM   Printer-friendly
from the going-left-of-center dept.

Submitted via IRC for SoyCow1984

Researchers trick Tesla Autopilot into steering into oncoming traffic

Researchers have devised a simple attack that might cause a Tesla to automatically steer into oncoming traffic under certain conditions. The proof-of-concept exploit works not by hacking into the car's onboard computing system but by using small, inconspicuous stickers that trick the Enhanced Autopilot of a Model S 75 into detecting, and then following, a spurious change in the current lane.

Tesla's Enhanced Autopilot supports a variety of capabilities, including lane-centering, self-parking, and the ability to automatically change lanes with the driver's confirmation. The feature is now mostly called "Autopilot" after Tesla reshuffled the Autopilot price structure. It primarily relies on cameras, ultrasonic sensors, and radar to gather information about its surroundings, including nearby obstacles, terrain, and lane changes. It then feeds the data into onboard computers that use machine learning to make judgements in real time about the best way to respond.

Researchers from Tencent's Keen Security Lab recently reverse-engineered several of Tesla's automated processes to see how they reacted when environmental variables changed. One of the most striking discoveries was a way to cause Autopilot to steer into oncoming traffic. The attack worked by carefully affixing three stickers to the road. The stickers were nearly invisible to drivers, but the machine-learning algorithms used by Autopilot detected them as a line indicating that the lane was shifting to the left. As a result, Autopilot steered in that direction.
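The report does not publish Tesla's actual lane-detection code, but the failure mode described above can be illustrated with a toy sketch (everything here is hypothetical, not Tesla's pipeline): a lane estimator that least-squares-fits a heading through whatever marking points its vision stage detects. A handful of adversarial "sticker" points placed ahead of the car is enough to skew the fitted heading to the left.

```python
# Toy illustration only (NOT Tesla's pipeline): a lane estimator that
# fits a straight heading through detected road-marking points, where
# x is lateral offset in metres and y is distance ahead in metres.

def fit_heading(points):
    """Least-squares slope dx/dy through lane-marking points.

    Returns lateral drift per metre travelled: 0 means straight ahead,
    negative means the estimated lane bends to the left.
    """
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    num = sum((y - mean_y) * (x - mean_x) for x, y in points)
    den = sum((y - mean_y) ** 2 for x, y in points)
    return num / den

# Genuine lane markings: dead straight (x stays at 0 as y increases).
lane = [(0.0, y) for y in range(0, 50, 5)]
print(fit_heading(lane))  # 0.0 -> steer straight

# Three small "stickers" placed to suggest the lane drifts left.
stickers = [(-0.4, 55.0), (-0.8, 60.0), (-1.2, 65.0)]
print(fit_heading(lane + stickers))  # negative -> steer left
```

The point of the sketch is that nothing in the estimator distinguishes paint from stickers: any detected marking contributes to the fit, so a few well-placed points shift the whole heading, which mirrors the report's finding that the driving decision rests solely on computer-vision lane recognition.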

In a detailed, 37-page report, the researchers wrote:

Tesla autopilot module's lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference), but it still doesn't handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain. As we talked in the previous introduction of Tesla's lane recognition function, Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle driving decision is only based on computer vision lane recognition results. Our experiments proved that this architecture has security risks and reverse lane recognition is one of the necessary functions for autonomous driving in non-closed roads. In the scene we build, if the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by Nuke on Friday April 05 2019, @09:00AM (1 child)

    by Nuke (3162) on Friday April 05 2019, @09:00AM (#824829)

    If someone put stickers on the road to intentionally make a self-driving car crash ...

    I think some people are missing the point. The placing of stickers in the test was deliberate, but incidental artifacts can be seen everywhere on real roads. In cities there are numerous cast iron covers in the surface, ranging from manholes down to valve covers of a few square inches.

    On UK motorways and other main roads the lane markings are often temporarily changed to divert traffic around road works, and when the original layout is restored the temporary lines are painted over in black but remain visible enough, possibly, to fool a self-driving algorithm that picks them up. In any case the roads around me are heavily patched (often ineptly - it's a national scandal) where pot-holes have been mended or buried water pipes have been repaired (always happening), so the texture is changing all the time; they are like a patchwork quilt.

  • (Score: 2) by acid andy on Friday April 05 2019, @02:14PM

    by acid andy (1683) on Friday April 05 2019, @02:14PM (#824901) Homepage Journal

    You're absolutely right and no, I didn't miss that point. I just found it interesting to focus on areas where the arguments people are applying to the machines can also apply to humans. The biases aren't always immediately obvious but they're worth picking out, even if they're not immediately relevant in this case.

    I guess finding real world versions of the sticker exploit will be a later step of the research. That would certainly make a more compelling PR piece against the fallibility of these software systems.

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?