
posted by Fnord666 on Thursday April 04 2019, @11:42PM
from the going-left-of-center dept.

Submitted via IRC for SoyCow1984

Researchers trick Tesla Autopilot into steering into oncoming traffic

Researchers have devised a simple attack that might cause a Tesla to automatically steer into oncoming traffic under certain conditions. The proof-of-concept exploit works not by hacking into the car's onboard computing system but by using small, inconspicuous stickers that trick the Enhanced Autopilot of a Model S 75 into detecting and then following a change in the current lane.

Tesla's Enhanced Autopilot supports a variety of capabilities, including lane-centering, self-parking, and the ability to automatically change lanes with the driver's confirmation. The feature is now mostly called "Autopilot" after Tesla reshuffled the Autopilot price structure. It primarily relies on cameras, ultrasonic sensors, and radar to gather information about its surroundings, including nearby obstacles, terrain, and lane changes. It then feeds the data into onboard computers that use machine learning to make judgements in real time about the best way to respond.
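
To make that pipeline concrete, here is a minimal, purely illustrative sketch (not Tesla's actual code) of the loop described above: a learned vision model estimates where the lane is, and the steering decision is derived from that estimate. The function names, return values, and gain are assumptions made for the example.

    def estimate_lane_offset(camera_frame) -> float:
        """Stand-in for the learned lane-recognition model: returns how far
        (in meters) the car sits from the detected lane center
        (negative = left of center)."""
        return 0.3  # dummy value for the sketch

    def steering_command(lane_offset_m: float, gain: float = 0.1) -> float:
        """Simple proportional correction toward the detected lane center."""
        return -gain * lane_offset_m

    offset = estimate_lane_offset(camera_frame=None)
    print(f"steer correction: {steering_command(offset):+.3f} rad")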

Researchers from Tencent's Keen Security Lab recently reverse-engineered several of Tesla's automated processes to see how they reacted when environmental variables changed. One of the most striking discoveries was a way to cause Autopilot to steer into oncoming traffic. The attack worked by carefully affixing three stickers to the road. The stickers were nearly invisible to drivers, but machine-learning algorithms used by the Autopilot detected them as a line that indicated the lane was shifting to the left. As a result, Autopilot steered in that direction.
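
As a toy illustration of why a few small marks can have that effect (our own sketch, not the researchers' code): if the perceived lane edge is obtained by fitting a line through detected marking points, then three well-placed fake points are enough to tilt the fit toward the oncoming lane.

    import numpy as np

    def fit_lane(points):
        """Least-squares fit x = a*y + b through detected marking points (x, y)."""
        pts = np.asarray(points, dtype=float)
        a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
        return a, b

    # Genuine right-edge markings of the current lane: effectively straight ahead.
    real_markings = [(3.5, y) for y in range(5, 40, 5)]

    # Three small fake marks placed so they read as the lane bending to the left.
    fake_marks = [(3.0, 12.0), (2.2, 16.0), (1.4, 20.0)]

    a_clean, _ = fit_lane(real_markings)
    a_spoofed, _ = fit_lane(real_markings + fake_marks)
    print(f"lane slope without stickers: {a_clean:+.3f}")
    print(f"lane slope with stickers:    {a_spoofed:+.3f}  (pulled toward the left)")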

In a detailed, 37-page report, the researchers wrote:

Tesla autopilot module's lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference), but it still doesn't handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain. As we talked in the previous introduction of Tesla's lane recognition function, Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle driving decision is only based on computer vision lane recognition results. Our experiments proved that this architecture has security risks and reverse lane recognition is one of the necessary functions for autonomous driving in non-closed roads. In the scene we build, if the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident.
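
The mitigation the report hints at can be sketched as a simple plausibility check, here with hypothetical names and thresholds of our own choosing: before following a detected lane shift, compare it against road-level knowledge such as map data or the known direction of adjacent lanes, and discard shifts that would point the car into oncoming traffic.

    def should_follow_lane_shift(lateral_shift_m: float,
                                 oncoming_lane_on_left: bool,
                                 min_confirmed_shift_m: float = 0.5) -> bool:
        """Return True only if the detected lane shift is plausible to follow."""
        if abs(lateral_shift_m) < min_confirmed_shift_m:
            return False                      # too small to act on
        shifting_left = lateral_shift_m < 0
        if shifting_left and oncoming_lane_on_left:
            return False                      # would cross into the reverse lane: ignore
        return True

    # Example: the spoofed markings suggest a 0.8 m shift to the left, but map data
    # says the lane to the left carries oncoming traffic, so the shift is rejected.
    print(should_follow_lane_shift(-0.8, oncoming_lane_on_left=True))   # False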


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by acid andy on Friday April 05 2019, @12:17AM (4 children)

    by acid andy (1683) on Friday April 05 2019, @12:17AM (#824744) Homepage Journal

    I agree with you completely when it comes to self-driving cars. I don't believe it's possible in the near future to solve this problem with enough safety to avoid lethal edge cases.

    I do find it interesting though that they make a big deal out of the fact that this particular exploit used things that affected the machine but that were "nearly invisible" to humans. Yes, they found a flaw but it was also an act of sabotage that doesn't necessarily mean the machine is inferior. You could probably come up with something else that would fool a human but not the AI--maybe something barely detectable by human vision but detectable by LIDAR. People would then say that a human driver could not possibly be held to blame. There's an instinctive bias against anything that does things differently to how we humans do it to the extent that it may be perceived as inferior when really it isn't.

    The current self-driving cars really are inferior, though.

    --
    If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
  • (Score: 4, Insightful) by vux984 on Friday April 05 2019, @02:10AM (2 children)

    by vux984 (5045) on Friday April 05 2019, @02:10AM (#824773)

    "There's an instinctive bias against anything that does things differently to how we humans do it to the extent that it may be perceived as inferior when really it isn't."

    a) Given we already have a standard, it is pretty reasonable that alternatives are automatically measured against that same standard.

    b) Your contra-example, of something a machine could avoid but a human wouldn't see, does count as a bonus for the machine's performance, but the counter-argument is that if a human couldn't see it, then the human wasn't driving appropriately for the conditions either. Humans absolutely do have accidents, but nearly all of them are deemed avoidable; and we can plausibly and even correctly argue that many humans would not have had a given accident. The accident is judged an education and/or judgement failure of an individual person, rather than a systemic flaw of human beings -- contrast with a Tesla autopilot flaw, which it can be argued would affect the entire fleet.

    c) We've spent a few generations working out signage and conventions and rules and road construction etc precisely to communicate as effectively with humans as we can. Could we develop a system from scratch that was optimized for communicating with machines? Absolutely we could, but that's not the system in place -- machines have to drive on our roads, designed for us; so holding them to the standard we hold for humans makes perfect sense.

    The converse problem of humans having to drive on road networks designed for machines... doesn't currently exist. And it really doesn't matter how much better AIs would be on such a system. Although it is a good point that we could (and surely will) augment our signage and road construction with markings to communicate better with machines too over time. But the road system is a megaproject, and a total conversion is absurd.

    Plus humans fail over pretty gracefully as the 'formal system' comes apart... we can cope fine with dirt tracks with no signage at all... just a couple of ruts. Makeshift overflow parking lots on empty fields. Temporary detours during construction or after accidents... etc.

    • (Score: 2) by MostCynical on Friday April 05 2019, @02:29AM

      by MostCynical (2589) on Friday April 05 2019, @02:29AM (#824776) Journal

      Machines? How about bee [howplantswork.com] vision? [thekidshouldseethis.com]

      Humans don't really have good vision for driving. We "see" a very small cone, and our brains stitch the rest of the scene together from memory.

      Also, most people think they are better-than-average drivers, when, in reality, humans cause most crashes.

      --
      "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
    • (Score: 4, Interesting) by legont on Friday April 05 2019, @04:03AM

      by legont (4179) on Friday April 05 2019, @04:03AM (#824799)

      Just to add to your point, it'd probably be better to enhance humans than to try to build better-than-human AIs.

      --
      "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
  • (Score: 1) by khallow on Friday April 05 2019, @03:00AM

    by khallow (3766) Subscriber Badge on Friday April 05 2019, @03:00AM (#824787) Journal

    I do find it interesting though that they make a big deal out of the fact that this particular exploit used things that affected the machine but that were "nearly invisible" to humans. Yes, they found a flaw but it was also an act of sabotage that doesn't necessarily mean the machine is inferior.

    It's not a good sign however. It usually takes a good bit of effort to create something that would fool human drivers in good seeing conditions (unlike, say, maliciously placing reflective tabs on a wet road at night). I guess the problem is that neural nets are taking scale invariance to an extreme so that a pattern in a minute part of the field of view triggers detection of the condition that the neural net is looking for. I bet there will be ways to fix that, perhaps like how human vision does it.
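
    One crude version of that fix, sketched here with made-up numbers rather than any known Autopilot parameter, would be to gate detections on how much of the field of view they occupy, so a pattern covering only a tiny patch of road cannot by itself redraw the lane:

        def accept_detection(pixel_area: int, frame_area: int,
                             min_fraction: float = 0.002) -> bool:
            """Ignore candidate lane markings that cover too little of the frame."""
            return pixel_area / frame_area >= min_fraction

        print(accept_detection(pixel_area=300, frame_area=1280 * 960))   # False: too small
        print(accept_detection(pixel_area=4000, frame_area=1280 * 960))  # True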