
posted by Fnord666 on Saturday February 01 2020, @08:06PM
from the now-you-see-it-now-you-don't dept.

How a $300 projector can fool Tesla's Autopilot:

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment—also successfully—with confusing a Tesla Model X and will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver—in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called "Phantom of the ADAS," and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don't necessarily agree with the spin Nassi puts on his work—for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.
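
To make that last point concrete, here is a toy Python sketch of one plausible defensive-design measure: requiring a detection to persist across several consecutive frames before the vehicle acts on it. Everything in it is assumed for illustration - the frame rate, confidence threshold, and persistence window are invented, and none of this is Tesla's or Mobileye's actual pipeline - but it shows why a split-second projected phantom can trigger a detector that acts on single frames, while a simple temporal filter ignores it.

    # Toy simulation: a one-frame projected "phantom" versus a per-frame
    # detector and a temporal-persistence filter. All numbers below are
    # hypothetical, chosen only to illustrate the idea.
    import random

    FRAME_RATE = 30             # assumed camera frame rate (frames/second)
    CONFIDENCE_THRESHOLD = 0.8  # assumed trigger threshold
    PERSISTENCE_FRAMES = 5      # assumed window: roughly 170 ms of agreement

    def detector_confidence(frame_has_sign):
        """Stand-in for a per-frame sign classifier's confidence score."""
        if frame_has_sign:
            return random.uniform(0.85, 0.99)  # sign (real or projected) seen
        return random.uniform(0.0, 0.2)        # nothing sign-like in frame

    def naive_policy(confidences):
        """Acts on any single frame above threshold - phantom-vulnerable."""
        return any(c > CONFIDENCE_THRESHOLD for c in confidences)

    def persistence_policy(confidences, window=PERSISTENCE_FRAMES):
        """Acts only if a detection persists for `window` consecutive frames."""
        streak = 0
        for c in confidences:
            streak = streak + 1 if c > CONFIDENCE_THRESHOLD else 0
            if streak >= window:
                return True
        return False

    # One second of video in which a phantom sign is projected for one frame.
    frames = [False] * FRAME_RATE
    frames[12] = True  # the split-second phantom
    scores = [detector_confidence(f) for f in frames]

    print("naive policy triggered:      ", naive_policy(scores))        # True
    print("persistence policy triggered:", persistence_policy(scores))  # False

The obvious trade-off is latency: a longer persistence window is harder to spoof with brief projections, but it also delays the vehicle's reaction to real objects that genuinely appear suddenly.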


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Anonymous Coward on Sunday February 02 2020, @03:19PM (#952717) (1 child)

    This is dumb. We've become hyper-conservative about technology. Think about the immense damage one could cause by simply placing a large rock on a highway at night, or about the countless ways you could cause immense harm by toying with electric lines. You don't need some absurdly complex, dynamically adjusting image-projection spoofing system; rocks and sharp things would work just fine, and would likely cause vastly more devastation. This attack in particular is especially stupid. It is literally the AI equivalent of taking a right-turn sign and replacing it with a left-turn one. And yes, in some locations you could indeed cause serious harm, if not deaths, that way. The only difference is that the real-life version is a million times easier to do and can be carried out remotely, with no direct association with the victims - vastly more dangerous.

    The standard for technology should not be to preempt every single imaginable way it could be used to hurt people. Not only is that list infinite, but our society is already built on a huge number of technologies that are vastly more vulnerable, yet somehow we get along just fine. I think this is exactly why China is starting to pull ahead of the United States in some technological areas, particularly ones involving hardware. Stop the practical exploits that are likely to be used regularly; don't bother with fanciful hypothetical ones.

  • (Score: 0) by Anonymous Coward on Sunday February 02 2020, @08:10PM (#952820)

    "It is literally the AI equivalent of taking a right turn sign and replacing it with a left turn one."

    And messing with signs can just as easily confuse a human driver and send them the wrong way up an onramp into oncoming traffic. We don't ban human drivers because they can be tricked in some hypothetical scenario; we try to make sure such a scenario doesn't occur, or to minimize the extent to which it can.