
posted by Fnord666 on Saturday February 01 2020, @08:06PM
from the now-you-see-it-now-you-don't dept.

How a $300 projector can fool Tesla's Autopilot:

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment—also successfully—with confusing a Tesla Model X and will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver—in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.
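To make that perceptual gap concrete, here is a toy Python/OpenCV sketch (our illustration only; the opacity, noise level, and detector threshold are assumptions, not figures from Nassi's paper) showing how a faintly projected shape can sit at only a few percent contrast against the scene yet still draw a near-maximal response from a shape-matching detector:

```python
# Toy illustration: a faint, single-frame "phantom" overlay can still
# produce a strong response from a template-based detector even when its
# contrast is far below what a human would reliably notice at a glance.
# All values here are illustrative assumptions, not taken from the paper.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Synthetic grey road background with mild sensor noise.
background = np.full((240, 320), 110, np.float32)
background += rng.normal(0, 2, background.shape).astype(np.float32)

# A bright disc stands in for a projected sign shape.
sign = np.zeros((60, 60), np.float32)
cv2.circle(sign, (30, 30), 25, 255, -1)

# "Project" the phantom at very low opacity for a single frame.
alpha = 0.04  # assumed projector intensity relative to the scene
frame = background.copy()
frame[90:150, 130:190] += alpha * sign

# Weber contrast of the phantom patch against its surroundings:
patch = frame[90:150, 130:190].mean()
surround = background.mean()
print(f"Weber contrast: {(patch - surround) / surround:.3f}")  # ~0.05, easy to miss

# A normalized cross-correlation "detector" still fires confidently,
# because it keys on shape, not absolute brightness.
score = cv2.matchTemplate(frame, sign, cv2.TM_CCOEFF_NORMED).max()
print(f"detector score: {score:.2f}")  # ~0.9, well above an assumed 0.5 threshold
```

The asymmetry is the whole trick: the correlation score is normalized, so the detector is nearly indifferent to how dim the projection is, while human visibility degrades directly with contrast.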

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called "Phantom of the ADAS," and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don't necessarily agree with the spin Nassi puts on his work—for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.
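As one example of what defensive design could look like, here is a minimal sketch of a temporal-persistence filter that refuses to act on a detection until it survives several consecutive frames, which would reject a split-second phantom projection. This is purely illustrative and is not the countermeasure Nassi's paper proposes:

```python
# Sketch of one defensive-design idea: require a detection to persist
# across N consecutive frames before the planner may act on it, so that
# split-second phantom projections get filtered out. Illustrative only.
class PersistenceFilter:
    """Accept a detection label only after N consecutive frames contain it."""

    def __init__(self, required_frames: int = 5):
        self.required = required_frames
        self.streaks: dict[str, int] = {}

    def update(self, labels: set[str]) -> set[str]:
        # Extend streaks for labels seen this frame; reset all others.
        self.streaks = {lbl: self.streaks.get(lbl, 0) + 1 for lbl in labels}
        return {lbl for lbl, n in self.streaks.items() if n >= self.required}

# A 2-frame phantom is ignored; a sign that persists eventually gets acted on.
f = PersistenceFilter(required_frames=5)
frames = [{"stop_sign"}] * 2 + [set()] + [{"stop_sign"}] * 6
for i, seen in enumerate(frames):
    print(f"frame {i}: raw={seen or '{}'} acted_on={f.update(seen) or '{}'}")
```

The trade-off, of course, is latency: a persistence requirement delays reaction to genuine hazards by the same number of frames it uses to reject phantoms.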


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Sunday February 02 2020, @08:10PM (#952820)

    "It is literally the AI equivalent of taking a right turn sign and replacing it with a left turn one."

    and messing with signs can just as easily confuse a human driver and send them the wrong way up an onramp into oncoming traffic. We don't ban human drivers because they can be tricked in some hypothetical scenario; we try to make sure such a scenario doesn't occur, or to minimize the extent to which it can.