posted by Fnord666 on Saturday February 01 2020, @08:06PM
from the now-you-see-it-now-you-don't dept.

How a $300 projector can fool Tesla's Autopilot:

Six months ago, Ben Nassi, a PhD student at Ben-Gurion University advised by Professor Yuval Elovici, carried off a set of successful spoofing attacks against a Mobileye 630 Pro Driver Assist System using inexpensive drones and battery-powered projectors. Since then, he has expanded the technique to experiment—also successfully—with confusing a Tesla Model X and will be presenting his findings at the Cybertech Israel conference in Tel Aviv.

The spoofing attacks largely rely on the difference between human and AI image recognition. For the most part, the images Nassi and his team projected to troll the Tesla would not fool a typical human driver—in fact, some of the spoofing attacks were nearly steganographic, relying on the differences in perception not only to make spoofing attempts successful but also to hide them from human observers.

Nassi created a video outlining what he sees as the danger of these spoofing attacks, which he called "Phantom of the ADAS," and a small website offering the video, an abstract outlining his work, and the full reference paper itself. We don't necessarily agree with the spin Nassi puts on his work—for the most part, it looks to us like the Tesla responds pretty reasonably and well to these deliberate attempts to confuse its sensors. We do think this kind of work is important, however, as it demonstrates the need for defensive design of semi-autonomous driving systems.
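
One illustrative angle on that defensive design: the projected phantoms are typically shown only briefly, so a detection that flickers in and out is suspect. The sketch below (our illustration, not Tesla's logic and not the paper's proposed countermeasure; the class name and thresholds are hypothetical) only acts on an object that persists across enough recent frames:

    from collections import deque

    class PersistenceFilter:
        """Act on a detection only if the object has appeared in enough
        recent frames; a briefly projected phantom flickers out quickly."""

        def __init__(self, window=30, min_hits=20):
            self.min_hits = min_hits              # frames the object must appear in
            self.history = deque(maxlen=window)   # rolling record of recent frames

        def update(self, detected):
            self.history.append(detected)
            return sum(self.history) >= self.min_hits

    # A phantom visible for 5 frames out of 30 never becomes trusted.
    f = PersistenceFilter()
    trusted = False
    for frame_has_object in [True] * 5 + [False] * 25:
        trusted = f.update(frame_has_object)
    print("act on detection?", trusted)   # False: too transient to trust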


Original Submission

 
  • (Score: 2, Insightful) by Anonymous Coward on Saturday February 01 2020, @09:29PM (7 children)

    by Anonymous Coward on Saturday February 01 2020, @09:29PM (#952506)

    Help me get this straight: it now looks like Tesla (and other autonomous vehicle suppliers) not only have to make sure that their AI figures out every possible combination of road and traffic, but they also have to figure out when some of it is spoofed--including spoofing that has yet to be invented.

    Don't think that I'm going to be riding in any of these things in my lifetime (currently 64) unless they are operated in a very well defined area.

  • (Score: 1, Touché) by Anonymous Coward on Saturday February 01 2020, @09:49PM

    by Anonymous Coward on Saturday February 01 2020, @09:49PM (#952512)

    Meanwhile, I will continue to shine high powered lasers into the eyes of drivers and pilots.

  • (Score: 0) by Anonymous Coward on Saturday February 01 2020, @09:55PM (3 children)

    by Anonymous Coward on Saturday February 01 2020, @09:55PM (#952515)

    There are a million things that can cause a human driver problems as well. That's no reason to ban human drivers. We need to make sure that AP is safer than humans overall.

    • (Score: 4, Insightful) by edIII on Sunday February 02 2020, @03:55AM (2 children)

      by edIII (791) on Sunday February 02 2020, @03:55AM (#952630)

      You're missing the fucking point.

      We can very easily make AI safer than humans overall. I say easily, as in it will be accomplished. The real problem not being expressed is that the AI is gullible as fuck. In the ideal case, the AI can assume that there aren't malicious actors present attempting to alter its perception of reality. Yet we all know here that such an ideal environment exists only in theory, or in the absence of humanity.

      The goal is not to make AI safer than humans overall, but to make the AI safe in an environment that may be deliberately trying to confuse it. In other words, the AI has to treat all environments as untrusted, unless there are auxiliary sensing platforms in the ground that are tamper resistant and can authenticate their data streams. It's a far easier problem to deal with when you remove humanity from the equation, or specifically the phenomenon of lying.
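
      To sketch what "authenticate their data streams" might look like, here's a minimal example assuming a symmetric key provisioned to each roadside sensor (a real deployment would use asymmetric signatures and proper key management; every name here is made up):

          import hmac, hashlib, json

          SHARED_KEY = b"provisioned-per-sensor-secret"   # stand-in for real key management

          def sign_reading(reading):
              # Sensor side: serialize deterministically, attach an HMAC tag.
              payload = json.dumps(reading, sort_keys=True)
              tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
              return {"payload": payload, "tag": tag}

          def verify_reading(message):
              # Vehicle side: recompute the tag and reject anything that fails.
              expected = hmac.new(SHARED_KEY, message["payload"].encode(),
                                  hashlib.sha256).hexdigest()
              if hmac.compare_digest(expected, message["tag"]):
                  return json.loads(message["payload"])
              return None   # tampered or unauthenticated: treat as untrusted

          msg = sign_reading({"sensor": "lane-marker-7", "speed_limit": 90})
          msg["payload"] = msg["payload"].replace("90", "30")   # attacker edits the data
          print(verify_reading(msg))   # None -- the forgery is rejected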

      I can very easily believe that we could create an AI that could drive in almost every condition, but I'm very skeptical that we could create an AI that can't be fooled by a human being.

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 0) by Anonymous Coward on Sunday February 02 2020, @07:57PM (1 child)

        by Anonymous Coward on Sunday February 02 2020, @07:57PM (#952814)

        Humans fool other humans all the time. I don't think the standard of being unable to be fooled is a practical one.

        I think the point is that we have systems in place to try to catch people who do something wrong. The car can have cameras around, other security measures can be taken, and if someone does something stupid the cameras will hopefully catch it so that the perpetrator can be dealt with.

        It's like how things already work. Does it work perfectly? No. People still get away with stuff (and have been getting away with things for years). No security system is perfect, but security systems exist to deter crime and to catch some of the activity, which also reduces crime. The standard "it must stop all possible criminal activity it can be tricked into", if applied to humans, would result in no human drivers.

        • (Score: 2) by edIII on Tuesday February 04 2020, @03:30AM

          by edIII (791) on Tuesday February 04 2020, @03:30AM (#953406)

          Again, you're really really missing the point. This has nothing to do with crime, or getting rid of crime, or having the forensic ability to investigate said crime.

          The standard of being unable to be fooled is actually very important. Can human beings be fooled? Individually, yes. We're specifically talking about the domain of sensory perception. All sorts of illusions exist that demonstrate how easy it is to confuse the human brain. But how do you interrupt the senses of a hundred human beings, and in such a way that they're NOT aware of it? We're talking about a nearly perfect holographic projection, or some construction of layered illusions so perfect it can work on 100 human beings at once distributed across a small area. The latter is more than likely physically impossible to pull off.

          How do you interrupt the senses of 100 AI units moving at speed down a freeway? This is, believe it or not, a far, far easier problem to solve, and AI has no ability to understand lying. We need to program AI with skepticism, or the ability to withhold trust. That will more than likely only come with AS (artificial sentience), which will quickly learn NOT to treat humans in an idealized fashion, and for which suspicion of motives and agendas will be crucial in dealing with humanity.

          Furthermore, why should AI try to mimic or replace human behavior? Given the immense likelihood that a human being WILL attempt to subvert these systems, whether out of mental illness or national interests, you need to fully account for an untrusted environment. That is why I don't believe a single AI unit operating in isolation can ever safely drive on the streets. It needs to be a massive consensus system, and the AI needs to actually train for disruption of sensory data and of access to truthful information. The system's default response to disruptions should also be to slow everything down, if not bring traffic to a stop.

          In other words, AI cars need to file something akin to a flight plan, there need to exist traffic controllers, and AI cars need to be able to communicate with multiple entities simultaneously. The traffic controller picks up the cars' vectors on separate sensor networks, and the AI cars communicate not only their own vectors, but also the other objects and vectors they sense, to each other and to the traffic controller. Such a system raises the chances that traffic could be slowed safely and/or diverted away from a malfunctioning AI.

          Ultimately that's just applying authentication and verification of data coming from sensory networks to increase redundancy and reliability.
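
          A toy sketch of that consensus-with-a-safe-default idea (purely illustrative, not any real deployed system): each unit reports what it senses, action is taken only when enough independent reports agree, and disagreement defaults to slowing down.

              from collections import Counter

              def consensus_action(reports, quorum=0.75):
                  # Return the agreed-upon observation, or slow down when the
                  # independent reports disagree too much to be trusted.
                  if not reports:
                      return "SLOW_DOWN"
                  observation, count = Counter(reports).most_common(1)[0]
                  return observation if count / len(reports) >= quorum else "SLOW_DOWN"

              # One spoofed unit sees a phantom stop sign; four peers do not.
              print(consensus_action(["clear", "clear", "stop_sign", "clear", "clear"]))
              # -> 'clear' (4/5 agreement clears the quorum)
              print(consensus_action(["clear", "stop_sign", "stop_sign", "clear"]))
              # -> 'SLOW_DOWN' (no 75% agreement, default to safety)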

          --
          Technically, lunchtime is at any moment. It's just a wave function.
  • (Score: 3, Insightful) by Anonymous Coward on Sunday February 02 2020, @03:19PM (1 child)

    by Anonymous Coward on Sunday February 02 2020, @03:19PM (#952717)

    This is dumb. We've become hyper-conservative with regard to technology. Think about the immense damage one could cause by simply placing a large rock on a highway at night, or about all the countless ways you could cause immense harm by toying with electric lines. You don't need some absurdly complex, dynamically adjusting image-projection spoofing system; rocks and sharp things would work just fine, and would likely cause vastly more devastation. This attack in particular is especially stupid: it is literally the AI equivalent of taking a right-turn sign and replacing it with a left-turn one. And yes, in some locations you could indeed cause serious harm, if not deaths, that way. The only difference is that the real-life version is a million times easier to do and can be done completely remotely, without any direct association with the victims--vastly more dangerous.

    The standard for technology should not be to preempt every single imaginable way it can be used to hurt people. Not only is that an infinite task, but our society is already predicated on a huge number of technologies that are vastly more vulnerable, yet somehow we get along just fine. I think this is the exact reason that China is starting to pull ahead of the United States in some technological areas, particularly ones with hardware involved. Stop the exploits that are practical and likely to be used regularly; don't bother with fanciful potential exploits.

    • (Score: 0) by Anonymous Coward on Sunday February 02 2020, @08:10PM

      by Anonymous Coward on Sunday February 02 2020, @08:10PM (#952820)

      "It is literally the AI equivalent of taking a right turn sign and replacing it with a left turn one."

      And messing with signs can equally confuse a human driver and send them the wrong way down an onramp into oncoming traffic. We don't ban human drivers because they can be tricked in some hypothetical scenario; we try to make sure such a scenario doesn't occur, or to minimize the extent to which it can.