posted by Fnord666 on Friday November 29 2019, @01:55AM   Printer-friendly
from the fighting-back dept.

Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.

In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application to reveal patients' private medical history. As an example of one such attack, the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign, which could cause the vehicle to dangerously slam on the brakes at highway speeds; in other examples, they altered Stop signs to be perceived as a variety of other traffic instructions.
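
In rough terms, such perturbations are found by nudging an input along the model's own gradients until the predicted label flips. Here is a minimal sketch of that idea in the style of a targeted fast-gradient-sign attack; the classifier, tensor shapes, and epsilon budget are illustrative assumptions, not the Princeton team's actual setup:

    import torch
    import torch.nn.functional as F

    def targeted_fgsm(model, image, target_class, epsilon=0.03):
        # One signed-gradient step toward the attacker's chosen label
        # (targeted FGSM). `image` is a (1, C, H, W) tensor in [0, 1].
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), torch.tensor([target_class]))
        loss.backward()
        # Step *down* the loss for the target label so the prediction
        # drifts toward it; clamp to keep a valid image.
        return (image - epsilon * image.grad.sign()).clamp(0, 1).detach()

Iterating such steps under a small perturbation budget, and optimizing for robustness to viewing angle and lighting, is what turns a purely digital attack into one that survives being printed on a physical sign.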


Original Submission

Related Stories

Hackers Can Trick a Tesla into Accelerating by 50 Miles Per Hour

Hackers can trick a Tesla into accelerating by 50 miles per hour:

This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.

Mobileye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla's automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee's Advanced Threat Research team.

The researchers stuck a tiny and nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35, and in testing, both the 2016 Tesla Model X and that year's Model S sped up 50 miles per hour.

This is the latest in an increasing mountain of research showing how machine-learning systems can be attacked and fooled in life-threatening situations.

[...] Tesla has since moved to proprietary cameras on newer models, and Mobileye has released several new versions of its cameras that in preliminary testing were not susceptible to this exact attack.

There are still a sizable number of Tesla cars operating with the vulnerable hardware, Povolny said. He pointed out that Teslas with the first version of hardware cannot be upgraded to newer hardware.

"What we're trying to do is we're really trying to raise awareness for both consumers and vendors of the types of flaws that are possible," Povolny said "We are not trying to spread fear and say that if you drive this car, it will accelerate into through a barrier, or to sensationalize it."

So, it seems this is not so much that a particular adversarial attack was successful (and fixed), but that it was but one instance of a potentially huge set. Obligatory xkcd.
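
As a back-of-the-envelope check on the headline number: the jump is just the gap between the misread and actual limits, once cruise control trusts the perceived sign. A toy illustration only; the function and names here are invented, since the real Tesla/Mobileye pipeline is proprietary:

    def cruise_setpoint(perceived_limit_mph):
        # Traffic-aware cruise control following the posted limit
        # as perceived by the camera.
        return perceived_limit_mph

    actual, misread = 35, 85  # the stickered 35 mph sign read as 85 in McAfee's test
    print(cruise_setpoint(misread) - actual)  # 50 mph over the real limit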


Original Submission

  • (Score: 0) by Anonymous Coward on Friday November 29 2019, @02:16AM (#925862) (3 children)

    > the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign,

    That sounds hard; you'd have to get into the training set somehow(?)

    Easy way--make a cardboard "Stop" sign and tape it to the speed limit sign.

    • (Score: 2) by rigrig (5129) on Friday November 29 2019, @11:33AM (#925982) (2 children)

      Not the actual training set, just the output.
      What they did was basically:

      1. Feed a speed limit sign image to the AI, which outputs a (high) "this is a speed limit sign" confidence value and a (low) "this is a stop sign" value
      2. Add random noise to the input and see what happens to the confidences
      3. Optimize the noise for high "stop sign" and low "speed sign" confidence, and low human noticeability
      4. Spray-paint the noise on a speed sign
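
      A rough sketch of steps 1-3 as a black-box random search; the predict function, class indices, and noise budget here are assumptions for illustration, and the published attacks use far more sophisticated optimizers:

          import numpy as np

          def optimize_noise(predict, image, stop_idx, speed_idx,
                             budget=0.05, iters=2000, step=0.01):
              # predict(img) -> vector of class confidences; we only get
              # to query the model, never to see its insides (black box).
              def gain(noise):
                  conf = predict(np.clip(image + noise, 0.0, 1.0))
                  return conf[stop_idx] - conf[speed_idx]  # step 3's objective

              noise = np.zeros_like(image)
              best = gain(noise)
              for _ in range(iters):
                  # Bound the noise so it stays hard for a human to notice.
                  trial = np.clip(noise + step * np.random.randn(*image.shape),
                                  -budget, budget)
                  g = gain(trial)
                  if g > best:
                      best, noise = g, trial
              return noise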
      --
      No one remembers the singer.
      • (Score: 0) by Anonymous Coward on Friday November 29 2019, @06:23PM (#926081) (1 child)

        Thanks for the more complete explanation. It still sounds hard, even if it doesn't require getting into the training set.

        Someone still has to physically alter the speed sign, but it might stay altered longer if the mods truly were difficult for a human to see.

        • (Score: 0) by Anonymous Coward on Saturday November 30 2019, @02:00AM (#926231)

          Assuming it is visible to humans at all, it probably just looks like dirt smudges. I don't know about where you live, but nobody washes street signs around here, so something like that could easily go unnoticed.

  • (Score: 2) by legont (4179) on Friday November 29 2019, @06:06AM (#925941) (7 children)

    Slamming on the brakes on the highway because of a stop sign is artificial stupidity. If so-called AI does it, we should revoke its driving privileges, period.

    More generally, no AI should be allowed on the streets until all the AIs are better than the best human.

    --
    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 3, Insightful) by maxwell demon (1608) on Friday November 29 2019, @09:09AM (#925971) (4 children)

      More generally, no AI should be allowed on the streets until all the AIs are better than the best human.

      With that standard, you should also only allow humans on the road if they are better than the best human. Congratulations, you've just revoked all driving licenses. ;-)

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by rigrig (5129) on Friday November 29 2019, @11:25AM (#925981) (1 child)

        You mean all except mine. — 83% of all drivers

        --
        No one remembers the singer.
        • (Score: 3, Informative) by maxwell demon (1608) on Friday November 29 2019, @02:33PM (#926007)

          Sorry, the post said better than the best human. So even if you are the absolutely best human driver of the world, you still get your license revoked, because you're not better than yourself.

          --
          The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by legont (4179) on Friday November 29 2019, @11:27PM (#926176) (1 child)

        No, AIs don't have human rights; not yet, anyway.
        One can own AIs, and once owned, make, torture, and kill them as one pleases.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
        • (Score: 2) by maxwell demon (1608) on Saturday November 30 2019, @07:05AM (#926310)

          I'm not aware that having a driving license has been declared a human right.

          Indeed, it is possible to lose your driving license because of repeated traffic violations. I don't think anyone has ever claimed that to be a human rights violation.

          --
          The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 0) by Anonymous Coward on Friday November 29 2019, @06:29PM (#926085) (1 child)

      > no AI should be allowed on the streets until all the AIs are better than the best human.

      Interesting. I set the standard a little lower, wanting the AI to be as good as my demographic (which is very good--don't drive impaired, no smart/cell phone distraction, manual transmission, etc, etc).

      The highways are remarkably safe if you are a decent & alert driver. Given that, it's easy to be lulled into false security. I try to not kid myself that the stats are on my side. I could make a mistake, or get taken out by someone else's mistake.

      • (Score: 2) by legont (4179) on Friday November 29 2019, @11:33PM (#926179)

        My favorite argument, which nobody dares to take on so far, is how an AI driver should prioritize between a child in the car and a child running across the street in a life-and-death situation. Will it be disclosed by the manufacturer? Will AI owners (car owners) be able to adjust this setting? Will hacking it be legal?
        Currently the human behind the wheel makes this decision and accepts the consequences. For example, when I see a small "animal" running across the road, I typically brake. When I have a child inside my car, I run the animal over.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.