
posted by Fnord666 on Friday November 29 2019, @01:55AM
from the fighting-back dept.

Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.

In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application to reveal patients' private medical history. As an example of one such attack, the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign, which could cause a vehicle to slam on its brakes at highway speed; in other examples, they altered Stop signs so they were perceived as a variety of other traffic instructions.


Original Submission

  • (Score: 0) by Anonymous Coward on Friday November 29 2019, @02:16AM (#925862) (3 children)

    > the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign,

    That sounds hard; you'd have to get into the training set somehow(?)

    Easy way--make a cardboard "Stop" sign and tape it to the speed limit sign.

  • (Score: 2) by rigrig (5129) Subscriber Badge <soylentnews@tubul.net> on Friday November 29 2019, @11:33AM (#925982) (2 children)

    Not the actual training set, just the output.
    What they did was basically:

    1. Feed a speed limit sign image to the AI, which outputs a (high) "this is a speed limit sign" confidence value and a (low) "this is a stop sign" value
    2. Add random noise to the input and see what happens to the confidences
    3. Optimize the noise for high "stop sign" and low "speed sign" confidence, and low human noticeability
    4. Spray-paint the noise on a speed sign
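    Steps 1-3 above can be sketched as a short optimization loop. This is a toy illustration, not the Princeton team's actual method: the linear "classifier", its random weights, the flattened image, the step size, and the clipping bound are all made-up stand-ins for a real sign-recognition network.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in classifier: logits for ["speed_limit", "stop"]
    # computed from a flat 64-pixel "image" (all values illustrative).
    W = rng.normal(size=(2, 64))

    def confidences(image):
        logits = W @ image
        e = np.exp(logits - logits.max())
        return e / e.sum()  # softmax over the two classes

    image = rng.normal(size=64)  # stand-in for a speed-limit sign photo
    noise = np.zeros(64)

    # Steps 2-3: nudge the noise in the direction that raises the "stop"
    # confidence, while clipping each pixel so the change stays subtle.
    for _ in range(200):
        grad = W[1] - W[0]              # gradient of the (stop - speed) logit margin
        noise += 0.01 * grad
        noise = np.clip(noise, -0.5, 0.5)

    before = confidences(image)
    after = confidences(image + noise)
    print(before[1], after[1])  # the "stop" confidence rises after the attack
    ```

    Step 4 is then the physical part: the optimized perturbation is applied to the real sign (e.g. with paint or stickers), where it reads as smudges to a human but as a different sign to the model.
    
    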
    --
    No one remembers the singer.
    • (Score: 0) by Anonymous Coward on Friday November 29 2019, @06:23PM (#926081) (1 child)

      Thanks for the more complete explanation. It still sounds hard, even if it doesn't require getting into the training set.

      Someone still has to physically alter the speed sign, but it might stay altered longer if the mods truly were difficult for a human to see.

      • (Score: 0) by Anonymous Coward on Saturday November 30 2019, @02:00AM (#926231)

        Assuming it is visible to humans at all, it probably just looks like dirt smudges. I don't know about where you live, but nobody washes street signs around here, so something like that could easily go unnoticed.