Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.
In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application to reveal patients' private medical histories. As an example of one such attack, the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign, which could cause the vehicle to slam on its brakes dangerously at highway speeds; in other examples, they altered Stop signs so they were perceived as a variety of other traffic instructions.
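The core trick behind this class of attack can be sketched in a few lines. The toy below is not the researchers' method, just a minimal illustration of the idea with a made-up linear classifier: because the model's decision is differentiable, an attacker can compute which direction of input change most increases the wrong class's score and nudge every "pixel" a small amount that way (the fast-gradient-sign idea). The weights, inputs, and class labels here are all invented for demonstration.

```python
import numpy as np

# Toy linear "sign classifier": class 0 = "Speed Limit", class 1 = "Stop".
# Weights and inputs are made up for illustration, not from a real model.
W = np.array([[ 1.0, -0.5,  0.3,  0.8],
              [-0.2,  0.7, -0.4,  0.1]])
b = np.array([0.0, 0.0])

def predict(x):
    """Return the index of the highest-scoring class."""
    return int(np.argmax(W @ x + b))

# A clean four-"pixel" input the toy model reads as "Speed Limit".
x = np.array([1.0, 0.2, 0.5, 0.6])
assert predict(x) == 0

# Fast-gradient-sign step: for a linear model, the gradient of
# (stop_logit - speed_logit) with respect to the input is just W[1] - W[0].
grad = W[1] - W[0]
eps = 0.8                          # per-pixel budget: small, smudge-like changes
x_adv = x + eps * np.sign(grad)    # nudge each pixel toward the "Stop" class

print(predict(x), predict(x_adv))  # the perturbed input is now read as "Stop"
```

In a real attack the perturbation must also survive printing, viewing angle, and lighting, which is much harder than this four-number toy; but the principle is the same, and the per-pixel budget `eps` is why such changes can look like ordinary grime to a human.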
(Score: 0) by Anonymous Coward on Friday November 29 2019, @06:23PM (1 child)
Thanks for the more complete explanation. It still sounds hard, even if it doesn't require getting into the training set.
Someone still has to physically alter the speed sign, but it might stay altered longer if the mods truly were difficult for a human to see.
(Score: 0) by Anonymous Coward on Saturday November 30 2019, @02:00AM
Assuming it is visible to humans at all, it probably just looks like dirt smudges. I don't know about where you live, but nobody washes street signs around here, so something like that could easily go unnoticed.