Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.
In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application to reveal patients' private medical history. As an example of one such attack, the team altered a driving robot's perception of a road sign from a speed limit to a "Stop" sign, which could cause the vehicle to dangerously slam on the brakes at highway speeds; in other examples, they altered Stop signs to be perceived as a variety of other traffic instructions.
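As a rough illustration of how this class of attack is usually constructed (the Princeton team's exact method is not detailed here), the sketch below uses the well-known fast gradient sign method (FGSM): nudge each pixel of an input image in the direction that increases the classifier's loss, so the perturbed image looks unchanged to a human but gets misread by the model. The pretrained ResNet-18, the perturbation budget `epsilon`, and the ImageNet class index are all illustrative assumptions, not details from the papers.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical stand-in for the sign-recognition model attacked in the papers
# (assumption -- the actual model used by the researchers is not specified here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` using FGSM.

    `image` is a (1, 3, H, W) float tensor in [0, 1]; `epsilon` bounds the
    per-pixel change (0.03 is an illustrative value, not from the papers).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    model.zero_grad()
    loss.backward()
    # Step each pixel in the direction that increases the loss, pushing
    # the classifier's prediction away from the true label.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Usage sketch: a random tensor stands in for a photographed road sign.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([919])  # ImageNet "street sign" index, for illustration only
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))  # prediction may now differ from the original
```

The key point the example makes concrete is that the perturbation is bounded (here by `epsilon`), so the altered sign can remain visually indistinguishable from the original while still flipping the model's output.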
(Score: 2) by legont on Friday November 29 2019, @11:33PM
My favorite argument, which nobody dares to take on so far, is how an AI driver should prioritize between a child in the car and a child running across the street in a life-and-death situation. Will it be disclosed by the manufacturer? Would AI owners (car owners) be able to adjust this setting? Will hacking it be legal?
Currently, the human behind the wheel makes this decision and accepts the consequences. For example, when I see a small "animal" running across the road I typically brake. When I have a child inside my car, I run the animal over.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.