
posted by Fnord666 on Friday November 29 2019, @01:55AM   Printer-friendly
from the fighting-back dept.

Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.

In a series of recent papers, the research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application into revealing patients' private medical histories. In one example of such an attack, the team altered a driving robot's perception of a road sign so that a speed-limit sign was read as a "Stop" sign, which could cause the vehicle to slam on the brakes at highway speed; in other examples, they altered stop signs so they were perceived as a variety of other traffic instructions.
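The summary does not include the team's code, but attacks of this kind fall in the general family of adversarial examples, where a small, carefully chosen perturbation to an input flips a classifier's prediction. As a rough illustration only, here is a minimal sketch of a generic gradient-sign (FGSM-style) attack against a toy image classifier in PyTorch; the model, the random "road sign" image, and the class label below are placeholders, not the Princeton researchers' method or data.

    # Minimal sketch of a generic adversarial-example attack (FGSM).
    # Placeholder model and data; NOT the method from the papers discussed above.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.03):
        """Return a perturbed copy of `image` that pushes the model away from `true_label`."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step in the direction that increases the loss, clamped to the valid pixel range.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

    if __name__ == "__main__":
        # Toy stand-in for a sign-recognition network: flatten + linear layer, 10 classes.
        model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
        image = torch.rand(1, 3, 32, 32)   # placeholder "road sign" image
        label = torch.tensor([3])          # placeholder "speed limit" class index
        adv = fgsm_attack(model, image, label)
        print(model(image).argmax(1), model(adv).argmax(1))  # prediction may differ after the attack

The epsilon parameter trades off how strong the perturbation is against how visible it is; physical-world sign attacks generally have to constrain the change to something printable, such as stickers or patches, rather than arbitrary pixel noise.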


Original Submission

 
  • (Score: 2) by maxwell demon (1608) on Saturday November 30 2019, @07:05AM (#926310) Journal

    I'm not aware that having a driving license has been declared a human right.

    Indeed, it is possible to lose your driving license because of repeated traffic violations. I don't think anyone has ever claimed that to be a human rights violation.

    --
    The Tao of math: The numbers you can count are not the real numbers.