posted by Fnord666 on Friday November 29 2019, @01:55AM   Printer-friendly
from the fighting-back dept.

Machines' ability to learn by processing data gleaned from sensors underlies automated vehicles, medical devices and a host of other emerging technologies. But that learning ability leaves systems vulnerable to hackers in unexpected ways, researchers at Princeton University have found.

In a series of recent papers, a research team has explored how adversarial tactics applied to artificial intelligence (AI) could, for instance, trick a traffic-efficiency system into causing gridlock or manipulate a health-related AI application to reveal patients' private medical histories. In one such attack, the team altered a driving robot's perception of a road sign from a speed limit sign to a "Stop" sign, which could cause a vehicle to slam on the brakes dangerously at highway speed; in other examples, they altered Stop signs so they were perceived as a variety of other traffic instructions.
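The sign-misclassification attack described above is typically built with gradient-based perturbation methods such as the fast gradient sign method (FGSM). Below is a minimal sketch of FGSM against a toy logistic classifier; the classifier, its weights, and the inputs are illustrative assumptions, not the Princeton team's actual models or data.

```python
# Sketch of the fast gradient sign method (FGSM): nudge each input
# feature a small step eps in the direction that increases the loss,
# degrading the classifier's confidence in the true label.
# All weights and inputs here are made up for illustration.
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x away from label y for a logistic classifier.

    The gradient of the logistic loss with respect to the input is
    (sigmoid(w.x + b) - y) * w; FGSM steps eps in its sign direction.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted probability of class 1
    grad = (p - y) * w                       # d(loss)/d(input)
    return x + eps * np.sign(grad)

# Toy stand-in for a sign classifier (hypothetical weights).
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.4, 0.2, -0.3])               # clean input with true label 1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)
print(predict(x))      # confidence on the clean input
print(predict(x_adv))  # confidence drops after a bounded perturbation
```

Each feature moves by at most eps, so the perturbed input stays close to the original; on real image classifiers the same idea produces changes small enough that a human still reads the sign correctly while the model does not.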

Original Submission

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday November 29 2019, @06:29PM (#926085) (1 child)

    > no AI should be allowed on the streets until all the AI's are better than the best human.

    Interesting. I set the standard a little lower, wanting the AI to be as good as my demographic (which is very good -- I don't drive impaired, no smartphone distraction, manual transmission, etc.).

    The highways are remarkably safe if you are a decent and alert driver. Given that, it's easy to be lulled into a false sense of security. I try not to kid myself just because the stats are on my side. I could make a mistake, or get taken out by someone else's mistake.

  • (Score: 2) by legont (4179) on Friday November 29 2019, @11:33PM (#926179)

    My favorite argument, which nobody has dared to take on so far, is how an AI driver should prioritize between a child in the car and a child running across the street in a life-and-death situation. Will the choice be disclosed by the manufacturer? Will AI owners (car owners) be able to adjust this setting? Will hacking it be legal?
    Currently the human behind the wheel makes this decision and accepts the consequences. For example, when I see a small "animal" running across the road I typically brake. When I have a child inside my car, I run the animal over.

    "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.