
posted by martyb on Saturday November 04 2017, @02:37PM
from the terrapins-tortoises-and-turtles...-oh-my! dept.

MIT researchers have fooled a Google image classification algorithm into thinking that a turtle is a rifle and a baseball is an espresso:

The team built on a concept known as an "adversarial image". That's a picture created from the ground up to fool an AI into classifying it as something completely different from what it shows: for instance, a picture of a tabby cat recognised with 99% certainty as a bowl of guacamole.

Such tricks work by carefully adding visual noise to the image so that the bundle of signifiers an AI uses to recognise its contents gets confused, while a human doesn't notice any difference.
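
For the curious, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way of crafting such noise when the attacker can read the model's gradients. It assumes a PyTorch classifier and is purely illustrative; the article does not say which method was used here.

import torch
import torch.nn.functional as F

def fgsm(model, image, true_label, eps=0.01):
    # image: (1, C, H, W) tensor scaled to [0, 1]; true_label: (1,) tensor.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel one tiny step in whichever direction increases
    # the loss; a small eps keeps the change invisible to humans.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

A targeted variant steps toward a chosen label (guacamole, say) rather than away from the true one.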

But while there's a lot of theoretical work demonstrating the attacks are possible, physical demonstrations of the same technique are thin on the ground. Often, simply rotating the image, messing with the colour balance, or cropping it slightly can be enough to ruin the trick.

The MIT researchers have pushed the idea further than ever before, by manipulating not a simple 2D image, but the surface texture of a 3D-printed turtle. The resulting shell pattern looks trippy, but still completely recognisable as a turtle – unless you are Google's public object detection AI, in which case you are 90% certain it's a rifle.
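
A common way to get this kind of physical robustness is to optimise the perturbation against many randomly transformed views at once, so that no single rotation, crop, or lighting change breaks the attack. The sketch below assumes a PyTorch model and is illustrative only; random_transform is a hypothetical stand-in for rendering the object at a random pose, and target would hold the class to impersonate ("rifle", say).

import torch
import torch.nn.functional as F

def robust_adversary(model, image, target, random_transform,
                     steps=500, lr=0.01, eps=0.05, views=8):
    # image: (1, C, H, W) in [0, 1]; target: (1,) tensor holding the
    # class to impersonate. delta is the perturbation being learned.
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Average the loss over a batch of random views so the attack
        # cannot rely on one exact angle or colour balance.
        batch = torch.cat([random_transform(image + delta)
                           for _ in range(views)])
        loss = F.cross_entropy(model(batch), target.expand(views))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually subtle
    return (image + delta).clamp(0, 1).detach()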

The researchers also 3D printed a baseball with patterning to make it appear to the AI like an espresso, with marginally less success – the AI was able to tell it was a baseball occasionally, though still wrongly suggested espresso most of the time.

The researchers had access to the algorithm itself – a "white-box" attack – making the task significantly easier than fooling a model they could only query.

Also at The Verge.


Original Submission

Related Stories

Hackers Can Trick a Tesla into Accelerating by 50 Miles Per Hour

Hackers can trick a Tesla into accelerating by 50 miles per hour:

This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.

Mobileye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla's automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee's Advanced Threat Research team.

The researchers stuck a tiny and nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35, and in testing, both the 2016 Tesla Model X and that year's Model S sped up by 50 miles per hour.

This is the latest in an increasing mountain of research showing how machine-learning systems can be attacked and fooled in life-threatening situations.

[...] Tesla has since moved to proprietary cameras on newer models, and Mobileye has released several new versions of its cameras that in preliminary testing were not susceptible to this exact attack.

There are still a sizable number of Tesla cars operating with the vulnerable hardware, Povolny said. He pointed out that Teslas with the first version of hardware cannot be upgraded to newer hardware.

"What we're trying to do is we're really trying to raise awareness for both consumers and vendors of the types of flaws that are possible," Povolny said "We are not trying to spread fear and say that if you drive this car, it will accelerate into through a barrier, or to sensationalize it."

So, it seems this is not so much that a particular adversarial attack was successful (and fixed) as that it was but one instance of a potentially huge set. Obligatory xkcd.


Original Submission

  • (Score: 2) by opinionated_science on Saturday November 04 2017, @03:03PM (3 children)

    by opinionated_science (4031) on Saturday November 04 2017, @03:03PM (#592181)

    In principle, this is obvious - a 3D printer can reproduce, up to its resolution limit, (almost!) exactly what a model describes.

    Image recognition has the major problem that, unlike wetware (us), it lacks the cognitive processor that allows us to weigh many things we "know" against a pattern we are presented with.

    Evolutionarily speaking, the ancestor that did not get "tiger" from the noise in the bushes probably didn't have offspring...

    The pattern recognition of humans is *so* good that often things are seen that aren't there...

    Seen anyone in the toast recently?

    • (Score: 2) by Tara Li on Saturday November 04 2017, @03:14PM

      by Tara Li (6248) on Saturday November 04 2017, @03:14PM (#592186)

      From what I was seeing in the article, it's not just a static photo - the object is printed and colored, and is erroneously recognized from most if not all angles. At least the turtle is; apparently the baseball doesn't work quite as well.

    • (Score: 2) by FatPhil on Sunday November 05 2017, @02:21AM (1 child)

      by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Sunday November 05 2017, @02:21AM (#592353) Homepage
      No, not a static photo.

      You have the right to read the article and view the video before you post again. Choose a more appropriate subject line next time.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by opinionated_science on Monday November 06 2017, @03:09AM

        by opinionated_science (4031) on Monday November 06 2017, @03:09AM (#592787)

        my point was just that video is a load of static frames. They could all be faked and made perfect.

        Sorry for the ambiguity, working on this for a project right now...

  • (Score: 2) by Tara Li on Saturday November 04 2017, @03:05PM

    by Tara Li (6248) on Saturday November 04 2017, @03:05PM (#592183)

    Yeah, yeah, I know it's SciFi - but really, patterns that will make an AI see something that obviously looks like one thing as something majorly different - well, I would have thought that was SciFi.

    http://ansible.uk/sfx/sfx255.html [ansible.uk] - or - http://www.lightspeedmagazine.com/fiction/different-kinds-of-darkness/ [lightspeedmagazine.com]

    Just don't think about Roko's Basilisk.

  • (Score: 1, Interesting) by Anonymous Coward on Saturday November 04 2017, @03:19PM (4 children)

    by Anonymous Coward on Saturday November 04 2017, @03:19PM (#592188)

    So what does a self-driving car see during a ticker-tape parade? Or with various kinds of visual noise added to a scene -- wind blown leaves, confetti, etc? Do these cars "hallucinate" in whiteouts (blowing snow)?

    • (Score: 2) by takyon on Saturday November 04 2017, @03:29PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday November 04 2017, @03:29PM (#592193) Journal
      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0, Flamebait) by Ethanol-fueled on Saturday November 04 2017, @05:59PM (1 child)

      by Ethanol-fueled (2792) on Saturday November 04 2017, @05:59PM (#592236) Homepage

      Self-driving cars are shit and people who suggest they be used are also shit. SHIT!

      • (Score: 1, Informative) by Anonymous Coward on Saturday November 04 2017, @06:36PM

        by Anonymous Coward on Saturday November 04 2017, @06:36PM (#592256)

        Sounds like something didn't agree with you last night, Eth, hope the diarrhea clears up soon!

    • (Score: 0) by Anonymous Coward on Sunday November 05 2017, @02:43AM

      by Anonymous Coward on Sunday November 05 2017, @02:43AM (#592360)

      I don't believe that ticker tape parades are a thing any more. Years ago most areas decided that the cost of clean up wasn't worth it. Even when local teams have won championships, there was nothing like ticker tape being thrown.

      But, more likely, it would be handled the way that cars handle snow or significant rain. Use other sensors and drive slower to make up for the poor conditions. If it's really bad, make the human drive.

  • (Score: 2) by maxwell demon on Saturday November 04 2017, @03:23PM

    by maxwell demon (1608) on Saturday November 04 2017, @03:23PM (#592191) Journal

    It's rifles all the way down.

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: -1, Troll) by Anonymous Coward on Saturday November 04 2017, @05:45PM (2 children)

    by Anonymous Coward on Saturday November 04 2017, @05:45PM (#592225)

    I don't know what a turtle looks like, but the photo looks like photos I've seen of rifles. This one looks like it's plastic.

  • (Score: 2) by darkfeline on Tuesday November 07 2017, @05:47AM

    by darkfeline (1030) on Tuesday November 07 2017, @05:47AM (#593508) Homepage

    No shit? If you design something specifically to deceive humans, humans will be deceived by it. Consider camouflage and optical illusions for example.

    I believe at this point that AI is roughly within an order of magnitude of human intelligence. What people don't consider is that AI intelligence is fundamentally different from human intelligence. Just as humans are really good at some things and really bad at others, so too an AI is really good at some things and bad at other things.

    A human might look at toast and see Jesus, an AI sees only toast. A human might look at a Go board and see only losing moves, an AI sees a winning move.

    Humans make mistakes visually identifying things all the time; it's just that the mistakes an AI makes are so different from the ones we would make, and we are stupid and unable to comprehend non-human ways of thinking.

    This is why all of our aliens are so human-like. We wouldn't recognize alien intelligence if it hit us on the head.

    --
    Join the SDF Public Access UNIX System today!