MIT researchers [labsix.org] have fooled a Google image classification algorithm into thinking that a turtle is a rifle and a baseball is an espresso [theguardian.com]:
The team built on a concept known as an "adversarial image". That's a picture created from the ground up to fool an AI into classifying it as something completely different from what it shows: for instance, a picture of a tabby cat recognised with 99% certainty as a bowl of guacamole.
Such tricks work by carefully adding visual noise to the image so that the bundle of signifiers an AI uses to recognise its contents gets confused, while a human doesn't notice any difference.
But while there's a lot of theoretical work demonstrating the attacks are possible, physical demonstrations of the same technique are thin on the ground. Often, simply rotating the image, messing with the colour balance, or cropping it slightly can be enough to ruin the trick.
The MIT researchers have pushed the idea further than ever before, by manipulating not a simple 2D image, but the surface texture of a 3D-printed turtle. The resulting shell pattern looks trippy, but still completely recognisable as a turtle – unless you are Google's public object detection AI, in which case you are 90% certain it's a rifle.
The researchers also 3D printed a baseball with patterning to make it appear to the AI like an espresso, with marginally less success – the AI was able to tell it was a baseball occasionally, though still wrongly suggested espresso most of the time.
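For the curious, the "visual noise" in question is not random: it is computed from the classifier's own gradients. Below is a minimal sketch of the simplest such attack, the Fast Gradient Sign Method (FGSM) of Goodfellow et al., assuming PyTorch and torchvision; the model choice and step size are illustrative, and this is not necessarily the exact method the researchers used:

```python
# Minimal sketch of a gradient-based adversarial perturbation (FGSM).
# Assumes PyTorch/torchvision; image is a [1, 3, 299, 299] tensor in [0, 1]
# and true_label is a LongTensor such as torch.tensor([35]) (a turtle class).
import torch
import torchvision.models as models

model = models.inception_v3(weights="DEFAULT").eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # The perturbation is imperceptibly small to a human, but systematically
    # misleading to the classifier.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```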
The researchers had white-box access to the algorithm [googleblog.com], which makes the task significantly easier than attacking a black-box system.
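That access matters because gradient-based attacks need the model's internals. To survive the real world, the technique labsix published, Expectation Over Transformation (EOT), optimises the texture against an average of many randomly transformed views rather than a single image. A rough sketch under the same PyTorch assumption, with `random_transform` standing in as a hypothetical placeholder for the real simulated rendering of pose, lighting, and camera noise:

```python
# Rough sketch of Expectation Over Transformation (EOT): optimise an image
# so the classifier sees the *target* class across many random viewing
# conditions. `random_transform` is hypothetical; a real implementation
# would apply random rotation, scale, translation, and lighting changes.
import torch

def random_transform(x):
    # Stand-in for simulated rendering: mild random noise only.
    return (x + 0.02 * torch.randn_like(x)).clamp(0, 1)

def eot_attack(model, image, target_label, steps=200, lr=0.01, samples=8):
    adv = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        # Average the target-class loss over several random transforms, so
        # the perturbation survives rotation, cropping, colour shifts, etc.
        loss = sum(
            torch.nn.functional.cross_entropy(
                model(random_transform(adv)), target_label)
            for _ in range(samples)) / samples
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            adv.clamp_(0, 1)  # keep pixel values valid
    return adv.detach()
```

Minimising the averaged loss toward the target label is what lets a single printed texture keep fooling the classifier from many angles, rather than only in one carefully staged photograph.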
Also at The Verge [theverge.com].