Researchers from the California Institute of Technology in Pasadena have built smart glasses [newscientist.com] that translate video into sound [newscientist.com]:
The device, called vOICe (OIC stands for "Oh! I See"), is a pair of dark glasses with an attached camera, connected to a computer. It's based on an algorithm of the same name [newscientist.com] developed in 1992 by Dutch engineer Peter Meijer. The system converts pixels in the camera's video feed into sound, mapping brightness and vertical location to an associated pitch and volume.
A cluster of dark pixels at the bottom of the frame sounds quiet and low-pitched, while a bright patch at the top sounds loud and high-pitched. How the sound changes over time is governed by how the image looks when scanned left to right across the frame. Headphones send the processed sound into the wearer's ears.
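The mapping described above can be sketched in a few lines: scan the image column by column (left to right = time), give each row its own pitch (top = high), and let brightness set the loudness. The frequency range, scan duration, and sample rate below are illustrative assumptions, not the actual vOICe parameters.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    """Sketch of a vOICe-style encoding: columns become time slices,
    row position maps to pitch (top = high), and pixel brightness
    (0-255) maps to amplitude. Parameters are illustrative only."""
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    # One exponentially spaced frequency per row; row 0 (the top)
    # gets the highest pitch.
    freqs = f_min * (f_max / f_min) ** np.linspace(1, 0, rows)
    audio = []
    for c in range(cols):
        col = image[:, c] / 255.0  # brightness -> amplitude
        # Sum one sine oscillator per row for this time slice.
        slice_ = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        audio.append(slice_)
    return np.concatenate(audio)

# A single bright pixel at the top-right corner: the scan stays silent
# until the end, then emits a loud, high-pitched tone.
img = np.zeros((8, 8))
img[0, 7] = 255
wave = image_to_sound(img)
```

With reversed encoding (the condition volunteers found harder), one would simply flip the `linspace` so the top of the image maps to the lowest pitch, or invert the brightness-to-amplitude mapping.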
[...] Blind people with no experience of using the device were able to match shapes to sounds as often as those who had been trained – both groups performed 33 per cent better than chance. But when the encoding was reversed, so that a high part of the image became a low pitch and a bright part of the image became a quiet sound, volunteers found it harder to match image to sound.
Originally spotted on The Eponymous Pickle [blogspot.com].