Researchers from the California Institute of Technology in Pasadena have built smart glasses that translate video into sound:
The device, called vOICe (OIC stands for "Oh! I See"), is a pair of dark glasses with an attached camera, connected to a computer. It's based on an algorithm of the same name developed in 1992 by Dutch engineer Peter Meijer. The system converts pixels in the camera's video feed into sound, mapping vertical location to pitch and brightness to volume.
A cluster of dark pixels at the bottom of the frame sounds quiet and low-pitched, while a bright patch at the top sounds loud and high-pitched. The sound evolves over time as the image is scanned from left to right across the frame. Headphones send the processed sound into the wearer's ears.
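The mapping described above can be sketched in a few lines of code. This is a minimal reconstruction based only on the description here, not Meijer's actual vOICe implementation: vertical position becomes pitch, brightness becomes loudness, and horizontal position becomes time as the frame is scanned left to right. The function name and frequency range are illustrative assumptions.

```python
def image_to_sound(image, f_low=500.0, f_high=5000.0, scan_time=1.0):
    """Sketch of a vOICe-style mapping (hypothetical, not the real algorithm).

    image: list of rows (row 0 = top), pixel values 0.0 (dark) .. 1.0 (bright).
    Returns (time, frequency_hz, amplitude) triples, one per pixel,
    scanned column by column from left to right.
    """
    rows = len(image)
    cols = len(image[0])
    events = []
    for col in range(cols):                      # horizontal position -> time
        t = scan_time * col / cols
        for row in range(rows):                  # vertical position -> pitch
            # top row maps to the highest frequency, bottom row to the lowest
            frac = 1.0 - row / (rows - 1) if rows > 1 else 1.0
            freq = f_low + (f_high - f_low) * frac
            amp = image[row][col]                # brightness -> loudness
            events.append((t, freq, amp))
    return events

# A single bright dot in the top-left corner of a 3x3 frame:
frame = [
    [1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
]
events = image_to_sound(frame)
# The only audible event fires at t=0 (leftmost column) at the highest pitch.
audible = [e for e in events if e[2] > 0]
```

Under this mapping, a bright top-left dot produces exactly what the article describes: a loud, high-pitched tone at the very start of the scan.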
[...] Tested on the device, blind people with no experience of using it were able to match the shapes to the sounds as often as those who had been trained – both groups performed 33 per cent better than by chance. But when the encoding was reversed, so that a high part of the image became a low pitch and a bright part of the image became a quiet sound, volunteers found it harder to match image to sound.
Originally spotted on The Eponymous Pickle.
(Score: 3, Insightful) by wonkey_monkey on Wednesday November 04 2015, @04:58PM
Tested on the device, blind people with no experience of using it were able to match the shapes to the sounds as often as those who had been trained
So what you're saying is... the training was useless?
systemd is Roko's Basilisk
(Score: 2) by DeathMonkey on Wednesday November 04 2015, @06:26PM
So what you're saying is... the training was useless?
That would be the pessimist view, yes. The optimist would likely say it's so intuitive the training was unnecessary.