

The Sound of Pixels

Accepted submission by DannyB at 2018-09-25 14:12:56 from the musical chairs dept.
News

In this article [mit.edu], the authors introduce…

PixelPlayer, a system that, by watching large amounts of unlabeled videos, learns to locate image regions which produce sounds and separate the input sounds into a set of components that represents the sound from each pixel. Our approach capitalizes on the natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images, without requiring additional manual supervision.

The system is trained with a large number of videos containing people playing instruments in different combinations, including solos and duets. No supervision is provided on what instruments are present on each video, where they are located, or how they sound. During test time, the input to the system is a video showing people playing different instruments, and the mono auditory input. Our system performs audio-visual source separation and localization, splitting the input sound signal into N sound channels, each one corresponding to a different instrument category. In addition, the system can localize the sounds and assign a different audio wave to each pixel in the input video.
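The core separation step described above — splitting one mixed signal into N per-instrument channels — is typically done by predicting a mask over the mixture's spectrogram for each source. As a rough illustration only (the function and the toy data below are hypothetical, not the authors' code, which has not been released yet), here is a minimal numpy sketch of that idea:

```python
import numpy as np

def separate_with_masks(mix_spec, masks):
    """Toy sketch of mask-based source separation.

    mix_spec: mixture magnitude spectrogram, shape (F, T).
    masks:    N per-source masks, shape (N, F, T); in PixelPlayer these
              would be predicted by the network, here they are given.
    Returns N separated spectrograms, shape (N, F, T).
    """
    masks = np.asarray(masks, dtype=float)
    # Normalize so the masks sum to 1 at every time-frequency bin,
    # so the separated channels add back up to the mixture.
    masks = masks / np.clip(masks.sum(axis=0, keepdims=True), 1e-8, None)
    return masks * mix_spec  # broadcasting over the N axis

# Toy example: two "instruments" occupying disjoint frequency bands.
F, T = 4, 3
src_a = np.zeros((F, T)); src_a[:2] = 1.0   # low-band source
src_b = np.zeros((F, T)); src_b[2:] = 2.0   # high-band source
mix = src_a + src_b

masks = np.stack([src_a > 0, src_b > 0]).astype(float)
sep = separate_with_masks(mix, masks)
assert np.allclose(sep[0], src_a) and np.allclose(sep[1], src_b)
assert np.allclose(sep.sum(axis=0), mix)  # channels reconstruct the mix
```

The "sound per pixel" demo then amounts to weighting these N channels by how strongly each pixel's visual features match each source.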

A video [youtube.com] is included along with an explanation of several interesting demos, such as pointing at any pixel to hear the sound coming from that pixel, or remixing the volume levels of the different musical instruments in the video.

The paper [arxiv.org] and the data set [github.com] are also linked; the authors say the code is coming soon.
