posted by chromas on Tuesday September 25 2018, @07:43PM
from the musical-chairs dept.

In this article the authors introduce . . .

PixelPlayer, a system that, by watching large amounts of unlabeled videos, learns to locate image regions which produce sounds and separate the input sounds into a set of components that represents the sound from each pixel. Our approach capitalizes on the natural synchronization of the visual and audio modalities to learn models that jointly parse sounds and images, without requiring additional manual supervision.

The system is trained with a large number of videos containing people playing instruments in different combinations, including solos and duets. No supervision is provided on what instruments are present on each video, where they are located, or how they sound. During test time, the input to the system is a video showing people playing different instruments, and the mono auditory input. Our system performs audio-visual source separation and localization, splitting the input sound signal into N sound channels, each one corresponding to a different instrument category. In addition, the system can localize the sounds and assign a different audio wave to each pixel in the input video.
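
One standard way to realize that kind of split is time-frequency masking: a network looks at the video and, for each source, predicts a mask over the mixture's spectrogram. A minimal numpy/scipy sketch of the idea, where predict_mask stands in for a hypothetical learned model (not the paper's actual architecture):

    import numpy as np
    from scipy.signal import stft, istft

    def separate(mixture, video_features, predict_mask, sr=11025, nperseg=1022):
        # mono mixture -> complex spectrogram
        f, t, Z = stft(mixture, fs=sr, nperseg=nperseg)
        mag, phase = np.abs(Z), np.angle(Z)
        sources = []
        for feat in video_features:               # one visual feature per instrument/pixel
            mask = predict_mask(mag, feat)        # hypothetical network, values in [0, 1]
            Zk = mask * mag * np.exp(1j * phase)  # masked magnitude, mixture phase reused
            _, wav = istft(Zk, fs=sr, nperseg=nperseg)
            sources.append(wav)
        return sources                            # one waveform per source

Training a model like this without labels typically uses a mix-and-separate trick: sum the audio of two solo videos and ask the network to recover each original track, which provides a free supervisory signal.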

A video is included along with an explanation of several interesting demos, such as pointing at any pixel to hear the sound coming from it, or remixing the volume levels of the different musical instruments in the video.
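
Given per-instrument channels, the remix demo reduces to a weighted sum of the separated waveforms. A minimal sketch (the gains are made up, and separate is the hypothetical function sketched above):

    import numpy as np

    channels = separate(mixture, video_features, predict_mask)  # per-instrument waveforms
    gains = [1.0, 0.3]  # e.g. keep the first instrument, turn the second one down
    remix = np.sum([g * ch for g, ch in zip(gains, channels)], axis=0)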

The paper is linked along with the data set; the project page says the code is coming soon.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Tuesday September 25 2018, @08:44PM (#739860) (1 child)
    It seems to take only a flat 2D video and mono sound as input, and then split on frequencies. Wouldn't it be better to have 2 or 3 cameras and 2 or 3 microphones, then build a 3D spatial image of the places of movement, correlated to the sounds that change there, using techniques akin to interferometry? Then you could select the sounds, at all frequencies, that appear to be sourced from near a point in 3D space, and isolate/remove/whatever them.
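
    Something like delay-and-sum beamforming would handle the spatial part. A rough numpy sketch (the array geometry, sample rate, and speed of sound are placeholders):

        import numpy as np

        def delay_and_sum(mics, positions, focus, sr=48000, c=343.0):
            # mics:      (n_mics, n_samples) time-aligned recordings
            # positions: (n_mics, 3) microphone coordinates in metres
            # focus:     (3,) point in space to listen to
            delays = np.linalg.norm(positions - focus, axis=1) / c  # travel time to each mic
            shifts = np.round((delays - delays.min()) * sr).astype(int)
            out = np.zeros(mics.shape[1])
            for sig, s in zip(mics, shifts):
                out[:mics.shape[1] - s] += sig[s:]  # advance each channel so the focus point lines up
            return out / len(mics)  # coherent at the focus, averaged down elsewhere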
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by BsAtHome (889) on Tuesday September 25 2018, @09:07PM (#739875)

    That would be technically sound. However, the hype currently heard is neural networks; there is no money in simple engineering of waves. The buzz is in word-play that has to be split across a network of big computers, using more power to achieve less. That is where we hear the bells ring.

    Now, picture that!