2.5D Visual Sound

            Ruohan Gao*                      Kristen Grauman

The University of Texas at Austin     Facebook AI Research

[arXiv Preprint]


Binaural audio provides a listener with 3D sound sensation, allowing a rich perceptual experience of the scene. However, binaural recordings are scarcely available and require nontrivial expertise and equipment to obtain. We propose to convert common monaural audio into binaural audio by leveraging video. The key idea is that visual frames reveal significant spatial cues that, while explicitly lacking in the accompanying single-channel audio, are strongly linked to it. Our multi-modal approach recovers this link from unlabeled video. We devise a deep convolutional neural network that learns to decode the monaural (single-channel) soundtrack into its binaural counterpart by injecting visual information about object and scene configurations. We call the resulting output 2.5D visual sound---the visual stream helps "lift" the flat single channel audio into spatialized sound. In addition to sound generation, we show the self-supervised representation learned by our network benefits audio-visual source separation.
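The core signal-level relationship behind this conversion can be illustrated with a small sketch. The network architecture itself is not reproduced here; we only assume (as a simplification of the approach described above) that a model conditioned on visual features has predicted the *difference* signal D = L - R for a mono mix M = L + R, and show how the two binaural channels are then recovered. The function name `recover_binaural` and the toy signals are illustrative, not from the paper.

```python
import numpy as np

def recover_binaural(mono, predicted_diff):
    """Given mono = L + R and a predicted difference D = L - R, return (L, R).

    L = (M + D) / 2 and R = (M - D) / 2 follow directly from the two
    defining equations; the hard part (predicting D from mono audio plus
    visual frames) is what the learned network does.
    """
    left = (mono + predicted_diff) / 2.0
    right = (mono - predicted_diff) / 2.0
    return left, right

# Toy check with a known stereo pair (hypothetical values).
left_true = np.array([0.2, -0.1, 0.4])
right_true = np.array([0.1, 0.3, -0.2])
mono = left_true + right_true   # what a single-channel recording would give
diff = left_true - right_true   # what the network is trained to predict

left, right = recover_binaural(mono, diff)
print(np.allclose(left, left_true), np.allclose(right, right_true))
```

With a perfectly predicted difference signal, the original left and right channels are recovered exactly; in practice, prediction error in D is what limits the quality of the spatialized output.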

*Work done during an internship at Facebook AI Research.
†On leave from The University of Texas at Austin.

Qualitative Video

In the qualitative video, we show (a) examples of our professionally recorded binaural audio, (b) example results of binaural audio prediction, and (c) example results of audio-visual source separation. Please wear headphones or earphones (covering both ears) when watching the video.


R. Gao and K. Grauman. "2.5D Visual Sound". arXiv preprint arXiv:1812.04204, 2018. [bibtex]

@article{gao2018visualsound,
  title = {2.5D-Visual-Sound},
  author = {Gao, Ruohan and Grauman, Kristen},
  journal = {arXiv preprint arXiv:1812.04204},
  year = {2018}
}


We would like to thank Tony Miller, Jacob Donley, Pablo Hoffmann, Vladimir Tourbabin, Vamsi Ithapu, Varun Nair, Abesh Thakur, Jaime Morales and Chetan Gupta from Facebook for helpful discussions.