Novel-View Acoustic Synthesis

Changan Chen1,3, Alexander Richard2, Roman Shapovalov3, Vamsi Krishna Ithapu2, Natalia Neverova3, Kristen Grauman1,3, Andrea Vedaldi3
1University of Texas at Austin, 2Reality Labs Research at Meta, 3FAIR, Meta AI

arXiv 2023


We introduce the novel-view acoustic synthesis (NVAS) task: given the sight and sound observed at a source viewpoint, can we synthesize the sound of that scene from an unseen target viewpoint? We propose a neural rendering approach, the Visually-Guided Acoustic Synthesis (ViGAS) network, which learns to synthesize the sound at an arbitrary point in space by analyzing the input audio-visual cues. To benchmark this task, we collect two first-of-their-kind large-scale multi-view audio-visual datasets, one synthetic and one real. We show that our model successfully reasons about the spatial cues and synthesizes faithful audio on both datasets. To our knowledge, this work represents the first formulation, dataset, and approach for the novel-view acoustic synthesis task, which has exciting potential applications ranging from AR/VR to art and design. We believe this work unlocks a future in which novel-view synthesis is driven by multi-modal learning from videos.
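To make the task interface concrete, here is a minimal, hypothetical sketch of an NVAS-style model in PyTorch. It is not the authors' ViGAS architecture; the module name, feature dimensions, and conditioning scheme are all illustrative assumptions. It only shows the input/output contract the abstract describes: source-viewpoint audio plus visual and pose cues in, target-viewpoint audio out.

```python
import torch
import torch.nn as nn


class NVASSketch(nn.Module):
    """Illustrative NVAS interface (NOT the authors' ViGAS model):
    map source-view audio + visual/pose cues to target-view audio."""

    def __init__(self, audio_channels=2, visual_dim=512, pose_dim=7, hidden=128):
        super().__init__()
        # Encode the source-viewpoint waveform with a small 1-D conv stack.
        self.audio_enc = nn.Sequential(
            nn.Conv1d(audio_channels, hidden, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=15, padding=7),
            nn.ReLU(),
        )
        # Fuse a precomputed visual feature with the relative source-to-target pose.
        self.cond = nn.Sequential(
            nn.Linear(visual_dim + pose_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Decode back to a waveform of the same length for the target viewpoint.
        self.audio_dec = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(hidden, audio_channels, kernel_size=15, padding=7),
        )

    def forward(self, src_audio, visual_feat, rel_pose):
        # src_audio:   (B, C, T) waveform recorded at the source viewpoint
        # visual_feat: (B, visual_dim) feature of the source-view image
        # rel_pose:    (B, pose_dim) relative target pose (e.g. translation + rotation)
        h = self.audio_enc(src_audio)                          # (B, hidden, T)
        c = self.cond(torch.cat([visual_feat, rel_pose], 1))   # (B, hidden)
        h = h + c.unsqueeze(-1)                                # broadcast conditioning over time
        return self.audio_dec(h)                               # (B, C, T) target-view waveform


if __name__ == "__main__":
    model = NVASSketch()
    src_audio = torch.randn(1, 2, 16000)   # 1 s of binaural audio at 16 kHz
    visual_feat = torch.randn(1, 512)      # e.g. pooled CNN feature of the source view
    rel_pose = torch.randn(1, 7)           # e.g. 3-D translation + quaternion
    out = model(src_audio, visual_feat, rel_pose)
    print(out.shape)                       # torch.Size([1, 2, 16000])
```

The sketch conditions a time-domain audio encoder-decoder on visual and pose features; the actual ViGAS design and training losses are described in the paper.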


Supplementary Video

Audio-visual examples of novel-view acoustic synthesis on both synthetic data and real videos.


Acknowledgements

UT Austin is supported in part by DARPA Lifelong Learning Machines.
