Audio-Visual Embodied Navigation

Changan Chen*1            Unnat Jain*24†            Carl Schissler3            Sebastià V. Amengual Garí3           
Ziad Al-Halah1            Vamsi Krishna Ithapu3            Philip Robinson3            Kristen Grauman1,4

1UT Austin, 2UIUC, 3Facebook Reality Labs, 4Facebook AI Research

* indicates equal contribution, order chosen by coin flip
† work done as an intern at Facebook AI Research



Moving around in the world is naturally a multisensory experience, but today's embodied agents are deaf, restricted solely to visual perception of the environment. We introduce audio-visual navigation for complex, acoustically and visually realistic 3D environments. By both seeing and hearing, the agent must learn to navigate to an audio-based target. We develop a multi-modal deep reinforcement learning pipeline to train navigation policies end-to-end from a stream of egocentric audio-visual observations, allowing the agent to (1) discover elements of the geometry of the physical space indicated by the reverberating audio and (2) detect and follow sound-emitting targets. We further introduce audio renderings based on geometrical acoustic simulations for a set of publicly available 3D assets and instrument AI-Habitat to support the new sensor, making it possible to insert arbitrary sound sources in an array of apartment, office, and hotel environments. Our results show that audio greatly benefits embodied visual navigation in 3D spaces.
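To make the multi-modal idea concrete, here is a minimal sketch of how an agent might fuse egocentric visual and binaural audio observations into action logits. This is not the authors' implementation: the toy encoders, feature dimensions, and the four-action space (forward, turn-left, turn-right, stop) are placeholder assumptions; a real pipeline would use learned CNN encoders and a recurrent policy trained with reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_visual(rgb):
    """Toy visual encoder: mean-pool the RGB frame into a 3-dim feature.
    (A real agent would use a trained CNN; this is only a stand-in.)"""
    return rgb.reshape(-1, 3).mean(axis=0)          # shape (3,)

def encode_audio(spectrogram):
    """Toy audio encoder: per-ear energy of a binaural spectrogram.
    The left/right energy difference carries a coarse direction cue."""
    return spectrogram.mean(axis=(1, 2))            # shape (2,)

def policy_logits(rgb, spectrogram, W, b):
    """Fuse the two modalities by concatenation, then apply a linear
    head to produce logits over a hypothetical 4-action space."""
    fused = np.concatenate([encode_visual(rgb), encode_audio(spectrogram)])
    return W @ fused + b                            # shape (4,)

# Placeholder observation shapes (assumed, not from the paper):
rgb = rng.random((128, 128, 3))        # egocentric RGB frame
spec = rng.random((2, 65, 26))         # binaural spectrogram, 2 channels
W = rng.standard_normal((4, 5)) * 0.1  # 4 actions x 5 fused features
b = np.zeros(4)

logits = policy_logits(rgb, spec, W, b)
print(logits.shape)  # (4,)
```

The key design point the sketch illustrates is late fusion: each modality is encoded separately, and the concatenated features feed a shared policy head, so the agent can weigh geometric cues from vision against directional cues from the reverberating audio.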

Spotlight Introduction

In this two-minute spotlight video, we briefly walk you through our audio-visual navigation project, highlighting our motivation and contributions.

Further Explanation

In this fourteen-minute video, we show additional demos of the acoustic simulation as well as the agents' navigation performance.