Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction

Changan Chen1,2, Jordi Ramos1, Anshul Tomar1, Kristen Grauman1,2
1The University of Texas at Austin, 2FAIR, Meta AI

Abstract

Sim2real transfer has received increasing attention lately due to the success of learning robotic tasks in simulation end-to-end. While there has been much progress in transferring vision-based navigation policies, the existing sim2real strategy for audio-visual navigation performs data augmentation empirically without measuring the acoustic gap. Sound differs from light in that it spans a much wider range of frequencies and thus requires a different solution for sim2real. We propose the first treatment of sim2real for audio-visual navigation by disentangling it into acoustic field prediction (AFP) and waypoint navigation. We first validate our design choice in the SoundSpaces simulator and show improvement on the Continuous AudioGoal navigation benchmark. We then collect real-world data to measure the spectral difference between the simulation and the real world by training AFP models that take only a specific frequency subband as input. We further propose a frequency-adaptive strategy that intelligently selects the best frequency band for prediction based on both the measured spectral difference and the energy distribution of the received audio, which improves performance on the real data. Lastly, we build a real robot platform and show that the transferred policy can successfully navigate to sounding objects. This work demonstrates the potential of building intelligent agents that can see, hear, and act entirely from simulation, and then transferring them to the real world.
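To make the frequency-adaptive selection concrete, below is a minimal Python sketch of one way such a strategy could work: score each subband by how much of the received audio's energy it carries, penalized by the measured sim2real spectral gap for that band, and route the audio to the AFP model trained on the winning band. The subband edges, the gap values, the scoring rule, and all function names here are illustrative assumptions for exposition, not the paper's exact implementation.

import numpy as np

# Assumed subband edges (Hz); the paper's actual band split may differ.
BAND_EDGES_HZ = [(0, 1000), (1000, 4000), (4000, 16000)]

def band_energies(spectrogram, freqs):
    """Fraction of the received audio's energy falling in each subband.

    spectrogram: magnitude spectrogram, shape (n_freqs, n_frames)
    freqs: center frequency (Hz) of each spectrogram row, shape (n_freqs,)
    """
    energies = []
    for lo, hi in BAND_EDGES_HZ:
        mask = (freqs >= lo) & (freqs < hi)
        energies.append(spectrogram[mask].sum())
    energies = np.asarray(energies, dtype=np.float64)
    return energies / max(energies.sum(), 1e-8)

def select_band(spectrogram, freqs, sim2real_gap, gap_weight=1.0):
    """Pick the subband with high received energy and a low measured sim2real gap.

    sim2real_gap: per-band spectral difference measured offline on real data,
    one value per entry in BAND_EDGES_HZ (values here are placeholders).
    """
    energy = band_energies(spectrogram, freqs)
    score = energy - gap_weight * np.asarray(sim2real_gap, dtype=np.float64)
    return int(np.argmax(score))

# Hypothetical usage: feed the audio, restricted to the chosen subband,
# to the AFP model that was trained on that same subband.
# band = select_band(mag_spec, freq_bins, sim2real_gap=[0.2, 0.05, 0.4])
# waypoint = afp_models[band].predict(band_limited(mag_spec, band))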

Demo 1 - Telephone Sound

Here is one example of our real robot navigating to a ringing telephone.

Demo 2 - Radio Noise Sound

Another example of our real robot navigating to a source playing radio noise.

Demo 3 - Simulation

Below we show an example of our model successfully navigating an unseen environment in simulation to find the sound source.

BibTeX

@article{chen2024sim2real,
  title={Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction},
  author={Chen, Changan and Ramos, Jordi and Tomar, Anshul and Grauman, Kristen},
  journal={arXiv preprint arXiv:2405.02821},
  year={2024}
}