SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos

¹University of Texas at Austin, ²FAIR, Meta AI
[Teaser figure]

We propose a multimodal contrastive-consensus coding (MC3) method that leverages text data to learn how actions sound from in-the-wild egocentric videos.

Presentation Video

Abstract

We propose a novel self-supervised embedding to learn how actions sound from narrated in-the-wild egocentric videos. Whereas existing methods rely on curated data with known audio-visual correspondence, our multimodal contrastive-consensus coding (MC3) embedding reinforces the associations between audio, language, and vision when all modality pairs agree, while diminishing those associations when any one pair does not. We show our approach can successfully discover how the long tail of human actions sound from egocentric video, outperforming an array of recent multimodal embedding techniques on two datasets (Ego4D and EPIC-Sounds) and multiple cross-modal tasks.
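To make the idea above concrete, here is a minimal PyTorch-style sketch; it is an illustrative assumption, not the paper's exact MC3 formulation. Pairwise InfoNCE losses between audio, vision, and language are gated by a per-clip consensus weight taken as the smallest pairwise agreement, so a clip only reinforces cross-modal associations when all modality pairs agree. The function names, the min-based gate, and the temperature value are our assumptions.

    import torch
    import torch.nn.functional as F

    def weighted_info_nce(x, y, weights, temperature=0.07):
        """InfoNCE between two batches of L2-normalized embeddings,
        with a per-sample weight applied before averaging."""
        logits = x @ y.t() / temperature                    # (B, B) similarity matrix
        targets = torch.arange(x.size(0), device=x.device)  # matched pairs lie on the diagonal
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        return (weights * per_sample).mean()

    def consensus_contrastive_loss(audio_emb, video_emb, text_emb, temperature=0.07):
        """Consensus-gated contrastive objective (sketch, not the exact MC3 loss).

        Each clip's weight is the smallest of its three pairwise cosine similarities,
        so audio-language-vision associations are reinforced only when all modality
        pairs agree and down-weighted when any one pair disagrees."""
        a = F.normalize(audio_emb, dim=-1)
        v = F.normalize(video_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)

        # Per-sample agreement across the three modality pairs.
        sim_av = (a * v).sum(dim=-1)
        sim_at = (a * t).sum(dim=-1)
        sim_vt = (v * t).sum(dim=-1)
        consensus = torch.stack([sim_av, sim_at, sim_vt], dim=0).min(dim=0).values
        weights = consensus.clamp(min=0.0).detach()          # soft gate, no gradient through the gate

        return (weighted_info_nce(a, v, weights, temperature)
                + weighted_info_nce(a, t, weights, temperature)
                + weighted_info_nce(v, t, weights, temperature))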


Discover Sounding Actions

Our model discovers sounding actions from in-the-wild egocentric videos. Below we show the top and bottom test examples sorted by our audio-visual similarity scores.

Top examples. Note how the visual activity causes sound in each example.



Bottom examples. Even though these videos contain plenty of background sound correlated with the visual environment, our model correctly assigns them lower scores, reflecting that the sounds are not produced by the action itself.



Visual clusters. In the examples below, we cluster videos based on their visual embeddings and compare to baselines. Our learned visual embeddings tend to capture how actions sound regardless of the visual environment.
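As a rough sketch of the steps above, assuming per-clip audio and visual embeddings from the trained model: discovery reduces to sorting clips by audio-visual cosine similarity, and the cluster figures can be approximated by k-means over the visual embeddings. The function names and the use of scikit-learn are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def rank_by_audio_visual_agreement(audio_embs, video_embs):
        """Sort clips by audio-visual cosine similarity; the top of the ranking
        corresponds to likely sounding actions, the bottom to incidental sounds."""
        a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
        v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
        scores = (a * v).sum(axis=1)        # one similarity score per clip
        return np.argsort(-scores), scores  # indices from highest to lowest score

    def cluster_visual_embeddings(video_embs, n_clusters=10, seed=0):
        """Group clips by their visual embeddings, as in the cluster visualizations."""
        return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(video_embs)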


Cross-modal Retrieval

With our learned embeddings, we can perform cross-modal retrieval between audio, language, and video.

Video-to-audio retrieval

Audio-to-language retrieval
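A minimal sketch of how such retrieval could be implemented with the learned embeddings, assuming queries and galleries live in the shared embedding space; the interface below is an illustrative assumption, not the paper's code.

    import numpy as np

    def cross_modal_retrieve(query_emb, gallery_embs, k=5):
        """Return the indices of the top-k gallery items most similar to the query,
        where query and gallery come from different modalities (e.g., a video
        query against an audio gallery, or audio against narrations)."""
        q = query_emb / np.linalg.norm(query_emb)
        g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
        sims = g @ q                        # cosine similarity of every gallery item to the query
        return np.argsort(-sims)[:k], sims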

Supplementary Video

This video includes example Ego4D clips, qualitative results for sounding action discovery, and sounding action retrieval results. Wear headphones to hear the sounds.

BibTeX

@inproceedings{chen2024soundingactions,
    title     = {SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos},
    author    = {Changan Chen and Kumar Ashutosh and Rohit Girdhar and David Harwath and Kristen Grauman},
    booktitle = {CVPR},
    year      = {2024},
}