In the face of the video data deluge, today's expensive clip-level classifiers are increasingly impractical. We propose a framework for efficient action recognition in untrimmed video that uses audio as a preview mechanism to eliminate both short-term and long-term visual redundancies. First, we devise an ImgAud2Vid framework that hallucinates clip-level features by distilling from lighter modalities—a single frame and its accompanying audio—reducing short-term temporal redundancy for efficient clip-level recognition. Second, building on ImgAud2Vid, we further propose ImgAud-Skimming, an attention-based long short-term memory network that iteratively selects useful moments in untrimmed videos, reducing long-term temporal redundancy for efficient video-level recognition. Extensive experiments on four action recognition datasets demonstrate that our method achieves the state-of-the-art in terms of both recognition accuracy and speed.
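To make the two components concrete, below is a minimal PyTorch sketch of the pipeline described above. All module names, feature dimensions, the fusion architecture, and the attention details are illustrative assumptions for exposition, not the authors' released implementation; in particular, the real ImgAud2Vid student is trained by distillation from a clip-based teacher network, which is omitted here.

```python
# Minimal sketch, assuming 512-d image features, 128-d audio features,
# and a simple MLP fusion; these choices are illustrative, not from the paper.
import torch
import torch.nn as nn

class ImgAud2Vid(nn.Module):
    """Distilled student: predicts a clip-level feature from a single
    frame and its accompanying audio, standing in for an expensive
    clip-based video network (the teacher)."""
    def __init__(self, img_dim=512, aud_dim=128, feat_dim=512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + aud_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, img_feat, aud_feat):
        # Hallucinate the clip-level feature from the cheap preview.
        return self.fuse(torch.cat([img_feat, aud_feat], dim=-1))

class ImgAudSkimming(nn.Module):
    """Attention-based LSTM that iteratively attends over the cheap
    per-clip previews and aggregates features from selected moments."""
    def __init__(self, feat_dim=512, hidden=512, steps=10):
        super().__init__()
        self.steps = steps
        self.cell = nn.LSTMCell(feat_dim, hidden)
        self.query = nn.Linear(hidden, feat_dim)

    def forward(self, previews):  # previews: (T, feat_dim), one row per clip
        h = previews.new_zeros(1, self.cell.hidden_size)
        c = torch.zeros_like(h)
        pooled = []
        for _ in range(self.steps):
            # Score all clip previews against the current recurrent state.
            scores = previews @ self.query(h).squeeze(0)          # (T,)
            attn = scores.softmax(dim=0)
            # Soft selection of the currently most useful moment.
            selected = (attn.unsqueeze(-1) * previews).sum(0, keepdim=True)
            h, c = self.cell(selected, (h, c))
            pooled.append(selected)
        # Video-level feature aggregated from the visited moments.
        return torch.cat(pooled, dim=0).mean(dim=0)
```

As a usage sketch: run ImgAud2Vid once per uniformly sampled clip to get the preview features, feed the resulting (T, feat_dim) tensor to ImgAudSkimming, and classify its output with a linear head. The soft attention here is a simplification; the paper's skimming mechanism queries the visual and audio modalities separately and selects indexed moments.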
In the qualitative video, we show examples of (a) the visually useful moments selected by our method using the visual modality versus those obtained by uniform sampling, and (b) the acoustically useful moments selected by our method using the audio modality versus those obtained by uniform sampling.
R. Gao, T.-H. Oh, K. Grauman, and L. Torresani. "Listen to Look: Action Recognition by Previewing Audio". In CVPR, 2020.
@inproceedings{gao2020listentolook,
title = {Listen to Look: Action Recognition by Previewing Audio},
author = {Gao, Ruohan and Oh, Tae-Hyun and Grauman, Kristen and Torresani, Lorenzo},
booktitle = {CVPR},
year = {2020}
}