Kumar Ashutosh1,2, Rohit Girdhar2, Lorenzo Torresani2, Kristen Grauman1,2
1UT Austin, 2Meta AI
arXiv 2023
[arXiv]
[Code (coming soon!)]
Narrated "how-to" videos have emerged as a promising data source for a wide range of learning problems, from learning visual representations to training robot policies. However, this data is extremely noisy, as the narrations do not always describe the actions demonstrated in the video. To address this problem we introduce the novel task of visual narration detection, which entails determining whether a narration is visually depicted by the actions in the video. We propose "What You Say is What You Show" (WYS^2), a method that leverages multi-modal cues and pseudo-labeling to learn to detect visual narrations with only weakly labeled data. We further generalize our approach to operate on only audio input, learning properties of the narrator's voice that hint if they are currently doing what they describe. Our model successfully detects visual narrations in in-the-wild videos, outperforming strong baselines, and we demonstrate its impact for state-of-the-art summarization and alignment of instructional video. |
@article{ashutosh2023you,
  title={What You Say Is What You Show: Visual Narration Detection in Instructional Videos},
  author={Ashutosh, Kumar and Girdhar, Rohit and Torresani, Lorenzo and Grauman, Kristen},
  journal={arXiv preprint arXiv:2301.02307},
  year={2023}
}
TBA |
Copyright © 2023 University of Texas at Austin |