HierVL: Learning Hierarchical Video-Language Embeddings

Kumar Ashutosh1, Rohit Girdhar2, Lorenzo Torresani2, Kristen Grauman1,2
1UT Austin, 2FAIR, Meta AI

Accepted to CVPR 2023
Highlight Paper (Top 2.5%)


Figure: method overview (left) and t-SNE visualization of the learned embeddings (right).

Video-language embeddings are a promising avenue for injecting semantics into visual representations, but existing methods capture only short-term associations between seconds-long video clips and their accompanying text. We propose HierVL, a novel hierarchical video-language embedding that simultaneously accounts for both long-term and short-term associations. As training data, we take videos accompanied by timestamped text descriptions of human actions, together with a high-level text summary of the activity throughout the long video (as are available in Ego4D). We introduce a hierarchical contrastive training objective that encourages text-visual alignment at both the clip level and video level. While the clip-level constraints use the step-by-step descriptions to capture what is happening in that instant, the video-level constraints use the summary text to capture why it is happening, i.e., the broader context for the activity and the intent of the actor. Our hierarchical scheme yields a clip representation that outperforms its single-level counterpart as well as a long-term video representation that achieves SotA results on tasks requiring long-term video modeling. HierVL successfully transfers to multiple challenging downstream tasks (in EPIC-KITCHENS-100, Charades-Ego, HowTo100M) in both zero-shot and fine-tuned settings.
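The two-level objective described above can be sketched as a sum of contrastive (InfoNCE-style) losses: one aligning each clip with its timestamped narration, and one aligning an aggregated video representation with the summary text. The sketch below is illustrative only; the mean-pooling aggregation, the loss weight `lam`, and the temperature value are assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    a, b: (N, D) arrays; matched pairs share the same row index.
    """
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Cross-entropy with the diagonal (matched pair) as the target.
        l = l - l.max(axis=1, keepdims=True)
        logprob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logprob))

    # Average both retrieval directions (a -> b and b -> a).
    return 0.5 * (xent(logits) + xent(logits.T))

def hierarchical_loss(clip_feats, narration_feats, summary_feats, lam=1.0):
    """Two-level contrastive objective (illustrative sketch).

    clip_feats:      (V, C, D) clip embeddings for V videos x C clips each
    narration_feats: (V, C, D) text embeddings of the timestamped narrations
    summary_feats:   (V, D)    text embeddings of the video-level summaries
    """
    V, C, D = clip_feats.shape
    # Clip level: align each clip with its own narration ("what is happening").
    child = info_nce(clip_feats.reshape(V * C, D),
                     narration_feats.reshape(V * C, D))
    # Video level: aggregate clips (mean pooling here, an assumption) and
    # align the result with the summary text ("why it is happening").
    video_feats = clip_feats.mean(axis=1)
    parent = info_nce(video_feats, summary_feats)
    return child + lam * parent
```

In this sketch, positives are defined per level: at the clip level a narration is positive only for its own clip, while at the video level the summary is positive for the whole video's aggregated feature, so the two constraints pull the shared clip encoder toward both short-term and long-term alignment.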

Citation

@article{ashutosh2023hiervl,
  title={HierVL: Learning Hierarchical Video-Language Embeddings},
  author={Ashutosh, Kumar and Girdhar, Rohit and Torresani, Lorenzo and Grauman, Kristen},
  journal={arXiv preprint arXiv:2301.02311},
  year={2023}
}
Acknowledgements

We thank Ziad Al-Halah and Tushar Nagarajan for feedback on the manuscript. KG is paid as a research scientist at Meta. UT Austin is supported in part by the IFML NSF AI Institute and NSF-CCRI.


Copyright © 2023 University of Texas at Austin