Sagnik Majumder 1,2, Tushar Nagarajan 2, Ziad Al-Halah 3, Reina Pradhan 1, Kristen Grauman 1,2
1 UT Austin, 2 FAIR at Meta, 3 U. Utah
In submission.
Given a multi-view video, which viewpoint is most informative for a human observer? Existing methods rely on heuristics or expensive "best-view" supervision to answer this question, limiting their applicability. We propose a weakly supervised approach that leverages language accompanying an instructional multi-view video as a means to recover its most informative viewpoint(s). Our key hypothesis is that the more accurately an individual view can predict a view-agnostic text summary, the more informative it is. To put this into action, we propose a framework that uses the relative accuracy of view-dependent caption predictions as a proxy for best-view pseudo-labels. Those pseudo-labels are then used to train a view selector, together with an auxiliary camera pose predictor that enhances view-sensitivity. During inference, our model takes as input only a multi-view video--no language or camera poses--and returns the best viewpoint to watch at each timestep. On two challenging datasets comprising diverse multi-camera setups and how-to activities, our model consistently outperforms state-of-the-art baselines, both on quantitative metrics and in human evaluation.
Task and model description, prediction examples and failure cases.
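To make the recipe in the abstract concrete, below is a minimal sketch (not the authors' code) of the two training-time ingredients it describes: scoring each view by how well it predicts a view-agnostic text summary to obtain best-view pseudo-labels, and training a view selector on those pseudo-labels with an auxiliary relative camera-pose head. All names here (FrozenCaptioner-style captioner, ViewSelector, pose_head, the loss weight w_pose) are hypothetical placeholders under assumed tensor shapes, not the paper's actual components.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    def best_view_pseudo_labels(captioner, view_feats, summary_tokens):
        """view_feats: (B, V, T, D) per-view clip features; summary_tokens: (B, L).
        Returns (B,) indices of the view whose caption prediction best matches the
        view-agnostic summary (lower captioning loss = more informative view)."""
        B, V = view_feats.shape[:2]
        losses = torch.zeros(B, V)
        with torch.no_grad():
            for v in range(V):
                # Hypothetical frozen captioner returning token logits of shape (B, L, vocab).
                logits = captioner(view_feats[:, v])
                losses[:, v] = F.cross_entropy(
                    logits.transpose(1, 2), summary_tokens, reduction="none"
                ).mean(dim=1)
        return losses.argmin(dim=1)


    class ViewSelector(nn.Module):
        """Scores each view; an auxiliary head predicts relative camera pose
        (used only as a training signal to encourage view-sensitivity)."""

        def __init__(self, dim, pose_dim=6):
            super().__init__()
            self.score_head = nn.Linear(dim, 1)
            self.pose_head = nn.Linear(2 * dim, pose_dim)  # pose between view pairs

        def forward(self, view_feats):  # (B, V, D) pooled per-view features
            scores = self.score_head(view_feats).squeeze(-1)  # (B, V)
            pair = torch.cat([view_feats[:, :1].expand_as(view_feats), view_feats], dim=-1)
            rel_pose = self.pose_head(pair)  # (B, V, pose_dim)
            return scores, rel_pose


    def training_step(selector, view_feats, pseudo_labels, gt_rel_pose, w_pose=0.1):
        # Classification against caption-derived pseudo-labels, plus the auxiliary pose loss.
        scores, rel_pose = selector(view_feats)
        return F.cross_entropy(scores, pseudo_labels) + w_pose * F.mse_loss(rel_pose, gt_rel_pose)

At inference, only the score head is needed: given the per-view features of a multi-view clip, scores.argmax(dim=1) picks the viewpoint to watch at each timestep, with no language or camera poses required.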
@article{majumder2024viewpoint,
  title={Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Videos},
  author={Majumder, Sagnik and Nagarajan, Tushar and Al-Halah, Ziad and Pradhan, Reina and Grauman, Kristen},
  journal={arXiv preprint arXiv:2411.08753},
  year={2024}
}
Copyright © 2024 University of Texas at Austin |