Learning Skill-Attributes for Transferable Assessment in Video

Kumar Ashutosh, Kristen Grauman
UT Austin

NeurIPS 2025


Method Overview

Skill assessment from video entails rating the quality of a person's physical performance and explaining what could be done better. Today's models specialize for an individual sport and suffer from the high cost and scarcity of expert-level supervision across the long tail of sports. Towards closing that gap, we explore transferable video representations for skill assessment. Our CrossTrainer approach discovers skill-attributes, such as balance, control, and hand positioning, whose meaning transcends the boundaries of any given sport, then trains a multimodal language model to generate actionable feedback for a novel video, e.g., "lift hands more to generate more power," as well as its proficiency level, e.g., early expert. We validate the new model on multiple datasets for both cross-sport (transfer) and intra-sport (in-domain) settings, where it achieves gains up to 60% relative to the state of the art. By abstracting out the shared behaviors indicative of human skill, the proposed video representation generalizes substantially better than an array of existing techniques, enriching today's multimodal large language models.
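
To make the idea concrete, below is a minimal, hypothetical sketch in Python of what a CrossTrainer-style assessment interface might look like: sport-agnostic skill-attribute evidence for a video is mapped to actionable feedback and a proficiency level. All names and the toy scoring logic are illustrative assumptions, not the paper's released code or API; in the actual method, a multimodal language model generates the feedback and proficiency level conditioned on the learned skill-attribute representation.

from dataclasses import dataclass

# Sport-agnostic skill-attributes; the paper names examples such as
# balance, control, and hand positioning.
SKILL_ATTRIBUTES = ["balance", "control", "hand positioning"]

@dataclass
class Assessment:
    feedback: str     # actionable tip, e.g., "lift hands more to generate more power"
    proficiency: str  # coarse level, e.g., "early expert"

def assess(attribute_scores: dict[str, float]) -> Assessment:
    """Toy stand-in: flag the weakest skill-attribute and map overall
    attribute evidence to a coarse proficiency level."""
    weakest = min(attribute_scores, key=attribute_scores.get)
    feedback = f"focus on improving {weakest}"
    level = "early expert" if min(attribute_scores.values()) > 0.6 else "intermediate"
    return Assessment(feedback=feedback, proficiency=level)

if __name__ == "__main__":
    # Scores here are made up; a real pipeline would predict them from the video.
    scores = {"balance": 0.8, "control": 0.7, "hand positioning": 0.5}
    print(assess(scores))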


Citation


@misc{ashutosh2025learningskillattributestransferableassessment,
      title={Learning Skill-Attributes for Transferable Assessment in Video},
      author={Kumar Ashutosh and Kristen Grauman},
      year={2025},
      eprint={2511.13993},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.13993},
}
Acknowledgements

Thanks to the anonymous NeurIPS reviewers for their valuable feedback. This research is supported in part by the UT Austin IFML AI Institute. Compute resources are provided by the Vista GPU Cluster through the Center for Generative AI (CGAI) and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin.


Copyright © 2025 University of Texas at Austin