When Thinking Drifts:
Evidential Grounding for Robust Video Reasoning

NeurIPS, 2025


1 UT Austin   2 UC Berkeley   3 Bespoke Labs

TL;DR: Chain-of-Thought (CoT) reasoning can hurt video understanding due to the "Visual Thinking Drift" phenomenon, in which the reasoning trace diverges from the actual visual evidence. We propose Visual Evidence Reward (VER) to address this issue.


Abstract: Video reasoning, the task of enabling machines to infer from dynamic visual content through multi-step logic, is crucial for advanced AI. While the Chain-of-Thought (CoT) mechanism has enhanced reasoning in text-based tasks, its application to video understanding remains underexplored. This paper presents a systematic analysis revealing that CoT often degrades performance in video reasoning, generating verbose but misleading internal monologues that hallucinate visual details and override correct intuitions, a phenomenon we term "visual thinking drift." We explain this drift through a Bayesian lens, positing that CoT traces often diverge from the actual visual evidence and instead amplify internal biases or language priors, causing models to storytell rather than reason in a grounded way. To counteract this, we introduce Visual Evidence Reward (VER), a novel reinforcement learning framework that explicitly rewards the generation of reasoning traces verifiably grounded in visual evidence. Comprehensive evaluation across 10 diverse video understanding benchmarks demonstrates that our VER-trained models consistently achieve top performance. Our work sheds light on the distinct challenges of video-centric reasoning and encourages the development of AI that robustly grounds its inferences in visual evidence: large multimodal models that not only "think before answering", but also "see while thinking".
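
For intuition, below is a minimal, hypothetical sketch (in plain Python) of what a VER-style reward signal could look like for reinforcement learning: a weighted mix of answer correctness and how well the claims cited in the reasoning trace are verifiable against reference visual evidence. The Rollout structure, the cited_evidence / gold_evidence fields, and the alpha weighting are illustrative assumptions for this page, not the paper's actual implementation.

    # Illustrative sketch only; not the paper's released code or exact formulation.
    from dataclasses import dataclass

    @dataclass
    class Rollout:
        answer: str                 # model's final answer
        cited_evidence: list[str]   # visual claims extracted from the CoT trace (assumed available)

    def visual_evidence_reward(rollout: Rollout,
                               gold_answer: str,
                               gold_evidence: set[str],
                               alpha: float = 0.5) -> float:
        """Hypothetical VER-style scalar reward.

        correctness: 1 if the final answer matches the reference answer, else 0.
        grounding:   fraction of cited visual claims that are verifiable against
                     the reference evidence set (0 if the trace cites nothing).
        """
        correctness = float(rollout.answer.strip().lower() == gold_answer.strip().lower())
        if rollout.cited_evidence:
            verified = sum(claim in gold_evidence for claim in rollout.cited_evidence)
            grounding = verified / len(rollout.cited_evidence)
        else:
            grounding = 0.0
        # alpha trades off answer accuracy against evidential grounding.
        return (1 - alpha) * correctness + alpha * grounding

    # Toy usage
    r = Rollout(answer="the chef flips the pancake",
                cited_evidence=["pan on stove", "pancake airborne"])
    print(visual_evidence_reward(r, "the chef flips the pancake",
                                 {"pan on stove", "pancake airborne"}))  # -> 1.0

The point of the grounding term is that a trace earning high reward must cite checkable visual evidence, discouraging the model from storytelling its way to an answer.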

A 6-minute introduction video

BibTeX

@article{luo2025videover,
  title={When Thinking Drifts: Evidential Grounding for Robust Video Reasoning},
  author={Luo, Mi and Xue, Zihui and Dimakis, Alex and Grauman, Kristen},
  journal={arXiv preprint},
  year={2025}
}