Im2Flow: Motion Hallucination from
Static Images for Action Recognition

Ruohan Gao           Bo Xiong           Kristen Grauman

The University of Texas at Austin

In CVPR 2018

[Main] [Supp] [Bibtex] [GitHub]

Abstract

Existing methods to recognize actions in static images take the images at their face value, learning the appearances—objects, scenes, and body poses—that distinguish each action class. However, such models are deprived of the rich dynamic structure and motions that also define human activity. We propose an approach that hallucinates the unobserved future motion implied by a single snapshot to help static-image action recognition. The key idea is to learn a prior over short-term dynamics from thousands of unlabeled videos, infer the anticipated optical flow on novel static images, and then train discriminative models that exploit both streams of information. Our main contributions are twofold. First, we devise an encoder-decoder convolutional neural network and a novel optical flow encoding that can translate a static image into an accurate flow map. Second, we show the power of hallucinated flow for recognition, successfully transferring the learned motion into a standard two-stream network for activity recognition. On seven datasets, we demonstrate the power of the approach. It not only achieves state-of-the-art accuracy for dense optical flow prediction, but also consistently enhances recognition of actions and dynamic scenes.
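To make the pipeline concrete, below is a minimal PyTorch sketch of the two pieces the abstract describes: an encoder-decoder that maps a single RGB image to an encoded flow map, and a two-stream classifier that fuses the appearance stream with the hallucinated-motion stream. The layer sizes, the 3-channel flow encoding, and the simple late-fusion scheme are illustrative assumptions for this page, not the exact architecture or encoding from the paper (the released code is also in Torch/Lua rather than PyTorch).

```python
# Hedged sketch of the Im2Flow idea: an image-to-flow encoder-decoder plus a
# two-stream classifier fed by appearance + hallucinated motion. Layer sizes,
# the 3-channel flow encoding, and late fusion are illustrative assumptions.
import torch
import torch.nn as nn


class Im2FlowNet(nn.Module):
    """Encoder-decoder that maps a static RGB image to an encoded flow map."""

    def __init__(self, flow_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, flow_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # keep the encoded flow values in [-1, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))


class TwoStreamClassifier(nn.Module):
    """Late fusion of an appearance stream (RGB) and a motion stream (predicted flow)."""

    def __init__(self, num_classes: int, flow_channels: int = 3):
        super().__init__()

        def stream(in_channels: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_channels, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
            )

        self.appearance = stream(3)
        self.motion = stream(flow_channels)

    def forward(self, image: torch.Tensor, predicted_flow: torch.Tensor) -> torch.Tensor:
        # Average the two streams' class scores (simple late fusion).
        return 0.5 * (self.appearance(image) + self.motion(predicted_flow))


if __name__ == "__main__":
    im2flow = Im2FlowNet()
    classifier = TwoStreamClassifier(num_classes=10)
    image = torch.randn(1, 3, 256, 256)   # a single static image
    flow = im2flow(image)                 # hallucinated motion map
    scores = classifier(image, flow)      # action scores from both streams
    print(flow.shape, scores.shape)
```

In this sketch the flow network and the classifier are kept separate, mirroring the two-step recipe in the abstract: learn the motion prior first, then train the recognition model on both the image and its predicted flow.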

Qualitative Video

In the qualitative video, we show motion prediction results of our Im2Flow framework on video sequences. We predict motion for each frame independently, as if it were a static image, and then simply concatenate the per-frame predictions into a video for visualization; there is no temporal smoothing between frames' estimates. Our Im2Flow network predicts motion in a variety of contexts, and the predictions are quite fine-grained. A small sketch of this per-frame protocol follows.
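The sketch below illustrates the per-frame protocol just described, assuming the `Im2FlowNet`-style model from the earlier sketch and a clip stored as a tensor of frames; both are illustrative assumptions rather than the released inference script.

```python
# Hedged sketch of the per-frame protocol: each video frame is treated as an
# independent static image, flow is predicted per frame with no temporal
# smoothing, and the per-frame outputs are stacked back into a clip for
# visualization. `Im2FlowNet` refers to the illustrative model sketched above.
import torch


@torch.no_grad()
def hallucinate_flow_per_frame(model: torch.nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, H, W) video clip; returns (T, C, H, W) predicted flow maps."""
    model.eval()
    predictions = [model(frame.unsqueeze(0)).squeeze(0) for frame in frames]
    return torch.stack(predictions)  # concatenated per-frame estimates, no smoothing
```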

Downloads

Our code is available on GitHub.

Acknowledgement

This research was supported in part by ONR PECASE Award N00014-15-1-2291, an IBM Faculty Award, and an IBM Open Collaboration Award. We also gratefully acknowledge a GPU donation from Facebook.