Abstract:
In this paper, we study the problem of learning compact (low-dimensional) representations of sequential data that capture its implicit spatio-temporal cues. To maximize the extraction of such informative cues from the data, we cast the problem in the framework of contrastive representation learning and, to that end, propose a novel objective based on optimal transport. Specifically, our formulation seeks a low-dimensional subspace representation of the data that jointly (i) maximizes the distance between the data (embedded in this subspace) and an adversarial data distribution under the optimal transport metric, also known as the Wasserstein distance, (ii) captures the temporal order of the sequence, and (iii) minimizes the data distortion. To generate the adversarial distribution, we propose to use a Generative Adversarial Network (GAN) equipped with novel regularizers. Our full objective can be cast as a subspace learning problem on the Grassmann manifold and solved efficiently via Riemannian optimization. To empirically validate our formulation, we present extensive experiments on the task of human action recognition in video sequences. Our results demonstrate state-of-the-art performance against challenging baselines.
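The abstract does not state the objective explicitly; as a reading aid, the following schematic combines items (i)–(iii) in one formula, using notation of our choosing rather than the paper's: $X$ denotes the (matrix of) sequence data, $\hat{X}$ samples from the adversarial GAN distribution, $U \in \mathcal{G}(d,p)$ an orthonormal basis of the $p$-dimensional subspace, $W$ the Wasserstein distance between the empirical distributions of the projected points, $\Omega_{\mathrm{ord}}$ a temporal-order term, and $\lambda_1, \lambda_2$ trade-off weights. This is a sketch of the stated ingredients, not the paper's exact formulation.
\[
\max_{U \in \mathcal{G}(d,p)} \; W\!\big(U^{\top} X,\; U^{\top} \hat{X}\big) \;+\; \lambda_1\, \Omega_{\mathrm{ord}}\!\big(U^{\top} X\big) \;-\; \lambda_2\, \big\| X - U U^{\top} X \big\|_F^2 ,
\]
where the first term pushes the embedded data away from the adversarial distribution under optimal transport, the second rewards preserving temporal order, and the third penalizes distortion of the original data; the constraint $U \in \mathcal{G}(d,p)$ is what makes the problem amenable to Riemannian optimization on the Grassmann manifold.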