Abstract:
This paper explores a new research problem: unsupervised transfer learning across multiple spatiotemporal prediction tasks. Unlike most existing transfer learning methods, which focus on reducing the discrepancy between supervised tasks, we study how to transfer knowledge from a zoo of models learned without supervision to a new predictive network. Our motivation is that models from different sources are expected to understand complex spatiotemporal dynamics from different perspectives, and can therefore effectively supplement the new task even when it already has sufficient training data. Technically, we propose a differentiable framework named transferable memory. It adaptively distills knowledge from a bank of memory states of pretrained predictive networks and applies it to the target network through a novel recurrent structure called the transferable memory unit (TMU). Compared with finetuning, our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and it benefits the target task even when the pretext tasks are less relevant.
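The core idea of distilling from a bank of memory states can be sketched as follows. This is a minimal illustration, not the authors' exact TMU: it assumes a simple dot-product attention over the bank and a sigmoid gate mixing the distilled summary into the target hidden state; the function and weight names (`distill_from_memory_bank`, `W_q`, `W_k`, `W_g`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distill_from_memory_bank(h_target, memory_bank, W_q, W_k, W_g):
    """Attend over pretrained memory states and gate the distilled
    summary into the target hidden state (illustrative sketch)."""
    q = h_target @ W_q                          # query from the target state
    k = memory_bank @ W_k                       # keys from the memory bank
    scores = softmax(k @ q / np.sqrt(q.size))   # attention weights over bank entries
    distilled = scores @ memory_bank            # weighted summary of bank states
    gate = 1.0 / (1.0 + np.exp(-(h_target @ W_g)))  # sigmoid gate per dimension
    return gate * distilled + (1.0 - gate) * h_target

# Toy usage with assumed sizes: 5 pretrained memory states of dimension 8.
rng = np.random.default_rng(0)
d = 8
bank = rng.standard_normal((5, d))
h = rng.standard_normal(d)
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_g = rng.standard_normal((d, d))
h_new = distill_from_memory_bank(h, bank, W_q, W_k, W_g)
print(h_new.shape)  # (8,)
```

The gate lets the target network fall back on its own state when the bank is uninformative, which is one plausible way a model could remain robust to less relevant pretext tasks.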