Abstract:
Generative moment matching networks (GMMN) provide a theoretically sound approach to learning deep generative models. However, such methods typically suffer from high sample complexity, which makes them impractical for generating complex data. In this paper, we present a new strategy for training GMMN with low sample complexity while retaining its theoretical soundness. Our method introduces auxiliary variables, whose values are supplied in practice by a pre-trained model such as an encoder network. Conditioning on these variables partitions the distribution into a set of conditional distributions, each of which can be matched effectively with low sample complexity. We instantiate this strategy with an amortized network, called GMMN-DP, that shares auxiliary-variable information across the data generation task, and we develop an efficient stochastic training algorithm for it. Experimental results show that GMMN-DP can generate complex samples on datasets such as CelebA and CIFAR-10, where the vanilla GMMN fails.
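For context, a GMMN trains its generator by minimizing the maximum mean discrepancy (MMD) between generated and real samples. The sketch below shows a standard biased estimate of the squared MMD with an RBF kernel; the function names and the fixed bandwidth are illustrative choices, not the paper's implementation of GMMN-DP.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise squared distances between rows of x and rows of y.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between sample sets x and y."""
    k_xx = rbf_kernel(x, x, bandwidth)
    k_yy = rbf_kernel(y, y, bandwidth)
    k_xy = rbf_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Example: samples from two Gaussians with different means give a
# larger MMD than two sample sets drawn from the same distribution.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 8))
fake = rng.normal(0.5, 1.0, size=(256, 8))
print(mmd2(real, fake))  # noticeably > mmd2 of two same-distribution sets
```

The quality of this estimate degrades as the data distribution grows more complex, which is the sample-complexity issue the abstract refers to; conditioning on auxiliary variables lets each conditional distribution be matched with far fewer samples.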