Abstract:
Whilst Generative Adversarial Networks (GANs) have gained a reputation as powerful generative models, they are notoriously difficult to train and suffer from instability during optimisation. Recent methods for tackling this drawback have typically focused on improving the behaviour of the discriminator component of the GAN; these include loss function modification, gradient regularisation and weight normalisation, all aimed at producing a discriminator that is well-behaved from a Lipschitz perspective. In this paper, we propose a novel and orthogonal contribution that modifies the architecture of the GAN. Our method embeds the powerful discriminating capabilities of decision forests within the discriminator of a GAN. Empirically, we evaluate our approach on the CIFAR-10, Oxford Flowers and CUB Birds datasets. We show that our technique is easy to incorporate into existing GAN baselines and improves Fréchet Inception Distance (FID) scores by up to 56.1% over several such baselines.