Abstract:
Stochastic particle-optimization sampling (SPOS) is a recently developed scalable Bayesian sampling framework that unifies stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) algorithms based on Wasserstein gradient flows. Equipped with a rigorous non-asymptotic convergence theory, SPOS avoids the particle-collapsing pitfall of SVGD.
Nevertheless, variance reduction in SPOS has never been studied. In this paper, we bridge this gap by presenting several variance-reduction techniques for SPOS. Specifically, we propose three variants of variance-reduced SPOS: SAGA particle-optimization sampling (SAGA-POS), SVRG particle-optimization sampling (SVRG-POS), and a variant of SVRG-POS that avoids full gradient computations, denoted SVRG-POS+. Importantly, we provide
non-asymptotic convergence guarantees for these algorithms in terms of the 2-Wasserstein metric and analyze their complexities. Remarkably, the results show that our algorithms yield better convergence rates than existing variance-reduced variants of stochastic Langevin dynamics, even though more space is required to store the particles during training. Our theory aligns well with experimental results on both synthetic and real datasets.
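To make the variance-reduction idea concrete, the following minimal Python sketch illustrates an SVRG-style gradient estimator plugged into a noisy particle update. It is not the paper's algorithm: the function names (`grad_log_post`, `svrg_pos_sketch`), the toy Gaussian model, and all hyperparameters are hypothetical, and the SVGD-style kernel interaction term of SPOS is omitted for brevity.

```python
import numpy as np

def grad_log_post(theta, x):
    """Per-datum gradient of the log-posterior for a hypothetical toy
    model: Gaussian likelihood with unit variance and a flat prior."""
    return -(theta - x)

def svrg_pos_sketch(data, n_particles=20, n_epochs=10, batch=10,
                    step=1e-2, temp=1e-2, rng=None):
    """SVRG-style variance-reduced particle updates (illustrative only).

    Each epoch anchors a full gradient at snapshot particles; inner steps
    correct a minibatch gradient with the snapshot difference, so the
    gradient-noise variance shrinks as particles approach the snapshot.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, dim = data.shape
    particles = rng.standard_normal((n_particles, dim))
    for _ in range(n_epochs):
        snapshot = particles.copy()
        # Expensive full-gradient anchor, computed once per epoch.
        anchor = np.stack([grad_log_post(s, data).mean(axis=0)
                           for s in snapshot])
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)
            for k in range(n_particles):
                g_new = grad_log_post(particles[k], data[idx]).mean(axis=0)
                g_old = grad_log_post(snapshot[k], data[idx]).mean(axis=0)
                g = g_new - g_old + anchor[k]  # variance-reduced estimate
                noise = np.sqrt(2.0 * step * temp) * rng.standard_normal(dim)
                particles[k] = particles[k] + step * g + noise
    return particles

# Toy usage: particles should concentrate near the data mean (2.0).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(loc=2.0, scale=1.0, size=(500, 1))
    out = svrg_pos_sketch(data, rng=rng)
    print(out.mean(), out.std())
```

A SAGA-style variant would replace the per-epoch full-gradient anchor with a running table of per-datum gradients updated as each minibatch is visited, trading the periodic full pass for extra memory.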