12/07/2020

Stochastic Hamiltonian Gradient Methods for Smooth Games

Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas

Keywords: Optimization - Non-convex

Abstract: The analysis of smooth games has attracted attention, motivated by the success of adversarial formulations. The Hamiltonian method is a lightweight second-order approach that recasts the problem as a minimization objective. Consensus optimization can be seen as a generalization: it mixes a Hamiltonian term with the original game dynamics. This family of Hamiltonian methods has shown promise in the literature, but it comes with no guarantees for stochastic games. Classic stochastic extragradient and mirror-prox methods require averaging over a compact domain to achieve convergence. Recent variance-reduced first-order schemes focus on unbounded domains but stop short of proving last-iterate convergence for bilinear matrix games. We analyze the stochastic Hamiltonian method and a novel variance-reduced variant of it, and provide the first set of last-iterate convergence guarantees for stochastic unbounded bilinear games. More generally, we provide convergence guarantees for a family of stochastic games, notably including some non-convex ones. We supplement our analysis with experiments on a stochastic bilinear game, where our theory is shown to be tight, and on simple adversarial machine learning formulations.
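As a rough illustration of the Hamiltonian approach described in the abstract, the sketch below (our own illustrative code, not the authors' implementation) runs stochastic Hamiltonian gradient descent on a finite-sum bilinear game min_x max_y (1/n) sum_i x^T A_i y. The matrices A_i, the step size eta, and the iteration count are all hypothetical choices. The two independent sample indices reflect the fact that grad H = J^T v is a product of two expectations, so an unbiased stochastic estimate needs independent samples for the Jacobian J and the vector field v.

import numpy as np

# Sketch of stochastic Hamiltonian gradient descent (SHGD) on a
# finite-sum bilinear game  min_x max_y  f(x, y) = (1/n) * sum_i x^T A_i y.
# The game's vector field is v = (grad_x f, -grad_y f) = (A y, -A^T x),
# with A the average of the A_i, and the Hamiltonian is H = 0.5 * ||v||^2.
# Gradient descent on H replaces the rotational game dynamics with an
# ordinary minimization; for this game, grad H = (A A^T x, A^T A y).

rng = np.random.default_rng(0)
n, d = 50, 5
# Illustrative data: matrices clustered around a well-conditioned mean.
A = (np.eye(d) + 0.2 * rng.standard_normal((d, d))
     + 0.1 * rng.standard_normal((n, d, d)))

x = rng.standard_normal(d)
y = rng.standard_normal(d)
eta = 0.1  # step size (illustrative choice, not from the paper)

for t in range(2000):
    # Two independent uniform indices give an unbiased estimate of grad H,
    # since E[J_i]^T E[v_j] = J^T v; for this game,
    # J_i^T v_j = (A_i A_j^T x, A_i^T A_j y).
    i, j = rng.integers(n, size=2)
    x = x - eta * (A[i] @ (A[j].T @ x))
    y = y - eta * (A[i].T @ (A[j] @ y))

# The last iterate approaches the game's equilibrium at (0, 0).
print(np.linalg.norm(x), np.linalg.norm(y))

Consensus optimization, mentioned in the abstract as a generalization, would instead step along v + gamma * grad H for some mixing weight gamma, keeping the original game dynamics alongside the Hamiltonian term.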
