06/12/2021

The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains

Daron Anderson, Douglas Leith

Keywords: optimization, machine learning, online learning

Abstract: We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt N)$ regret against adversarial opponents; here we show it is universal in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. This improves upon the more complex meta-algorithm of Huang et al. \cite{FTLBall}, which only achieves $O(\sqrt{N \log N})$ and $O(\log N)$ bounds in the two settings respectively. In addition, we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent for strongly convex domains.
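
For readers unfamiliar with the algorithm, the following is a minimal sketch (not taken from the paper) of Online Lazy Gradient Descent, also known as dual averaging: each play is the projection of the scaled negative running gradient sum onto the domain. The unit Euclidean ball stands in for a strongly convex domain; the step size eta, the Gaussian i.i.d. losses, and all function names are illustrative assumptions.

```python
import numpy as np

def project_ball(y, radius=1.0):
    """Euclidean projection onto the ball of the given radius
    (the unit ball is a canonical strongly convex domain)."""
    norm = np.linalg.norm(y)
    return y if norm <= radius else (radius / norm) * y

def lazy_ogd(gradients, eta=0.1, radius=1.0):
    """Lazy Online (Sub)Gradient Descent: play the projection of the
    scaled negative gradient sum, then accumulate the new gradient."""
    grad_sum = np.zeros(len(gradients[0]))
    plays = []
    for g in gradients:
        x = project_ball(-eta * grad_sum, radius)  # play before seeing g
        plays.append(x)
        grad_sum += g                              # lazy accumulation
    return plays

# Illustrative run with i.i.d. linear losses f_t(x) = <g_t, x>.
rng = np.random.default_rng(0)
gs = [rng.normal(0.2, 1.0, size=3) for _ in range(1000)]
xs = lazy_ogd(gs)
print(xs[-1])  # iterates concentrate near the minimiser of the mean loss
```

In the i.i.d. setting above, the running gradient sum concentrates around a multiple of the mean gradient, so the projected iterates settle near the fixed optimum on the boundary of the ball; this stabilisation is what underlies the $O(\log N)$ expected-regret regime discussed in the abstract.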

Talk and paper published at the NeurIPS 2021 virtual conference.
