12/07/2020

Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

Blair Bilodeau, Dylan Foster, Daniel Roy

Keywords: Online Learning, Active Learning, and Bandits

Abstract: We study the classical problem of forecasting under logarithmic loss while competing against an arbitrary class of experts. We present a novel approach to bounding the minimax regret that exploits the self-concordance property of the logarithmic loss. Our regret bound depends on the metric entropy of the expert class and matches the previous best known bounds for arbitrary expert classes. We improve the dependence on the time horizon for classes whose metric entropy under the supremum norm is of order $\Omega(\gamma^{-p})$ with $p>1$, which includes, for example, Lipschitz functions on domains of dimension greater than 1.
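
For context, here is a minimal sketch of the standard setting, with notation assumed for illustration rather than taken from this page. At each round $t = 1, \dots, n$, the forecaster outputs a probability assignment $\widehat{y}_t$, observes the outcome $y_t$, and incurs the logarithmic loss $-\log \widehat{y}_t(y_t)$. The minimax regret against an expert class $\mathcal{F}$ is then

$$
\mathcal{R}_n(\mathcal{F}) \;=\; \inf_{\widehat{y}} \, \sup_{y_{1:n}} \left[ \sum_{t=1}^{n} -\log \widehat{y}_t(y_t) \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} -\log f(y_t) \right],
$$

and "metric entropy under the supremum norm of order $\gamma^{-p}$" means that the log-size of a minimal $\gamma$-cover of $\mathcal{F}$ in the supremum norm scales as $\gamma^{-p}$.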

This talk and the accompanying paper were presented at the ICML 2020 virtual conference.
