Abstract:
We study the classical problem of forecasting under logarithmic loss while competing against an arbitrary class of experts. We present a novel approach to bounding the minimax regret that exploits the self-concordance property of the logarithmic loss. Our regret bound depends on the metric entropy of the expert class and matches the best previously known results for arbitrary expert classes. For classes whose metric entropy at scale $\gamma$ under the supremum norm is of order $\Omega(\gamma^{-p})$ with $p>1$, which includes, for example, Lipschitz functions in dimension greater than one, we improve the dependence on the time horizon.
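For orientation, the following display is our own sketch of the standard setup this abstract refers to, not notation taken from the paper: the forecaster $\widehat{p}$, outcomes $y_{1:n}$, expert class $\mathcal{F}$, and horizon $n$ are assumed names. Under logarithmic loss $\ell(p, y) = -\log p(y)$, the minimax regret against $\mathcal{F}$ is usually written as
\[
  \mathcal{R}_n(\mathcal{F})
    \;=\; \inf_{\widehat{p}} \, \sup_{y_{1:n}}
      \left[ \sum_{t=1}^{n} -\log \widehat{p}_t\!\left(y_t \mid y_{1:t-1}\right)
           \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} -\log f\!\left(y_t \mid y_{1:t-1}\right) \right],
\]
where the inner infimum is the cumulative loss of the best expert in hindsight, and the outer infimum and supremum range over forecasting strategies and outcome sequences, respectively.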