12/07/2020

Accountable Off-Policy Evaluation via Kernelized Bellman Statistics

Yihao Feng, Tongzheng Ren, Ziyang Tang, Qiang Liu

Keywords: Reinforcement Learning - General

Abstract: Off-policy evaluation plays an important role in modern reinforcement learning. However, most existing off-policy evaluation methods focus only on value estimation and do not provide an accountable confidence interval that reflects the uncertainty caused by limited observed data and algorithmic errors. Recently, Feng et al. (2019) proposed a novel kernel loss for learning value functions, which can also be used to test whether a learned value function satisfies the Bellman equation. In this work, we investigate the statistical properties of this kernel loss, which allow us to find a feasible set that contains the true value function with high probability. We further use this set to construct an accountable confidence interval for off-policy value estimation, as well as a post-hoc diagnosis for existing estimators. Empirical results show that our method yields tight yet accountable confidence intervals in different settings, demonstrating its effectiveness.
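
A rough sketch of the idea described in the abstract, written in our own shorthand (the symbols $q$, $\mathcal{R}_q$, $k$, $\varepsilon_n$, and $\hat{J}$ below are illustrative notation, not the paper's exact definitions): the kernel loss of Feng et al. (2019) measures how far a candidate value function $q$ is from satisfying the Bellman equation,
$$
L_k(q) \;=\; \mathbb{E}_{x,\,x' \sim D}\big[\, \mathcal{R}_q(x)\, k(x, x')\, \mathcal{R}_q(x') \,\big],
\qquad
\mathcal{R}_q(s,a) \;=\; r(s,a) + \gamma\, \mathbb{E}_{s'}\big[q(s', \pi)\big] - q(s,a),
$$
where $\mathcal{R}_q$ is the Bellman residual and $k$ is a positive definite kernel. Given a high-probability bound $\varepsilon_n$ on the empirical loss $\hat{L}_k$ at the true value function, one obtains a feasible set containing it with high probability, and a confidence interval for the policy value by optimizing the value estimate over that set, schematically
$$
Q_{\varepsilon} \;=\; \{\, q : \hat{L}_k(q) \le \varepsilon_n \,\},
\qquad
\Big[\; \min_{q \in Q_{\varepsilon}} \hat{J}(q),\; \max_{q \in Q_{\varepsilon}} \hat{J}(q) \;\Big],
$$
where $\hat{J}(q)$ denotes the estimated value of the target policy under $q$.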

[Embedded video of the talk. The talk and the paper were published at the ICML 2020 virtual conference.]
