13/04/2021

Near-optimal provable uniform convergence in offline policy evaluation for reinforcement learning

Ming Yin, Yu Bai, Yu-Xiang Wang

Abstract: The problem of Offline Policy Evaluation (OPE) in Reinforcement Learning (RL) is a critical step towards applying RL in real-world applications. Existing work on OPE mostly focuses on evaluating a fixed target policy \pi, which does not provide useful bounds for offline policy learning, as \pi will then be data-dependent. We address this problem by simultaneously evaluating all policies in a policy class \Pi (uniform convergence in OPE) and obtain nearly optimal error bounds for a number of global and local policy classes. Our results imply that model-based planning achieves an optimal episode complexity of \widetilde{O}(H^3/(d_m\epsilon^2)) in identifying an \epsilon-optimal policy under the time-inhomogeneous episodic MDP model, where H is the planning horizon and d_m is a quantity that reflects the exploration of the logging policy \mu. To the best of our knowledge, this is the first time the optimal rate has been shown to be achievable in the offline RL setting, and this is the first paper to systematically investigate uniform convergence in OPE.
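
For concreteness, the two guarantees mentioned in the abstract can be sketched as follows (notation as in the abstract; the precise policy classes, constants, and the definition of d_m are specified in the paper, not here). Uniform convergence in OPE asks that, with high probability,

\sup_{\pi \in \Pi} \left| \widehat{v}^{\pi} - v^{\pi} \right| \le \epsilon,

where v^{\pi} is the true value of policy \pi and \widehat{v}^{\pi} is its estimate from offline data collected by the logging policy \mu. The stated episode complexity then says that \widetilde{O}(H^3/(d_m\epsilon^2)) logged episodes suffice for model-based planning to return a policy \widehat{\pi} satisfying v^{\widehat{\pi}} \ge \max_{\pi} v^{\pi} - \epsilon, i.e., an \epsilon-optimal policy.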

The talk and the paper were presented at the AISTATS 2021 virtual conference.
