12/07/2020

Learning Near Optimal Policies with Low Inherent Bellman Error

Andrea Zanette, Alessandro Lazaric, Mykel Kochenderfer, Emma Brunskill

Keywords: Reinforcement Learning - Theory

Abstract: We study the exploration problem with approximate linear action-value functions in episodic reinforcement learning under the notion of low inherent Bellman error, a condition normally employed to show convergence of approximate value iteration. We relate this condition to other common frameworks and show that it is strictly more general than the low rank (or linear) MDP assumption of prior work. We provide an algorithm with a rate optimal regret bound for this setting. While computational tractability questions remain open, this enriches the class of MDPs with a linear representation for the action-value function where statistically efficient reinforcement learning is possible.
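Below is a brief sketch of the inherent Bellman error condition the abstract refers to, written in LaTeX for concreteness. The notation (a feature map \phi_h, parameter sets \mathcal{B}_h, the Bellman operator \mathcal{T}_h, and linear action-value functions Q_\theta(s,a) = \phi_h(s,a)^\top \theta) is assumed here for illustration, following common usage in the linear function approximation literature, and is not quoted from the paper or the talk.

% Bellman operator at step h (assumed notation, illustration only)
\[
  (\mathcal{T}_h Q)(s,a) \;=\; r_h(s,a) \;+\; \mathbb{E}_{s' \sim p_h(\cdot\mid s,a)}\Big[\max_{a'} Q(s',a')\Big]
\]
% Inherent Bellman error of the linear class \{ Q_\theta(s,a) = \phi_h(s,a)^\top \theta : \theta \in \mathcal{B}_h \}
\[
  \mathcal{I} \;=\; \max_{h}\; \sup_{\theta_{h+1}\in\mathcal{B}_{h+1}}\; \inf_{\theta_h\in\mathcal{B}_h}\; \sup_{(s,a)} \Big| \phi_h(s,a)^\top \theta_h \;-\; \big(\mathcal{T}_h Q_{\theta_{h+1}}\big)(s,a) \Big|
\]

Intuitively, this quantity measures how far the Bellman backup of any representable action-value function can fall outside the linear class at the previous step. In a linear (low-rank) MDP the backup of every such function is itself exactly linear in the features, which forces this error to be zero; low inherent Bellman error only asks that it be small, consistent with the abstract's claim that the condition is strictly more general than the linear MDP assumption.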

The talk and the accompanying paper were published at the ICML 2020 virtual conference.

