12/07/2020

A Game Theoretic Perspective on Model-Based Reinforcement Learning

Aravind Rajeswaran, Igor Mordatch, Vikash Kumar

Keywords: Reinforcement Learning - Deep RL

Abstract: We illustrate how game theory provides a useful framework for understanding model-based reinforcement learning (MBRL). We point out that a large class of MBRL algorithms can be viewed as a game between two players: (1) a policy player, which attempts to maximize rewards under the learned model; and (2) a model player, which attempts to fit the real-world data collected by the policy player. The goals of the two players need not be aligned and are often conflicting. We show that stable algorithms for MBRL can be derived by considering a Stackelberg game between the two players. This formulation gives rise to two natural schools of MBRL algorithms, based on which player is chosen as the leader in the Stackelberg game, which together encapsulate many existing MBRL algorithms. Through experiments on a suite of continuous control tasks, we validate that algorithms based on our framework lead to stable and sample-efficient learning.
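As a concrete illustration of the two-player view described in the abstract, below is a minimal sketch of the alternating game on a hypothetical 1-D control task (true dynamics s' = s + a, reward -s^2). The toy environment, the linear model class, and the policy parameterization a = -k*s are all illustrative assumptions, not the paper's setup; the paper's Stackelberg algorithms additionally designate one player as the leader, which this naive best-response loop does not.

```python
import numpy as np

# Minimal sketch of the two-player view of MBRL on a toy 1-D task.
# All names (toy dynamics, model class, policy gain k) are illustrative
# assumptions, not the paper's implementation.

rng = np.random.default_rng(0)

def true_step(s, a):
    """Real environment: s' = s + a, reward = -s^2 (drive the state to 0)."""
    return s + a, -s**2

def collect_data(k, n_steps=100):
    """Policy player acts in the real world with policy a = -k*s."""
    s, data = rng.normal(), []
    for _ in range(n_steps):
        a = -k * s + 0.1 * rng.normal()          # exploration noise
        s_next, _ = true_step(s, a)
        data.append((s, a, s_next))
        s = s_next
    return data

def fit_model(data):
    """Model player: least-squares fit of s' = w1*s + w2*a to real data."""
    X = np.array([[s, a] for s, a, _ in data])
    y = np.array([s_next for _, _, s_next in data])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def model_return(k, w, horizon=20):
    """Policy player's objective: return under the *learned* model."""
    s, ret = 1.0, 0.0
    for _ in range(horizon):
        ret += -s**2
        s = w[0] * s + w[1] * (-k * s)           # imagined transition
    return ret

def improve_policy(w, candidates=np.linspace(0.0, 1.5, 31)):
    """Best response of the policy player under the current model."""
    return max(candidates, key=lambda c: model_return(c, w))

k = 0.0                                          # initial policy gain
for it in range(5):
    data = collect_data(k)                       # policy player gathers real data
    w = fit_model(data)                          # model player fits that data
    k = improve_policy(w)                        # policy player re-optimizes
    print(f"iter {it}: model w={np.round(w, 3)}, policy gain k={k:.2f}")
```

In the Stackelberg formulation, one player (the leader) optimizes against the best response of the other (the follower); which player is chosen as the leader yields the two schools of algorithms the abstract mentions, in contrast to the symmetric alternation sketched above.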

The talk and the paper were published at the ICML 2020 virtual conference.

