Abstract:
We illustrate that game theory provides a useful framework for understanding model-based reinforcement learning (MBRL). We point out that a large class of MBRL algorithms can be viewed as a game between two players: (1) a policy player, which attempts to maximize rewards under the learned model; and (2) a model player, which attempts to fit the real-world data collected by the policy player. The goals of the two players need not be aligned and are often in conflict. We show that stable MBRL algorithms can be derived by casting this interaction as a Stackelberg game between the two players. This formulation gives rise to two natural families of MBRL algorithms, depending on which player is chosen as the leader of the Stackelberg game, and together these families encapsulate many existing MBRL algorithms. Through experiments on a suite of continuous control tasks, we validate that algorithms based on our framework lead to stable and sample-efficient learning.