12/07/2020

Reducing Sampling Error in Batch Temporal Difference Learning

Brahma Pavse, Ishan Durugkar, Josiah Hanna, Peter Stone

Keywords: Reinforcement Learning - General

Abstract: Temporal difference (TD) learning is one of the main foundations of modern reinforcement learning. This paper studies the use of TD(0) to estimate the value function of a given evaluation policy from a batch of data. In this batch setting, we show that TD(0) may converge to an inaccurate value function because the update following an action is weighted according to the number of times that action occurred in the batch -- not the true probability of the action under the evaluation policy. To address this limitation, we introduce policy sampling error corrected-TD(0) (PSEC-TD(0)). PSEC-TD(0) first estimates the empirical distribution of actions in each state in the batch and then uses importance sampling to correct for the mismatch between the empirical weighting and the correct weighting for updates following each action. We refine the concept of a certainty-equivalence estimate and argue that PSEC-TD(0) converges to a more desirable fixed-point than TD(0) for a fixed batch of data. Finally, we conduct an empirical evaluation of PSEC-TD(0) on two batch value function learning tasks and show that PSEC-TD(0) produces value function estimates with lower mean squared error than the standard TD(0) algorithm in both discrete and continuous control tasks.

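The abstract describes the correction only at a high level. The sketch below is a hypothetical illustration of how such a correction could be applied to batch TD(0) in a tabular setting, based solely on the description above: estimate the empirical action distribution in each state, then reweight each TD update by the ratio of the evaluation policy's action probability to that empirical probability. All names (psec_td0, batch, pi_eval, alpha, gamma, num_sweeps) are illustrative assumptions, not the authors' implementation.

import numpy as np

def psec_td0(batch, pi_eval, num_states, num_actions,
             alpha=0.1, gamma=0.99, num_sweeps=50):
    """Batch TD(0) with a PSEC-style importance-sampling correction (sketch).

    batch: list of (s, a, r, s_next) transitions collected by some behavior policy.
    pi_eval: array [num_states, num_actions] of evaluation-policy action probabilities.
    """
    # 1) Estimate the empirical (maximum-likelihood) action distribution per state.
    counts = np.zeros((num_states, num_actions))
    for s, a, _, _ in batch:
        counts[s, a] += 1
    state_totals = counts.sum(axis=1, keepdims=True)
    pi_hat = np.divide(counts, state_totals,
                       out=np.zeros_like(counts), where=state_totals > 0)

    # 2) Sweep TD(0) over the batch, weighting each update by
    #    pi_eval / pi_hat so that each action contributes in proportion to its
    #    probability under the evaluation policy rather than its frequency in the batch.
    V = np.zeros(num_states)
    for _ in range(num_sweeps):
        for s, a, r, s_next in batch:
            rho = pi_eval[s, a] / pi_hat[s, a]  # PSEC correction weight
            td_error = r + gamma * V[s_next] - V[s]
            V[s] += alpha * rho * td_error
    return V

Setting rho to 1 everywhere recovers ordinary batch TD(0); the correction only changes how strongly each observed action's update counts, not the TD target itself.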
The talk and the paper were published at the ICML 2020 virtual conference.
