Abstract:
In model-based reinforcement learning, planning with an imperfect model of the environment has the potential to harm learning progress.
But even when a model is imperfect, it may still contain information that is useful for planning.
In this paper, we investigate the idea of using an imperfect model selectively.
The agent should plan in parts of the state space where the model would be helpful but refrain from using the model where it would be harmful.
An effective selective planning mechanism requires estimating predictive uncertainty, which arises from both aleatoric uncertainty and epistemic uncertainty.
Prior work has focused on parameter uncertainty, a particular kind of epistemic uncertainty, for selective planning.
In this work, we emphasize the importance of structural uncertainty, a distinct kind of epistemic uncertainty that signals the errors due to limited capacity or a misspecified model class.
We show that heteroscedastic regression, under an isotropic Gaussian assumption, can signal structural uncertainty that is complementary to the uncertainty detected by methods designed for parameter uncertainty. This suggests that accounting for both parameter and structural uncertainty is a more promising direction for effective selective planning than relying on either in isolation.