Abstract:
Motivated by high-stakes decision-making domains like personalized
medicine, where user information is inherently sensitive, we design
privacy-preserving exploration policies for episodic reinforcement
learning (RL). We first provide a meaningful privacy formulation using
the notion of joint differential privacy (JDP), a strong variant of
differential privacy for settings where each user receives their own
set of outputs (e.g., policy recommendations). We then develop a
private optimism-based learning algorithm that simultaneously achieves
strong PAC and regret bounds and enjoys a JDP guarantee. Our
algorithm pays only a moderate privacy cost for exploration: compared
to the non-private bounds, the privacy parameter appears only in
lower-order terms. Finally, we present lower bounds on the sample
complexity and regret of reinforcement learning subject to JDP.