Abstract:
We propose a novel framework for structured bandits, which we call the influence diagram bandit. Our framework captures complex statistical dependencies between actions, latent variables, and observations, and it unifies and extends many existing models, such as combinatorial semi-bandits, cascading bandits, and low-rank bandits. We develop novel online learning algorithms that allow us to act efficiently in our models. The key idea is to track a structured posterior distribution over model parameters, either exactly or approximately. To act, we sample model parameters from their posterior and then use the structure of the influence diagram to find the most optimistic actions under the sampled parameters. We experiment with three structured bandit problems: cascading bandits, online learning to rank in the position-based model, and rank-1 bandits. Our algorithms achieve up to roughly three times higher cumulative reward than the baselines.
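
As a rough illustration of the posterior-sampling idea described in the abstract (sample model parameters from the posterior, then act under the sampled parameters), the sketch below shows a minimal Thompson-sampling loop for independent Bernoulli arms. The arm means, Beta priors, and horizon are hypothetical, and this is a deliberately simplified stand-in for the structured posteriors and influence-diagram algorithms developed in the paper, not the paper's method itself.

    # Minimal, illustrative posterior-sampling loop (not the paper's algorithm).
    # Assumes independent Bernoulli arms with conjugate Beta posteriors.
    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.2, 0.5, 0.7])   # hypothetical arm reward probabilities
    alpha = np.ones(len(true_means))          # Beta posterior parameters (successes + 1)
    beta = np.ones(len(true_means))           # Beta posterior parameters (failures + 1)

    for t in range(1000):
        theta = rng.beta(alpha, beta)         # sample model parameters from the posterior
        a = int(np.argmax(theta))             # act greedily under the sampled parameters
        r = float(rng.random() < true_means[a])  # observe a Bernoulli reward
        alpha[a] += r                         # update the posterior with the observation
        beta[a] += 1.0 - r

In the structured settings studied in the paper, the per-arm Beta posteriors would be replaced by a posterior over the parameters of an influence diagram, and the argmax would be computed by exploiting the diagram's structure rather than by enumerating independent arms.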