22/09/2020

Cascading hybrid bandits: Online learning to rank for relevance and diversity

Chang Li, Haoyun Feng, Maarten de Rijke

Keywords: recommender systems, contextual bandits, online learning to rank, result diversification

Abstract: Relevance ranking and result diversification are two core tasks in modern recommender systems. Relevance ranking aims at building a list sorted in decreasing order of item relevance, while result diversification focuses on generating a ranked list of items that covers a broad range of topics. In this paper, we study an online learning setting that aims to recommend a ranked list of K items that maximizes the ranking utility, i.e., a list whose items are relevant and whose topics are diverse. We formulate it as the cascade hybrid bandits (CHB) problem. CHB assumes the cascading user behavior, where a user browses the displayed list from top to bottom, clicks the first attractive item, and stops browsing the rest. We propose a hybrid contextual bandit approach to solve this problem; it models item relevance and topical diversity using two independent functions and learns both functions simultaneously from user click feedback. We evaluate the proposed approach on two real-world recommendation datasets, MovieLens and Yahoo music. Our experimental results show that it outperforms the baselines. In addition, we prove theoretical guarantees on the n-step performance, demonstrating the soundness of the approach.
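The cascading user behavior assumed by CHB can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it only mimics the click model the abstract describes, under the assumption that each item has an independent attraction probability (the function and variable names are illustrative, not from the paper).

```python
import random

def cascade_click(ranked_list, attraction_prob, rng=random):
    """Simulate the cascade click model: the user scans the ranked
    list from top to bottom, clicks the first attractive item, and
    stops browsing the rest.

    ranked_list: items shown to the user, top to bottom.
    attraction_prob: maps each item to its attraction probability
    (an illustrative assumption; the paper learns such quantities
    from click feedback instead of knowing them).

    Returns the position of the clicked item, or None if the user
    examines all K items without clicking.
    """
    for position, item in enumerate(ranked_list):
        if rng.random() < attraction_prob[item]:
            return position  # click observed; lower items go unexamined
    return None  # no click: every displayed item was examined
```

Under this model, a click at position p is informative about items 1..p (examined) but not about items below p, which is why cascading bandits only update estimates for the examined prefix of the list.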

The talk and the respective paper are published at the RecSys 2020 virtual conference.

