25/07/2020

A reinforcement learning framework for relevance feedback

Ali Montazeralghaem, Hamed Zamani, James Allan

Keywords: reinforcement learning, query language model, relevance feedback model, neural network

Abstract: We present RML, the first known general reinforcement learning framework for relevance feedback that directly optimizes any desired retrieval metric, including precision-oriented, recall-oriented, and even diversity metrics; RML can be easily extended to directly optimize any arbitrary user satisfaction signal. Using the RML framework, we can select effective feedback terms and weight them appropriately, improving on past methods that fit feedback-algorithm parameters using heuristic approaches or that do not directly optimize for retrieval performance. Learning an effective relevance feedback model is not trivial since the true feedback distribution is unknown. Experiments on standard TREC collections compare RML to existing feedback algorithms, demonstrate the effectiveness of RML at optimizing for MAP and α-nDCG, and show the impact on related measures.
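The abstract describes learning to select and weight feedback terms so as to directly optimize a retrieval metric. A minimal sketch of that idea, using a REINFORCE-style policy gradient over Bernoulli "include this term?" decisions, is shown below. The candidate terms, the toy reward function standing in for a metric like MAP, and all constants are illustrative assumptions, not the paper's actual model or data.

```python
import math
import random

random.seed(0)

# Hypothetical toy setup (not from the paper): 5 candidate feedback
# terms, of which terms 0 and 3 are the genuinely useful ones.
NUM_TERMS = 5
USEFUL = {0, 3}

def reward(selected):
    """Toy surrogate for a retrieval metric such as MAP:
    credit for choosing useful terms, a small penalty for noise terms."""
    hits = len(selected & USEFUL)
    noise = len(selected - USEFUL)
    return hits / len(USEFUL) - 0.1 * noise

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Policy: independent Bernoulli decisions per candidate term,
# parameterized by one logit each.
logits = [0.0] * NUM_TERMS
lr, baseline = 0.5, 0.0

for _ in range(2000):
    probs = [sigmoid(l) for l in logits]
    sample = [random.random() < p for p in probs]       # sample a term subset
    r = reward({i for i, s in enumerate(sample) if s})  # metric as reward
    baseline = 0.9 * baseline + 0.1 * r                 # variance-reducing baseline
    # REINFORCE update: grad of log-prob for a Bernoulli is (sample - prob).
    for i in range(NUM_TERMS):
        logits[i] += lr * (r - baseline) * (float(sample[i]) - probs[i])

final_probs = [sigmoid(l) for l in logits]
```

After training, the policy assigns high selection probability to the useful terms and low probability to the noise terms; because the reward can be any scalar, the same loop shape applies whether the metric is precision-, recall-, or diversity-oriented, which is the flexibility the abstract claims.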

The video of this talk cannot be embedded. You can watch it here:
https://dl.acm.org/doi/10.1145/3397271.3401099#sec-supp
The talk and the respective paper were published at the SIGIR 2020 virtual conference.
