19/10/2020

Incorporating user feedback into sequence to sequence model training

Michaeel Kazi, Weiwei Guo, Huiji Gao, Bo Long

Keywords: sequence to sequence, query suggestion, click aware, online inference, learning to rank

Abstract: As the largest professional network, LinkedIn hosts millions of user profiles and job postings. Users find what they need by entering search queries, but doing so can be a challenge, especially if they are unfamiliar with the specific keywords of their industry. Query Suggestion is a popular feature in which a search engine suggests alternate, related queries. At LinkedIn, we have productionized a deep learning Seq2Seq model that transforms an input query into several alternatives. This model is trained on search history: queries typed directly by users. Once online, we can determine whether or not users clicked on the suggested queries. This feedback data indicates which suggestions caught the user's attention. In this work, we propose training a model on both the search history and the user feedback datasets. We examine several ways to incorporate feedback without any architectural change, including adding a novel pairwise ranking loss term during training. The proposed training technique achieves the best combined score among several alternatives on offline metrics. Deployed in the LinkedIn search engine, it significantly outperforms the control model with respect to key business metrics.
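The abstract only outlines the pairwise ranking loss term, so the following is a minimal sketch (in PyTorch) of how click feedback could be combined with a standard seq2seq generation loss. The function names, margin, and weighting factor alpha are illustrative assumptions, not the paper's exact formulation.

import torch.nn.functional as F


def seq2seq_nll(logits, target, pad_id=0):
    """Token-level negative log-likelihood for a generated query suggestion."""
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        target.view(-1),
        ignore_index=pad_id,
    )


def pairwise_ranking_loss(score_clicked, score_skipped, margin=1.0):
    """Hinge loss pushing the clicked suggestion to outscore a skipped one.

    score_clicked / score_skipped are model scores (e.g. length-normalized
    log-probabilities) of a suggestion the user clicked vs. one shown but
    not clicked.
    """
    return F.relu(margin - (score_clicked - score_skipped)).mean()


def combined_loss(logits, target, score_clicked, score_skipped, alpha=0.5):
    """Weighted sum of the generation loss (search history data) and the
    feedback-based ranking loss (click data)."""
    return seq2seq_nll(logits, target) + alpha * pairwise_ranking_loss(
        score_clicked, score_skipped
    )

In this sketch, batches from the search-history dataset contribute only the first term, while batches containing clicked/skipped suggestion pairs also contribute the ranking term, so no change to the Seq2Seq architecture itself is required.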

The video of this talk cannot be embedded. You can watch it here:
https://dl.acm.org/doi/10.1145/3340531.3412714#sec-supp
The talk and the respective paper are published at the CIKM 2020 virtual conference.
