25/07/2020

Improving contextual language models for response retrieval in multi-turn conversation

Junyu Lu, Xiancong Ren, Yazhou Ren, Ao Liu, Zenglin Xu

Keywords: pre-trained language model, augmentation, response retrieval

Abstract: As an important branch of current dialogue systems, retrieval-based chatbots leverage information retrieval to select appropriate predefined responses. Various promising architectures have been designed to boost response retrieval; however, little research has exploited the effectiveness of pre-trained contextual language models. In this paper, we propose two approaches to adapt contextual language models to the dialogue response selection task. Specifically, the Speaker Segmentation approach is designed to discriminate between different speakers in order to fully utilize speaker characteristics. In addition, we propose the Dialogue Augmentation approach, i.e., cutting off real conversations at different time points, to enlarge the training corpora. Compared with previous works that use utterance-level representations, our augmented contextual language models are able to obtain whole-dialogue contextual representations for deeper semantic understanding. Evaluation on three large-scale datasets demonstrates that our proposed approaches yield better performance than existing models.
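The two approaches described in the abstract can be illustrated with a minimal sketch. Assuming a dialogue is a list of utterances with two alternating speakers, Dialogue Augmentation truncates the conversation at every time point to yield additional (context, response) training pairs, and Speaker Segmentation tags each utterance with a speaker identity. The function names `augment_dialogue` and `speaker_segments` are hypothetical; the paper's actual implementation details (tokenization, negative sampling, embedding layers) are not covered here.

```python
def augment_dialogue(turns):
    """Dialogue Augmentation (sketch): cut a real conversation off at each
    time point, producing one (context, response) pair per cut-off, where
    the next real utterance serves as the positive response."""
    pairs = []
    for t in range(1, len(turns)):
        context = turns[:t]    # utterances before the cut-off point
        response = turns[t]    # the next real utterance is the positive response
        pairs.append((context, response))
    return pairs

def speaker_segments(context):
    """Speaker Segmentation (sketch): assign alternating speaker IDs so the
    model can discriminate between the two speakers in a context."""
    return [i % 2 for i in range(len(context))]

# Example: one 4-turn dialogue yields three augmented training pairs.
dialogue = ["Hi!", "Hello, how can I help?", "I need a refund.", "Sure, one moment."]
for context, response in augment_dialogue(dialogue):
    print(speaker_segments(context), context, "->", response)
```

Each augmented pair is a valid retrieval training example in its own right, which is how cutting off conversations enlarges the training corpora without collecting new data.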

The video of this talk cannot be embedded. You can watch it here:
https://dl.acm.org/doi/10.1145/3397271.3401255#sec-supp
Talk and the respective paper are published at the SIGIR 2020 virtual conference.

