08/12/2020

Re-framing Incremental Deep Language Models for Dialogue Processing with Multi-task Learning

Morteza Rohanian, Julian Hough

Keywords:

Abstract: We present a multi-task learning framework that enables the training of one universal incremental dialogue processing model on four tasks: disfluency detection, language modelling, part-of-speech tagging and utterance segmentation, in a simple deep recurrent setting. We show that these tasks provide positive inductive biases to each other, with the optimal contribution of each task depending on the severity of the noise from that task. Our live multi-task model outperforms comparable single-task models, delivers competitive performance and is beneficial for future use in conversational agents in psychiatric treatment.
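
The abstract describes one recurrent model shared across disfluency detection, language modelling, POS tagging and utterance segmentation. As a rough illustration of such a setup (this is not the authors' code; the embedding size, hidden size, label inventories and head names below are hypothetical placeholders), a shared LSTM encoder with four task-specific output layers might look like this:

```python
# Minimal sketch of a multi-task incremental dialogue model: one shared
# recurrent encoder, four task heads. All sizes are illustrative only.
import torch
import torch.nn as nn

class MultiTaskIncrementalLM(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256,
                 n_pos_tags=45, n_disfluency_tags=5, n_boundary_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # A single shared recurrent encoder; all four tasks are trained
        # on top of its hidden states.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # One linear head per task, each reading the same hidden states.
        self.lm_head = nn.Linear(hidden_dim, vocab_size)            # language modelling
        self.pos_head = nn.Linear(hidden_dim, n_pos_tags)           # POS tagging
        self.disfluency_head = nn.Linear(hidden_dim, n_disfluency_tags)  # disfluency detection
        self.segmentation_head = nn.Linear(hidden_dim, n_boundary_tags)  # utterance segmentation

    def forward(self, tokens, state=None):
        # `state` carries the LSTM hidden state between calls, so the model
        # can be fed one word at a time (incremental, word-by-word operation).
        h, state = self.encoder(self.embed(tokens), state)
        outputs = {
            "lm": self.lm_head(h),                      # next-word prediction
            "pos": self.pos_head(h),                    # part-of-speech tags
            "disfluency": self.disfluency_head(h),      # disfluency labels
            "segmentation": self.segmentation_head(h),  # utterance boundaries
        }
        return outputs, state

# Example of incremental use: feed one word, keep the returned state,
# then pass it back in with the next word.
model = MultiTaskIncrementalLM()
token = torch.tensor([[42]])        # a single (hypothetical) word id, batch size 1
outputs, state = model(token)
next_token = torch.tensor([[7]])
outputs, state = model(next_token, state)
```

At training time the four per-task losses would typically be combined as a weighted sum, with each task's weight tuned to how noisy its supervision is; this is one plausible way to realise the task-dependent "optimal contribution" the abstract refers to.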

The video of this talk cannot be embedded. You can watch it here:
https://underline.io/lecture/6393-re-framing-incremental-deep-language-models-for-dialogue-processingwith-multi-task-learning
The talk and the corresponding paper were presented at the COLING 2020 virtual conference.
