08/12/2020

Autoregressive Affective Language Forecasting: A Self-Supervised Task

Matthew Matero, H. Andrew Schwartz


Abstract: Human natural language is produced at specific points in time, while human emotions change over time. While much work has established a strong link between language use and emotional states, few have attempted to model emotional language in time. Here, we introduce the task of <i>affective language forecasting</i> – predicting future change in language based on past changes of language, a task with real-world applications such as mental health treatment or forecasting trends in consumer confidence. We establish some of the fundamental autoregressive characteristics of the task (necessary history size, static versus dynamic length, varying time-step resolutions) and then build on popular sequence models for <i>words</i> to instead model sequences of <i>language-based emotion in time</i>. Over a novel Twitter dataset of 1,900 users with weekly and daily scores for 6 emotions and 2 additional linguistic attributes, we find that a novel dual-sequence GRU model with decayed hidden states achieves the best results (<span class="tex-math">r = .66</span>), significantly out-predicting, e.g., a moving average over past time-steps (<span class="tex-math">r = .49</span>). We make our anonymized dataset as well as task setup and evaluation code available for others to build on.
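To make the baseline concrete, here is a minimal sketch (not the authors' code) of the moving-average forecast mentioned in the abstract: each future emotion score is predicted as the mean of the previous k time-steps, and forecasts are evaluated with Pearson r. The window size, scores, and function names are illustrative assumptions.

```python
# Illustrative sketch of the moving-average baseline for affective
# forecasting (hypothetical data and window size, not the paper's setup).

def moving_average_forecast(series, k=3):
    """Predict series[t] as the mean of the previous k values, for t >= k."""
    return [sum(series[t - k:t]) / k for t in range(k, len(series))]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Toy weekly emotion scores for one user (hypothetical values).
scores = [0.2, 0.3, 0.25, 0.4, 0.35, 0.5, 0.45, 0.6]
preds = moving_average_forecast(scores, k=3)
truth = scores[3:]  # the steps the baseline actually forecasts
print(round(pearson_r(preds, truth), 3))
```

The paper's dual-sequence GRU replaces this fixed averaging with learned, decayed hidden states over the emotion history, which is what lifts r from .49 to .66 on their dataset.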

The video of this talk cannot be embedded. You can watch it here:
https://underline.io/lecture/6192-autoregressive-affective-language-forecasting-a-self-supervised-task
The talk and the accompanying paper were published at the COLING 2020 virtual conference.
