02/02/2021

Knowledge-aware Leap-LSTM: Integrating Prior Knowledge into Leap-LSTM towards Faster Long Text Classification

Jinhua Du, Yan Huang, Karo Moilanen

Keywords:

Abstract: While widely used in industry, recurrent neural networks (RNNs) are known to have deficiencies in dealing with long sequences (e.g. slow inference, vanishing gradients, etc.). Recent research has attempted to accelerate RNN models by developing mechanisms to skip irrelevant words in the input. Due to the lack of labelled data, it remains a challenge to decide which words to skip, especially for low-resource classification tasks. In this paper, we propose Knowledge-Aware Leap-LSTM (KALL), a novel architecture which integrates prior human knowledge (created either manually or automatically), such as in-domain keywords, terminologies or lexicons, into Leap-LSTM to partially supervise the skipping process. More specifically, we propose a knowledge-oriented cost function for KALL; furthermore, we propose two strategies to integrate the knowledge: (1) the Factored KALL approach introduces a keyword indicator as a soft constraint on the skipping process, and (2) the Gated KALL approach enforces the inclusion of keywords while keeping the network differentiable during training. Experiments on different public datasets show that our approaches are 1.1x~2.6x faster than LSTM with better accuracy and 23.6x faster than XLNet in a resource-limited CPU-only environment.
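
To make the two integration strategies concrete, below is a minimal PyTorch sketch of a keyword-aware skipping classifier. It is not the authors' implementation: the module name KeywordAwareSkipLSTM, the keyword_mask input, and the soft-interpolation update are assumptions made purely for illustration. The binary keyword indicator is concatenated into the skip predictor's features (Factored-style) and also used to force keywords to be read while keeping the computation differentiable (Gated-style).

    import torch
    import torch.nn as nn

    class KeywordAwareSkipLSTM(nn.Module):
        """Toy keyword-aware skip-LSTM classifier (illustrative sketch only)."""

        def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.cell = nn.LSTMCell(embed_dim, hidden_dim)
            # Skip predictor sees the current word embedding, the previous
            # hidden state, and a binary keyword indicator (the "factored" signal).
            self.skip_predictor = nn.Linear(embed_dim + hidden_dim + 1, 2)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, token_ids, keyword_mask):
            # token_ids:    (batch, seq_len) integer token ids
            # keyword_mask: (batch, seq_len) 1.0 where the token matches prior knowledge
            batch = token_ids.size(0)
            h = torch.zeros(batch, self.cell.hidden_size, device=token_ids.device)
            c = torch.zeros_like(h)
            for t in range(token_ids.size(1)):
                x_t = self.embed(token_ids[:, t])
                k_t = keyword_mask[:, t].unsqueeze(-1).float()
                logits = self.skip_predictor(torch.cat([x_t, h, k_t], dim=-1))
                keep_prob = torch.softmax(logits, dim=-1)[:, 1:2]  # P(read the word)
                # Gated variant: keywords are always kept; the addition keeps
                # the whole computation differentiable during training.
                keep_prob = torch.clamp(keep_prob + k_t, max=1.0)
                h_new, c_new = self.cell(x_t, (h, c))
                # Soft interpolation stands in for hard skipping at training time.
                h = keep_prob * h_new + (1.0 - keep_prob) * h
                c = keep_prob * c_new + (1.0 - keep_prob) * c
            return self.classifier(h)

In a real Leap-LSTM setting, the soft keep probability would be replaced by hard skip decisions at inference time, so that skipped words incur no recurrent computation; that is where the reported speed-up over a plain LSTM comes from.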

The video of this talk cannot be embedded. You can watch it here:
https://slideslive.com/38949249
The talk and the respective paper are published at the AAAI 2021 virtual conference.
