05/12/2020

Explaining word embeddings via disentangled representation

Keng-Te Liao, Cheng-Syuan Lee, Zhong-Yu Huang, Shou-de Lin

Keywords:

Abstract: Disentangled representations have attracted increasing attention recently. However, it remains unclear how to transfer the desirable properties of disentanglement to word representations. In this work, we propose transforming typical dense word vectors into disentangled embeddings with improved interpretability, achieved by encoding polysemous semantics separately. We also find that the modular structure of our disentangled word embeddings helps generate more efficient and effective features for natural language processing tasks.
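The abstract's core idea can be illustrated with a toy sketch. The following is a hypothetical illustration, not the authors' actual method: it shows what it might look like to map one dense word vector into several sense-specific sub-vectors ("slots"), so that a polysemous word's meanings occupy separate, independently usable parts of the embedding. The projection matrices here are random placeholders standing in for learned parameters.

```python
import numpy as np

# Hypothetical sketch of a disentangled word embedding (illustration only).
# A learned model would train the K projections; here they are random.
rng = np.random.default_rng(0)

dim, k, slot_dim = 300, 4, 50                  # dense dim, sense slots, slot size
dense = rng.normal(size=dim)                   # a typical dense word vector
projections = rng.normal(size=(k, slot_dim, dim)) / np.sqrt(dim)

# Disentangled embedding: one sub-vector per sense, concatenated.
slots = np.stack([P @ dense for P in projections])   # shape (k, slot_dim)
disentangled = slots.reshape(-1)                     # shape (k * slot_dim,)

# Modular use: a downstream task can read only the slot it needs,
# e.g. the first sense's sub-vector, instead of the whole embedding.
sense_0 = disentangled[:slot_dim]
```

Because each slot is a contiguous block, downstream features can be built from a single relevant sense rather than the full vector, which is one way the modular structure described in the abstract could yield smaller, more targeted features.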

The talk and the respective paper are published at the AACL 2020 virtual conference.

