02/02/2021

Self-supervised Bilingual Syntactic Alignment for Neural Machine Translation

Tianfu Zhang, Heyan Huang, Chong Feng, Longbing Cao

Abstract: While various neural machine translation (NMT) methods have integrated monolingual syntactic knowledge into sequence-to-sequence linguistic representations, no prior work aligns the syntactic structures of the target language with the corresponding syntactic structures of the source language. This work presents SyntAligner, the first source-target bilingual syntactic alignment approach, built on self-supervised neural modeling with mutual information maximization. Building on word alignment for NMT, SyntAligner first aligns the syntactic structures of source and target sentences and then maximizes their mutual dependency by introducing a lower bound on their mutual information. In SyntAligner, syntactic structure at span granularity is represented by transforming source or target word hidden states into source or target syntactic span vectors. A border-sensitive span attention mechanism then captures the correlation between the source and target syntactic span vectors, incorporating the self-attention between span border words as an alignment bias. Finally, a self-supervised learning objective based on maximizing bilingual syntactic mutual information dynamically samples aligned syntactic spans and maximizes their mutual dependency. Experimental results on three typical NMT tasks, WMT'14 English-to-German, IWSLT'14 German-to-English, and NC'11 English-to-French, demonstrate the effectiveness and universality of SyntAligner's syntactic alignment.
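The abstract outlines three components: pooling word hidden states into syntactic span vectors, a border-sensitive span attention, and a mutual information lower bound over sampled span pairs. The following minimal PyTorch sketch illustrates how such a pipeline could be wired together; the names (span_vector, BorderSensitiveSpanAttention, infonce_lower_bound) and design details (mean pooling, bilinear scoring, an InfoNCE-style bound) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def span_vector(hidden, start, end, proj):
    # Pool the word hidden states of span [start, end] (hidden: T x d)
    # into one syntactic span vector, keeping the border words explicit.
    pooled = hidden[start:end + 1].mean(dim=0)        # (d,)
    border = torch.cat([hidden[start], hidden[end]])  # (2d,) border words
    return proj(torch.cat([pooled, border]))          # proj: nn.Linear(3d, d)

class BorderSensitiveSpanAttention(nn.Module):
    # Scores candidate source spans against one target span; the bilinear
    # term over border words plays the role of the alignment bias that the
    # abstract attributes to self-attention between span border words.
    def __init__(self, d):
        super().__init__()
        self.content = nn.Bilinear(d, d, 1)
        self.border = nn.Bilinear(2 * d, 2 * d, 1)

    def forward(self, tgt_span, src_spans, tgt_border, src_borders):
        # tgt_span: (d,); src_spans: (k, d); borders: (2d,) and (k, 2d).
        content = self.content(tgt_span.expand_as(src_spans), src_spans)
        border = self.border(tgt_border.expand_as(src_borders), src_borders)
        return F.softmax((content + border).squeeze(-1), dim=0)  # (k,) weights

def infonce_lower_bound(tgt_spans, src_spans, temperature=0.1):
    # InfoNCE-style lower bound on the mutual information between aligned
    # span pairs: row i of each (n, d) tensor is an aligned pair, and the
    # remaining rows in the batch act as negative samples.
    tgt = F.normalize(tgt_spans, dim=-1)
    src = F.normalize(src_spans, dim=-1)
    logits = tgt @ src.t() / temperature                    # (n, n) similarities
    labels = torch.arange(len(tgt), device=logits.device)   # diagonal = positives
    return -F.cross_entropy(logits, labels)                 # maximize this bound

In a full system the span boundaries would come from source and target parses, the aligned pairs would be selected by the span attention rather than assumed, and this bound (which approximates the mutual information up to an additive log n term) would be maximized jointly with the standard NMT training objective.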

The video of this talk cannot be embedded. You can watch it here:
https://slideslive.com/38947800
The talk and the respective paper were published at the AAAI 2021 virtual conference.
