16/11/2020

Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction

Patrick Hohenecker, Frank Mtumbuka, Vid Kocijan, Thomas Lukasiewicz

Keywords: open information extraction, OIE, neural architectures, training approaches

Abstract: The goal of open information extraction (OIE) is to extract facts from natural language text and to represent them as structured triples of the form <subject, predicate, object>. For example, given the sentence "Beethoven composed the Ode to Joy.", we are expected to extract the triple <Beethoven, composed, Ode to Joy>. In this work, we systematically compare different neural network architectures and training approaches, and improve the performance of the currently best models on the OIE16 benchmark (Stanovsky and Dagan, 2016) by 0.421 F1 score and 0.420 AUC-PR, respectively, in our experiments (i.e., by more than 200% in both cases). Furthermore, we show that appropriate problem and loss formulations often affect the performance more than the network architecture.
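To make the target output format concrete, here is a minimal Python sketch of the structured-triple representation described in the abstract, using its Beethoven example. This is illustrative only and not code from the paper; the Triple type is a hypothetical stand-in for whatever representation the authors' models produce.

    from typing import NamedTuple

    class Triple(NamedTuple):
        """An OIE fact in the <subject, predicate, object> form from the abstract."""
        subject: str
        predicate: str
        object: str

    # The running example from the abstract: one fact extracted from one sentence.
    sentence = "Beethoven composed the Ode to Joy."
    fact = Triple(subject="Beethoven", predicate="composed", object="Ode to Joy")
    print(fact)  # Triple(subject='Beethoven', predicate='composed', object='Ode to Joy')

Note that each field is a contiguous span of the input sentence; OIE systems are typically evaluated (e.g., on the OIE16 benchmark cited above) by matching predicted triples of this form against gold-standard extractions.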
