08/12/2020

On the Systematicity of Probing Contextualized Word Representations: The Case of Hypernymy in BERT

Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, Jackie Chi Kit Cheung


Abstract: Contextualized word representations have become a driving force in NLP, motivating widespread interest in understanding their capabilities and the mechanisms by which they operate. Particularly intriguing is their ability to identify and encode conceptual abstractions. Past work has probed BERT representations for this competence, finding that BERT can correctly retrieve noun hypernyms in cloze tasks. In this work, we ask the question: do probing studies shed light on systematic knowledge in BERT representations? As a case study, we examine hypernymy knowledge encoded in BERT representations. In particular, we demonstrate through a simple consistency probe that the ability to correctly retrieve hypernyms in cloze tasks, as used in prior work, does not correspond to systematic knowledge in BERT. Our main conclusion is cautionary: even if BERT demonstrates high probing accuracy for a particular competence, it does not necessarily follow that BERT ‘understands’ a concept, nor that it can be expected to generalize systematically across applicable contexts.
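To make the probing setup concrete, the following is a minimal sketch of cloze-style hypernymy probing with BERT, in the spirit of the consistency probe described above. The model choice (bert-base-uncased), the prompts, and the paraphrase-based comparison are illustrative assumptions, not the authors' exact protocol.

```python
# A minimal sketch of cloze-style hypernymy probing with BERT, using the
# Hugging Face `transformers` fill-mask pipeline. The model, prompts, and
# paraphrase pair below are illustrative assumptions, not the paper's
# exact experimental setup.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Cloze probe: does BERT rank the hypernym "bird" highly for "robin"?
for pred in fill("A robin is a [MASK].", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))

# Consistency check: rephrase the same query. If hypernymy knowledge were
# systematic, the top predictions should largely agree across paraphrases.
for pred in fill("A robin is a type of [MASK].", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```

A systematic representation of hypernymy should yield stable top predictions across such paraphrases; the paper's cautionary point is that high cloze accuracy on one phrasing does not guarantee this stability.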

The video of this talk cannot be embedded. You can watch it here:
https://underline.io/lecture/6663-on-the-systematicity-of-probing-contextualized-word-representations-the-case-of-hypernymy-in-bert
The talk and the paper were presented at the COLING 2020 Workshops virtual conference.

