“You are grounded!”: Latent name artifacts in pre-trained language models

Vered Shwartz, Rachel Rudinger, Oyvind Tafjord

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

39 Scopus citations

Abstract

Pre-trained language models (LMs) may perpetuate biases originating in their training corpus to downstream models. We focus on artifacts associated with the representation of given names (e.g., Donald), which, depending on the corpus, may be associated with specific entities, as indicated by next-token prediction (e.g., Trump). While helpful in some contexts, grounding also occurs in underspecified or inappropriate contexts. For example, endings generated for 'Donald is a' differ substantially from those of other names, and often carry more negative sentiment than average. We demonstrate the potential effect on downstream tasks with reading comprehension probes in which name perturbation changes the model's answers. As a silver lining, our experiments suggest that additional pre-training on different corpora may mitigate this bias.
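The name-perturbation probe the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: `perturb_name`, `probe`, and the toy stand-in model are hypothetical names, and a real probe would plug a pre-trained reading-comprehension model into the `model` callable.

```python
import re

def perturb_name(text: str, old_name: str, new_name: str) -> str:
    """Swap one given name for another, matching whole words only."""
    return re.sub(rf"\b{re.escape(old_name)}\b", new_name, text)

def probe(model, passage: str, question: str, names, base: str):
    """Ask the same question about name-perturbed copies of a passage.

    `model` is any callable (passage, question) -> answer; comparing the
    answers across names reveals whether the prediction is name-sensitive.
    """
    return {name: model(perturb_name(passage, base, name), question)
            for name in names}

# Toy stand-in model: "answers" with the first capitalized word in the
# passage, just to show how a name swap propagates into the answer.
def toy_model(passage, question):
    match = re.search(r"\b[A-Z][a-z]+\b", passage)
    return match.group(0) if match else None

passage = "Donald went to the store. He bought milk."
answers = probe(toy_model, passage, "Who bought milk?",
                ["Donald", "Oliver"], base="Donald")
```

In the paper's setting, an answer that flips when only the name changes signals a latent name artifact in the model.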

Original language: English
Title of host publication: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 6850-6861
Number of pages: 12
ISBN (Electronic): 9781952148606
DOIs
State: Published - 2020
Externally published: Yes
Event: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 - Virtual, Online
Duration: 16 Nov 2020 → 20 Nov 2020

Publication series

Name: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference

Conference

Conference: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
City: Virtual, Online
Period: 16/11/20 → 20/11/20

Bibliographical note

Publisher Copyright:
© 2020 Association for Computational Linguistics

Funding

This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031).

Funders: Funder number
National Science Foundation: IIS-1524371, IIS-1714566
Army Research Office: W911NF-15-1-0543
Defense Advanced Research Projects Agency
Naval Information Warfare Center Pacific: N66001-19-2-4031
