oLMpics - On What Language Model Pre-Training Captures

Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant

Research output: Contribution to journal › Article › peer-review

178 Scopus citations

Abstract

Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data. To address this, we propose an evaluation protocol that includes both zero-shot evaluation (no fine-tuning) and a comparison of the learning curve of a fine-tuned LM to the learning curves of multiple controls, which paints a rich picture of LM capabilities. Our main findings are that: (a) different LMs exhibit qualitatively different reasoning abilities, e.g., RoBERTa succeeds on reasoning tasks where BERT fails completely; (b) LMs do not reason in an abstract manner and are context-dependent, e.g., while RoBERTa can compare ages, it can do so only when the ages are in the typical range of human ages; (c) on half of our reasoning tasks, all models fail completely. Our findings and infrastructure can help future work on designing new datasets, models, and objective functions for pre-training.
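
To illustrate the zero-shot part of the evaluation protocol described above, the following is a minimal sketch (not the authors' released code) of masked-LM probing in the spirit of the age-comparison task. The model checkpoint (roberta-base), the probe template, and the candidate answer words are illustrative assumptions; the score uses only the first sub-token of each candidate, which is a simplification.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # assumption: any masked-LM checkpoint could be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def compare_candidates(template, candidates):
    """Score candidate fillers for the [MASK] slot with no fine-tuning (zero-shot)."""
    text = template.replace("[MASK]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    # position of the mask token in the input sequence
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    scores = {}
    for word in candidates:
        # leading space so RoBERTa's BPE treats the word as sentence-internal
        ids = tokenizer.encode(" " + word, add_special_tokens=False)
        scores[word] = logits[ids[0]].item()  # first sub-token logit as a proxy score
    return scores

# hypothetical probe: the model should prefer "older" here
print(compare_candidates(
    "A 41 year old person is [MASK] than a 24 year old person.",
    ["younger", "older"],
))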

Original language: English
Pages (from-to): 743-758
Number of pages: 16
Journal: Transactions of the Association for Computational Linguistics
Volume: 8
DOIs
State: Published - 2020

Bibliographical note

Publisher Copyright:
© 2020 Association for Computational Linguistics.

Funding

This work was completed in partial fulfillment for the PhD degree of the first author. We thank our colleagues at the Allen Institute for AI, especially Kyle Richardson, Asaf Amrami, Mor Pipek, Myle Ott, Hillel Taub-Tabib, and Reut Tsarfaty. This research was partially supported by The Israel Science Foundation grant 942/16, The Blavatnik Computer Science Research Fund, The Yandex Initiative for Machine Learning, and the European Union's Horizon 2020 research and innovation programme under grant agreements no. 802774 (ERC-iEXTRACT) and no. 802800 (DELPHI).

Funders (funder number):
Blavatnik Computer Science Research Fund
Yandex Initiative for Machine Learning
Horizon 2020 Framework Programme (802774, 802800)
Seventh Framework Programme
Allen Institute
Israel Science Foundation (942/16)
