Unsupervised commonsense question answering with self-talk

Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

163 Scopus citations

Abstract

Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative for multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach queries language models with a number of information-seeking questions such as "what is the definition of..." to discover additional background knowledge. Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines on four out of six commonsense benchmarks, and competes with models that obtain knowledge from external KBs. While our approach improves performance on several benchmarks, the knowledge induced by self-talk, even when it leads to correct answers, is not always judged helpful by humans, raising interesting questions about the inner workings of pretrained language models for commonsense reasoning.
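The abstract outlines a two-step procedure: prompt a language model with information-seeking questions to elicit background "clarifications", then score each answer choice with a clarification prepended to the original context. The following is a minimal sketch of that idea, assuming GPT-2 via the Hugging Face transformers library; the prompt prefixes, sampling settings, and loss-based scoring below are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the self-talk idea (not the authors' code): use one
# pretrained LM both to generate clarifications from information-seeking
# prompts and to score answer choices with that extra context prepended.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Illustrative information-seeking prefixes in the spirit of the paper.
QUESTION_PREFIXES = [
    "What is the definition of",
    "What is the purpose of",
]

def generate_clarification(context: str, prefix: str) -> str:
    """Pose a clarification question and let the LM answer it itself."""
    ids = tokenizer.encode(f"{context} {prefix}", return_tensors="pt")
    out = model.generate(ids, max_new_tokens=30, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

def choice_loss(context: str, choice: str) -> float:
    """Average LM loss of the answer choice given the (augmented) context."""
    ctx_len = len(tokenizer.encode(context))
    input_ids = torch.tensor([tokenizer.encode(context + " " + choice)])
    labels = input_ids.clone()
    labels[0, :ctx_len] = -100  # score only the choice tokens
    with torch.no_grad():
        return model(input_ids, labels=labels).loss.item()

def self_talk_answer(context: str, choices: list[str]) -> str:
    """Pick the choice best supported by any generated clarification."""
    clarifications = [generate_clarification(context, p)
                      for p in QUESTION_PREFIXES]
    def best_loss(choice: str) -> float:
        return min(choice_loss(f"{context} {c}", choice)
                   for c in clarifications)
    return min(choices, key=best_loss)

print(self_talk_answer(
    "Kyle put on sunscreen before going outside.",
    ["It was sunny.", "It was midnight."],
))
```

Taking the minimum loss over clarifications treats a choice as supported if any single generated piece of background knowledge helps it; other aggregation schemes are equally plausible in this sketch.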

Original language: English
Title of host publication: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 4615-4629
Number of pages: 15
ISBN (Electronic): 9781952148606
State: Published - 2020
Externally published: Yes
Event: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020 - Virtual, Online
Duration: 16 Nov 2020 - 20 Nov 2020

Publication series

Name: EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference

Conference

Conference: 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
City: Virtual, Online
Period: 16/11/20 - 20/11/20

Bibliographical note

Publisher Copyright:
© 2020 Association for Computational Linguistics

Funding

This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031).

Funder: Award number
National Science Foundation: IIS-1714566, IIS-1524371
Army Research Office: W911NF-15-1-0543
Defense Advanced Research Projects Agency
Naval Information Warfare Center Pacific: N66001-19-2-4031
