Abstract
The Winograd Schema (WS) has been proposed as a test for measuring the commonsense capabilities of models. Recently, pre-trained language model-based approaches have boosted performance on some WS benchmarks, but the source of the improvement is still not clear. This paper suggests that the apparent progress on WS may not necessarily reflect progress in commonsense reasoning. To support this claim, we first show that the current evaluation method of WS is sub-optimal and propose a modification that uses twin sentences for evaluation. We also propose two new baselines that indicate the existence of artifacts in WS benchmarks. We then develop a method for evaluating WS-like sentences in a zero-shot setting, to account for the commonsense reasoning abilities acquired during pretraining, and observe that popular language models perform randomly in this setting under our stricter evaluation. We conclude that the observed progress is mostly due to the use of supervision in training WS models, which is not likely to successfully support all the required commonsense reasoning skills and knowledge.
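The record itself contains no code, but to make the abstract's two ideas concrete, the sketch below illustrates (a) zero-shot resolution of a WS sentence by comparing language-model scores of the two candidate substitutions, and (b) the stricter twin-based evaluation, where a pair counts as solved only if both twins are resolved correctly. This is a minimal sketch of the general approach, not the authors' implementation: the choice of GPT-2 via Hugging Face `transformers`, the helper names (`sentence_logprob`, `resolve`), and the example sentences are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): zero-shot Winograd scoring
# with a causal LM, plus the stricter paired ("twin") evaluation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text: str) -> float:
    """Total log-probability the LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `out.loss` is the mean NLL over the (seq_len - 1) predicted tokens;
    # negate and rescale to get the summed log-probability.
    # (If candidates differ in length, a mean per token is a common
    # normalization instead of the sum.)
    return -out.loss.item() * (ids.shape[1] - 1)

def resolve(template: str, candidates: tuple[str, str]) -> str:
    """Substitute each candidate into the pronoun slot; pick the likelier sentence."""
    scores = [sentence_logprob(template.format(c)) for c in candidates]
    return candidates[scores.index(max(scores))]

# A twin pair: the sentences differ in one "special" word, which flips the answer.
twins = [
    ("The trophy didn't fit in the suitcase because {} was too big.",
     ("the trophy", "the suitcase"), "the trophy"),
    ("The trophy didn't fit in the suitcase because {} was too small.",
     ("the trophy", "the suitcase"), "the suitcase"),
]

# Paired evaluation: credit the pair only if BOTH twins are resolved correctly.
pair_correct = all(resolve(t, cands) == gold for t, cands, gold in twins)
print("pair solved:", pair_correct)
```

One way to see why the twin criterion is stricter: a model guessing each binary choice independently scores 50% under per-sentence accuracy but only 25% under paired accuracy, so agreement with surface artifacts on a single twin no longer earns credit.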
Original language | English |
---|---|
Title of host publication | EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 10486-10500 |
Number of pages | 15 |
ISBN (Electronic) | 9781955917094 |
State | Published - 2021 |
Event | 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual, Punta Cana, Dominican Republic. Duration: 7 Nov 2021 → 11 Nov 2021 |
Publication series
Name | EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings |
---|---|
Conference
Conference | 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021 |
---|---|
Country/Territory | Dominican Republic |
City | Virtual, Punta Cana |
Period | 7/11/21 → 11/11/21 |
Bibliographical note
Publisher Copyright: © 2021 Association for Computational Linguistics
Funding
We would like to thank Vered Shwartz, Keisuke Sakaguchi, Rotem Dror, Niket Tandon, Vid Kocijan and Ernest Davis for helpful discussions and comments on early versions of this paper. We also thank the anonymous reviewers for their valuable suggestions. Yanai Elazar is grateful to be supported by the PBC fellowship for outstanding PhD candidates in Data Science and the Google PhD fellowship. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT), and from contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA).
Funders | Funder number |
---|---|
European Union's Horizon 2020 research and innovation programme | 802774 |
Defense Advanced Research Projects Agency | FA8750-19-2-1004 |
European Commission | |
Planning and Budgeting Committee of the Council for Higher Education of Israel | |