Abstract
Probing neural models for the ability to perform downstream tasks using their activation patterns is often used to localize which parts of the network specialize in which tasks. However, little work has addressed potential mediating factors in such comparisons. As a test-case mediating factor, we consider the prediction's context length, namely the length of the span whose processing is minimally required to perform the prediction. We show that failing to control for context length may lead to contradictory conclusions about the network's localization patterns, depending on the distribution of the probing dataset. Indeed, when probing BERT with seven tasks, we find that it is possible to obtain 196 different rankings among them by manipulating the distribution of context lengths in the probing dataset. We conclude by presenting best practices for conducting such comparisons in the future.
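To make the methodological point concrete, the following is a minimal sketch (not the authors' implementation) of the kind of setup the abstract describes: a linear probe trained on frozen BERT activations, with accuracy reported separately per context length instead of as a single aggregate that depends on the probing set's length distribution. The toy sentences, the layer choice, and the context-length values are illustrative assumptions, not the paper's seven tasks or data.

```python
# A minimal probing sketch, assuming a toy subject-verb agreement task.
# All examples, labels, and context lengths below are hypothetical.
import torch
from collections import defaultdict
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

# (sentence, word index to probe, label, context length) -- here "context
# length" stands in for the span minimally required for the prediction,
# e.g. the distance from the agreement controller to the verb.
data = [
    # short context: the subject directly precedes the verb
    ("the cat sleeps on the mat", 2, "VBZ", 1),
    ("the cats sleep on the mat", 2, "VBP", 1),
    # longer context: an attractor phrase separates subject and verb
    ("the dog near the old barns barks loudly", 6, "VBZ", 5),
    ("the dogs near the old barn bark loudly", 6, "VBP", 5),
]

def rep(sentence, word_idx, layer=8):
    """Frozen hidden state of the first word-piece of one word."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)
    piece = enc.word_to_tokens(word_idx).start  # map word -> word-piece
    return out.hidden_states[layer][0, piece].numpy()

X = [rep(s, i) for s, i, _, _ in data]
y = [label for _, _, label, _ in data]
# Illustrative only: a real probe would use held-out evaluation data.
probe = LogisticRegression(max_iter=1000).fit(X, y)

# Report probe accuracy per context length, so conclusions do not hinge
# on how lengths happen to be distributed in the probing dataset.
by_len = defaultdict(list)
for (s, i, label, length), x in zip(data, X):
    by_len[length].append(probe.predict([x])[0] == label)
for length, hits in sorted(by_len.items()):
    print(f"context length {length}: accuracy {sum(hits) / len(hits):.2f}")
```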
Original language | English |
---|---|
Title of host publication | NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics |
Subtitle of host publication | Human Language Technologies, Proceedings of the Conference |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 86-93 |
Number of pages | 8 |
ISBN (Electronic) | 9781954085466 |
State | Published - 2021 |
Externally published | Yes |
Event | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 - Virtual, Online |
Duration | 6 Jun 2021 → 11 Jun 2021 |
Publication series
Name | NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference |
---|---|
Conference
Conference | 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021 |
---|---|
City | Virtual, Online |
Period | 6/06/21 → 11/06/21 |
Bibliographical note
Publisher Copyright: © 2021 Association for Computational Linguistics.
Funding
This work was supported by the Israel Science Foundation (grant no. 929/17). We would also like to thank Amir Feder for his very insightful feedback on our paper.
Funders | Funder number |
---|---|
Israel Science Foundation | 929/17 |