Abstract
We introduce ZeroSCROLLS, a zero-shot benchmark for natural language understanding over long texts, which contains only test and small validation sets, without training data. We adapt six tasks from the SCROLLS benchmark, and add four new datasets, including two novel information fusing tasks, such as aggregating the percentage of positive reviews. Using ZeroSCROLLS, we conduct a comprehensive evaluation of both open-source and closed large language models, finding that Claude outperforms ChatGPT, and that GPT-4 achieves the highest average score. However, there is still room for improvement on multiple open challenges in ZeroSCROLLS, such as aggregation tasks, where models struggle to pass the naive baseline. As the state of the art is a moving target, we invite researchers to evaluate their ideas on the live ZeroSCROLLS leaderboard.
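To make the aggregation tasks concrete, below is a minimal sketch of the target computation for the positive-review example from the abstract: the model must read every review in a long input and produce the aggregate percentage, rather than extract a single span. The function name, data layout, and sentiment labels here are illustrative assumptions, not the benchmark's actual code.

```python
def percent_positive(reviews):
    """Percentage of positive reviews, rounded to the nearest integer.

    This is the kind of gold answer an aggregation task expects; the
    model sees only the raw review texts and must infer this number.
    """
    if not reviews:
        return 0
    positive = sum(1 for r in reviews if r["sentiment"] == "positive")
    return round(100 * positive / len(reviews))

# Example: a model reading these four reviews should answer "75".
reviews = [
    {"text": "Loved it!", "sentiment": "positive"},
    {"text": "Great value for the price.", "sentiment": "positive"},
    {"text": "Broke after a week.", "sentiment": "negative"},
    {"text": "Works as advertised.", "sentiment": "positive"},
]
print(percent_positive(reviews))  # 75
```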
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics |
| Subtitle of host publication | EMNLP 2023 |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 7977-7989 |
| Number of pages | 13 |
| ISBN (Electronic) | 9798891760615 |
| State | Published - 2023 |
| Event | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, Singapore, 6 Dec 2023 → 10 Dec 2023 |
Publication series

| Name | Findings of the Association for Computational Linguistics: EMNLP 2023 |
|---|---|
Conference

| Conference | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 |
|---|---|
| Country/Territory | Singapore |
| City | Singapore |
| Period | 6/12/23 → 10/12/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
This research is supported by the Yandex Initiative in Machine Learning. The benchmark is released by Tel Aviv University, where all experiments were conducted.
| Funders | Funder number |
|---|---|
| Yandex Initiative in Machine Learning | |
| Tel Aviv University | |