Abstract
Prompting language models to provide step-by-step answers (e.g., “Chain-of-Thought”) is a prominent approach for complex reasoning tasks, where more accurate reasoning chains typically improve downstream task performance. Recent literature discusses automatic methods that verify reasoning steps in order to evaluate and improve their correctness. However, no fine-grained step-level datasets are available to enable thorough evaluation of such verification methods, hindering progress in this direction. We introduce REVEAL: Reasoning Verification Evaluation, a new dataset to benchmark automatic verifiers of complex Chain-of-Thought reasoning in open-domain question answering settings. REVEAL includes comprehensive labels for the relevance, attribution to evidence passages, and logical correctness of each reasoning step in a language model's answer, across a wide variety of datasets and state-of-the-art language models. The dataset is available at reveal-dataset.github.io.
Original language | English
---|---
Title of host publication | Long Papers
Editors | Lun-Wei Ku, Andre F. T. Martins, Vivek Srikumar
Publisher | Association for Computational Linguistics (ACL)
Pages | 4615-4634
Number of pages | 20
ISBN (Electronic) | 9798891760943
State | Published - 2024
Event | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Bangkok, Thailand, 11 Aug 2024 → 16 Aug 2024
Publication series
Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics
---|---
Volume | 1
ISSN (Print) | 0736-587X
Conference
Conference | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
---|---
Country/Territory | Thailand
City | Bangkok
Period | 11/08/24 → 16/08/24
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.