Abstract
The common practice for assessing automatic evaluation metrics is to measure the correlation between their induced system rankings and those obtained by reliable human evaluation, where a higher correlation indicates a better metric. Yet, an intricate setting arises when an NLP task is evaluated by multiple Quality Criteria (QCs), like for text summarization where prominent criteria include relevance, consistency, fluency and coherence. In this paper, we challenge the soundness of this methodology when multiple QCs are involved, concretely for the summarization case. First, we show that the allegedly best metrics for certain QCs actually do not perform well, failing to detect even drastic summary corruptions with respect to the considered QC. To explain this, we show that some of the high correlations obtained in the multi-QC setup are spurious. Finally, we propose a procedure that may help detect this effect. Overall, our findings highlight the need for further investigating metric evaluation methodologies for the multiple-QC case.
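The meta-evaluation protocol the abstract refers to is typically implemented by averaging each metric's scores per system and correlating those averages with per-system human ratings for a given QC. Below is a minimal illustrative sketch of that protocol, not the paper's own code; the system names and scores are hypothetical, and Kendall's tau is just one common choice of rank correlation.

```python
# Minimal sketch of system-level meta-evaluation: score each system with an
# automatic metric and with human judgments for one quality criterion (QC),
# then correlate the two sets of system-level scores.
from scipy.stats import kendalltau

# Per-system averages over summaries (illustrative, hypothetical numbers).
metric_scores = {"sysA": 0.41, "sysB": 0.37, "sysC": 0.52, "sysD": 0.44}
human_scores = {"sysA": 3.1, "sysB": 2.8, "sysC": 4.2, "sysD": 3.6}  # e.g., relevance ratings

systems = sorted(metric_scores)
tau, p_value = kendalltau(
    [metric_scores[s] for s in systems],
    [human_scores[s] for s in systems],
)
print(f"system-level Kendall tau = {tau:.2f} (p = {p_value:.2f})")
```

A higher tau is conventionally read as evidence that the metric is better for that QC; the paper argues this reading can be misleading when several QCs are evaluated jointly.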
Original language | English |
---|---|
Title of host publication | Findings of the Association for Computational Linguistics |
Subtitle of host publication | EMNLP 2023 |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 13829-13838 |
Number of pages | 10 |
ISBN (Electronic) | 9798891760615 |
DOIs | |
State | Published - 2023 |
Event | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore (6 Dec 2023 → 10 Dec 2023) |
Publication series
Name | Findings of the Association for Computational Linguistics: EMNLP 2023 |
---|---|
Conference
Conference | 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 6/12/23 → 10/12/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
We would like to thank Rotem Dror, Amir Feder and the paper reviewers for their comments, and Alex Fabbri for his support in reconstructing the SummEval results. The work described herein was supported in part by the Katz Fellowship for Excellent PhD Candidates in Natural and Exact Sciences and by the Israel Science Foundation (grant no. 2827/21).
Funders | Funder number |
---|---|
Israel Science Foundation | 2827/21 |