Re-Examining Summarization Evaluation across Multiple Quality Criteria

Ori Ernst, Ori Shapira, Ido Dagan, Ran Levy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

The common practice for assessing automatic evaluation metrics is to measure the correlation between their induced system rankings and those obtained by reliable human evaluation, where a higher correlation indicates a better metric. Yet, an intricate setting arises when an NLP task is evaluated by multiple Quality Criteria (QCs), like for text summarization where prominent criteria include relevance, consistency, fluency and coherence. In this paper, we challenge the soundness of this methodology when multiple QCs are involved, concretely for the summarization case. First, we show that the allegedly best metrics for certain QCs actually do not perform well, failing to detect even drastic summary corruptions with respect to the considered QC. To explain this, we show that some of the high correlations obtained in the multi-QC setup are spurious. Finally, we propose a procedure that may help detect this effect. Overall, our findings highlight the need for further investigating metric evaluation methodologies for the multiple-QC case.
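The system-ranking correlation methodology that the abstract questions can be illustrated with a short sketch. This is not the authors' code; the system names, scores, and the choice of Kendall's tau are hypothetical placeholders meant only to show how a metric's induced ranking is compared against human ratings for one quality criterion.

```python
# Illustrative sketch of system-level correlation per quality criterion (QC).
# Not the paper's implementation; all scores below are hypothetical.
import numpy as np
from scipy.stats import kendalltau

# Hypothetical mean scores per summarization system (arrays aligned by index).
systems = ["sys_A", "sys_B", "sys_C", "sys_D", "sys_E"]

# Human ratings for a single QC (e.g., coherence), averaged over summaries.
human_coherence = np.array([3.1, 3.8, 2.9, 4.2, 3.5])

# Automatic metric scores for the same systems (e.g., some reference-based metric).
metric_scores = np.array([0.31, 0.36, 0.30, 0.40, 0.34])

# System-level Kendall tau compares the two induced rankings:
# a higher tau is conventionally read as a better metric for this QC.
tau, p_value = kendalltau(metric_scores, human_coherence)
print(f"Kendall tau vs. human coherence: {tau:.2f} (p = {p_value:.3f})")
```

As the abstract argues, a high correlation computed this way for one QC can be spurious in the multi-QC setting, for instance when the metric mainly tracks a different, correlated criterion, so such a tau alone does not establish that the metric captures the intended QC.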

Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: EMNLP 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 13829-13838
Number of pages: 10
ISBN (Electronic): 9798891760615
DOIs
State: Published - 2023
Event: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore
Duration: 6 Dec 2023 - 10 Dec 2023

Publication series

Name: Findings of the Association for Computational Linguistics: EMNLP 2023

Conference

Conference: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023
Country/Territory: Singapore
City: Singapore
Period: 6/12/23 - 10/12/23

Bibliographical note

Publisher Copyright:
© 2023 Association for Computational Linguistics.

Funding

We would like to thank Rotem Dror, Amir Feder and the paper reviewers for their comments, and Alex Fabbri for his support in reconstructing the SummEval results. The work described herein was supported in part by the Katz Fellowship for Excellent PhD Candidates in Natural and Exact Sciences and by the Israel Science Foundation (grant no. 2827/21).

Funder: Israel Science Foundation
Funder number: 2827/21
