Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline

Ori Ernst, Ori Shapira, Ramakanth Pasunuru, Michael Lepioshkin, Jacob Goldberger, Mohit Bansal, Ido Dagan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

19 Scopus citations

Abstract

Aligning sentences in a reference summary with their counterparts in source documents was shown as a useful auxiliary summarization task, notably for generating training data for salience detection. Despite its assessed utility, the alignment step was mostly approached with heuristic unsupervised methods, typically ROUGE-based, and was never independently optimized or evaluated. In this paper, we propose establishing summary-source alignment as an explicit task, while introducing two major novelties: (1) applying it at the more accurate proposition span level, and (2) approaching it as a supervised classification task. To that end, we created a novel training dataset for proposition-level alignment, derived automatically from available summarization evaluation data. In addition, we crowdsourced dev and test datasets, enabling model development and proper evaluation. Utilizing these data, we present a supervised proposition alignment baseline model, showing improved alignment-quality over the unsupervised approach.

Original language: English
Title of host publication: CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings
Editors: Arianna Bisazza, Omri Abend
Publisher: Association for Computational Linguistics (ACL)
Pages: 310-322
Number of pages: 13
ISBN (Electronic): 9781955917056
State: Published - 2021
Event: 25th Conference on Computational Natural Language Learning, CoNLL 2021 - Virtual, Online
Duration: 10 Nov 2021 - 11 Nov 2021

Publication series

Name: CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings

Conference

Conference: 25th Conference on Computational Natural Language Learning, CoNLL 2021
City: Virtual, Online
Period: 10/11/21 - 11/11/21

Bibliographical note

Publisher Copyright:
© 2021 Association for Computational Linguistics.

Funding

We thank the anonymous reviewers for their constructive comments. This work was supported in part by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1); by the Israel Science Foundation (grant 1951/17); by a grant from the Israel Ministry of Science and Technology; and by grants from Intel Labs. MB and RP were supported by NSF-CAREER Award 1846185 and a Microsoft PhD Fellowship.

Funders: Funder number
DIP: DA 1600/1-1
German-Israeli Project Cooperation
Intel Labs
NSF-CAREER: 1846185
Microsoft
Deutsche Forschungsgemeinschaft
Israel Science Foundation: 1951/17
Ministry of Science and Technology, Israel
