Abstract
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pretrained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world applications. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine-generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve multilingual summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results on the 45 languages of the XLSum dataset show gains over strong baselines in both automatic and human evaluation. We release our models and human judgements of summaries to foster progress towards more factually consistent multilingual summarization.
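As a concrete illustration of the data-filtering approach mentioned in the abstract, the sketch below scores each (document, summary) training pair with the entailment probability assigned by an off-the-shelf multilingual NLI model and keeps only pairs above a threshold. This is a minimal sketch under assumptions: the model name, the threshold, and the helper functions are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of NLI-based data filtering for summarization training pairs.
# Assumptions: model choice and threshold are illustrative, not the paper's setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "joeddav/xlm-roberta-large-xnli"  # any multilingual NLI model works here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def entailment_score(document: str, summary: str) -> float:
    """Probability that the document (premise) entails the summary (hypothesis)."""
    inputs = tokenizer(document, summary, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # Read the entailment class index from the model config rather than hardcoding it.
    entail_idx = model.config.label2id.get("entailment", 2)
    return probs[entail_idx].item()

def filter_pairs(pairs, threshold=0.5):
    """Keep only (document, summary) pairs whose summary is entailed by its source."""
    return [(doc, summ) for doc, summ in pairs if entailment_score(doc, summ) >= threshold]
```

The same entailment score could, in principle, also serve as the control signal for the controlled-generation variant, e.g. by bucketing scores into prefix tokens at training time; the filtering version above is just the simpler of the two approaches to sketch.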
Field | Value
---|---
Original language | English
Title of host publication | Findings of the Association for Computational Linguistics: ACL 2023
Publisher | Association for Computational Linguistics (ACL)
Pages | 3562-3591
Number of pages | 30
ISBN (Electronic) | 9781959429623
State | Published - 2023
Externally published | Yes
Event | 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Toronto, Canada (Duration: 9 Jul 2023 → 14 Jul 2023)
Publication series
Field | Value
---|---
Name | Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print) | 0736-587X
Conference
Field | Value
---|---
Conference | 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Country/Territory | Canada
City | Toronto
Period | 9 Jul 2023 → 14 Jul 2023
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
Funding
We thank Ankur Parikh, Sebastian Gehrmann, Dipanjan Das, and William Cohen for their feedback on this work. The human rating process was managed by Muqthar Mohammad, Kiranmai Chennuru, Aishwarya Gomatam, Raghava Ram Pamidigantam, and Mahesh Maddinala; without them, this work would not have been possible. Thanks also to Sheila de Guia and Suneet Dhingra for their invaluable support.