Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?

Alon Jacovi, Yoav Goldberg

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review

291 Scopus citations

Abstract

With the growing popularity of deep-learning based NLP models comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this opinion piece we reflect on the current state of interpretability evaluation research. We call for more clearly differentiating between different desired criteria an interpretation should satisfy, and focus on the faithfulness criterion. We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assumptions, providing an explicit form to how faithfulness is “defined” by the community. We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted. Finally, we claim that the current binary definition for faithfulness sets a potentially unrealistic bar for being considered faithful. We call for discarding the binary notion of faithfulness in favor of a more graded one, which we believe will be of greater practical utility.

Original language: English
Title of host publication: ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 4198-4205
Number of pages: 8
ISBN (Electronic): 9781952148255
DOIs
State: Published - 2020
Event: 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 - Virtual, Online, United States
Duration: 5 Jul 2020 to 10 Jul 2020

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020
Country/Territory: United States
City: Virtual, Online
Period: 5/07/20 to 10/07/20

Bibliographical note

Publisher Copyright:
© 2020 Association for Computational Linguistics

Funding

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and for pointing to relevant literature in HCI and IUI.

Funders (funder number):
European Research Council
European Union's Horizon 2020 research and innovation programme
Horizon 2020 Framework Programme
European Commission
Horizon 2020 (802774)
