Aligning faithful interpretations with their social attribution

Alon Jacovi, Yoav Goldberg

Research output: Contribution to journal › Article › peer-review



We find that the requirement that model interpretations be faithful is vague and incomplete. Using interpretation by textual highlights as a case study, we present several failure cases. Borrowing concepts from social science, we identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution). We reformulate faithfulness as an accurate attribution of causality to the model, and introduce the concept of aligned faithfulness: faithful causal chains that are aligned with their expected social behavior. The two steps of causal attribution and social attribution together complete the process of explaining behavior. With this formalization, we characterize various failures of misaligned faithful highlight interpretations, and propose an alternative causal chain to remedy the issues. Finally, we implement highlight explanations of the proposed causal format using contrastive explanations.

Original language: English
Pages (from-to): 294-310
Number of pages: 17
Journal: Transactions of the Association for Computational Linguistics
State: Published - 1 Feb 2021

Bibliographical note

Funding Information:
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation program, grant agreement no. 802774 (iEXTRACT).

Publisher Copyright:
© 2021, MIT Press Journals. All rights reserved.

