We find that the requirement that model interpretations be faithful is vague and incomplete. Taking interpretation by textual highlights as a case study, we present several failure cases. Borrowing concepts from social science, we identify that the problem is a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution). We reformulate faithfulness as an accurate attribution of causality to the model, and introduce the concept of aligned faithfulness: faithful causal chains that are aligned with their expected social behavior. The two steps of causal attribution and social attribution together complete the process of explaining behavior. With this formalization, we characterize various failures of misaligned faithful highlight interpretations and propose an alternative causal chain to remedy the issues. Finally, we implement highlight explanations of the proposed causal format using contrastive explanations.
Number of pages: 17
Journal: Transactions of the Association for Computational Linguistics
State: Published - 1 Feb 2021
Bibliographical note (Funding Information):
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation program, grant agreement no. 802774 (iEXTRACT).
© 2021, MIT Press Journals. All rights reserved.