Human Interpretation of Saliency-based Explanation Over Text

Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

20 Scopus citations

Abstract

While a lot of research in explainable AI focuses on producing effective explanations, less work is devoted to the question of how people understand and interpret the explanation. In this work, we focus on this question through a study of saliency-based explanations over textual data. Feature-attribution explanations of text models aim to communicate which parts of the input text were more influential than others towards the model decision. Many current explanation methods, such as gradient-based or Shapley value-based methods, provide measures of importance which are well-understood mathematically. But how does a person receiving the explanation (the explainee) comprehend it? And does their understanding match what the explanation attempted to communicate? We empirically investigate the effect of various factors of the input, the feature-attribution explanation, and the visualization procedure on laypeople's interpretation of the explanation. We query crowdworkers for their interpretation on tasks in English and German, and fit a generalized additive mixed model (GAMM) to their responses, considering the factors of interest. We find that people often misinterpret the explanations: superficial and unrelated factors, such as word length, influence the explainees' importance assignment despite the explanation communicating importance directly. We then show that some of this distortion can be attenuated: we propose a method to adjust saliencies based on model estimates of over- and under-perception, and explore bar charts as an alternative to heatmap saliency visualization. We find that both approaches can attenuate the distorting effect of specific factors, leading to better-calibrated understanding of the explanation.
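The abstract contrasts heatmap-style saliency rendered over the text with bar charts as an alternative presentation format. As a rough illustration of these two formats only (the sentence, token saliencies, and figure layout below are invented for this sketch and are not the paper's stimuli or its adjustment method), a minimal Python/matplotlib snippet could render the same attribution scores both ways:

```python
# Illustrative sketch only: renders made-up token saliencies as (a) a
# heatmap-style shaded sentence and (b) a bar chart, the two presentation
# formats compared in the paper. Tokens and scores are hypothetical.
import matplotlib.pyplot as plt
from matplotlib import cm

tokens = ["The", "service", "was", "surprisingly", "friendly", "."]
saliency = [0.05, 0.60, 0.10, 0.35, 0.95, 0.02]  # hypothetical attributions in [0, 1]

fig, (ax_text, ax_bar) = plt.subplots(2, 1, figsize=(7, 3))

# (a) Heatmap-style rendering: each token gets a background color
# whose intensity is proportional to its saliency score.
ax_text.set_axis_off()
x = 0.02
for tok, s in zip(tokens, saliency):
    ax_text.text(x, 0.5, tok, fontsize=12,
                 bbox=dict(facecolor=cm.Reds(s), edgecolor="none", pad=3))
    x += 0.035 * len(tok) + 0.05  # crude horizontal spacing
ax_text.set_title("Heatmap-style saliency over text")

# (b) Bar-chart rendering of the same scores, one bar per token.
ax_bar.bar(range(len(tokens)), saliency, color="steelblue")
ax_bar.set_xticks(range(len(tokens)))
ax_bar.set_xticklabels(tokens)
ax_bar.set_ylabel("saliency")
ax_bar.set_title("Bar-chart saliency")

fig.tight_layout()
plt.show()
```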

Original language: English
Title of host publication: Proceedings of 2022 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Publisher: Association for Computing Machinery
Pages: 611-636
Number of pages: 26
ISBN (Electronic): 9781450393522
DOIs
State: Published - 21 Jun 2022
Event: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022 - Virtual, Online, Korea, Republic of
Duration: 21 Jun 2022 - 24 Jun 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
Country/Territory: Korea, Republic of
City: Virtual, Online
Period: 21/06/22 - 24/06/22

Bibliographical note

Publisher Copyright:
© 2022 ACM.

Keywords

  • cognitive bias
  • explainability
  • feature attribution
  • generalized additive mixed model
  • human
  • interpretability
  • perception
  • saliency
  • text
