Better metrics for evaluating explainable artificial intelligence

Avi Rosenfeld

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-review

41 Scopus citations

Abstract

This paper presents objective metrics for quantifying explainable artificial intelligence (XAI). Through an overview of current trends, we show that many explanations are generated post hoc and independently of the agent's logical process, which in turn creates explanations with limited meaning because they lack transparency and fidelity. While user studies are an established basis for evaluating XAI, studies that do not consider objective metrics may have limited meaning and may suffer from confirmation bias, particularly if they use low-fidelity explanations unnecessarily. To avoid this issue, this paper suggests a paradigm shift in evaluating XAI that focuses on metrics quantifying the explanation itself and its appropriateness given the XAI goal. We suggest four such metrics: the performance difference, D, between the explanation's logic and the agent's actual performance; the number of rules, R, output by the explanation; the number of features, F, used to generate that explanation; and the stability, S, of the explanation. We believe that user studies focusing on these metrics in their evaluations are inherently more valid, and that such metrics should be integrated into future XAI research.
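To make the four abstract-level metrics concrete, here is a minimal illustrative sketch (not from the paper itself) of how D, R, F, and S might be computed for a hypothetical rule-based surrogate explanation of a black-box agent. All function names, the rule encoding, and the toy data are assumptions for illustration only.

```python
# Hedged sketch: computing the four metrics from the abstract for an
# assumed rule-based surrogate explanation. A "rule" is modeled here as
# a list of (feature, operator, threshold) conditions; an "explanation"
# is a list of such rules. This encoding is an assumption, not the
# paper's formalism.

def metric_D(agent_preds, explanation_preds):
    """D: performance difference, here the fraction of inputs where the
    explanation's logic disagrees with the agent's actual output."""
    disagreements = sum(a != e for a, e in zip(agent_preds, explanation_preds))
    return disagreements / len(agent_preds)

def metric_R(rules):
    """R: number of rules output by the explanation."""
    return len(rules)

def metric_F(rules):
    """F: number of distinct features referenced across all rules."""
    return len({feat for conditions in rules for feat, _, _ in conditions})

def metric_S(explanations_across_runs):
    """S: stability, here the fraction of repeated runs that reproduce
    the most common explanation (one simple way to operationalize it)."""
    canonical = [tuple(sorted(tuple(r) for r in e)) for e in explanations_across_runs]
    most_common = max(set(canonical), key=canonical.count)
    return canonical.count(most_common) / len(canonical)

# Toy example: two rules over two features.
rules = [
    [("age", ">", 30), ("income", ">", 50000)],
    [("age", "<=", 30)],
]
agent_preds = [1, 0, 1, 1, 0]
explanation_preds = [1, 0, 0, 1, 0]  # surrogate disagrees on one of five inputs

print(metric_D(agent_preds, explanation_preds))  # 0.2
print(metric_R(rules))                           # 2
print(metric_F(rules))                           # 2
runs = [rules, rules, [[("age", ">", 40)]]]      # one unstable run out of three
print(metric_S(runs))
```

Lower D, R, and F and higher S would indicate a more faithful, simpler, and more reliable explanation under this reading of the abstract; the exact operationalization (e.g., how stability is measured across runs) is a design choice left open here.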

Original language: English
Title of host publication: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 45-50
Number of pages: 6
ISBN (Electronic): 9781713832621
State: Published - 2021
Externally published: Yes
Event: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021 - Virtual, Online
Duration: 3 May 2021 - 7 May 2021

Publication series

Name: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 1
ISSN (Print): 1548-8403
ISSN (Electronic): 1558-2914

Conference

Conference: 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021
City: Virtual, Online
Period: 3/05/21 - 7/05/21

Bibliographical note

Publisher Copyright:
© 2021 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

Keywords

  • Explainable artificial intelligence
  • Human-agent systems
  • Interpretable machine learning
  • System evaluation