Explainability in human–agent systems

Avi Rosenfeld, Ariella Richardson

Research output: Contribution to journal › Article › peer-review

150 Scopus citations

Abstract

This paper presents a taxonomy of explainability in human–agent systems. We consider fundamental questions about the Why, Who, What, When and How of explainability. First, we define explainability and its relationship to the related terms of interpretability, transparency, explicitness, and faithfulness. These definitions allow us to answer why explainability is needed in the system, to whom it is geared, and what explanations can be generated to meet this need. We then consider when the user should be presented with this information. Last, we consider how objective and subjective measures can be used to evaluate the entire system. This last question is the most encompassing, as its answer must account for all of the other questions regarding explainability.

Original language: English
Pages (from-to): 673-705
Number of pages: 33
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 33
Issue number: 6
DOIs
State: Published - 1 Nov 2019
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.

Keywords

  • Human–agent systems
  • Machine learning interpretability
  • Machine learning transparency
  • XAI
