Abstract
We introduce and evaluate an eXplainable Goal Recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems. Our model provides human-centered explanations that answer 'why?' and 'why not?' questions. We computationally evaluate the performance of our system over eight different domains. Using a human behavioral study to obtain the ground truth from human annotators, we further show that the XGR model can successfully generate human-like explanations. We then report on a study with 60 participants who observe agents playing the Sokoban game and then receive explanations of the goal recognition output. We investigate the understanding participants gain from the explanations through task prediction, explanation satisfaction, and trust.
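The record does not include the paper's implementation, but the WoE framework it builds on has a standard log-likelihood-ratio form: woe(h : e) = log( P(e | h) / P(e | ¬h) ), where positive values mean the evidence favours hypothesis h. The sketch below applies that form to a goal recognition setting; it is a minimal illustration, not the paper's method, and all names (`weight_of_evidence`, the goal labels, the probabilities) are hypothetical.

```python
import math

def weight_of_evidence(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Weight of Evidence in the standard log-likelihood-ratio form:
    woe(h : e) = log( P(e | h) / P(e | not h) ).
    Positive values mean the evidence e favours hypothesis h."""
    return math.log(p_e_given_h / p_e_given_not_h)

# Hypothetical goal-recognition scenario: three candidate goals and the
# likelihood of the observed action sequence under each one.
likelihoods = {"goal_A": 0.6, "goal_B": 0.3, "goal_C": 0.1}
priors = {"goal_A": 1 / 3, "goal_B": 1 / 3, "goal_C": 1 / 3}

for goal, p_e_h in likelihoods.items():
    # P(e | not h): likelihood of the evidence under the alternative goals,
    # weighted by their renormalised priors.
    rest = {g: p for g, p in priors.items() if g != goal}
    z = sum(rest.values())
    p_e_not_h = sum(likelihoods[g] * p / z for g, p in rest.items())
    print(f"{goal}: WoE = {weight_of_evidence(p_e_h, p_e_not_h):+.3f}")
```

Ranking goals by WoE in this way supports 'why?' answers (evidence weighing in favour of the recognized goal) and 'why not?' answers (evidence weighing against an alternative), which is the style of question the abstract describes.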
Original language | English |
---|---|
Pages (from-to) | 7-16 |
Number of pages | 10 |
Journal | Proceedings of the International Conference on Automated Planning and Scheduling, ICAPS |
Volume | 33 |
Issue number | 1 |
DOIs | |
State | Published - 2023 |
Externally published | Yes |
Event | 33rd International Conference on Automated Planning and Scheduling, ICAPS 2023, Prague, Czech Republic, 8–13 Jul 2023 |
Bibliographical note
Publisher Copyright: © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.