Designing and implementing explainable systems is seen as the next step towards increasing user trust in, acceptance of, and reliance on Artificial Intelligence (AI) systems. While explaining the choices made by black-box algorithms such as machine learning and deep learning models has occupied most of the limelight, systems that attempt to explain decisions (even simple ones) in the context of social choice are steadily catching up. In this paper, we provide a comprehensive survey of explainability in mechanism design, a domain characterized by economically motivated agents in which there is often no single choice that maximizes every individual utility function. We discuss the main properties and goals of explainability in mechanism design, distinguishing them from those of Explainable AI in general. We follow this discussion with a thorough review of the challenges one may face when working on Explainable Mechanism Design, and we propose a few solution concepts to address them.
|Title of host publication
|Multi-Agent Systems - 19th European Conference, EUMAS 2022, Proceedings
|Editors
|Dorothea Baumeister, Jörg Rothe
|Publisher
|Springer Science and Business Media Deutschland GmbH
|Published - 2022
|19th European Conference on Multi-Agent Systems, EUMAS 2022 - Düsseldorf, Germany
Duration: 14 Sep 2022 → 16 Sep 2022
|Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
|Bibliographical note
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
- Mechanism design