Explaining Decisions of Agents in Mixed-Motive Games

Maayan Orner, Oleg Maksimov, Akiva Kleinerman, Charles Ortiz, Sarit Kraus

Research output: Other contribution

Abstract

In recent years, agents have become capable of communicating seamlessly via natural language and of navigating environments that involve both cooperation and competition, which can give rise to social dilemmas. Because cooperation and competition are interleaved, understanding agents' decision-making in such environments is challenging, and humans can benefit from explanations. However, such environments and scenarios have rarely been explored in the context of explainable AI. While some explanation methods designed for cooperative environments can be applied in mixed-motive setups, they do not address inter-agent competition, cheap talk, or implicit communication through actions. In this work, we design explanation methods that address these issues. We then demonstrate their effectiveness and usefulness for humans, using a non-trivial mixed-motive game as a test case. Lastly, we establish the generality of the methods by applying them to other games, including one in which human game actions are mimicked using large language models.
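For readers outside game theory, a minimal sketch may help clarify what a mixed-motive game and its social dilemma look like. The Python snippet below is purely illustrative and is not taken from the paper: it encodes a prisoner's-dilemma-style payoff matrix (the action names and payoff values are hypothetical assumptions) and shows that defection is each agent's best response to any opponent action, even though mutual cooperation pays more.

```python
# Illustrative sketch (not from the paper): a two-player mixed-motive
# game in normal form. Payoff values are hypothetical, chosen so that
# mutual cooperation beats mutual defection, yet each agent is
# individually tempted to defect -- the classic social dilemma.

# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

ACTIONS = ["cooperate", "defect"]

def best_response(opponent_action: str) -> str:
    """Return the row player's payoff-maximizing reply to a fixed opponent action."""
    return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

if __name__ == "__main__":
    # Defection is the best response to either opponent action (a dominant
    # strategy), yet (defect, defect) pays less than (cooperate, cooperate).
    for opp in ACTIONS:
        print(f"best response to {opp!r}: {best_response(opp)!r}")
    print("mutual cooperation:", PAYOFFS[("cooperate", "cooperate")])
    print("mutual defection:  ", PAYOFFS[("defect", "defect")])
```

In such settings an agent's choice reflects cooperative and competitive incentives at once, which is why explanation methods built for purely cooperative environments fall short, as the abstract notes.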
Original language: English
Publisher: Cornell University Library, arXiv.org
Volume: abs/2407.15255
DOIs:
State: Published - 2024

Bibliographical note

DBLP's bibliographic metadata records provided through http://dblp.org/search/publ/api are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
