Abstract
As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such ad hoc team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This article defines two aspects of collaboration in two-player teams, involving either simultaneous or sequential decision making. In both cases, the ad hoc agent is more knowledgeable about the environment and attempts to influence the behavior of its teammate so that the team attains the best possible joint utility.
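The simultaneous-decision setting can be illustrated with a minimal sketch. The Python snippet below is not from the article; the payoff matrix, the assumption that the teammate best-responds to the ad hoc agent's previous action, and the function names are all illustrative. It shows how a more knowledgeable agent might plan a short action sequence that "leads" such a teammate toward a high-payoff joint action, even when the first steps sacrifice immediate payoff.

```python
import itertools

import numpy as np

# Hypothetical joint payoff matrix (illustrative, not from the article):
# rows index the ad hoc agent's actions, columns the teammate's actions,
# and each entry is the shared team payoff.
M = np.array([
    [25,  1,  0],
    [10, 30, 10],
    [ 0, 33, 40],
])


def best_response(prev_agent_action):
    """Assumed teammate model: best-respond to the agent's previous action."""
    return int(np.argmax(M[prev_agent_action]))


def best_leading_sequence(horizon, start_action=0):
    """Brute-force search over the ad hoc agent's action sequences,
    returning the one with the highest cumulative joint payoff."""
    n_actions = M.shape[0]
    best_seq, best_total = None, float("-inf")
    for seq in itertools.product(range(n_actions), repeat=horizon):
        prev, total = start_action, 0
        for a in seq:
            # The teammate reacts to the agent's previous action, not the current one.
            total += M[a, best_response(prev)]
            prev = a
        if total > best_total:
            best_seq, best_total = seq, total
    return best_seq, best_total


if __name__ == "__main__":
    seq, total = best_leading_sequence(horizon=4)
    print("leading sequence:", seq, "cumulative joint payoff:", total)
    # For this matrix, repeatedly playing the myopically safe action 0 yields
    # 4 * 25 = 100, while leading the teammate toward action 2 yields more,
    # illustrating the short-term cost / long-term gain trade-off.
```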
| Original language | English |
| --- | --- |
| Pages (from-to) | 35-65 |
| Number of pages | 31 |
| Journal | Artificial Intelligence |
| Volume | 203 |
| DOIs | |
| State | Published - 2013 |
Bibliographical note
Funding Information: Thanks to Michael Littman and Jeremy Stober for helpful comments pertaining to Section 2. Thanks to Yonatan Aumann, Vincent Conitzer, Reshef Meir, Daniel Stronger, and Leonid Trainer for helpful comments pertaining to Section 3. Thanks also to the UT Austin Learning Agents Research Group (LARG) for useful comments and suggestions. This work was partially supported by grants from NSF (IIS-0917122, IIS-0705587), DARPA (FA8650-08-C-7812), ONR (N00014-09-1-0658), FHWA (DTFH61-07-H-00030), Army Research Lab (W911NF-08-1-0144), ISF (1357/07, 898/05), Israel Ministry of Science and Technology (3-6797), ERC (#267523), MURI (W911NF-08-1-0144), and the Fulbright and Guggenheim Foundations.
Keywords
- Autonomous agents
- Game theory
- Multiagent systems
- Teamwork
- k-armed bandits