Abstract
Increasingly, multi-agent systems are being designed for a variety of complex, dynamic domains. Effective agent interactions in such domains raise some of the most fundamental research challenges for agent-based systems, in teamwork, multi-agent learning and agent modelling. The RoboCup research initiative, particularly the simulation league, was proposed to pursue such multi-agent research challenges, using the common testbed of simulation soccer. Despite the significant popularity of RoboCup within the research community, general lessons have not often been extracted from participation in RoboCup. This is what we attempt to do here. We have fielded two teams, ISIS97 and ISIS98, in RoboCup competitions; both placed in the top four in these competitions. We compare the teams, and attempt to analyze and generalize the lessons learned. This analysis reveals several surprises, pointing out lessons for teamwork and for multi-agent learning.
| Original language | English |
|---|---|
| Pages (from-to) | 115-129 |
| Number of pages | 15 |
| Journal | Autonomous Agents and Multi-Agent Systems |
| Volume | 4 |
| Issue number | 1-2 |
| DOIs | |
| State | Published - 2001 |
| Externally published | Yes |
Bibliographical note
Funding Information: This research is supported in part by NSF grant IRI-9711665, and in part by a generous gift from the Intel Corporation.
Funding
| Funders | Funder number |
|---|---|
| National Science Foundation | IRI-9711665 |
| Intel Corporation | |
Keywords
- Agent learning
- Multi-agents
- RoboCup soccer
- Teamwork