This paper studies how automated agents can persuade humans to behave in certain ways. The motivation for such an agent's behavior lies in the utility function that the agent's designer wants to maximize, which may differ from the user's utility function. Specifically, in the strategic settings studied, the agent provides correct yet partial information about a state of the world that is unknown to the user but relevant to the user's decision. Persuasion games were designed to study interactions between automated players in which one player sends state information to the other to persuade it to behave in a certain way. We show that this game-theoretic model is not sufficient to model human-agent interactions, since people tend to deviate from the rational choice. We use machine learning to model how people deviate from this game-theoretic model. The agent generates a probabilistic description of the world state that maximizes its benefit and presents it to the users. The proposed model was evaluated in an extensive empirical study involving road-selection tasks that differ in length, cost, and congestion. Results showed that people's behavior indeed deviated significantly from the behavior predicted by the game-theoretic model. Moreover, an agent based on our model outperformed an agent that followed the behavior dictated by the game-theoretic model.
Title of host publication: Proceedings of the 25th AAAI Conference on Artificial Intelligence, AAAI 2011
Number of pages: 7
State: Published - 11 Aug 2011
Event: 25th AAAI Conference on Artificial Intelligence, AAAI 2011 - San Francisco, United States
Duration: 7 Aug 2011 → 11 Aug 2011
Bibliographical note: Publisher Copyright © 2011, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.