TY - JOUR
T1 - Modeling human decision making in cliff-edge environments
AU - Katz, Ron
AU - Kraus, Sarit
PY - 2006/11/13
Y1 - 2006/11/13
AB - In this paper we propose a model for human learning and decision making in environments of repeated Cliff-Edge (CE) interactions. In CE environments, which include common daily interactions such as sealed-bid auctions and the Ultimatum Game (UG), the probability of success decreases monotonically as the expected reward increases. Thus, CE environments are characterized by an underlying conflict between the drive to maximize profits and the fear of causing the entire deal to fall through. We focus on the behavior of people who repeatedly compete in one-shot CE interactions, with a different opponent in each interaction. Our model, which is based upon the Deviated Virtual Reinforcement Learning (DVRL) algorithm, integrates Learning Direction Theory with the Reinforcement Learning algorithm. We also examine several other models, using an innovative methodology in which the decision dynamics of the models were compared with the empirical decision patterns of individuals during their interactions. An analysis of human behavior in auctions and in the UG reveals that our model fits the decision patterns of far more subjects than any other model. Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
UR - http://www.scopus.com/inward/record.url?scp=33750718805&partnerID=8YFLogxK
M3 - Article
VL - 1
JO - Proceedings of the National Conference on Artificial Intelligence
JF - Proceedings of the National Conference on Artificial Intelligence
ER -