Abstract
Monte-Carlo Tree Search (MCTS) algorithms estimate the value of MDP states from the rewards received over multiple random simulations. MCTS algorithms can use different strategies to aggregate these rewards into an estimate of a state's value. The most common aggregation method stores the mean reward of all simulations; another common approach stores the best reward observed from each state. These two methods have complementary benefits and drawbacks. In this paper, we show that both are biased estimators of the true expected value of MDP states. We propose a hybrid approach that uses the best reward for states with low noise, and otherwise uses the mean. Experimental results on the Sailing MDP domain show that our method has a considerable advantage when rewards are drawn from a noisy distribution.
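The hybrid aggregation idea from the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function name, the use of sample standard deviation as the noise measure, and the threshold value are all assumptions.

```python
import statistics

def hybrid_value(rewards, noise_threshold=1.0):
    """Aggregate simulation rewards into a state-value estimate.

    Illustrative sketch of the hybrid idea: when the observed rewards
    show low noise (sample standard deviation below noise_threshold),
    return the best observed reward; otherwise fall back to the mean.
    The noise measure and threshold are assumptions, not from the paper.
    """
    if not rewards:
        return 0.0
    if len(rewards) < 2:
        # Not enough samples to estimate noise; the single sample is both
        # the best and the mean reward.
        return rewards[0]
    if statistics.stdev(rewards) < noise_threshold:
        return max(rewards)       # low-noise state: trust the best reward
    return statistics.mean(rewards)  # noisy state: the mean is more robust
```

For example, `hybrid_value([5.0, 5.1, 4.9])` returns the best reward (the samples are tightly clustered), while `hybrid_value([0.0, 10.0, 5.0])` returns the mean (the samples are too noisy for the maximum to be trustworthy).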
Original language | English |
---|---|
Title of host publication | Proceedings of the 8th Annual Symposium on Combinatorial Search, SoCS 2015 |
Editors | Levi Lelis, Roni Stern |
Publisher | Association for the Advancement of Artificial Intelligence |
Pages | 156-160 |
Number of pages | 5 |
ISBN (Electronic) | 9781577357322 |
DOIs | |
State | Published - 2015 |
Externally published | Yes |
Event | 8th Annual Symposium on Combinatorial Search, SoCS 2015 - Ein Gedi, Israel Duration: 11 Jun 2015 → 13 Jun 2015 |
Publication series
Name | Proceedings of the 8th Annual Symposium on Combinatorial Search, SoCS 2015 |
---|---|
Volume | 2015-January |
Conference
Conference | 8th Annual Symposium on Combinatorial Search, SoCS 2015 |
---|---|
Country/Territory | Israel |
City | Ein Gedi |
Period | 11/06/15 → 13/06/15 |
Bibliographical note
Publisher Copyright: Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Funding
The research was supported by Israel Science Foundation (ISF) under grant #417/13 to Ariel Felner.
Funders | Funder number |
---|---|
Israel Science Foundation | 417/13 |