TY - JOUR
T1 - Learning in Restless Bandits Under Exogenous Global Markov Process
AU - Gafni, Tomer
AU - Yemini, Michal
AU - Cohen, Kobi
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - We consider an extension of the restless multi-armed bandit (RMAB) problem with unknown arm dynamics, in which an unknown exogenous global Markov process governs the reward distribution of each arm. Under each global state, the reward process of each arm evolves according to an unknown Markovian rule that differs across arms. At each time, a player chooses one of $N$ arms to play and receives a random reward drawn from a finite set of reward states. The arms are restless; that is, the local state of each arm evolves regardless of the player's actions. Motivated by recent studies on related RMAB settings, the regret is defined as the reward loss relative to a player that knows the dynamics of the problem and plays, at each time $t$, the arm that maximizes the expected immediate value. The objective is to develop an arm-selection policy that minimizes the regret. To that end, we develop the Learning under Exogenous Markov Process (LEMP) algorithm. We analyze LEMP theoretically and establish a finite-sample bound on the regret, showing that the regret of LEMP grows logarithmically with time. We further analyze LEMP numerically and present simulation results that support the theoretical findings and demonstrate that LEMP significantly outperforms alternative algorithms.
KW - Markov processes
KW - restless multi-armed bandit
KW - sequential decision making
KW - sequential learning
UR - http://www.scopus.com/inward/record.url?scp=85144086358&partnerID=8YFLogxK
U2 - 10.1109/TSP.2022.3224790
DO - 10.1109/TSP.2022.3224790
M3 - Article
AN - SCOPUS:85144086358
SN - 1053-587X
VL - 70
SP - 5679
EP - 5693
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -