We present the problem of reinforcement learning with exogenous termination. We define the Termination Markov Decision Process (TerMDP), an extension of the MDP framework, in which episodes may be interrupted by an external non-Markovian observer. This formulation accounts for numerous real-world situations, such as a human interrupting an autonomous driving agent for reasons of discomfort. We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds. We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret. Motivated by our theoretical analysis, we design and implement a scalable approach, which combines optimism (w.r.t. termination) and a dynamic discount factor, incorporating the termination probability. We deploy our method on high-dimensional driving and MinAtar benchmarks. Additionally, we test our approach on human data in a driving setting. Our results demonstrate fast convergence and significant improvement over various baseline approaches.
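The "dynamic discount factor, incorporating the termination probability" mentioned above can be sketched as scaling the per-step discount by the probability that the external observer has not yet terminated the episode. This is a hypothetical illustration under that assumption, not the paper's exact estimator; the function name and inputs are invented for the example.

```python
def terminated_discounted_return(rewards, term_probs, gamma=0.99):
    """Discounted return where each step's discount is additionally scaled
    by the per-step survival probability (1 - termination probability).

    Hypothetical sketch of a survival-adjusted dynamic discount; the exact
    formulation in the paper may differ.
    """
    ret, discount = 0.0, 1.0
    for r, p in zip(rewards, term_probs):
        ret += discount * r
        discount *= gamma * (1.0 - p)  # survival-adjusted discount
    return ret

# Usage: three unit rewards with a constant 0.1 termination probability per step.
value = terminated_discounted_return([1.0, 1.0, 1.0], [0.1, 0.1, 0.1])
print(value)
```

A higher termination probability shrinks the effective horizon, so the agent is pushed to collect reward before the observer is likely to intervene.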
Title of host publication: Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Editors: S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh
Publisher: Neural information processing systems foundation
Published: 2022
Event: 36th Conference on Neural Information Processing Systems, NeurIPS 2022 - New Orleans, United States
Duration: 28 Nov 2022 → 9 Dec 2022
Bibliographical note: Publisher Copyright © 2022 Neural information processing systems foundation. All rights reserved.