Abstract
Most approaches to goal recognition rely on specifications of the possible dynamics of the actor in the environment when pursuing a goal. These specifications suffer from two key issues. First, encoding these dynamics requires careful design by a domain expert, which is often not robust to noise at recognition time. Second, existing approaches often need costly real-time computations to reason about the likelihood of each potential goal. In this paper, we develop a framework that combines model-free reinforcement learning and goal recognition to alleviate the need for careful, manual domain design and for costly online executions. This framework consists of two main stages: offline learning of policies or utility functions for each potential goal, and online inference. We provide a first instance of this framework using tabular Q-learning for the learning stage, as well as three mechanisms for the inference stage. The resulting instantiation achieves state-of-the-art performance compared with existing goal recognizers on standard evaluation domains, and superior performance in noisy environments.
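The two-stage idea in the abstract can be sketched in a few lines. The code below is an illustrative assumption, not the paper's actual implementation: the toy corridor environment, the function names, and the choice of a Boltzmann (softmax) likelihood as the inference mechanism are all made up here to show the shape of the approach — one Q-table is learned offline per candidate goal, and at recognition time each goal is scored by how likely the observed state–action pairs are under that goal's policy.

```python
import numpy as np

def q_learning(n_states, n_actions, step, goal, episodes=500,
               horizon=50, alpha=0.1, gamma=0.9, seed=0):
    """Offline stage: tabular Q-learning for one candidate goal.

    Uses a uniform-random behavior policy; Q-learning is off-policy,
    so the greedy policy w.r.t. Q still approaches the optimal one.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = int(rng.integers(n_states))
        for _ in range(horizon):
            a = int(rng.integers(n_actions))
            s2 = step(s, a)
            r = 1.0 if s2 == goal else 0.0        # sparse goal reward
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if s == goal:
                break
    return Q

def goal_posterior(q_tables, observations, beta=5.0):
    """Online stage: score each goal by the log-likelihood of the
    observed (state, action) pairs under a softmax policy derived
    from that goal's Q-table, then normalize over goals."""
    logp = np.zeros(len(q_tables))
    for g, Q in enumerate(q_tables):
        for s, a in observations:
            z = beta * Q[s]
            z = z - z.max()                        # numerical stability
            logp[g] += z[a] - np.log(np.exp(z).sum())
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Toy corridor: 7 states, actions 0=left / 1=right, candidate goals 0 and 6.
N = 7
def corridor_step(s, a):
    return max(0, min(N - 1, s + (1 if a == 1 else -1)))

q_tables = [q_learning(N, 2, corridor_step, g, seed=g) for g in (0, N - 1)]
obs = [(3, 1), (4, 1), (5, 1)]                     # actor observed moving right
posterior = goal_posterior(q_tables, obs)
```

With these rightward observations, the posterior should concentrate on the right-hand goal; note that all the expensive work (the Q-learning loops) happens offline, while the online step is just table lookups and a softmax per observation.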
Original language | English |
---|---|
Title of host publication | AAAI-22 Technical Tracks 9 |
Publisher | Association for the Advancement of Artificial Intelligence |
Pages | 9644-9651 |
Number of pages | 8 |
ISBN (Electronic) | 1577358767, 9781577358763 |
State | Published - 30 Jun 2022 |
Event | 36th AAAI Conference on Artificial Intelligence, AAAI 2022 - Virtual, Online. Duration: 22 Feb 2022 → 1 Mar 2022 |
Publication series
Name | Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 |
---|---|
Volume | 36 |
Conference
Conference | 36th AAAI Conference on Artificial Intelligence, AAAI 2022 |
---|---|
City | Virtual, Online |
Period | 22/02/22 → 1/03/22 |
Bibliographical note
Publisher Copyright: Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.