Multi Task Inverse Reinforcement Learning for Common Sense Reward

Neta Glazer, Aviv Navon, Aviv Shamsian, Ethan Fetaya

Research output: Working paper / Preprint


Abstract

One of the challenges in applying reinforcement learning in a complex real-world environment is providing the agent with a sufficiently detailed reward function. Any misalignment between the reward and the desired behavior can result in unwanted outcomes, such as "reward hacking", where the agent maximizes the reward through unintended behavior. In this work, we propose to disentangle the reward into two distinct parts: a simple task-specific reward, outlining the particulars of the task at hand, and an unknown common-sense reward, indicating the expected behavior of the agent within the environment. We then explore how this common-sense reward can be learned from expert demonstrations. We first show that inverse reinforcement learning, even when it succeeds in training an agent, does not learn a useful reward function. That is, training a new agent with the learned reward does not yield the desired behaviors. We then demonstrate that this problem can be solved by training simultaneously on multiple tasks. That is, multi-task inverse reinforcement learning can be applied to learn a useful reward function.
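The reward decomposition described in the abstract can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: the function names, the linear stand-in for the learned common-sense reward, and the `beta` weighting coefficient are all assumptions introduced here for clarity.

```python
import numpy as np

# Stand-in for the learned common-sense reward r_cs(s). In the paper this
# term would be learned from expert demonstrations via multi-task inverse
# reinforcement learning; here it is a fixed random linear model over
# state features, purely for illustration.
rng = np.random.default_rng(0)
w_cs = rng.normal(size=4)

def common_sense_reward(state):
    """Hypothetical learned common-sense reward r_cs(s)."""
    return float(w_cs @ state)

def task_reward(state, goal):
    """Simple task-specific reward: negative distance to the goal."""
    return -float(np.linalg.norm(state - goal))

def total_reward(state, goal, beta=1.0):
    """Disentangled reward: task-specific term plus a weighted
    common-sense term, r(s) = r_task(s) + beta * r_cs(s)."""
    return task_reward(state, goal) + beta * common_sense_reward(state)

state = np.zeros(4)
goal = np.ones(4)
print(total_reward(state, goal))
```

Under this decomposition, only the task-specific term changes across tasks, which is what lets the common-sense term be shared and learned jointly from demonstrations on multiple tasks.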
Original language: American English
State: Published - 17 Feb 2024

Keywords

  • cs.LG

