Reinforcement learning for multi-goal robot manipulation tasks is challenging, especially when only sparse rewards are provided: millions of samples are often required before a stable policy is learned. Recent algorithms such as Hindsight Experience Replay (HER) greatly accelerate learning by replacing the original desired goal with one of the achieved points (substitute goals) along the same trajectory. However, HER selects past experience naively: both the trajectory selection and the substitute-goal sampling are completely random. In this paper, we discuss an experience prioritization strategy for HER that improves learning efficiency. We propose the Goal Density-based hindsight experience Prioritization (GDP) method, which exploits the density distribution of the achieved points and prioritizes achieved points that are rarely seen in the replay buffer; these points are then used as substitute goals for HER. In addition, we propose a Prioritization Switching with Ensembling Strategy (PSES) method that switches between experience prioritization algorithms during learning, selecting the best-performing one at each learning stage. We evaluate our method on several OpenAI Gym robotic manipulation tasks. The results show that GDP accelerates learning in most tasks and can be further improved when combined with other prioritization methods via PSES.
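The core idea of density-based prioritization can be illustrated with a minimal sketch: estimate how densely populated each region of goal space is, then sample substitute goals for HER relabeling with probability inversely proportional to that density. The histogram-based density estimate and the `bin_size` parameter below are illustrative simplifications, not the paper's exact estimator.

```python
import numpy as np

def goal_density_priorities(achieved_goals, bin_size=0.1):
    """Toy sketch of density-based prioritization: achieved goals that
    fall in sparsely populated bins of goal space get higher priority.
    (Hypothetical simplification; GDP's exact density model may differ.)"""
    # Discretize goal space into bins and count how many goals land in each.
    keys = [tuple(np.floor(np.asarray(g) / bin_size).astype(int))
            for g in achieved_goals]
    counts = {}
    for k in keys:
        counts[k] = counts.get(k, 0) + 1
    # Local density = occupancy of the goal's bin; priority = inverse density.
    densities = np.array([counts[k] for k in keys], dtype=float)
    priorities = 1.0 / densities
    return priorities / priorities.sum()

def sample_substitute_goals(achieved_goals, n, rng=None):
    """Sample n substitute goals for HER relabeling, weighted by rarity."""
    rng = np.random.default_rng() if rng is None else rng
    p = goal_density_priorities(achieved_goals)
    idx = rng.choice(len(achieved_goals), size=n, p=p)
    return [achieved_goals[i] for i in idx]
```

In this sketch, an achieved point visited many times contributes a small sampling weight, while a rarely visited point is drawn more often as a substitute goal, biasing HER's relabeling toward under-explored regions of goal space.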
|Title of host publication||29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||6|
|State||Published - Aug 2020|
|Event||29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020 - Virtual, Naples, Italy|
Duration: 31 Aug 2020 → 4 Sep 2020
|Name||29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020|
|Conference||29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020|
|Period||31/08/20 → 4/09/20|
|Bibliographical note||Publisher Copyright: © 2020 IEEE.|