PDRL: Towards Deeper States and Further Behaviors in Unsupervised Skill Discovery by Progressive Diversity

Ziming He, Chao Song, Jingchen Li, Haobin Shi

Research output: Contribution to journal › Article › peer-review

Abstract

We present Progressive Diversity Reinforcement Learning (PDRL), an unsupervised reinforcement learning (URL) method for discovering diverse skills. PDRL encourages behaviors that span many steps, in particular by introducing 'deeper states': states that require a longer, non-repetitive sequence of actions to reach. To address weak skill diversity and weak exploration in partially observable environments, PDRL employs two signals for skill learning that foster exploration and diversity, emphasizing how accurately each observation and each sub-trajectory can be distinguished from its predecessors. Skill latent variables are represented by mappings from both states and trajectories, which helps distinguish and recover learned skills; this dual representation promotes exploration and skill diversity without additional modeling or prior knowledge. PDRL also forms its intrinsic reward from a combination of observation-level and sub-trajectory-level terms, effectively preventing skill duplication. Experiments across multiple benchmarks show that PDRL discovers a broader range of skills than existing methods. Additionally, pre-training with PDRL accelerates fine-tuning on goal-conditioned reinforcement learning (GCRL) tasks, as demonstrated on Fetch robotic manipulation tasks.
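The abstract describes an intrinsic reward built from both observations and sub-trajectories. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that idea in the style of discriminator-based skill discovery (e.g., DIAYN): one discriminator predicts the skill latent z from a single observation, another from a flattened sub-trajectory, and the intrinsic reward sums the two log-likelihood terms against a uniform skill prior. All class names, shapes, and the equal weighting of the two terms are assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the PDRL authors' code): a discriminator-based
# intrinsic reward combining observation-level and sub-trajectory-level terms.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkillDiscriminator(nn.Module):
    """Approximates q(z | x), where x is a state or a flattened sub-trajectory."""
    def __init__(self, input_dim: int, num_skills: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_skills),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # unnormalized logits over skills

def intrinsic_reward(obs_disc, traj_disc, obs, sub_traj, skill, num_skills):
    """r_int = [log q(z|s_t) - log p(z)] + [log q(z|tau) - log p(z)],
    with p(z) uniform over skills. Equal weighting is an assumption."""
    log_p_z = -torch.log(torch.tensor(float(num_skills)))
    logq_obs = F.log_softmax(obs_disc(obs), dim=-1)
    logq_traj = F.log_softmax(traj_disc(sub_traj), dim=-1)
    idx = skill.unsqueeze(-1)
    r_obs = logq_obs.gather(-1, idx).squeeze(-1) - log_p_z
    r_traj = logq_traj.gather(-1, idx).squeeze(-1) - log_p_z
    return r_obs + r_traj

# Toy usage with random data (hypothetical dimensions).
obs_dim, k, num_skills = 8, 4, 16
obs_disc = SkillDiscriminator(obs_dim, num_skills)
traj_disc = SkillDiscriminator(obs_dim * k, num_skills)
obs = torch.randn(32, obs_dim)
sub_traj = torch.randn(32, obs_dim * k)   # last k observations, flattened
skill = torch.randint(0, num_skills, (32,))
r = intrinsic_reward(obs_disc, traj_disc, obs, sub_traj, skill, num_skills)
print(r.shape)  # torch.Size([32])
```

Under this kind of objective, a skill earns high reward only when both its individual states and its recent sub-trajectory remain identifiable, which is one plausible reading of how combining the two levels discourages skill duplication.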

Original language: English
Journal: IEEE Transactions on Cognitive and Developmental Systems
DOIs
State: Accepted/In press - 2024

Keywords

  • Reinforcement learning (RL)
  • goal-conditioned reinforcement learning (GCRL)
  • unsupervised skill discovery

