A novel policy based on action confidence limit to improve exploration efficiency in reinforcement learning

Fanghui Huang, Xinyang Deng, Yixin He, Wen Jiang

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

Reinforcement learning has been used to solve many intelligent decision-making problems. In practice, however, it still suffers from low exploration efficiency, which limits its widespread application. To address this issue, this paper proposes a novel exploration policy based on the Q value and an exploration value. The exploration value uses an action confidence limit to measure the uncertainty of each action, guiding the agent to adaptively explore uncertain regions of the environment. This improves exploration efficiency and helps the agent make optimal decisions. To make the proposed policy applicable to both discrete and continuous environments, we combine it with two classic reinforcement learning algorithms (Q-learning and deep Q-network), yielding two novel algorithms, and we analyze their convergence. Furthermore, a deep auto-encoder network is used to model the state-action mapping in discrete environments, which avoids storing a large number of state-action pairs during the Q-learning stage. The proposed method achieves adaptive and effective exploration, which benefits intelligent decision making. Finally, the method is evaluated in discrete and continuous simulation environments. Experimental results demonstrate that it improves the average reward and reduces the number of catastrophic actions.
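The paper's exact formulation of the action confidence limit is not reproduced here; the following is a minimal sketch, assuming a UCB-style confidence bonus, of how a Q value and an exploration value can be combined for action selection in a tabular setting. The environment size, the bonus coefficient c, and the helper names select_action and update are illustrative, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (assumption): pick actions by Q value plus a
# confidence-based exploration value, UCB-style. Rarely tried actions
# get a larger bonus, steering the agent toward uncertain regions.

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))        # action-value estimates
counts = np.zeros((n_states, n_actions))   # visit counts per state-action pair
alpha, gamma, c = 0.1, 0.99, 1.0           # learning rate, discount, bonus weight

def select_action(s, t):
    """Return the action maximizing Q plus the exploration value at step t."""
    bonus = c * np.sqrt(np.log(t + 1) / (counts[s] + 1e-8))
    return int(np.argmax(Q[s] + bonus))

def update(s, a, r, s_next):
    """Standard Q-learning update; the exploration value affects selection only."""
    counts[s, a] += 1
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
```

In a sketch like this the bonus shrinks as visit counts grow, so the policy gradually approaches greedy action selection once the uncertainty of each action has been reduced.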

Original language: English
Article number: 119011
Journal: Information Sciences
Volume: 640
DOIs
State: Published - Sep 2023

Keywords

  • Action confidence limit
  • Deep auto-encoder network
  • Exploration policy
  • Reinforcement learning
  • Uncertainty of action
