Mobile Robot Path Planning Algorithm with Improved Deep Double Q Networks

Lei Zhang, Yashuang Mu, Quan Pan

Research output: Contribution to journal › Article › peer-review

Abstract

To solve the problems of the conventional mobile robot path planning method based on the deep double Q-network (DDQN), such as incomplete search and slow convergence, we propose an improved DDQN (I-DDQN) learning algorithm. First, the proposed I-DDQN algorithm uses a dueling (competitive) network structure to estimate the value function of the DDQN algorithm. Second, we propose a robot path exploration strategy based on a two-layer controller structure, where the value function of the upper controller is used to explore the locally optimal action of the mobile robot and the value function of the lower controller is used to learn the global task strategy. In addition, during learning, the algorithm uses a prioritized experience replay mechanism for data collection and sampling and trains the network on mini-batches. Finally, we perform a comparative analysis with the conventional DDQN algorithm and its improved variants in two different simulation environments, OpenAI Gym and Gazebo. The experimental results show that the proposed I-DDQN algorithm outperforms the conventional DDQN algorithm and its improved variants on all evaluation indicators in both simulation environments and effectively overcomes the problems of incomplete path search and slow convergence in the same complex environments.
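The prioritized replay mechanism mentioned in the abstract can be illustrated with a minimal sketch. The class below is not the paper's implementation; it is a generic proportional prioritized experience replay buffer, with illustrative names and default hyperparameters (`alpha`, `beta`) chosen for the example:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (sketch).

    A transition i is sampled with probability p_i^alpha / sum_j p_j^alpha,
    and each sampled transition carries an importance-sampling weight
    w_i = (N * P(i))^(-beta), normalized by the batch maximum.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha
        self.beta = beta
        self.buffer = []      # stored transitions
        self.priorities = []  # one priority per stored transition
        self.pos = 0          # next write position (ring buffer)

    def push(self, transition):
        # New transitions get the current max priority so each is
        # sampled at least once before its TD error is known.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]  # normalize for stability
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is the absolute TD error plus a small epsilon so no
        # transition's sampling probability collapses to zero.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a training loop, the mini-batch returned by `sample` would be used to compute TD errors, which are then fed back through `update_priorities` so that surprising transitions are replayed more often.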

Translated title of the contribution: Mobile Robot Path Planning Algorithm with Improved Deep Double Q Networks
Original language: Traditional Chinese
Pages (from-to): 365-376
Number of pages: 12
Journal: Information and Control
Volume: 53
Issue number: 3
Publication status: Published - 2024

Keywords

  • competitive network structure
  • deep learning
  • hierarchical deep reinforcement learning
  • prioritized experience replay
  • reinforcement learning
  • robot path planning
