基于改进深度双Q网络的移动机器人路径规划算法

Translated title of the contribution: Mobile Robot Path Planning Algorithm with Improved Deep Double Q Networks

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

To address the shortcomings of conventional mobile robot path planning based on the deep double Q-network (DDQN), namely incomplete search and slow convergence, we propose an improved DDQN (I-DDQN) learning algorithm. First, I-DDQN estimates the value function of the DDQN algorithm with a dueling network architecture. Second, we propose a robot path exploration strategy built on a two-layer controller structure: the value function of the upper controller explores the locally optimal action of the mobile robot, while the value function of the lower controller learns the global task strategy. In addition, during learning, a prioritized experience replay mechanism is used for data collection and sampling, and the network is trained on mini-batches. Finally, we compare the proposed algorithm with the conventional DDQN algorithm and its improved variants in two simulation environments, OpenAI Gym and Gazebo. The experimental results show that I-DDQN outperforms the conventional DDQN algorithm and its improved variants on all evaluation indicators in both environments and effectively overcomes incomplete path search and slow convergence in the same complex environment.
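The abstract combines three standard ingredients: a dueling value decomposition, the double-Q bootstrap target, and prioritized sampling from the replay buffer. A minimal sketch of each idea is given below; this is an illustration under common DDQN conventions, not the paper's implementation, and all function names are hypothetical.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling decomposition: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    The mean-subtraction keeps V and A identifiable."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    a_star = int(np.argmax(q_online_next))         # selection: online net
    return reward + gamma * q_target_next[a_star]  # evaluation: target net

def per_probabilities(priorities, alpha=0.6):
    """Prioritized replay: sample transition i with probability
    p_i^alpha / sum_k p_k^alpha (proportional variant)."""
    p = np.asarray(priorities, dtype=float) ** alpha
    return p / p.sum()
```

In training, mini-batches would be drawn from the replay buffer according to `per_probabilities`, and each sampled transition's TD error against `ddqn_target` would both update the network and refresh that transition's priority.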

Original language: Chinese (Traditional)
Pages (from-to): 365-376
Number of pages: 12
Journal: Information and Control
Volume: 53
Issue number: 3
State: Published - 2024
