A path planning method based on deep reinforcement learning for AUV in complex marine environment

An Zhang, Weixiang Wang, Wenhao Bi, Zhanjun Huang

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Autonomous underwater vehicles (AUVs) hold significant potential for future applications thanks to advances in autonomy and intelligence. Path planning is a critical technology for AUVs performing operational missions in complex marine environments. To this end, this paper proposes a path planning method for AUVs based on deep reinforcement learning. First, considering practical requirements, a complex marine environment model incorporating underwater terrain, sonobuoy detection, and ocean currents is established. Next, the corresponding state space, action space, and reward function are formulated. Furthermore, to address the limited training efficiency of existing deep reinforcement learning algorithms, a mixed experience replay (MER) strategy is proposed, which improves sample-learning efficiency by integrating prior knowledge with exploration experience. Finally, a novel HMER-SAC algorithm for AUV path planning is obtained by combining the Soft Actor-Critic (SAC) algorithm with a hierarchical reinforcement learning strategy and the MER strategy. Simulation and experimental results demonstrate that the method efficiently plans executable paths in complex marine environments and exhibits superior training efficiency, stability, and performance.
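The abstract describes the MER strategy only at a high level: batches are drawn from both prior knowledge and the agent's own exploration experience. A minimal sketch of one plausible realization is shown below; the class name `MixedExperienceReplay`, the two-buffer layout, and the fixed `prior_ratio` mixing parameter are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import deque

class MixedExperienceReplay:
    """Illustrative sketch of a mixed experience replay (MER) buffer.

    Transitions from prior knowledge (e.g. demonstrations or heuristic
    paths) and from the agent's own exploration are stored separately,
    and each training batch mixes the two sources at a given ratio.
    """

    def __init__(self, capacity=10000, prior_ratio=0.5):
        self.prior = deque(maxlen=capacity)    # prior-knowledge transitions (assumed source)
        self.explore = deque(maxlen=capacity)  # exploration transitions collected online
        self.prior_ratio = prior_ratio         # fraction of each batch from the prior buffer

    def add_prior(self, transition):
        self.prior.append(transition)

    def add_exploration(self, transition):
        self.explore.append(transition)

    def sample(self, batch_size):
        # Draw from each buffer, capped by its current size, then shuffle.
        n_prior = min(int(batch_size * self.prior_ratio), len(self.prior))
        n_explore = min(batch_size - n_prior, len(self.explore))
        batch = random.sample(list(self.prior), n_prior) + \
                random.sample(list(self.explore), n_explore)
        random.shuffle(batch)
        return batch
```

In practice the mixing ratio could be annealed toward pure exploration experience as training progresses, so prior knowledge bootstraps early learning without constraining the final policy; whether the paper does this is not stated in the abstract.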

Original language: English
Article number: 119354
Journal: Ocean Engineering
Volume: 313
DOI
Publication status: Published - 1 Dec 2024

