Abstract
Advances in autonomy and intelligence give autonomous underwater vehicles (AUVs) significant potential in future applications. Path planning is a critical technology for AUVs performing operational missions in complex marine environments. To this end, this paper proposes a path planning method for AUVs based on deep reinforcement learning. First, considering practical requirements, a complex marine environment model containing underwater terrain, sonobuoy detection, and ocean currents is established. Next, the corresponding state space, action space, and reward function are formulated. Furthermore, to address the limited training efficiency of existing deep reinforcement learning algorithms, a mixed experience replay (MER) strategy is proposed, which improves sample-learning efficiency by integrating prior knowledge with exploration experience. Lastly, a novel HMER-SAC algorithm for AUV path planning is obtained by combining the Soft Actor–Critic (SAC) algorithm with a hierarchical reinforcement learning strategy and the MER strategy. Simulation and experimental results demonstrate that the method efficiently plans executable paths in complex marine environments and exhibits superior training efficiency, stability, and performance.
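The paper's own MER implementation is not reproduced here. As a minimal sketch of the general idea described in the abstract — mixing prior-knowledge transitions (e.g. from a classical planner) with agent-collected exploration experience when sampling minibatches — the following assumes two separate pools and a fixed mixing ratio; the class name, `prior_ratio` parameter, and all other details are illustrative assumptions, not the authors' design.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Illustrative mixed experience replay (MER) buffer.

    Holds two pools: one seeded with prior-knowledge transitions
    and one filled during exploration. Minibatches draw from both
    pools according to a fixed mixing ratio.
    """

    def __init__(self, capacity=100_000, prior_ratio=0.3, seed=0):
        self.prior = deque(maxlen=capacity)    # prior-knowledge transitions
        self.explore = deque(maxlen=capacity)  # agent-collected transitions
        self.prior_ratio = prior_ratio         # fraction of each batch from the prior pool
        self.rng = random.Random(seed)

    def add_prior(self, transition):
        self.prior.append(transition)

    def add_explore(self, transition):
        self.explore.append(transition)

    def sample(self, batch_size):
        # Draw prior-knowledge samples first, fill the rest from exploration.
        n_prior = min(int(batch_size * self.prior_ratio), len(self.prior))
        n_explore = min(batch_size - n_prior, len(self.explore))
        batch = self.rng.sample(list(self.prior), n_prior)
        batch += self.rng.sample(list(self.explore), n_explore)
        self.rng.shuffle(batch)  # avoid ordering bias within the minibatch
        return batch
```

In an SAC-style training loop, such a buffer would replace the usual single replay buffer: demonstrations are loaded via `add_prior` before training, rollout transitions go through `add_explore`, and each gradient step calls `sample`.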
| Field | Value |
|---|---|
| Original language | English |
| Article number | 119354 |
| Journal | Ocean Engineering |
| Volume | 313 |
| DOIs | |
| State | Published - 1 Dec 2024 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 14 Life Below Water
Keywords
- Autonomous underwater vehicle
- Deep reinforcement learning
- Hierarchical reinforcement learning
- Path planning
- Soft Actor–Critic
Fingerprint
Dive into the research topics of 'A path planning method based on deep reinforcement learning for AUV in complex marine environment'. Together they form a unique fingerprint.