Reinforcement learning-based fast-dual-tree RRT path planning for unmanned underwater vehicles

Abstract
Rapidly-exploring Random Tree (RRT) algorithms are widely used in path planning for Unmanned Underwater Vehicles (UUVs) because of their efficiency in exploring high-dimensional spaces. However, their effectiveness and practical applicability are limited by the high computational cost of handling kinodynamic constraints and complex environments. To address these challenges, this paper proposes a reinforcement learning-based fast-dual-tree rapidly-exploring random tree (RL-FDTRRT) algorithm that generates feasible and optimized paths. The planning framework consists of a workspace tree and a state-space tree. The workspace tree is guided by a reinforcement learning-based obstacle avoidance strategy that reduces redundant sampling, and an improved experience replay strategy is integrated into training to accelerate convergence of the reward. With the workspace tree serving as the heuristic tree, the state-space tree employs a motion-reachability-based parent-selection strategy to compute executable, optimized paths that satisfy the motion constraints of UUVs. Additionally, a bias factor is introduced during the sampling process to ensure the probabilistic completeness of the algorithm. Finally, the proposed algorithm is validated on both a two-dimensional lake map and a three-dimensional ocean map. Experimental results demonstrate that RL-FDTRRT outperforms representative RRT-based algorithms in terms of path feasibility and computational efficiency.
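The abstract mentions a bias factor in the sampling process that preserves probabilistic completeness. This is not the paper's RL-FDTRRT itself, but a minimal sketch of the underlying goal-biased RRT idea in a 2-D workspace; all function names, parameters, and defaults here are illustrative assumptions, not taken from the paper.

```python
import math
import random

def rrt_goal_biased(start, goal, is_free, bounds, step=0.5,
                    goal_bias=0.1, goal_tol=0.5, max_iters=5000, seed=0):
    """Minimal goal-biased RRT in a 2-D workspace (illustrative sketch).

    With probability `goal_bias` the sampler draws the goal itself, which
    pulls the tree toward the target; the remaining uniform samples keep
    the planner probabilistically complete.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(max_iters):
        # Biased sampling: occasionally sample the goal directly.
        if rng.random() < goal_bias:
            sample = goal
        else:
            sample = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        # Nearest-neighbour search (linear scan, for clarity only).
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        t = min(step / d, 1.0)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if not is_free(new):
            continue  # reject nodes inside obstacles
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Goal reached: backtrack through parents to extract the path.
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None  # no path found within the iteration budget

# Example: plan around a single circular obstacle of radius 1.5 at (5, 5).
path = rrt_goal_biased((0.0, 0.0), (9.0, 9.0),
                       lambda p: math.dist(p, (5.0, 5.0)) > 1.5,
                       ((0.0, 10.0), (0.0, 10.0)))
```

Raising `goal_bias` speeds convergence in open regions but can trap the search behind obstacles; keeping a uniform component is what guarantees completeness, which is the role the abstract assigns to its bias factor.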
| Original language | English |
|---|---|
| Article number | 122937 |
| Journal | Ocean Engineering |
| Volume | 342 |
| Issue number | P3 |
| DOIs | |
| State | Published - 30 Dec 2025 |
Keywords
- Biased sampling strategy
- Path planning
- Rapidly-exploring random tree
- Reinforcement learning
- Unmanned underwater vehicle