Deep-reinforcement-learning-based UAV autonomous navigation and collision avoidance in unknown environments

Fei WANG, Xiaoping ZHU, Zhou ZHOU, Yang TANG

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

In some military application scenarios, Unmanned Aerial Vehicles (UAVs) need to perform missions with the assistance of on-board cameras when radar is not available and communication is interrupted, which brings challenges for UAV autonomous navigation and collision avoidance. In this paper, an improved deep-reinforcement-learning algorithm, Deep Q-Network with a Faster R-CNN model and a Data Deposit Mechanism (FRDDM-DQN), is proposed. A Faster R-CNN model (FR) is introduced and optimized to extract obstacle information from images, and a new replay memory Data Deposit Mechanism (DDM) is designed to train an agent with better performance. During training, a two-part training approach is used to reduce the time spent on training, as well as on retraining when the scenario changes. To verify the performance of the proposed method, a series of experiments, including training experiments, test experiments, and typical-episode experiments, is conducted in a 3D simulation environment. Experimental results show that the agent trained by the proposed FRDDM-DQN is able to navigate autonomously and avoid collisions, and performs better than the FR-DQN, FR-DDQN, FR-Dueling DQN, YOLO-based YDDM-DQN, and original FR output-based FR-ODQN.
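The abstract describes a DQN variant whose state is built from image-based obstacle information produced by a detector front end, trained with an experience-replay memory. The following is a minimal illustrative sketch of that general pipeline only, not the authors' implementation: the detector stage is a stand-in function returning placeholder features, the environment transition is a dummy, and a plain FIFO replay buffer is used because the abstract does not specify the Data Deposit Mechanism's deposit rule. All hyperparameter values are assumed.

```python
# Illustrative sketch of a DQN agent driven by image-derived obstacle features.
# Placeholders: extract_obstacle_features() stands in for the Faster R-CNN stage,
# and a FIFO deque stands in for the paper's Data Deposit Mechanism.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps an obstacle-feature vector to Q-values over discrete UAV actions."""

    def __init__(self, feature_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def extract_obstacle_features(image: torch.Tensor, feature_dim: int = 16) -> torch.Tensor:
    # Placeholder for the detector stage: returns random features instead of
    # obstacle information parsed from real detections.
    return torch.randn(feature_dim)


feature_dim, n_actions, gamma = 16, 5, 0.99  # assumed values, not from the paper
q_net = QNetwork(feature_dim, n_actions)
target_net = QNetwork(feature_dim, n_actions)
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # the paper replaces this with its DDM deposit rule

state = extract_obstacle_features(torch.zeros(3, 224, 224))
for step in range(200):
    # Epsilon-greedy action selection over discrete flight actions.
    if random.random() < 0.1:
        action = random.randrange(n_actions)
    else:
        action = q_net(state.unsqueeze(0)).argmax(dim=1).item()

    # Dummy transition; a real setup would query the 3D simulator here.
    next_state = extract_obstacle_features(torch.zeros(3, 224, 224))
    reward, done = -0.01, False
    replay.append((state, action, reward, next_state, done))
    state = next_state

    if len(replay) >= 32:
        batch = random.sample(replay, 32)
        s, a, r, s2, d = zip(*batch)
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a)
        r = torch.tensor(r)
        d = torch.tensor(d, dtype=torch.float32)
        # Standard DQN target: r + gamma * max_a' Q_target(s', a') for non-terminal s'.
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * (1 - d) * target_net(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if step % 50 == 0:
        target_net.load_state_dict(q_net.state_dict())
```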

Original language: English
Pages (from-to): 237-257
Number of pages: 21
Journal: Chinese Journal of Aeronautics
Volume: 37
Issue number: 3
DOI
Publication status: Published - Mar 2024
