A learning-based flexible autonomous motion control method for UAV in dynamic unknown environments

Wan Kaifang, Li Bo, Gao Xiaoguang, Hu Zijian, Yang Zhipeng

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

This paper presents a deep reinforcement learning (DRL)-based motion control method that gives unmanned aerial vehicles (UAVs) additional flexibility when flying autonomously through dynamic unknown environments. The method is applicable in both military and civilian fields, such as penetration and rescue. The autonomous motion control problem is addressed through motion planning, action interpretation, trajectory tracking, and vehicle movement within the DRL framework. Novel DRL algorithms are presented by combining two difference-amplifying approaches with traditional DRL methods and are used to solve the motion planning problem. An improved Lyapunov guidance vector field (LGVF) method handles the trajectory-tracking problem and provides guidance control commands for the UAV. In contrast to conventional motion-control approaches, the proposed methods directly map sensor-based detections and measurements into control signals for the inner loop of the UAV, i.e., end-to-end control. The training experiment results show that the novel DRL algorithms provide more than a 20% performance improvement over state-of-the-art DRL algorithms. The testing experiment results demonstrate that the controller based on the novel DRL and LGVF, trained only once in a static environment, enables the UAV to fly autonomously in various dynamic unknown environments, giving the controller strong flexibility.
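The abstract does not give the exact form of the improved LGVF, but the classic Lyapunov guidance vector field it builds on is well documented. The following Python sketch (all function names and parameter values are illustrative, not taken from the paper) shows how such a field produces a guidance command of the kind handed to the UAV's inner control loop.

# Minimal sketch of a standard Lyapunov guidance vector field (LGVF) for
# standoff tracking. This illustrates the classic formulation only, not the
# paper's improved variant; names and parameters are assumptions.
import numpy as np

def lgvf_velocity(pos_uav, pos_target, standoff_radius, speed):
    """Desired inertial velocity from the classic LGVF; trajectories converge
    to a loiter circle of radius `standoff_radius` around the target."""
    dx, dy = pos_uav[0] - pos_target[0], pos_uav[1] - pos_target[1]
    r = np.hypot(dx, dy)
    rd = standoff_radius
    c = speed / (r * (r**2 + rd**2))  # normalization keeps |v_des| = speed
    vx = -c * (dx * (r**2 - rd**2) + 2.0 * dy * r * rd)
    vy = -c * (dy * (r**2 - rd**2) - 2.0 * dx * r * rd)
    return np.array([vx, vy])

def heading_rate_command(pos_uav, heading, pos_target, standoff_radius, speed, k=1.0):
    """Proportional heading-rate command that steers the UAV onto the field;
    a guidance signal of this kind is what the inner control loop receives."""
    v_des = lgvf_velocity(pos_uav, pos_target, standoff_radius, speed)
    psi_des = np.arctan2(v_des[1], v_des[0])
    err = np.arctan2(np.sin(psi_des - heading), np.cos(psi_des - heading))
    return k * err  # commanded heading rate [rad/s]

# Example: UAV 300 m east of the target, tracking a 100 m standoff circle at 20 m/s.
cmd = heading_rate_command(np.array([300.0, 0.0]), np.pi / 2,
                           np.array([0.0, 0.0]), 100.0, 20.0)

In the architecture described in the abstract, the reference point tracked by such a field would presumably be supplied by the DRL motion planner (via action interpretation) rather than fixed as in this example.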

Original language: English
Pages (from-to): 1490-1508
Number of pages: 19
Journal: Journal of Systems Engineering and Electronics
Volume: 32
Issue number: 6
DOI
Publication status: Published - 1 Dec 2021
