TY - GEN
T1 - Using Deep Reinforcement Learning to Improve the Robustness of UAV Lateral-Directional Control
AU - Wang, Rui
AU - Zhou, Zhou
AU - Zhu, Xiaoping
AU - Zheng, Liming
N1 - Publisher Copyright:
© (2022) by International Council of Aeronautical Sciences (ICAS) All rights reserved.
PY - 2022
Y1 - 2022
N2 - For a small, low-cost Unmanned Aerial Vehicle (UAV), accurate aerodynamic and flight dynamics characteristics cannot be obtained easily and the control coupling is significant, so the robustness of its flight controller must be considered carefully. To address this problem, a Lateral-Directional (Lat-Dir) flight control method based on Deep Reinforcement Learning (DRL) is proposed in this paper. Firstly, based on the nominal state, three control laws are designed: classical Proportional Integral Derivative (PID) control, Linear Quadratic Gaussian (LQG) control based on modern control theory, and Deep Reinforcement Learning (DRL) control based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) method. To address the unclear physical meaning of the neural network in DRL, a simplified control strategy network is derived, inspired by the PID controller. To address the difficulty of determining the DRL reward function, the weights of the optimal quadratic cost function designed by the LQG method are adopted, and a weight on the control output that accounts for discretization is also added. Then, the three controllers are applied to the nominal flight state and a deviation state respectively, and numerical flight simulations are carried out. The results show that, in the nominal state, the performance of the DRL controller is close to that of the LQG controller and better than that of the PID controller. In the deviation state, in which the lateral and directional static stability derivatives are artificially changed from stable to neutrally stable, the rise time and settling time of the DRL controller change only slightly, while the LQG controller degrades severely and becomes unstable, which demonstrates that the proposed DRL control method has better performance robustness.
AB - For a small, low-cost Unmanned Aerial Vehicle (UAV), accurate aerodynamic and flight dynamics characteristics cannot be obtained easily and the control coupling is significant, so the robustness of its flight controller must be considered carefully. To address this problem, a Lateral-Directional (Lat-Dir) flight control method based on Deep Reinforcement Learning (DRL) is proposed in this paper. Firstly, based on the nominal state, three control laws are designed: classical Proportional Integral Derivative (PID) control, Linear Quadratic Gaussian (LQG) control based on modern control theory, and Deep Reinforcement Learning (DRL) control based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) method. To address the unclear physical meaning of the neural network in DRL, a simplified control strategy network is derived, inspired by the PID controller. To address the difficulty of determining the DRL reward function, the weights of the optimal quadratic cost function designed by the LQG method are adopted, and a weight on the control output that accounts for discretization is also added. Then, the three controllers are applied to the nominal flight state and a deviation state respectively, and numerical flight simulations are carried out. The results show that, in the nominal state, the performance of the DRL controller is close to that of the LQG controller and better than that of the PID controller. In the deviation state, in which the lateral and directional static stability derivatives are artificially changed from stable to neutrally stable, the rise time and settling time of the DRL controller change only slightly, while the LQG controller degrades severely and becomes unstable, which demonstrates that the proposed DRL control method has better performance robustness.
KW - Deep Reinforcement Learning (DRL)
KW - flight control
KW - reward function
KW - strategy network
KW - Unmanned Aerial Vehicle (UAV)
UR - http://www.scopus.com/inward/record.url?scp=85159620085&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85159620085
T3 - 33rd Congress of the International Council of the Aeronautical Sciences, ICAS 2022
SP - 5489
EP - 5504
BT - 33rd Congress of the International Council of the Aeronautical Sciences, ICAS 2022
PB - International Council of the Aeronautical Sciences
T2 - 33rd Congress of the International Council of the Aeronautical Sciences, ICAS 2022
Y2 - 4 September 2022 through 9 September 2022
ER -