Application of Reinforcement Learning in Deep-Stall Recovery

Ruichen Ming, Xiao Xiong Liu, Xinlong Xu, Yu Li, Weiguo Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

The aircraft deep-stall phenomenon occurs when the angle of attack stabilizes at a high-angle-of-attack equilibrium point. This excessive angle of attack reduces lift and elevator efficiency, making it difficult to recover the aircraft from this dangerous flight condition. Reinforcement learning offers a design approach for such complex nonlinear control problems. However, in deep-stall recovery tasks the aircraft model is highly nonlinear and control efficiency is substantially reduced, which limits the direct application of reinforcement learning methods. To address this problem, we conduct bifurcation and phase plane analyses on the deep-stall model of the aircraft and use the results as domain knowledge to construct the reward function. We then apply the proximal policy optimization algorithm to learn the deep-stall recovery strategy. Finally, in simulation, we compare the reinforcement learning method with reward shaping against the one without it. The results indicate that although the method without reward shaping recovers the aircraft's angle of attack, it leaves the aircraft in an uncontrollable state, rendering the recovery unsuccessful, whereas the proposed method stably performs deep-stall recovery tasks through a loop maneuver.
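The abstract describes shaping the reward with equilibrium points identified by bifurcation and phase plane analysis. The paper's actual reward function is not given here; the following is a minimal, hypothetical sketch of that idea, where `ALPHA_DEEP_STALL` and `ALPHA_TARGET` are assumed values standing in for the deep-stall equilibrium and the normal-flight target angle of attack, and the weighting constants are illustrative.

```python
import math

# Hypothetical values for illustration only: the deep-stall equilibrium
# angle of attack (from bifurcation analysis) and the recovery target.
ALPHA_DEEP_STALL = math.radians(35.0)  # assumed high-AoA equilibrium (rad)
ALPHA_TARGET = math.radians(5.0)       # assumed normal-flight target (rad)

def shaped_reward(alpha: float, q: float) -> float:
    """Sketch of a domain-knowledge-shaped reward.

    alpha: angle of attack (rad); q: pitch rate (rad/s).
    Penalizes distance from the recovery target, proximity to the
    deep-stall equilibrium's basin of attraction, and large pitch
    rates that would leave the aircraft uncontrollable.
    """
    r_target = -abs(alpha - ALPHA_TARGET)                        # drive alpha to target
    r_basin = -math.exp(-abs(alpha - ALPHA_DEEP_STALL) / 0.05)   # push out of the basin
    r_rate = -0.1 * q * q                                        # damp pitch oscillation
    return r_target + r_basin + r_rate
```

In this sketch, a state near the deep-stall equilibrium scores far worse than one near the target, so a policy-gradient learner such as PPO receives a dense signal for escaping the high-angle-of-attack attractor rather than a sparse terminal reward.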

Original language: English
Journal: IEEE Transactions on Aerospace and Electronic Systems
DOIs
State: Accepted/In press - 2025

Keywords

  • bifurcation analysis
  • deep-stall recovery
  • phase portrait analysis
  • reinforcement learning
  • reward shaping