Online Optimal Attitude Stabilization Via Reinforcement Learning for Rigid Spacecraft With Dynamic Uncertainty

Chengfeng Luo, Xin Ning, Rugang Tang

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose an online reinforcement learning (RL) algorithm that solves the optimal control problem for rigid spacecraft attitude control systems with dynamic uncertainty. The RL algorithm adapts in real time, estimating the optimal control policy while guaranteeing the stability of the closed-loop system and the convergence of the algorithm. To address dynamic uncertainty, we introduce a two-phase learning structure that performs recursive computations based on the measurable system state, rather than relying on prior knowledge of the system's dynamic model. A sufficient condition for convergence is presented, ensuring that the control policy converges to the optimal controller within a finite number of learning iterations. Comparative simulations illustrate the validity and advantages of the proposed algorithm.
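The paper's own two-phase algorithm and its convergence proof are not reproduced here. As a rough, minimal sketch of the general idea only (a data-driven policy iteration that evaluates and improves the control policy from measured state transitions, with exploration noise supplying persistence of excitation), the Python example below runs a Q-learning-style policy iteration on a hypothetical single-axis, small-angle attitude model. The plant matrices, cost weights, exploration level, and initial gain are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical plant: single-axis, small-angle attitude error dynamics,
# x = [attitude error (rad), angular rate (rad/s)], u = commanded torque / inertia.
# The learner never touches A or B; they only generate measured transitions,
# mirroring the "measurable state, no prior dynamic model" setting.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2],
              [dt]])
Qc = np.diag([10.0, 1.0])        # assumed state-cost weights
Rc = np.array([[1.0]])           # assumed control-cost weight
n, m = 2, 1
rng = np.random.default_rng(0)

def quad_features(x, u):
    """Upper-triangular quadratic basis of z = [x; u] for the Q-function."""
    z = np.concatenate([x, u])
    iu = np.triu_indices(n + m)
    outer = np.outer(z, z)
    scale = np.where(iu[0] == iu[1], 1.0, 2.0)   # off-diagonal terms appear twice in z'Hz
    return outer[iu] * scale

def theta_to_H(theta):
    """Rebuild the symmetric Q-function matrix H from the fitted parameter vector."""
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return H + np.triu(H, 1).T

K = np.array([[-1.0, -1.7]])     # assumed initial stabilizing feedback gain
for iteration in range(10):
    Phi, targets = [], []
    x = np.array([0.3, -0.1])    # small initial attitude error and rate
    for _ in range(80):
        # Exploration noise keeps the regressors persistently exciting.
        u = K @ x + 0.05 * rng.standard_normal(m)
        x_next = A @ x + B @ u                     # "measured" state transition
        stage_cost = x @ Qc @ x + u @ Rc @ u
        # Bellman relation: Q(x, u) - Q(x', K x') equals the stage cost.
        Phi.append(quad_features(x, u) - quad_features(x_next, K @ x_next))
        targets.append(stage_cost)
        x = x_next
    # Policy evaluation: least-squares fit of the Q-function parameters from data only.
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = theta_to_H(theta)
    # Policy improvement: minimize the fitted quadratic Q-function over u.
    K_new = -np.linalg.solve(H[n:, n:], H[n:, :n])
    if np.linalg.norm(K_new - K) < 1e-6:
        K = K_new
        break
    K = K_new

print("learned feedback gain:", K)
```

On this toy linear model the gain typically settles within a few iterations. The paper's contribution, as stated in the abstract, is carrying this style of online, model-free learning over to uncertain rigid-body attitude dynamics with closed-loop stability and finite-iteration convergence guarantees.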

Original language: English
Journal: IEEE Transactions on Aerospace and Electronic Systems
DOIs
State: Accepted/In press - 2025

Keywords

  • Attitude control
  • optimal control
  • persistence of excitation
  • reinforcement learning
  • system identification

