Reinforcement Learning-Based Nearly Optimal Control for Constrained-Input Partially Unknown Systems Using Differentiator

Research output: Contribution to journal › Article › peer-review


Abstract

In this article, a synchronous reinforcement-learning-based algorithm is developed for input-constrained, partially unknown systems. The proposed controller also removes the need for an initial stabilizing control. A first-order robust exact differentiator is employed to approximate the unknown drift dynamics. Critic, actor, and disturbance neural networks (NNs) are established to approximate the value function, the control policy, and the disturbance policy, respectively. The Hamilton-Jacobi-Isaacs equation is solved by applying the value function approximation technique. Stability of the closed-loop system is ensured, and the state and the weight errors of the three NNs are all uniformly ultimately bounded. Finally, simulation results are provided to verify the effectiveness of the proposed method.
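The abstract only names the first-order robust exact differentiator (RED) used to recover the unknown drift dynamics; below is a minimal Python sketch of the Levant-style first-order RED that such estimators typically follow. The gains lam0 and lam1, the Lipschitz bound L, and the Euler step dt are illustrative assumptions, not the authors' tuning.

```python
import numpy as np

# Levant-style first-order robust exact differentiator (RED):
#   e      = z0 - f(t)
#   z0_dot = -lam0 * sqrt(L) * sqrt(|e|) * sign(e) + z1
#   z1_dot = -lam1 * L * sign(e)
# Under the bound |f''(t)| <= L, z1 converges in finite time to f'(t).
# Gains (lam0, lam1) = (1.5, 1.1), L, and dt are illustrative choices.

def red_step(z0, z1, f_t, L=2.0, lam0=1.5, lam1=1.1, dt=1e-3):
    """One explicit-Euler step of the first-order RED."""
    e = z0 - f_t
    z0_dot = -lam0 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e) + z1
    z1_dot = -lam1 * L * np.sign(e)
    return z0 + dt * z0_dot, z1 + dt * z1_dot

# Usage: estimate d/dt sin(t) = cos(t) online.
dt = 1e-3
z0, z1 = 0.0, 0.0
for tk in np.arange(0.0, 10.0, dt):
    z0, z1 = red_step(z0, z1, np.sin(tk), dt=dt)
# After the finite-time transient, z1 tracks cos(t) up to O(dt) chattering.
```

For the input constraint itself, constrained-input optimal control designs of this kind typically saturate the policy through a hyperbolic tangent, u(x) = -u_max * tanh((1/(2*u_max)) * R^{-1} * g(x)^T * grad V(x)); the abstract does not state the exact form adopted here.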

Original language: English
Article number: 8943132
Pages (from-to): 4713-4725
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 31
Issue number: 11
DOIs
State: Published - Nov 2020

Keywords

  • First-order robust exact differentiator (RED)
  • input constraint
  • neural network (NN)
  • reinforcement learning (RL)
  • two-player zero-sum game
