Abstract
In this article, a synchronous reinforcement-learning-based algorithm is developed for input-constrained systems with partially unknown dynamics. The proposed controller also removes the need for an initial stabilizing control policy. A first-order robust exact differentiator is employed to approximate the unknown drift dynamics. Critic, actor, and disturbance neural networks (NNs) are constructed to approximate the value function, the control policy, and the disturbance policy, respectively. The Hamilton-Jacobi-Isaacs equation is solved via value-function approximation. Stability of the closed-loop system is guaranteed, and the state and the weight-estimation errors of all three NNs are shown to be uniformly ultimately bounded. Finally, simulation results verify the effectiveness of the proposed method.
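The scheme described in the abstract can be illustrated with a minimal sketch. The dynamics, features, gains, and tuning laws below are all illustrative assumptions, not the paper's actual formulation: a known linear system stands in for the partially unknown plant (the paper estimates the drift with a robust exact differentiator), quadratic features approximate the value function, the control is saturated through a `tanh` to respect the input constraint, and simplified synchronous gradient updates tune the critic, actor, and disturbance weights against the HJI (zero-sum game) residual.

```python
import numpy as np

# Hypothetical linear dynamics standing in for the partially unknown system:
# x_dot = A x + B u + D w  (the paper instead estimates the drift online).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.5]])
lam = 1.0      # input saturation bound, |u| <= lam
gamma = 5.0    # disturbance attenuation level of the zero-sum game
dt = 0.01

def phi(x):
    """Quadratic value-function features phi(x) = [x1^2, x1*x2, x2^2]."""
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def grad_phi(x):
    """Jacobian d phi / d x, shape (3, 2)."""
    return np.array([[2*x[0], 0.0],
                     [x[1],   x[0]],
                     [0.0,    2*x[1]]])

def policies(x, Wa, Wd):
    """Saturated control and worst-case disturbance from actor/disturbance NN weights."""
    gV_a = grad_phi(x).T @ Wa                        # actor's value-gradient estimate
    gV_d = grad_phi(x).T @ Wd                        # disturbance NN's estimate
    u = -lam * np.tanh((B.T @ gV_a) / (2.0 * lam))   # constrained minimizing player
    w = (D.T @ gV_d) / (2.0 * gamma**2)              # maximizing player
    return u.ravel(), w.ravel()

def simulate(T=4000, alpha_c=2.0, alpha_a=1.0, alpha_d=1.0):
    x = np.array([1.0, -1.0])
    Wc, Wa, Wd = np.ones(3), np.ones(3), np.ones(3)  # critic/actor/disturbance weights
    for _ in range(T):
        u, w = policies(x, Wa, Wd)
        xdot = A @ x + (B @ u.reshape(-1, 1)).ravel() + (D @ w.reshape(-1, 1)).ravel()
        # Zero-sum stage cost r = x'x + u'u - gamma^2 w'w; the HJI residual is
        # e = Wc' * d/dt phi(x) + r, driven to zero by the critic update.
        r = x @ x + u @ u - gamma**2 * (w @ w)
        sigma = grad_phi(x) @ xdot
        e = Wc @ sigma + r
        # Normalized gradient-descent critic update (synchronous tuning).
        Wc -= dt * alpha_c * e * sigma / (1.0 + sigma @ sigma)**2
        # Actor and disturbance weights track the critic (simplified tuning law).
        Wa -= dt * alpha_a * (Wa - Wc)
        Wd -= dt * alpha_d * (Wd - Wc)
        x = x + dt * xdot
    return x, Wc

x_final, Wc = simulate()
print(np.linalg.norm(x_final))   # state is driven toward the origin
```

No initial stabilizing policy is supplied: all weights start at ones, and the `tanh` saturation keeps the applied control inside the constraint throughout learning, mirroring the two properties the abstract highlights.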
| Original language | English |
|---|---|
| Article number | 8943132 |
| Pages (from-to) | 4713-4725 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 31 |
| Issue number | 11 |
| DOIs | |
| State | Published - Nov 2020 |
Keywords
- First-order robust exact differentiator (RED)
- input constraint
- neural network (NN)
- reinforcement learning (RL)
- two-player zero-sum game
Article title: Reinforcement Learning-Based Nearly Optimal Control for Constrained-Input Partially Unknown Systems Using Differentiator