Actor-critic-disturbance reinforcement learning algorithm-based fast finite-time stability of multiagent systems

Junsheng Zhao, Yaqi Gu, Xiangpeng Xie, Dengxiu Yu

Research output: Contribution to journal › Article › peer-review


Abstract

This paper proposes an actor-critic-disturbance (ACD) reinforcement learning algorithm for the fast finite-time stabilization of multiagent systems (MASs) with time-varying asymmetrical constraints. First, a barrier function is designed to transform the constrained system into an unconstrained one; notably, the adaptive control strategy discussed in this paper can handle more general dynamic constraints than most of the existing literature. Second, for scenarios in which the disturbance affects the system in the worst way, an H∞ optimal control strategy based on the ACD reinforcement learning algorithm is proposed to enhance the robustness of the system and minimize the influence of disturbances. Third, a fast finite-time theory is integrated into the optimal control protocol for the MASs, which allows the system to achieve the control objective in finite time with a faster convergence rate. Finally, numerical and practical simulation examples confirm the validity of the theoretical results.
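As a rough illustration of the scheme summarized above (a sketch under assumed forms, not the paper's exact constructions), a time-varying asymmetric barrier transformation and a worst-case performance index are typically written as

s_1(t) = \ln\frac{\overline{k}(t) - x_1(t)}{x_1(t) - \underline{k}(t)}, \qquad \underline{k}(t) < x_1(t) < \overline{k}(t),

so that s_1 stays finite exactly while the constraint holds and diverges as x_1 approaches either bound; stabilizing the transformed state therefore enforces the original constraint. The worst-case disturbance problem is then commonly posed as the zero-sum game

V(s) = \min_{u} \max_{d} \int_{t}^{\infty} \big( s^{\top} Q s + u^{\top} R u - \gamma^{2} d^{\top} d \big) \, \mathrm{d}\tau,

with actor, critic, and disturbance approximators estimating u, V, and d, respectively. Here x_1, \underline{k}, \overline{k}, Q, R, and \gamma are illustrative symbols, not quantities taken from the paper.

In the same spirit, a minimal actor-critic-disturbance update loop for a scalar linear plant might look like the following Python sketch; the plant parameters, learning rate, quadratic approximators, and reset rule are all assumptions made for illustration, not the paper's algorithm.

# Hypothetical sketch: ACD-style updates for the scalar plant
#   x' = a*x + b*u + k*d,  stage cost  r = q*x^2 + R*u^2 - gamma^2*d^2
# (a zero-sum H-infinity game; all values below are assumed).
import numpy as np

a, b, k = -1.0, 1.0, 0.5           # assumed plant parameters
q, R, gamma = 1.0, 1.0, 2.0        # assumed cost weights
dt, alpha = 0.01, 0.05             # integration step, learning rate

wc = np.array([0.1])   # critic weight:       V(x) ~ wc * x^2
wa = np.array([0.0])   # actor weight:        u(x) ~ wa * x
wd = np.array([0.0])   # disturbance weight:  d(x) ~ wd * x

x = 1.0
for step in range(20000):
    u, d = wa[0] * x, wd[0] * x
    r = q * x**2 + R * u**2 - gamma**2 * d**2
    x_next = x + dt * (a * x + b * u + k * d)
    # Semi-gradient temporal-difference step on the quadratic critic
    delta = r * dt + wc[0] * x_next**2 - wc[0] * x**2
    wc += alpha * delta * x**2
    # Actor descends and disturbance ascends the same Hamiltonian:
    #   dH/du = 2*R*u + 2*wc*x*b,   dH/dd = -2*gamma^2*d + 2*wc*x*k
    wa -= alpha * (2 * R * u + 2 * wc[0] * x * b) * x
    wd += alpha * (-2 * gamma**2 * d + 2 * wc[0] * x * k) * x
    # Reset the state when it has decayed, to keep the updates excited
    x = x_next if abs(x_next) > 1e-3 else 1.0

print("critic", wc, "actor", wa, "disturbance", wd)

The opposed updates, descent for the actor and ascent for the disturbance on the same Hamiltonian, mirror the min-max structure of the H∞ game described in the abstract.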

Original language: English
Article number: 121802
Journal: Information Sciences
Volume: 699
State: Published - May 2025

Keywords

  • Actor-critic-disturbance reinforcement learning
  • Fast finite-time stabilization
  • Time-varying asymmetrical constraint

