Actor-critic-disturbance reinforcement learning algorithm-based fast finite-time stability of multiagent systems

Junsheng Zhao, Yaqi Gu, Xiangpeng Xie, Dengxiu Yu

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

This paper proposes an actor-critic-disturbance (ACD) reinforcement learning algorithm-based fast finite-time stabilization scheme for multiagent systems (MASs) with time-varying asymmetrical constraints. Initially, a barrier function is designed to transform the constrained system into an unconstrained one. Notably, the adaptive control strategy discussed in this paper can handle more general dynamic constraints than most of the existing literature. Subsequently, for scenarios in which the disturbance affects the system in the worst way, an H∞ optimal control strategy based on the ACD reinforcement learning algorithm is proposed to enhance the robustness of the system and minimize the influence of disturbances. Thirdly, a fast finite-time theory is integrated into the optimal control protocol for MASs, which allows the system to achieve the control objective in finite time with a faster convergence rate. Lastly, numerical and practical simulation examples confirm the validity of the theoretical results.
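The abstract describes an actor-critic-disturbance structure in which a third approximator plays the worst-case disturbance against the controller in an H∞ (zero-sum game) sense. The sketch below is only an illustrative aid, not the paper's multiagent, constrained, finite-time design: it applies the three-approximator idea to a hypothetical scalar plant x_dot = u + d with quadratic features, and the plant, the attenuation level gamma, and the weight-update rules are all assumptions made for this toy example.

import numpy as np

# Minimal scalar illustration of an actor-critic-disturbance (ACD) update
# loop for an H-infinity-type zero-sum problem: the control u minimizes and
# the disturbance d maximizes the running cost x^2 + u^2 - gamma^2 * d^2.
# Hypothetical plant: x_dot = u + d.  Critic V(x) = w_c * x^2, actor
# u = -w_a * x, disturbance policy d = w_d * x; the three scalar weights
# stand in for the neural approximators a full ACD scheme would use.
rng = np.random.default_rng(0)
gamma = 2.0                      # assumed disturbance-attenuation level
w_c, w_a, w_d = 1.0, 0.5, 0.1    # critic / actor / disturbance weights
lr, dt = 1e-3, 0.01

x = 1.5
for step in range(50_000):
    u = -w_a * x                 # actor: stabilizing control
    d = w_d * x                  # disturbance: worst-case estimate
    dVdx = 2.0 * w_c * x         # gradient of the quadratic critic

    # Hamilton-Jacobi-Isaacs residual; it vanishes at the saddle point.
    delta = x**2 + u**2 - gamma**2 * d**2 + dVdx * (u + d)

    # Critic: gradient step on 0.5 * delta^2 with respect to w_c.
    w_c -= lr * delta * 2.0 * x * (u + d)
    # Actor: pulled toward the HJI-optimal control u* = -0.5 * dV/dx.
    w_a += lr * (u + 0.5 * dVdx) * x
    # Disturbance: pulled toward the worst case d* = dV/dx / (2 * gamma^2).
    w_d -= lr * (d - dVdx / (2.0 * gamma**2)) * x

    x += (u + d) * dt            # one Euler step of the closed loop
    if abs(x) < 0.05:            # re-excite so learning does not stall
        x = rng.uniform(-2.0, 2.0)

print(f"w_c={w_c:.3f}  w_a={w_a:.3f}  w_d={w_d:.3f}")

For this choice of plant and cost, the actor and disturbance weights are driven toward the HJI-optimal gains, so the loop illustrates the update structure only; the paper's actual scheme additionally handles the time-varying asymmetric constraints (via the barrier transformation) and the fast finite-time convergence guarantee.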

Original language: English
Article number: 121802
Journal: Information Sciences
Volume: 699
DOI
Publication status: Published - May 2025
