TY - JOUR
T1 - Cooperative output regulation of heterogeneous directed multi-agent systems
T2 - a fully distributed model-free reinforcement learning framework
AU - Shi, Xiongtao
AU - Li, Yanjie
AU - Du, Chenglong
AU - Li, Huiping
AU - Chen, Chaoyang
AU - Gui, Weihua
N1 - Publisher Copyright:
© Science China Press 2025.
PY - 2025/2
Y1 - 2025/2
N2 - In this paper, the cooperative output regulation (COR) problem of a class of unknown heterogeneous multi-agent systems (MASs) with directed graphs is studied via a model-free reinforcement learning (RL) based fully distributed event-triggered control (ETC) strategy. First, for the scenario where the exosystem is globally accessible to all agents, an internal model-based augmented algebraic Riccati equation (AARE) is constructed, and its solution is learned by the proposed model-free RL algorithm from online input-output data. Further, for the scenario where the exosystem is accessible only to its adjacent followers, distributed observers are designed for each agent to estimate the state of the exosystem; an internal model-based fully distributed adaptive ETC protocol is then synthesized to construct the corresponding AARE, and the feedback gain matrix is learned in a model-free fashion. The model-free RL-based control protocol proposed in this paper not only removes the need for prior knowledge of the agents’ dynamics, but also relaxes the dependence on global information via the adaptive event-triggered mechanism (ETM) and a new graph-based Lyapunov function. Finally, simulation results illustrate the feasibility and effectiveness of the proposed control scheme.
AB - In this paper, the cooperative output regulation (COR) problem of a class of unknown heterogeneous multi-agent systems (MASs) with directed graphs is studied via a model-free reinforcement learning (RL) based fully distributed event-triggered control (ETC) strategy. First, for the scenario where the exosystem is globally accessible to all agents, an internal model-based augmented algebraic Riccati equation (AARE) is constructed, and its solution is learned by the proposed model-free RL algorithm from online input-output data. Further, for the scenario where the exosystem is accessible only to its adjacent followers, distributed observers are designed for each agent to estimate the state of the exosystem; an internal model-based fully distributed adaptive ETC protocol is then synthesized to construct the corresponding AARE, and the feedback gain matrix is learned in a model-free fashion. The model-free RL-based control protocol proposed in this paper not only removes the need for prior knowledge of the agents’ dynamics, but also relaxes the dependence on global information via the adaptive event-triggered mechanism (ETM) and a new graph-based Lyapunov function. Finally, simulation results illustrate the feasibility and effectiveness of the proposed control scheme.
KW - directed graph
KW - event-triggered control
KW - fully distributed
KW - model-free reinforcement learning
KW - unknown heterogeneous multi-agent systems
UR - http://www.scopus.com/inward/record.url?scp=85217275069&partnerID=8YFLogxK
U2 - 10.1007/s11432-024-4103-1
DO - 10.1007/s11432-024-4103-1
M3 - Article
AN - SCOPUS:85217275069
SN - 1674-733X
VL - 68
JO - Science China Information Sciences
JF - Science China Information Sciences
IS - 2
M1 - 122202
ER -