Cooperative output regulation of heterogeneous directed multi-agent systems: a fully distributed model-free reinforcement learning framework

Xiongtao Shi, Yanjie Li, Chenglong Du, Huiping Li, Chaoyang Chen, Weihua Gui

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, the cooperative output regulation (COR) problem for a class of unknown heterogeneous multi-agent systems (MASs) over directed graphs is studied via a model-free reinforcement learning (RL) based fully distributed event-triggered control (ETC) strategy. First, for the scenario in which the exosystem is globally accessible to all agents, an internal model-based augmented algebraic Riccati equation (AARE) is constructed, and its solution is learned by the proposed model-free RL algorithm from online input-output data. Then, for the scenario in which the exosystem is accessible only to its adjacent followers, distributed observers are designed for each agent to estimate the state of the exosystem; an internal model-based fully distributed adaptive ETC protocol is then synthesized to construct the corresponding AARE, and the feedback gain matrix is again learned in a model-free fashion. The proposed model-free RL-based control protocol not only removes the need for prior knowledge of the agents' dynamics, but also, through the adaptive event-triggered mechanism (ETM) and a new graph-based Lyapunov function, eliminates the dependence on global information. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed control scheme.
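A note on the two learning steps described above (the commentary and sketches below are editorial illustrations, not material from the paper). The model-free RL step learns the solution of an (augmented) algebraic Riccati equation from online input-output data; its model-based counterpart is Kleinman's policy iteration. The following minimal Python sketch runs that iteration on illustrative, assumed matrices (A, B, Q, R), using model knowledge purely to show the fixed point that a model-free data-driven variant would approximate instead:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative system (assumed, not the paper's example); A is Hurwitz,
# so the initial gain K0 = 0 is stabilizing, as Kleinman's iteration requires.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weight
R = np.array([[1.0]])    # input weight

K = np.zeros((1, 2))
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak^T P + P Ak + Q + K^T R K = 0.
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# P converges to the stabilizing solution of the ARE.
P_star = solve_continuous_are(A, B, Q, R)
print("||P - P*|| =", np.linalg.norm(P - P_star))

The distributed-observer step of the second scenario can be pictured the same way: each follower runs a local copy of the exosystem corrected by disagreement with its neighbors, while only some followers sense the exosystem directly. A minimal sketch under an assumed coupling gain and a directed chain graph:

import numpy as np

S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # harmonic exosystem, v' = S v
Adj = np.array([[0, 0, 0],         # directed adjacency a_ij (chain 1 <- 2 <- 3)
                [1, 0, 0],
                [0, 1, 0]])
a0 = np.array([1.0, 0.0, 0.0])     # only follower 1 senses the exosystem
mu, dt, steps = 5.0, 1e-3, 20000   # assumed coupling gain and Euler step size

v = np.array([1.0, 0.0])
eta = np.random.randn(3, 2)        # observer states, one per follower
for _ in range(steps):
    v = v + dt * (S @ v)           # Euler step of the exosystem
    d_eta = np.zeros_like(eta)
    for i in range(3):
        # eta_i' = S eta_i + mu * (sum_j a_ij (eta_j - eta_i) + a_i0 (v - eta_i))
        coupling = sum(Adj[i, j] * (eta[j] - eta[i]) for j in range(3))
        d_eta[i] = S @ eta[i] + mu * (coupling + a0[i] * (v - eta[i]))
    eta = eta + dt * d_eta
print("observer errors:", np.linalg.norm(eta - v, axis=1))  # -> all near 0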

Original language: English
Article number: 122202
Journal: Science China Information Sciences
Volume: 68
Issue number: 2
DOIs
State: Published - Feb 2025

Keywords

  • directed graph
  • event-triggered control
  • fully distributed
  • model-free reinforcement learning
  • unknown heterogeneous multi-agent systems
