A Decentralized Communication Framework Based on Dual-Level Recurrence for Multiagent Reinforcement Learning

Xuesi Li, Jingchen Li, Haobin Shi, Kao Shing Hwang

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Designing communication channels among agents is a feasible way to conduct decentralized learning, especially in partially observable environments or large-scale multiagent systems. In this work, a communication model with dual-level recurrence is developed to provide a more efficient communication mechanism for multiagent reinforcement learning. Communication is conducted by a gated-attention-based recurrent network, in which historical states are taken into account as the second level of recurrence. We separate communication messages from memories in the recurrent model, so the proposed communication flow can adapt to changing communication partners under limited communication, and the communication results are fair to every agent. We discuss our method in both partially observable and fully observable environments. The results of several experiments suggest that our method outperforms existing decentralized communication frameworks and the corresponding centralized training method.
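The abstract describes the architecture only at a high level, so the sketch below is one plausible reading rather than the paper's implementation. It shows, in PyTorch, a per-agent private memory (first-level recurrence) kept separate from a recurrent communication state (second-level recurrence), with incoming messages pooled by masked multi-head attention and filtered by a learned gate. All names (DualRecurrentComm, mem_dim, msg_dim, comm_mask) are illustrative assumptions.

# Minimal sketch, assuming a gated-attention communication layer with two
# separate recurrent states per agent; NOT the authors' released code.
import torch
import torch.nn as nn

class DualRecurrentComm(nn.Module):
    def __init__(self, obs_dim, mem_dim, msg_dim, n_heads=4):
        super().__init__()
        # Level 1: private memory recurrence over each agent's own trajectory.
        self.memory_cell = nn.GRUCell(obs_dim + msg_dim, mem_dim)
        # Level 2: a separate recurrence over the communication channel, kept
        # apart from the memory so communication partners can change per step.
        self.comm_cell = nn.GRUCell(msg_dim, msg_dim)
        self.msg_proj = nn.Linear(mem_dim, msg_dim)   # outgoing messages
        self.attn = nn.MultiheadAttention(msg_dim, n_heads, batch_first=True)
        self.gate = nn.Linear(mem_dim + msg_dim, msg_dim)

    def forward(self, obs, mem, comm, comm_mask=None):
        # obs:  (n_agents, obs_dim)  local observations
        # mem:  (n_agents, mem_dim)  private memories (level-1 state)
        # comm: (n_agents, msg_dim)  communication states (level-2 state)
        # comm_mask: optional (n_agents, n_agents) bool mask; True means the
        #            pair may not communicate (limited-communication case).
        msgs = self.msg_proj(mem)                             # outgoing messages
        q = k = v = msgs.unsqueeze(0)                         # (1, n, msg_dim)
        pooled, _ = self.attn(q, k, v, attn_mask=comm_mask)   # attend over peers
        pooled = pooled.squeeze(0)
        # Gate decides how much of the pooled message each agent accepts.
        g = torch.sigmoid(self.gate(torch.cat([mem, pooled], dim=-1)))
        comm = self.comm_cell(g * pooled, comm)               # level-2 update
        mem = self.memory_cell(torch.cat([obs, comm], dim=-1), mem)  # level-1
        return mem, comm

# Example usage with arbitrary sizes:
layer = DualRecurrentComm(obs_dim=16, mem_dim=64, msg_dim=32)
obs = torch.randn(3, 16)
mem, comm = torch.zeros(3, 64), torch.zeros(3, 32)
mem, comm = layer(obs, mem, comm)

Keeping the communication state outside the private memory is what lets the message pathway be re-wired between steps without disturbing each agent's own history, which is how the abstract's "separate messages from memories" claim is interpreted here.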

Original language: English
Pages (from-to): 640-649
Number of pages: 10
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 16
Issue number: 2
DOIs
State: Published - 1 Apr 2024

Keywords

  • Gated recurrent network
  • multiagent reinforcement learning
  • multiagent system
