Weighted Mean Field Q-Learning for Large Scale Multiagent Systems

Zhuoying Chen, Huiping Li, Zhaoxu Wang, Bing Yan

Research output: Contribution to journal › Article › peer-review

Abstract

Mean field reinforcement learning (MFRL) addresses the problem of dimensional explosion in large-scale multiagent systems. However, MFRL averages the actions of neighbors equally, discarding the diversity and distinct features of individuals, which may lead to poor performance in many application scenarios. In this article, a new MFRL algorithm termed temporal weighted mean field Q-learning (TWMFQ) is proposed. TWMFQ introduces a temporal compensated multihead attention structure to construct the weighted mean-field framework, which reduces the complex relationships within the swarm to interactions between a specific agent and a weighted virtual mean agent. This approach allows the mean Q-function to represent the swarm behavior more informatively and comprehensively. In addition, an advanced sampling mechanism called mixed experience replay is established, which enriches the diversity of samples and prevents the algorithm from falling into local optima. Comparison experiments on the MAgent and multi-USV platforms demonstrate the superior performance of TWMFQ across different population sizes.
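The core idea of replacing an equal average of neighbor actions with an attention-weighted one can be illustrated with a minimal single-head sketch. This is not the paper's TWMFQ implementation (which uses a temporal compensated multihead attention structure); the function names, feature dimensions, and single-head scaled dot-product scoring below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_mean_action(agent_feat, neighbor_feats, neighbor_actions, W_q, W_k):
    """Attention-weighted average of neighbor actions (illustrative sketch).

    Scores each neighbor against the focal agent, then averages neighbor
    actions by the resulting softmax weights -- the action of a weighted
    'virtual mean agent'. With uniform scores this reduces to the plain
    equal-weight mean used by standard MFRL.
    """
    q = W_q @ agent_feat                  # query from the focal agent
    keys = neighbor_feats @ W_k.T         # one key per neighbor
    scores = keys @ q / np.sqrt(q.size)   # scaled dot-product scores
    w = softmax(scores)                   # attention weights, sum to 1
    return w @ neighbor_actions           # weighted mean action

rng = np.random.default_rng(0)
d, n, a = 4, 5, 3                         # feature dim, neighbors, action dim
agent = rng.normal(size=d)
neigh = rng.normal(size=(n, d))
acts = rng.normal(size=(n, a))
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
mean_act = weighted_mean_action(agent, neigh, acts, W_q, W_k)
```

Note that setting `W_q` to zero makes every score equal, so the weighted mean collapses to the ordinary mean of neighbor actions, which is exactly the equal-weighting behavior the abstract identifies as the limitation of standard MFRL.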

Original language: English
Journal: IEEE Transactions on Industrial Informatics
State: Accepted/In press - 2025

Keywords

  • Experience replay
  • mean field reinforcement learning (MFRL)
  • multi-unmanned surface vehicle (USV)

