Cournot Policy Model: Rethinking centralized training in multi-agent reinforcement learning

Jingchen Li, Yusen Yang, Ziming He, Huarui Wu, Haobin Shi, Wenbai Chen

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

This work studies Centralized Training and Decentralized Execution (CTDE), a powerful mechanism for easing multi-agent reinforcement learning. Although centralized evaluation ensures unbiased estimates of the Q-value, peers with unknown policies drive the decentralized policy far from expectation. To obtain a more stable and effective joint policy, we develop a novel game framework, termed the Cournot Policy Model, that enhances CTDE-based multi-agent learning. Combining game theory and reinforcement learning, we regard the joint decision-making in a single time step as a Cournot duopoly model, and we design a Hetero Variational Auto-Encoder to model the policies of peers during decentralized execution. With a conditional policy, each agent is guided to a stable mixed-strategy equilibrium even though the joint policy evolves over time. We further demonstrate that such an equilibrium must exist under centralized evaluation. We investigate the improvement our method brings to existing centralized learning methods. Experimental results on a comprehensive collection of benchmarks indicate that our approach consistently outperforms baseline methods.
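
For readers unfamiliar with the game-theoretic model the abstract invokes, the sketch below implements the classical two-firm Cournot duopoly with best-response dynamics converging to the Cournot-Nash equilibrium. This is a textbook illustration of the underlying concept, not code from the paper; the demand and cost parameters and all function names are illustrative assumptions.

```python
# Minimal sketch of classical Cournot duopoly best-response dynamics.
# Illustrative only: parameters (a, b, c) and function names are assumed,
# not taken from the paper.

def best_response(q_other: float, a: float = 100.0, b: float = 1.0,
                  c: float = 10.0) -> float:
    """Profit-maximizing quantity against an opponent producing q_other,
    under linear inverse demand P(Q) = a - b*Q and constant marginal cost c."""
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

def iterate_to_equilibrium(steps: int = 50) -> tuple[float, float]:
    """Alternate best responses; in this linear model the iteration converges
    to the unique Cournot-Nash equilibrium q* = (a - c) / (3b)."""
    q1, q2 = 0.0, 0.0
    for _ in range(steps):
        q1 = best_response(q2)
        q2 = best_response(q1)
    return q1, q2

if __name__ == "__main__":
    q1, q2 = iterate_to_equilibrium()
    # With a=100, b=1, c=10, both quantities approach (100-10)/3 = 30.0.
    print(f"Converged quantities: q1={q1:.2f}, q2={q2:.2f}")
```

In the paper's setting, the analogue of the opponent's quantity is the (unknown) policy of a peer agent; the abstract's conditional policy plays a role similar to the best response above, conditioned on a learned model of peers rather than an observed quantity.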

Original language: English
Article number: 120983
Journal: Information Sciences
Volume: 677
State: Published - Aug 2024

Keywords

  • Machine learning
  • Multi-agent reinforcement learning
  • Multi-agent system
