Neighborhood curiosity-based exploration in multi-agent reinforcement learning

Shike Yang, Ziming He, Jingchen Li, Haobin Shi, Qingbing Ji, Kao Shing Hwang, Xianshan Li

Research output: Contribution to journal › Article › peer-review

Abstract

Efficient exploration in cooperative multi-agent reinforcement learning remains challenging in complex tasks. In this paper, we propose a novel multi-agent collaborative exploration method, Neighborhood Curiosity-based Exploration (NCE), with which agents explore not only novel states but also new partnerships. Concretely, we use an attention mechanism in graph convolutional networks to compute a weighted summation of features from each agent's neighbors; the resulting attention weights can be regarded as an embodiment of the relationships among agents. We then use the prediction errors of the aggregated features as intrinsic rewards to facilitate exploration: when agents encounter novel states or new partnerships, NCE produces large prediction errors and thus large intrinsic rewards. Moreover, in multi-agent systems agents are influenced most strongly by their neighbors and interact directly only with them, so exploring partnerships with neighbors lets agents capture the most important cooperative relations with other agents. NCE can therefore promote collaborative exploration effectively even in environments with a large number of agents. Our experimental results show that NCE achieves significant performance improvements on the challenging StarCraft Multi-Agent Challenge (SMAC) benchmark.
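The mechanism outlined in the abstract can be summarized in a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes a PyTorch setup, a single-head dot-product attention over each agent's neighbors standing in for the paper's graph-attention aggregation, and a learned predictor whose error on the aggregated neighbor features serves as the intrinsic reward. All names here (NeighborhoodCuriosity, feat_dim, hidden_dim) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodCuriosity(nn.Module):
    """Sketch of neighborhood curiosity-based intrinsic reward.

    Aggregates neighbor features with attention (as in a graph
    attention layer) and uses the prediction error on the aggregated
    features as the curiosity signal. Illustrative only; details may
    differ from the paper's implementation.
    """

    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim, bias=False)
        self.key = nn.Linear(feat_dim, hidden_dim, bias=False)
        self.value = nn.Linear(feat_dim, hidden_dim, bias=False)
        # Predictor tries to reconstruct the aggregated neighbor
        # features from the agent's own features; its error is the
        # intrinsic reward.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, feats: torch.Tensor, adj: torch.Tensor):
        """feats: (n_agents, feat_dim); adj: (n_agents, n_agents)
        0/1 neighborhood mask, assumed to include self-loops so no
        row of the attention matrix is empty."""
        q, k, v = self.query(feats), self.key(feats), self.value(feats)
        scores = q @ k.t() / (k.shape[-1] ** 0.5)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)   # relationship weights
        aggregated = attn @ v              # weighted neighbor summation
        # Per-agent prediction error on the aggregated features.
        # Novel states or new partnerships (changed attention
        # weights) both raise this error.
        pred = self.predictor(feats)
        intrinsic = F.mse_loss(
            pred, aggregated.detach(), reduction="none"
        ).mean(dim=-1)
        return intrinsic, attn

# Usage with random features for 4 agents (hypothetical shapes).
ncm = NeighborhoodCuriosity(feat_dim=8)
feats = torch.randn(4, 8)
adj = (torch.rand(4, 4) > 0.5).float()
adj.fill_diagonal_(1.0)  # self-loops keep every attention row valid
r_int, attn = ncm(feats, adj)
```

In a full training loop the intrinsic reward would typically be scaled by a coefficient and added to the environment reward, with the predictor trained jointly to minimize the same error; the paper may differ in these details.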

Original language: English
Journal: IEEE Transactions on Cognitive and Developmental Systems
State: Accepted/In press - 2024

Keywords

  • Machine learning
  • Multi-agent reinforcement learning
  • Multi-agent system
