Neighborhood-Curiosity-Based Exploration in Multiagent Reinforcement Learning

Shike Yang, Ziming He, Jingchen Li, Haobin Shi, Qingbing Ji, Kao Shing Hwang, Xianshan Li

Research output: Contribution to journal › Article › peer-review

Abstract

Efficient exploration in cooperative multiagent reinforcement learning remains challenging in complex tasks. In this article, we propose a novel multiagent collaborative exploration method called neighborhood-curiosity-based exploration (NCE), by which agents can explore not only novel states but also new partnerships. Concretely, we use the attention mechanism in graph convolutional networks to perform a weighted summation of features from neighbors. The calculated attention weights can be regarded as an embodiment of the relationships among agents. We then use the prediction errors of the aggregated features as intrinsic rewards to facilitate exploration. When agents encounter novel states or new partnerships, NCE produces large prediction errors and hence large intrinsic rewards. In addition, in multiagent systems, agents are influenced most strongly by their neighbors and interact directly only with them; exploring partnerships between agents and their neighbors therefore enables agents to capture the most important cooperative relations with other agents. As a result, NCE can effectively promote collaborative exploration even in environments with a large number of agents. Our experimental results show that NCE achieves significant performance improvements on the challenging StarCraft II micromanagement (SMAC) benchmark.
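The core mechanism described above can be illustrated with a minimal sketch: attention weights are computed between an agent and its neighbors, neighbor features are aggregated by a weighted summation, and the intrinsic reward is the prediction error on the aggregated feature. All function names, the dot-product attention form, and the random stand-in for the predictor network are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over attention scores
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_neighbors(agent_feat, neighbor_feats, W_q, W_k):
    # Hypothetical dot-product attention between an agent and its neighbors;
    # the attention weights embody the agent's relationships with its neighbors.
    q = W_q @ agent_feat                       # query from the agent's own feature
    keys = neighbor_feats @ W_k.T              # (num_neighbors, d) keys
    scores = keys @ q / np.sqrt(len(q))        # scaled dot-product scores
    attn = softmax(scores)                     # normalized attention weights
    return attn @ neighbor_feats, attn         # weighted summation of neighbor features

def intrinsic_reward(predicted, aggregated):
    # Curiosity signal: a large prediction error on the aggregated feature
    # (novel state or new partnership) yields a large intrinsic reward.
    return float(np.mean((predicted - aggregated) ** 2))

rng = np.random.default_rng(0)
d = 4
agent = rng.normal(size=d)
neighbors = rng.normal(size=(3, d))            # features of 3 neighboring agents
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))

agg, attn = aggregate_neighbors(agent, neighbors, W_q, W_k)
r_int = intrinsic_reward(rng.normal(size=d), agg)  # stand-in predictor output
```

In the actual method, the predictor would be a trained network whose error shrinks on familiar states and partnerships, so the intrinsic reward naturally decays as exploration progresses.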

Original language: English
Pages (from-to): 379-389
Number of pages: 11
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 17
Issue number: 2
DOI
Publication status: Published - 2025
