Mix-attention approximation for homogeneous large-scale multi-agent reinforcement learning

Yang Shike, Li Jingchen, Shi Haobin

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

In large-scale multi-agent environments with homogeneous agents, most existing works provide approximation methods to simplify the interactions among agents. In this work, we propose a new approximation, termed mix-attention approximation, to enhance multi-agent reinforcement learning. The approximation is made by a mix-attention module, which forms consistent consensuses among agents in partially observable environments. We leverage hard attention to compress each agent's perception to a smaller set of partial regions. These partial regions can engage the attention of several agents simultaneously, and the correlations among them are generated by a soft-attention module. We give the training method for the mix-attention mechanism and discuss the consistency between the mix-attention module and the policy network. We then analyze the feasibility of this mix-attention-based approximation and attempt to integrate our method into other approximation methods. In large-scale multi-agent environments, the proposal can be embedded into most reinforcement learning methods, and extensive experiments on multi-agent scenarios demonstrate the effectiveness of the proposed approach.
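The two-stage mechanism the abstract describes can be illustrated with a minimal numerical sketch. This is a hypothetical reconstruction, not the authors' implementation: all names (`mix_attention`, `w_hard`), the top-k hard-attention rule, and the scaled dot-product soft attention are assumptions made for illustration; the paper's actual training method and module details are not shown here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mix_attention(obs, k=2, d=4, seed=0):
    """Hypothetical sketch of the mix-attention idea: hard attention
    keeps each agent's top-k observed regions, then soft attention
    generates correlations among all retained regions.

    obs: (n_agents, n_regions, d) array of per-region features.
    """
    rng = np.random.default_rng(seed)
    n_agents, n_regions, _ = obs.shape
    # Hard attention: score each region and keep only the top-k per agent,
    # compressing each agent's perception to a few partial regions.
    w_hard = rng.standard_normal(d)
    scores = obs @ w_hard                       # (n_agents, n_regions)
    top = np.argsort(-scores, axis=1)[:, :k]    # indices of retained regions
    kept = np.take_along_axis(obs, top[..., None], axis=1)  # (n_agents, k, d)
    # Soft attention: correlate all retained regions with one another,
    # so regions attended by several agents contribute to a shared summary.
    pooled = kept.reshape(n_agents * k, d)
    attn = softmax(pooled @ pooled.T / np.sqrt(d))  # pairwise correlations
    consensus = attn @ pooled                       # (n_agents*k, d)
    return consensus.reshape(n_agents, k, d)

obs = np.random.default_rng(1).standard_normal((3, 5, 4))
out = mix_attention(obs)
print(out.shape)  # (3, 2, 4)
```

Because the soft-attention step mixes regions across all agents, every agent's retained regions are summarized against the same shared pool, which is one plausible reading of how the module forms consistent consensuses under partial observability.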

Original language: English
Pages (from-to): 3143-3154
Number of pages: 12
Journal: Neural Computing and Applications
Volume: 35
Issue number: 4
DOI
Publication status: Published - Feb 2023
