TY - GEN
T1 - Bio-inspired Decentralized Multi-robot Exploration in Unknown Environments
AU - Wang, Jiayao
AU - Guo, Bin
AU - Zhao, Kaixing
AU - Liu, Sicong
AU - Yu, Zhiwen
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Multi-robot collaborative exploration is a fundamental precondition for a wide range of robot applications. However, in situations where physical communication networks cannot be deployed, existing methods encounter great challenges since collaboration mostly relies on communication among robots. In this paper, a decentralized exploration approach for multi-robot systems is proposed to achieve efficient collaboration in communication-limited scenarios. In particular, we present a novel multi-agent reinforcement learning approach called Multi-Agent Proximal Policy Optimization-Pheromone Interaction Mechanism (MAPPO-PIM) to realize self-adaptive and self-organized exploration. Inspired by natural swarms, we model the release and action process of pheromones and apply it to the motion of robots, which naturally forms an implicit interaction network based on pheromone traces. Moreover, we introduce pheromone feedback into the reward shaping, which indirectly guides robots to explore different unknown areas without requiring explicit communication. We compare the performance of the proposed algorithm with several baseline exploration methods, and the experimental results indicate that our approach achieves up to a 45% improvement in collaboration and a 19% reduction in execution time.
AB - Multi-robot collaborative exploration is a fundamental precondition for a wide range of robot applications. However, in situations where physical communication networks cannot be deployed, existing methods encounter great challenges since collaboration mostly relies on communication among robots. In this paper, a decentralized exploration approach for multi-robot systems is proposed to achieve efficient collaboration in communication-limited scenarios. In particular, we present a novel multi-agent reinforcement learning approach called Multi-Agent Proximal Policy Optimization-Pheromone Interaction Mechanism (MAPPO-PIM) to realize self-adaptive and self-organized exploration. Inspired by natural swarms, we model the release and action process of pheromones and apply it to the motion of robots, which naturally forms an implicit interaction network based on pheromone traces. Moreover, we introduce pheromone feedback into the reward shaping, which indirectly guides robots to explore different unknown areas without requiring explicit communication. We compare the performance of the proposed algorithm with several baseline exploration methods, and the experimental results indicate that our approach achieves up to a 45% improvement in collaboration and a 19% reduction in execution time.
KW - Collaborative exploration
KW - communication
KW - multi-agent reinforcement learning
KW - pheromone mechanism
UR - http://www.scopus.com/inward/record.url?scp=85178520004&partnerID=8YFLogxK
U2 - 10.1109/MASS58611.2023.00032
DO - 10.1109/MASS58611.2023.00032
M3 - Conference contribution
AN - SCOPUS:85178520004
T3 - Proceedings - 2023 IEEE 20th International Conference on Mobile Ad Hoc and Smart Systems, MASS 2023
SP - 204
EP - 210
BT - Proceedings - 2023 IEEE 20th International Conference on Mobile Ad Hoc and Smart Systems, MASS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 20th IEEE International Conference on Mobile Ad Hoc and Smart Systems, MASS 2023
Y2 - 25 September 2023 through 27 September 2023
ER -