Transfer reinforcement learning for multi-agent pursuit-evasion differential game with obstacles in a continuous environment

Penglin Hu, Quan Pan, Chunhui Zhao, Yaning Guo

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

In this paper, we study the multi-pursuer single-evader pursuit-evasion (MSPE) differential game in a continuous environment with obstacles. We propose a novel pursuit-evasion algorithm based on reinforcement learning and transfer learning. In the source-task learning stage, we combine Q-learning with value function approximation to overcome the large storage requirements of the conventional Q-table method: the approximation extends the discrete state space to a continuous one and substantially reduces the demand for storage space. In the target-task learning stage, we use a Gaussian mixture model (GMM) to classify the source tasks, and the source policies whose corresponding state-value sets have the highest probability densities are assigned to the agent in the target task for learning. This strategy not only avoids negative transfer but also improves the algorithm's generalization ability and convergence speed. Simulations and experiments demonstrate the algorithm's effectiveness.
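The abstract describes the source-task stage only at a high level. Below is a minimal sketch of Q-learning with linear value function approximation for a single pursuer, written in Python. The feature map (random Fourier features), the discretized action set, and the state representation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Sketch: semi-gradient Q-learning with a linear value function approximator,
# replacing a discrete Q-table for a continuous pursuit-evasion state space.
# Feature map, action set, and state layout are assumptions for illustration.

N_FEATURES = 64
ACTIONS = np.linspace(-np.pi, np.pi, 8)   # discretized heading commands (assumption)

rng = np.random.default_rng(0)
W = rng.normal(size=(N_FEATURES, 4))      # random Fourier feature weights
b = rng.uniform(0.0, 2.0 * np.pi, N_FEATURES)

def features(state, action):
    """Joint state-action features; state = pursuer-evader relative position (2-D)."""
    x = np.concatenate([state[:2], [np.cos(action), np.sin(action)]])
    return np.cos(W @ x + b) / np.sqrt(N_FEATURES)

theta = np.zeros(N_FEATURES)              # weights of the linear Q-function

def q_value(state, action):
    return theta @ features(state, action)

def greedy_action(state):
    return ACTIONS[np.argmax([q_value(state, a) for a in ACTIONS])]

def q_learning_step(state, action, reward, next_state, alpha=0.05, gamma=0.95):
    """One semi-gradient Q-learning update of the weight vector theta."""
    global theta
    td_target = reward + gamma * max(q_value(next_state, a) for a in ACTIONS)
    td_error = td_target - q_value(state, action)
    theta += alpha * td_error * features(state, action)
```

Because the Q-function is stored as a weight vector rather than a table, the memory cost is fixed by the number of features instead of growing with a discretization of the continuous state space, which is the storage advantage the abstract refers to.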
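For the target-task stage, the abstract states that a GMM is used to classify source tasks and that the source policy whose state-value set has the highest probability density is reused. One possible reading, sketched below with scikit-learn's GaussianMixture, fits a mixture per source value set and selects the source whose model assigns the target task's values the highest average log-density; the value-set representation and scoring criterion are assumptions, not the paper's stated procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_source_policy(source_value_sets, target_values, n_components=3):
    """Return the index of the source task whose GMM gives the target-task
    state values the highest mean log-density (illustrative criterion)."""
    scores = []
    for values in source_value_sets:
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(values.reshape(-1, 1))
        scores.append(gmm.score(target_values.reshape(-1, 1)))  # mean log-likelihood
    return int(np.argmax(scores))

# Toy usage with synthetic value samples (not data from the paper).
rng = np.random.default_rng(1)
source_value_sets = [rng.normal(loc=m, scale=1.0, size=200) for m in (0.0, 5.0, 10.0)]
target_values = rng.normal(loc=4.5, scale=1.0, size=50)
print("selected source task:", select_source_policy(source_value_sets, target_values))
```

Selecting only the densest-matching source policy is what limits negative transfer: poorly matching source tasks are never handed to the target-task agent.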

Original language: English
Pages (from-to): 2125-2140
Number of pages: 16
Journal: Asian Journal of Control
Volume: 26
Issue number: 4
DOI
Publication status: Published - July 2024
