TY - GEN
T1 - Deep Relationship Graph Reinforcement Learning for Multi-Aircraft Air Combat
AU - Han, Yue
AU - Piao, Haiyin
AU - Hou, Yaqing
AU - Sun, Yang
AU - Sun, Zhixiao
AU - Zhou, Deyun
AU - Yang, Shengqi
AU - Peng, Xuanqi
AU - Fan, Songyuan
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Air combat Artificial Intelligence (AI) has attracted increasing attention from aeronautics engineers and artificial intelligence researchers. However, existing methods often struggle to solve the collaboration problems in multi-aircraft air combat due to the high complexity incurred by combinatorial explosion. In view of this, we propose a Deep Relationship Graph Reinforcement Learning (DRGRL) algorithm for multi-aircraft collaboration. Specifically, DRGRL significantly simplifies the complex situation space by abstracting the original problem into a symbolic form. In addition, a novel Air Combat Relationship Graph (ACRG) is introduced to represent the learned collaboration pattern, which concentrates on the most important combat relationships for tactical decision making. Experiments are conducted in an air combat simulation environment named WUKONG. The comprehensive experimental results demonstrate that DRGRL can learn valuable collaboration patterns and achieves better combat performance than state-of-the-art air combat AI methods.
AB - Air combat Artificial Intelligence (AI) has attracted increasing attention from aeronautics engineers and artificial intelligence researchers. However, existing methods often struggle to solve the collaboration problems in multi-aircraft air combat due to the high complexity incurred by combinatorial explosion. In view of this, we propose a Deep Relationship Graph Reinforcement Learning (DRGRL) algorithm for multi-aircraft collaboration. Specifically, DRGRL significantly simplifies the complex situation space by abstracting the original problem into a symbolic form. In addition, a novel Air Combat Relationship Graph (ACRG) is introduced to represent the learned collaboration pattern, which concentrates on the most important combat relationships for tactical decision making. Experiments are conducted in an air combat simulation environment named WUKONG. The comprehensive experimental results demonstrate that DRGRL can learn valuable collaboration patterns and achieves better combat performance than state-of-the-art air combat AI methods.
KW - air combat AI
KW - graph neural network
KW - multi-aircraft collaboration
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85140791472&partnerID=8YFLogxK
U2 - 10.1109/IJCNN55064.2022.9892208
DO - 10.1109/IJCNN55064.2022.9892208
M3 - Conference contribution
AN - SCOPUS:85140791472
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Joint Conference on Neural Networks, IJCNN 2022
Y2 - 18 July 2022 through 23 July 2022
ER -