TY - JOUR
T1 - Dynamic scheduling for multi-level air defense with contingency situations based on Human-Intelligence collaboration
AU - Tang, Rugang
AU - Ning, Xin
AU - Wang, Zheng
AU - Fan, Jiaqi
AU - Ma, Shichao
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/6
Y1 - 2024/6
N2 - Resource scheduling is an important part of military operations, especially in key-point air defense under saturation attack. Many achievements have been made in radar resource scheduling and multi-aircraft scheduling. However, there is little research on the integrated scheduling of detection, tracking, and attack, which can greatly improve resource utilization to address the resource shortage caused by saturation attacks on key sites. In this paper, we propose to autonomously accomplish real-time resource dispatching through end-to-end deep reinforcement learning (DRL), while allowing the commander's intervention to accomplish a variety of complex tactics. First, an integrated scheduling model of detection, tracking, and interception is proposed and transformed into a sequential decision problem by introducing a disjunctive graph and a graph neural network (GNN) to extract node features. Subsequently, the Proximal Policy Optimization (PPO) algorithm is applied to learn the air defense environment (ADE), which is modeled as a Markov decision process (MDP). Benefiting from the powerful generalization capability of the policy network, our algorithm can adapt to scheduling missions of different sizes. Moreover, we propose a novel Human-Intelligence collaborative dynamic scheduling framework for emergency response. Simulation results indicate that our algorithm generates high-quality scheduling policies for defense resources, outperforming existing methods. In addition, the dynamic scheduling performance of the Human-Intelligence collaboration approach in response to multiple contingencies is demonstrated.
KW - Deep reinforcement learning
KW - Dynamic scheduling
KW - Human-Intelligence collaboration
KW - Key-point air defense
UR - http://www.scopus.com/inward/record.url?scp=85183472761&partnerID=8YFLogxK
U2 - 10.1016/j.engappai.2024.107893
DO - 10.1016/j.engappai.2024.107893
M3 - Article
AN - SCOPUS:85183472761
SN - 0952-1976
VL - 132
JO - Engineering Applications of Artificial Intelligence
JF - Engineering Applications of Artificial Intelligence
M1 - 107893
ER -