A Trajectory Prediction Method of Drogue in Aerial Refueling Based on Transfer Learning and Attention Mechanism

Xiaojun Xing, Rui Wang, Bing Han, Cihang Wu, Bing Xiao

Research output: Contribution to journal › Article › peer-review


Abstract

The growing significance of aerial refueling requires that receiver aircraft be able to perform autonomous aerial refueling (AAR) tasks in flight. In this regard, precise docking is a key but challenging issue. To address this problem, a drogue trajectory prediction method based on transfer learning and an attention mechanism is proposed in this article. The long short-term memory (LSTM) neural network is introduced as the base model to learn temporal correlations in the time-series trajectory data of a drogue. To further boost network performance, a transfer learning strategy and an attention mechanism are incorporated into the model. Prior knowledge of physical models in similar domains is passed to the network through transfer learning, and larger weights are adaptively assigned to more important features by the attention mechanism. The effectiveness of the proposed method is verified through comparisons with autoregressive integrated moving average (ARIMA), recurrent neural network (RNN), LSTM, and attention-based LSTM models, and the effects of transfer learning and the attention mechanism are visualized. When this approach is applied to predictive docking, a high success rate is achieved in the ground experiment, which shows great potential for industrial applications.
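The abstract's attention step, in which larger weights are adaptively assigned to more important time steps, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the hidden-state matrix `H` stands in for LSTM outputs over a trajectory window, and the scoring vector `w` is a hypothetical learned parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T time steps of a drogue trajectory window,
# each encoded as a d-dimensional hidden state (stand-in for LSTM output).
T, d = 10, 8
H = rng.normal(size=(T, d))   # hidden states h_1 .. h_T
w = rng.normal(size=d)        # illustrative learnable scoring vector

# Score each hidden state, then softmax the scores into attention weights.
scores = H @ w                        # shape (T,)
alphas = np.exp(scores - scores.max())
alphas /= alphas.sum()                # weights are nonnegative and sum to 1

# Context vector: weighted sum of hidden states. Time steps with larger
# weights contribute more to the subsequent trajectory prediction.
context = alphas @ H                  # shape (d,)

print(round(float(alphas.sum()), 6))  # 1.0
print(context.shape)                  # (8,)
```

The softmax guarantees a valid weighting regardless of the raw scores; in the paper's setting these weights would be produced by a trained attention layer rather than a random vector.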

Original language: English
Article number: 3531712
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 73
DOIs
State: Published - 2024

Keywords

  • Attention mechanism
  • autonomous aerial refueling (AAR)
  • long short-term memory (LSTM) network
  • trajectory prediction
  • transfer learning
