Vision and acoustic emission multi-modal learning for aircraft crack monitoring

  • Kang Liu
  • Ruiyao Huang
  • Yu Yang
  • Gang Miao
  • Minghao Wang
  • Ruiyuan Wang
  • Junyu Gao
  • Ju Huang
  • Xuelong Li

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

The rapid advancement of aviation technology has elevated the safety requirements for aircraft structural components to unprecedented levels. Conventional non-destructive testing methods exhibit inherent limitations in aircraft crack detection, including susceptibility to human interference, difficulty inspecting complex geometric configurations, and limited capability in identifying subsurface defects. This paper presents a multi-modal diagnostic framework named the Vision Acoustic Emission Multi-Attention Model. First, the proposed method employs CLIP-based cross-modal learning to synergistically integrate vision and acoustic emission features. Second, a mixed attention model is adopted to enhance feature representation ability. Moreover, we publicly contribute a novel ViAED dataset as a benchmark, containing synchronized crack image sequences and corresponding AE signal profiles. Experimental results demonstrate the model’s superior performance, achieving a 98.05% F1-score on the test datasets and outperforming comparative approaches by a considerable margin. The code and dataset of our model are available on GitHub for further exploration and application.
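The CLIP-style cross-modal integration described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the feature dimensions, projection matrices, and temperature are hypothetical and not taken from the paper; the sketch merely shows the generic pattern of projecting vision and AE features into a shared embedding space and training them with a symmetric contrastive (InfoNCE) objective, as CLIP does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the abstract): vision backbone
# output, AE signal encoder output, and a shared embedding space.
d_vision, d_ae, d_shared = 512, 128, 64
batch = 4

# Stand-ins for encoder outputs of paired image / AE samples.
vision_feat = rng.standard_normal((batch, d_vision))
ae_feat = rng.standard_normal((batch, d_ae))

# Learnable linear projections into the shared space (random here).
W_v = rng.standard_normal((d_vision, d_shared)) / np.sqrt(d_vision)
W_a = rng.standard_normal((d_ae, d_shared)) / np.sqrt(d_ae)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Project each modality and L2-normalize so dot products are cosines.
z_v = l2_normalize(vision_feat @ W_v)
z_a = l2_normalize(ae_feat @ W_a)

# CLIP-style similarity logits; the diagonal holds the matched pairs.
temperature = 0.07
logits = (z_v @ z_a.T) / temperature

def cross_entropy(logits, targets):
    # Row-wise log-softmax followed by negative log-likelihood.
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

# Symmetric contrastive loss: image-to-AE plus AE-to-image.
targets = np.arange(batch)
loss = 0.5 * (cross_entropy(logits, targets) + cross_entropy(logits.T, targets))
```

Minimizing this loss pulls each crack image toward its synchronized AE signal in the shared space; the paper's mixed-attention module would then operate on such aligned features, but its details are not reconstructed here.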

Original language: English
Article number: 103964
Journal: Advanced Engineering Informatics
Volume: 69
DOIs
State: Published - Jan 2026

Keywords

  • Aircraft structural safety
  • Crack monitoring
  • Mixed attention
  • Multi-modal learning
  • Structural health monitoring
