TY - JOUR
T1 - Intelligent fault diagnosis of rotating machine via Expansive dual-attention fusion Transformer enhanced by semi-supervised learning
AU - Liu, Sijie
AU - Li, Jin
AU - Zhou, Nan
AU - Chen, Geng
AU - Lu, Kuan
AU - Wu, Yafeng
N1 - Publisher Copyright:
© 2024
PY - 2025/1/15
Y1 - 2025/1/15
AB - Precise fault identification is essential to the effective operation and maintenance of rotating machinery (RM). Deep learning is increasingly adopted in this field, driven by the vast amounts of automatically generated monitoring data and its strong capacity to learn fault features. However, deep learning techniques often lose efficacy when only limited labeled fault data are available or when a two-stage diagnostic approach is employed, in which manual feature extraction is followed by fault classification. To address these challenges, we introduce the Expansive Dual-Attention Fusion Transformer (EDAF-Transformer), trained through a semi-supervised approach using minimal labeled data. We first review the foundational theories behind Transformers and semi-supervised learning, which underpin our approach to intelligent fault diagnosis from raw vibration signals under limited labeled data availability. We then present the EDAF-Transformer, which comprises two main components: the Expansive Dual-Attention Enhanced Modules and the 1-D Twin Global Self-Attention Transformer Encoder Module. The first component employs an Expansive Dual-branch Attention Enhanced Module (EDAEM) within a dual-branch attention architecture; this design enlarges the receptive field and captures both global and local features. The second component, the Twin Sparse Self-Attention Fusion Module (TSSAFM), integrates Global Sparse Self-Attention with Squeeze-and-Excitation (SE) attention to strengthen feature encoding while reducing computational demands. Furthermore, we implement an uncertainty-based self-training semi-supervised strategy with balanced unlabeled sample selection, which significantly improves the generalizability and performance of the EDAF-Transformer. We evaluated our semi-supervised fault diagnosis method extensively on publicly available motor bearing data and a specially curated transmission shaft dataset; the results demonstrate that it outperforms other intelligent fault diagnosis methods.
KW - Expansive dual-attention fusion Transformer
KW - Fault diagnosis
KW - Rotating machine
KW - Semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85204933358&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2024.125398
DO - 10.1016/j.eswa.2024.125398
M3 - Article
AN - SCOPUS:85204933358
SN - 0957-4174
VL - 260
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 125398
ER -