TY - JOUR
T1 - RFLE-Net
T2 - Refined Feature Extraction and Low-Loss Feature Fusion Method in Semantic Segmentation of Medical Images
AU - Zhang, Fan
AU - Zhang, Zihao
AU - Hou, Huifang
AU - Yang, Yale
AU - Xie, Kangzhan
AU - Fan, Chao
AU - Ren, Xiaozhen
AU - Pan, Quan
N1 - Publisher Copyright:
© Jilin University 2025.
PY - 2025
Y1 - 2025
N2 - The application of Transformer networks and feature fusion models in medical image segmentation has attracted considerable attention in the academic community. Nevertheless, two main obstacles persist: (1) the limitations of the Transformer network in capturing locally detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To address these issues, this study first presents a refined feature extraction approach, employing a double-branch feature extraction network to capture complex multi-scale local and global information from images. Subsequently, we propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal loss. Simultaneously, the cross-layer cross-attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders of various scales. Finally, the feasibility of our method was verified on the Synapse and ACDC datasets, demonstrating its competitiveness: the average DSC (%) was 83.62 and 91.99, respectively, and the average HD95 (mm) was reduced to 19.55 and 1.15, respectively.
AB - The application of Transformer networks and feature fusion models in medical image segmentation has attracted considerable attention in the academic community. Nevertheless, two main obstacles persist: (1) the limitations of the Transformer network in capturing locally detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To address these issues, this study first presents a refined feature extraction approach, employing a double-branch feature extraction network to capture complex multi-scale local and global information from images. Subsequently, we propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal loss. Simultaneously, the cross-layer cross-attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders of various scales. Finally, the feasibility of our method was verified on the Synapse and ACDC datasets, demonstrating its competitiveness: the average DSC (%) was 83.62 and 91.99, respectively, and the average HD95 (mm) was reduced to 19.55 and 1.15, respectively.
KW - Fine-grained dual-branch feature extractor
KW - Low-loss feature fusion module
KW - Multi-organ medical image segmentation
UR - http://www.scopus.com/inward/record.url?scp=105002785600&partnerID=8YFLogxK
U2 - 10.1007/s42235-025-00688-7
DO - 10.1007/s42235-025-00688-7
M3 - Article
AN - SCOPUS:105002785600
SN - 1672-6529
JO - Journal of Bionic Engineering
JF - Journal of Bionic Engineering
ER -