RFLE-Net: Refined Feature Extraction and Low-Loss Feature Fusion Method in Semantic Segmentation of Medical Images

Fan Zhang, Zihao Zhang, Huifang Hou, Yale Yang, Kangzhan Xie, Chao Fan, Xiaozhen Ren, Quan Pan

Research output: Contribution to journal › Article › peer-review

Abstract

The application of Transformer networks and feature fusion models to medical image segmentation has attracted considerable attention. Nevertheless, two main obstacles persist: (1) the limitations of Transformer networks in capturing fine local detail, and (2) the substantial loss of feature information in existing feature fusion modules. To address these issues, this study first presents a refined feature extraction approach that employs a dual-branch feature extraction network to capture complex multi-scale local and global information from images. We then propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which achieves effective feature fusion with minimal information loss. In addition, a cross-layer cross-attention fusion module (CLCA) is adopted to further strengthen feature fusion by enhancing the interaction between encoder and decoder features at different scales. Finally, the method was evaluated on the Synapse and ACDC datasets and proved competitive: the average DSC (%) reached 83.62 and 91.99, respectively, while the average HD95 (mm) was reduced to 19.55 and 1.15, respectively.
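For illustration, the sketch below shows the general idea behind cross-attention fusion between encoder and decoder features at different scales, as described in the abstract. It is a minimal PyTorch example: the module name, tensor shapes, and hyperparameters are assumptions made for exposition and do not reflect the authors' actual CLCA implementation.

```python
# Minimal sketch of cross-attention fusion between encoder and decoder
# features. Illustrative only; names, shapes, and hyperparameters are
# assumptions, not the paper's CLCA module.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Decoder tokens attend to encoder tokens from another scale."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
        # dec_feat, enc_feat: (batch, tokens, dim); tokens = H*W of a feature map
        q = self.norm_q(dec_feat)
        kv = self.norm_kv(enc_feat)
        fused, _ = self.attn(q, kv, kv)      # decoder queries, encoder keys/values
        return dec_feat + self.proj(fused)   # residual connection keeps decoder info


if __name__ == "__main__":
    dec = torch.randn(2, 196, 64)   # e.g. 14x14 decoder tokens, 64 channels
    enc = torch.randn(2, 784, 64)   # e.g. 28x28 encoder tokens from a finer scale
    out = CrossAttentionFusion(dim=64)(dec, enc)
    print(out.shape)                # torch.Size([2, 196, 64])
```

Because the query and key/value sequences may have different lengths, this kind of fusion can connect encoder and decoder feature maps of different resolutions without resampling one to match the other.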

Original language: English
Journal: Journal of Bionic Engineering
DOIs
State: Accepted/In press - 2025

Keywords

  • Fine-grained dual-branch feature extractor
  • Low-loss feature fusion module
  • Multi-organ medical image segmentation
