Instance segmentation from small dataset by a dual-layer semantics-based deep learning framework

Research output: Contribution to journal › Article › peer-review


Abstract

Efficient and accurate segmentation of complex microstructures is a critical challenge in establishing process-structure-property (PSP) linkages of materials. Deep learning (DL)-based instance segmentation algorithms show potential in achieving this goal. However, to ensure prediction reliability, current algorithms usually have complex architectures and demand vast amounts of training data. To reduce model complexity and the dependence on data volume, we developed a DL framework based on a simple method called dual-layer semantics. In the framework, a data standardization module removes extraneous microstructural noise and accentuates the desired structural characteristics, while a post-processing module further improves segmentation accuracy. The framework was successfully applied to a small dataset of bimodal Ti-6Al-4V microstructures with only 112 samples. Compared with the ground truth, it achieves an 86.81% IoU for the globular α phase and a 94.70% average size-distribution similarity for the colony structures. More importantly, only 36 s was needed to process a 1024 × 1024 micrograph, far faster than manual annotation by experienced experts (usually 900 s). The framework proved reliable, interpretable, and scalable, enabling its use on complex microstructures to deepen the understanding of PSP linkages.
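The IoU figure quoted above measures the overlap between a predicted segmentation mask and the ground-truth mask. As a minimal illustration (not the authors' code; the toy masks below are hypothetical), IoU for binary masks can be computed as intersection divided by union:

```python
import numpy as np

def mask_iou(pred, target):
    """Intersection-over-Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # If both masks are empty, define IoU as 1 (perfect agreement).
    return inter / union if union > 0 else 1.0

# Toy 4x4 masks for illustration only
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(mask_iou(pred, gt))  # 3 overlapping pixels / 4 union pixels = 0.75
```

An 86.81% IoU thus means that the predicted globular α regions and the hand-labeled ground truth overlap on roughly 87% of their combined area.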

Original language: English
Pages (from-to): 2817-2833
Number of pages: 17
Journal: Science China Technological Sciences
Volume: 67
Issue number: 9
DOIs
State: Published - Sep 2024

Keywords

  • Ti-6Al-4V
  • deep learning
  • dual-layer semantics
  • instance segmentation
  • small dataset

