Self-supervised Blind2Unblind deep learning scheme for OCT speckle reductions

Xiaojun Yu, Chenkun Ge, Mingshuai Li, Miao Yuan, Linbo Liu, Jianhua Mo, Perry Ping Shum, Jinna Chen

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

As a low-coherence interferometry-based imaging modality, optical coherence tomography (OCT) inevitably suffers from speckles originating from multiply scattered photons. Speckles obscure tissue microstructures and degrade the accuracy of disease diagnosis, thereby hindering OCT clinical applications. Various methods have been proposed to address this issue, yet they suffer from heavy computational loads, a lack of high-quality clean image priors, or both. In this paper, a novel self-supervised deep learning scheme, namely the Blind2Unblind network with a refinement strategy (B2Unet), is proposed for OCT speckle reduction using a single noisy image only. Specifically, the overall B2Unet network architecture is presented first, and then a global-aware mask mapper and a loss function are devised to improve image perception and to optimize the sampled blind spots of the mask mapper, respectively. To make the blind spots visible to B2Unet, a new re-visible loss is also designed, and its convergence is discussed with the speckle properties taken into account. Extensive experiments on different OCT image datasets are finally conducted to compare B2Unet with existing state-of-the-art methods. Both qualitative and quantitative results convincingly demonstrate that B2Unet outperforms state-of-the-art model-based and fully supervised deep-learning methods, and that it is robust and capable of effectively suppressing speckles while preserving important tissue microstructures in OCT images across different cases.
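The abstract outlines the core mechanics of the scheme: blind-spot masking of the noisy input, a global-aware mask mapper that gathers network predictions at the masked locations, and a re-visible loss that re-introduces information from the unmasked input. The sketch below illustrates one plausible training step under those assumptions; the names (blind2unblind_step, denoiser, cell, lam, eta) are hypothetical, and the loss form is assumed from the general Blind2Unblind idea rather than taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def blind2unblind_step(denoiser, noisy, cell=4, lam=1.0, eta=1.0):
    """One hypothetical Blind2Unblind-style training step.

    noisy: (B, C, H, W) speckled OCT image batch.
    Returns a scalar loss combining an assumed re-visible term and a
    blind-spot regularization term.
    """
    b, c, h, w = noisy.shape
    mapped = torch.zeros_like(noisy)          # output of the global-aware mask mapper
    for k in range(cell * cell):
        # Binary mask selecting one pixel position per cell x cell block.
        mask = torch.zeros(1, 1, h, w, device=noisy.device)
        mask[..., k // cell::cell, k % cell::cell] = 1.0
        masked_input = noisy * (1.0 - mask)   # blind the selected pixels
        pred = denoiser(masked_input)
        mapped = mapped + pred * mask         # keep predictions only at blind spots

    with torch.no_grad():
        raw_pred = denoiser(noisy)            # "visible" branch; gradient stopped here (assumption)

    # Assumed re-visible loss: blind-spot prediction plus the scaled visible
    # prediction should reconstruct the noisy target, plus a blind-spot regularizer.
    rev = F.mse_loss(mapped + lam * raw_pred, (1.0 + lam) * noisy)
    reg = F.mse_loss(mapped, noisy)
    return rev + eta * reg
```

A typical usage under these assumptions would pass any image-to-image network, e.g. loss = blind2unblind_step(unet, noisy_batch) followed by loss.backward(), with only noisy OCT images required for training.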

Original language: English
Pages (from-to): 2773-2795
Number of pages: 23
Journal: Biomedical Optics Express
Volume: 14
Issue number: 6
DOI
Publication status: Published - 1 Jun 2023
