TY - JOUR
T1 - Cross-domain learning for underwater image enhancement
AU - Li, Fei
AU - Zheng, Jiangbin
AU - Zhang, Yuanfang
AU - Jia, Wenjing
AU - Wei, Qianru
AU - He, Xiangjian
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2023/1
Y1 - 2023/1
N2 - The poor quality of underwater images has become a widely recognized factor affecting the performance of underwater development projects, including mineral exploitation, diving photography, and navigation for autonomous underwater vehicles. In recent years, deep learning-based techniques have achieved remarkable success in image restoration and enhancement tasks. However, the limited availability of paired training data (underwater images and their corresponding clear images) and the requirement for vivid color correction remain challenging for underwater image enhancement, as almost all learning-based methods require paired data for training. In this study, instead of creating time-consuming paired data, we explore an unsupervised training strategy. Specifically, we introduce a universal cross-domain GAN-based framework that generates high-quality images without depending on paired training data. To ensure vivid colorfulness, a color loss is designed to constrain the training process. In addition, a feature fusion module (FFM) is proposed to increase the capacity of the whole model, and a dual-discriminator channel is adopted in the architecture. Extensive quantitative and perceptual experiments show that our approach overcomes the limitation of paired data and achieves superior performance over the state-of-the-art on several underwater benchmarks in terms of both accuracy and model deployment.
AB - The poor quality of underwater images has become a widely recognized factor affecting the performance of underwater development projects, including mineral exploitation, diving photography, and navigation for autonomous underwater vehicles. In recent years, deep learning-based techniques have achieved remarkable success in image restoration and enhancement tasks. However, the limited availability of paired training data (underwater images and their corresponding clear images) and the requirement for vivid color correction remain challenging for underwater image enhancement, as almost all learning-based methods require paired data for training. In this study, instead of creating time-consuming paired data, we explore an unsupervised training strategy. Specifically, we introduce a universal cross-domain GAN-based framework that generates high-quality images without depending on paired training data. To ensure vivid colorfulness, a color loss is designed to constrain the training process. In addition, a feature fusion module (FFM) is proposed to increase the capacity of the whole model, and a dual-discriminator channel is adopted in the architecture. Extensive quantitative and perceptual experiments show that our approach overcomes the limitation of paired data and achieves superior performance over the state-of-the-art on several underwater benchmarks in terms of both accuracy and model deployment.
KW - GAN
KW - Loss function
KW - Underwater image enhancement
KW - Unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85141928593&partnerID=8YFLogxK
U2 - 10.1016/j.image.2022.116890
DO - 10.1016/j.image.2022.116890
M3 - Article
AN - SCOPUS:85141928593
SN - 0923-5965
VL - 110
JO - Signal Processing: Image Communication
JF - Signal Processing: Image Communication
M1 - 116890
ER -