TY - GEN
T1 - Improved cGAN for SAR-to-Optical Image Translation
AU - Hu, Pengcheng
AU - Wang, Yong
AU - Liu, Yifan
AU - Guo, Xinxin
AU - Wang, Yongkang
AU - Cui, Rongxin
N1 - Publisher Copyright:
© 2024 Technical Committee on Control Theory, Chinese Association of Automation.
PY - 2024
Y1 - 2024
N2 - Synthetic aperture radar (SAR) enables all-day, all-weather Earth observation, but SAR images suffer from speckle noise and geometric distortion, which hinder visual interpretation. To enhance the visual quality of SAR images, this paper proposes an improved cGAN (Conditional Generative Adversarial Network) method for translating SAR images into optical images. First, the generator adopts a U-Net structure that combines global and local features, improving the detail of the generated images. Second, the discriminator adopts a PatchGAN structure to extract and characterize local image features and to finely distinguish each part of the image. Finally, SSIM and PSNR loss terms are added to improve the fidelity of the generated images. In experiments on the SEN1-2 dataset, our method surpasses the basic cGAN and pix2pix. The translated images retain the key content of the SAR images while exhibiting the style of optical images.
AB - Synthetic aperture radar (SAR) enables all-day, all-weather Earth observation, but SAR images suffer from speckle noise and geometric distortion, which hinder visual interpretation. To enhance the visual quality of SAR images, this paper proposes an improved cGAN (Conditional Generative Adversarial Network) method for translating SAR images into optical images. First, the generator adopts a U-Net structure that combines global and local features, improving the detail of the generated images. Second, the discriminator adopts a PatchGAN structure to extract and characterize local image features and to finely distinguish each part of the image. Finally, SSIM and PSNR loss terms are added to improve the fidelity of the generated images. In experiments on the SEN1-2 dataset, our method surpasses the basic cGAN and pix2pix. The translated images retain the key content of the SAR images while exhibiting the style of optical images.
KW - cGAN (Conditional Generative Adversarial Network)
KW - deep learning
KW - SAR-to-optical image translation
UR - http://www.scopus.com/inward/record.url?scp=85205489043&partnerID=8YFLogxK
U2 - 10.23919/CCC63176.2024.10661422
DO - 10.23919/CCC63176.2024.10661422
M3 - Conference contribution
AN - SCOPUS:85205489043
T3 - Chinese Control Conference, CCC
SP - 7675
EP - 7680
BT - Proceedings of the 43rd Chinese Control Conference, CCC 2024
A2 - Na, Jing
A2 - Sun, Jian
PB - IEEE Computer Society
T2 - 43rd Chinese Control Conference, CCC 2024
Y2 - 28 July 2024 through 31 July 2024
ER -