Scene-Embedded Generative Adversarial Networks for Semi-Supervised SAR-to-Optical Image Translation

Zhe Guo, Rui Luo, Qinglin Cai, Jiayi Liu, Zhibo Zhang, Shaohui Mei

Research output: Contribution to journal › Article › peer-review

Abstract

SAR-to-optical image translation (S2OIT) improves the interpretability of SAR images, providing clearer visual insight that can significantly enhance remote sensing applications. Compared with supervised S2OIT methods, which are limited by the availability of paired datasets, unsupervised methods have shown greater advantages in practical applications. However, existing unsupervised S2OIT approaches, designed for unpaired datasets, often struggle to generalize to scenes that differ significantly from the training data, potentially leading to mistranslations in diverse scenarios. To address these issues, we propose a scene-embedded generative adversarial network for semi-supervised S2OIT, called ScE-GAN, which utilizes scene category labels in addition to an unpaired image dataset, effectively improving the robustness of S2OIT across different scenes without adding complex network structures or learning cost. In particular, a scene information fusion generator (SIFG) is proposed to learn the relationship between the image and the scene directly through scene category guidance and multihead attention, enhancing its ability to adapt to scene changes. Moreover, a scene-assisted discriminator (SAD) is presented that cooperates with the generator to ensure both image authenticity and scene accuracy. Extensive experiments on two challenging datasets, SEN1-2 and QXS-SAROPT, demonstrate that our method outperforms state-of-the-art methods in both objective and subjective evaluations. Our code and more details are available at https://github.com/lr-dddd/ScE-GAN.
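The abstract describes the SIFG as fusing scene category information into image features via multihead attention. The sketch below illustrates one plausible form of such a fusion step: image feature tokens attend to a scene-category embedding through multihead cross-attention, with a residual connection. All shapes, weight initializations, and function names here are illustrative assumptions for exposition; they are not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scene_fusion(img_feats, scene_emb, num_heads=4, seed=0):
    """Hypothetical sketch of a scene-information fusion step:
    image feature tokens (queries) attend to a scene-category
    embedding (keys/values) via multihead cross-attention."""
    rng = np.random.default_rng(seed)
    n, d = img_feats.shape                 # n spatial tokens, d channels
    dh = d // num_heads                    # per-head channel width
    # Random projections stand in for learned linear layers.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q = img_feats @ Wq                     # queries from image features
    k = scene_emb @ Wk                     # keys from the scene embedding
    v = scene_emb @ Wv                     # values from the scene embedding
    out = np.empty_like(img_feats)
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        attn = softmax(q[:, s] @ k[:, s].T / np.sqrt(dh), axis=-1)
        out[:, s] = attn @ v[:, s]         # scene info routed per head
    return img_feats + out                 # residual fusion

feats = np.random.default_rng(1).standard_normal((16, 32))  # 16 tokens, 32 dims
scene = np.random.default_rng(2).standard_normal((1, 32))   # one scene-category embedding
fused = scene_fusion(feats, scene)
print(fused.shape)  # (16, 32)
```

In this toy setup the scene embedding contributes a single key/value token, so attention simply gates how much scene information each spatial location absorbs; a learned version would train the projections jointly with the generator.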

Original language: English
Article number: 4018005
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 21
DOI
Publication status: Published - 2024
