Abstract
In the field of remote sensing, cloud cover severely degrades the quality of satellite observations of the Earth. Because information in cloud-covered regions is completely absent, cloud removal from a single optical image is an ill-posed problem. Since synthetic aperture radar (SAR) can effectively penetrate clouds, fusing SAR and optical remote sensing images can alleviate this problem. However, existing SAR-based optical cloud removal methods fail to effectively exploit the global information provided by the SAR image, resulting in limited performance gains. In this paper, we introduce a novel cloud removal method named the Multi-Level SAR-Guided Contextual Attention Network (MSGCA-Net). MSGCA-Net adopts a multi-level architecture that integrates a SAR-Guided Contextual Attention (SGCA) module to effectively fuse the reliable global contextual information from SAR images with the local features of optical images. In the SGCA module, the SAR image provides reliable global contextual information and the genuine structure of cloud-covered regions, while the optical image provides local feature information. The proposed model can efficiently extract and fuse global and local contextual information from SAR and optical images. We trained and evaluated the model on both simulated and real-world datasets. Both qualitative and quantitative experimental evaluations demonstrate that the proposed method yields high-quality cloud-free images and outperforms state-of-the-art cloud removal methods.
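The SAR-guided fusion described in the abstract can be illustrated with a minimal PyTorch sketch, assuming a cross-attention formulation in which SAR features supply the global queries and the cloudy optical features supply the keys and values; the class name, layer sizes, and fusion strategy below are illustrative assumptions, not the authors' exact SGCA design.

```python
import torch
import torch.nn as nn


class SARGuidedContextualAttention(nn.Module):
    """Hypothetical sketch of a SAR-guided cross-attention block.

    SAR features provide global, cloud-free structural context (queries),
    while optical features provide local detail (keys/values). Channel
    sizes and the fusion head are illustrative, not taken from the paper.
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm_sar = nn.LayerNorm(channels)
        self.norm_opt = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.GELU(),
            nn.Linear(channels, channels),
        )

    def forward(self, opt_feat: torch.Tensor, sar_feat: torch.Tensor) -> torch.Tensor:
        # opt_feat, sar_feat: (B, C, H, W) feature maps from the same level.
        b, c, h, w = opt_feat.shape
        opt_tokens = opt_feat.flatten(2).transpose(1, 2)  # (B, HW, C)
        sar_tokens = sar_feat.flatten(2).transpose(1, 2)  # (B, HW, C)

        # SAR queries attend over optical keys/values, gathering local detail
        # consistent with the structure visible to SAR under the clouds.
        ctx, _ = self.attn(
            self.norm_sar(sar_tokens),
            self.norm_opt(opt_tokens),
            self.norm_opt(opt_tokens),
        )

        # Fuse the attended global context back into the optical tokens.
        fused = self.fuse(torch.cat([opt_tokens, ctx], dim=-1)) + opt_tokens
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = SARGuidedContextualAttention(channels=64)
    opt = torch.randn(1, 64, 32, 32)  # optical feature map (cloud-contaminated)
    sar = torch.randn(1, 64, 32, 32)  # SAR feature map (cloud-penetrating)
    print(block(opt, sar).shape)      # torch.Size([1, 64, 32, 32])
```

In a multi-level architecture such as the one described, a block like this would plausibly be applied at several encoder or decoder scales so that both coarse global structure and fine local texture are fused.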
| Original language | English |
|---|---|
| Article number | 4767 |
| Journal | Remote Sensing |
| Volume | 16 |
| Issue | 24 |
| DOI | |
| Publication status | Published - Dec 2024 |