TY - JOUR
T1 - Thick Cloud Removal with Optical and SAR Imagery via Convolutional-Mapping-Deconvolutional Network
AU - Li, Wenbo
AU - Li, Ying
AU - Chan, Jonathan Cheung-Wai
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - In this article, we propose a thick cloud removal method for remote-sensing imagery based on multisource estimation. A convolutional-mapping-deconvolutional (CMD) network is proposed to estimate the cloud-free image directly from multisource reference images. A synthetic aperture radar (SAR) image and a low-resolution heterogeneous (LRH) image, i.e., an image from a different optical sensor with lower spatial resolution, are used as reference images to recover the missing information in the cloud-contaminated high-resolution (HR) image. The CMD net is composed of three functional components: the convolutional layers for encoding, the mapping layer for feature transfer, and the deconvolutional layers for decoding. In the training procedure, HR images from cloud-free regions and their corresponding LRH and SAR reference images are used to train the CMD net. Once fully trained, the CMD net can estimate HR images from their corresponding LRH and SAR reference images. The LRH and SAR reference images are first encoded by the convolutional layers and then transferred to the HR feature space by the mapping layer. The transferred features are then decoded into a cloud-free HR image by the deconvolutional layers. Cloud-free regions in the cloud-contaminated HR image are used to further improve the estimated image via intensity normalization. Finally, the cloudy pixels are replaced by their corresponding pixels from the estimated cloud-free HR image. Comparisons with several recently proposed multisource cloud removal methods show that our method is superior, as validated by quantitative indices and visual inspection.
KW - Cloud removal
KW - convolutional neural network (CNN)
KW - intensity normalization
KW - multisource estimation
UR - http://www.scopus.com/inward/record.url?scp=85082970744&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2019.2956959
DO - 10.1109/TGRS.2019.2956959
M3 - Article
AN - SCOPUS:85082970744
SN - 0196-2892
VL - 58
SP - 2865
EP - 2879
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
IS - 4
M1 - 8944160
ER -