TY - JOUR
T1 - Dehaze-TGGAN
T2 - Transformer-Guide Generative Adversarial Networks With Spatial-Spectrum Attention for Unpaired Remote Sensing Dehazing
AU - Zheng, Yitong
AU - Su, Jia
AU - Zhang, Shun
AU - Tao, Mingliang
AU - Wang, Ling
N1 - Publisher Copyright:
© 1980-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Satellite imagery plays a critical role in target detection. However, the quality and usability of optical remote sensing images can be severely compromised by atmospheric conditions, particularly haze, which significantly reduces the recognition accuracy of detection algorithms for targets such as ships. Moreover, paired training data, i.e., remote sensing images of the same scene with and without haze, are difficult to obtain in real-world scenarios, causing many existing dehazing methods to fail. To address these issues, this article proposes Transformer-guided generative adversarial networks within a CycleGAN framework (Dehaze-TGGAN), incorporating an additional attention mechanism from the frequency domain. First, a spatial-spectrum attention (SSA) mechanism is proposed that applies a 2-D fast Fourier transform (2-D FFT) to the spatial domain, enabling the model to capture the relationships within the three-channel frequency-domain information and to recover the spectral features of the hazy image through the spectrum encoder block. Then, a pretraining approach using semi-transparent masks (STMs), which effectively simulate hazy conditions by adjusting the transparency of the masks, is presented as a key strategy to accelerate convergence. Finally, the applicability of the transformer architecture is extended by incorporating a total variation loss (TV Loss). Results on simulated and measured optical remote sensing data show that the proposed algorithm greatly improves both recognition accuracy and efficiency.
AB - Satellite imagery plays a critical role in target detection. However, the quality and usability of optical remote sensing images can be severely compromised by atmospheric conditions, particularly haze, which significantly reduces the recognition accuracy of detection algorithms for targets such as ships. Moreover, paired training data, i.e., remote sensing images of the same scene with and without haze, are difficult to obtain in real-world scenarios, causing many existing dehazing methods to fail. To address these issues, this article proposes Transformer-guided generative adversarial networks within a CycleGAN framework (Dehaze-TGGAN), incorporating an additional attention mechanism from the frequency domain. First, a spatial-spectrum attention (SSA) mechanism is proposed that applies a 2-D fast Fourier transform (2-D FFT) to the spatial domain, enabling the model to capture the relationships within the three-channel frequency-domain information and to recover the spectral features of the hazy image through the spectrum encoder block. Then, a pretraining approach using semi-transparent masks (STMs), which effectively simulate hazy conditions by adjusting the transparency of the masks, is presented as a key strategy to accelerate convergence. Finally, the applicability of the transformer architecture is extended by incorporating a total variation loss (TV Loss). Results on simulated and measured optical remote sensing data show that the proposed algorithm greatly improves both recognition accuracy and efficiency.
KW - Dehaze
KW - generative adversarial networks (GANs)
KW - semi-transparent masks (STMs)
KW - spatial-spectrum attention (SSA)
KW - total variation loss (TV Loss)
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85200238700&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2024.3435470
DO - 10.1109/TGRS.2024.3435470
M3 - Article
AN - SCOPUS:85200238700
SN - 0196-2892
VL - 62
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
M1 - 5634320
ER -