TY - JOUR
T1 - Generating Adversarial Examples Against Remote Sensing Scene Classification via Feature Approximation
AU - Zhu, Rui
AU - Ma, Shiping
AU - Lian, Jiawei
AU - He, Linyuan
AU - Mei, Shaohui
N1 - Publisher Copyright:
© 2008-2012 IEEE.
PY - 2024
Y1 - 2024
AB - The existence of adversarial examples highlights the vulnerability of deep neural networks: well-designed perturbations added to the original image can change the recognition result. This poses a great challenge to remote sensing image (RSI) scene classification. Because RSI scene classification relies primarily on the spatial and texture features of images, attacks in the feature domain are particularly effective. In this study, we introduce the feature approximation (FA) strategy, which generates adversarial examples by pushing the features of clean images toward those of virtual images designed to belong to no category. Our research aims to attack image classification models trained on RSIs and to uncover the common vulnerabilities of these models. Specifically, we benchmark the FA attack using both featureless images and images generated via data augmentation. We then extend the FA attack to multimodel FA (MFA), which improves the transferability of the attack. Finally, we show that the FA strategy is also effective for targeted attacks by approximating the features of the input clean image to those of the target category. Extensive experiments on the remote sensing classification datasets UC Merced and AID demonstrate the effectiveness of the proposed methods. The FA attack exhibits remarkable attack performance, and the proposed MFA attack exceeds the success rate of existing advanced untargeted black-box attacks by an average of more than 15%. The FA attack also outperforms multiple existing targeted white-box attacks.
KW - Adversarial examples
KW - feature approximation (FA)
KW - remote sensing
KW - scene classification
UR - http://www.scopus.com/inward/record.url?scp=85193276101&partnerID=8YFLogxK
U2 - 10.1109/JSTARS.2024.3399780
DO - 10.1109/JSTARS.2024.3399780
M3 - Article
AN - SCOPUS:85193276101
SN - 1939-1404
VL - 17
SP - 10174
EP - 10187
JO - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
JF - IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
ER -