Abstract
Remote sensing scene classification enables data-driven decisions for applications such as environmental monitoring, urban planning, and disaster management. However, the deep learning models used for scene classification are highly vulnerable to adversarial samples, which cause incorrect predictions and pose significant risks. While most current methods focus on improving adversarial robustness, they face a trade-off that compromises accuracy on clean, unperturbed images. To address this challenge, we draw on information theory and incorporate a mutual information (MI) representation module that allows the model to capture high-quality, robust features. Furthermore, we apply a domain adversarial training strategy to promote the learning of domain-invariant features, reducing the effect of distribution differences between clean images and adversarial samples. Building on this MI and domain adaptation-guided network, we propose a novel algorithm that accurately differentiates between clean and adversarial scenes. Extensive experiments demonstrate the effectiveness of our approach against adversarial attacks, revealing a positive correlation between adversarial perturbation strength and image information entropy, and a negative correlation with robust accuracy.
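The reported link between adversarial perturbations and image information entropy can be illustrated with a small toy example. The sketch below is not code from the paper: `image_entropy` is a hypothetical helper that computes the Shannon entropy of an image's intensity histogram, and the bounded random perturbation merely stands in for an adversarial attack under an assumed L-infinity budget.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128, dtype=np.uint8)        # flat image: a single-bin histogram
eps = 8                                               # assumed L-inf perturbation budget
noise = rng.integers(-eps, eps + 1, clean.shape)      # stand-in for an adversarial perturbation
adv = np.clip(clean.astype(int) + noise, 0, 255).astype(np.uint8)

# The perturbation spreads intensities across more histogram bins,
# so the entropy of the perturbed image exceeds that of the clean one.
clean_h, adv_h = image_entropy(clean), image_entropy(adv)
```

A uniform-intensity clean image has zero histogram entropy, and bounded noise can only increase it; on natural scenes the effect is smaller but points the same way, matching the positive correlation the abstract reports.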
| Original language | English |
|---|---|
| Pages (from-to) | 11963-11978 |
| Number of pages | 16 |
| Journal | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
| Volume | 18 |
| DOI | |
| Publication status | Published - 2025 |
| Externally published | Yes |
UN Sustainable Development Goals
This output contributes to the following Sustainable Development Goals:
- SDG 11: Sustainable Cities and Communities
Fingerprint
Dive into the research topics of 'Robust Representation Learning Based on Deep Mutual Information for Scene Classification Against Adversarial Perturbations'. Together they form a unique fingerprint.