Abstract
Exploiting the transferability of adversarial examples is a widely used approach in black-box attacks on deep neural networks (DNNs). Although input transformations that enhance data diversity have shown promise in improving transfer-based attacks, most existing techniques apply random modifications indiscriminately, failing to account for the differences in DNN attention regions across images. To address this, we propose a novel Attention-guided Look-ahead and Data Augmentation-based adversarial attack method (ALDA). ALDA aims to improve the transferability of adversarial examples by strategically disrupting the DNN's attention on input images. Specifically, we utilize the Grad-CAM method to identify the regions where the DNN pays the most attention, and based on this, an attention-guided look-ahead mechanism is proposed, which refines the adversarial perturbation process through more precise corrections to the input data. In addition, we introduce an attention disruption-based data augmentation strategy to further interfere with the DNN's attention and elevate the performance of transfer-based black-box attacks. Comprehensive experiments on the ImageNet dataset show that our ALDA algorithm surpasses state-of-the-art methods in transfer attacks on unknown DNNs, especially those reinforced by defense mechanisms such as adversarial training, achieving an average improvement of approximately 3.6% in attack success rates. The source code for this study is publicly available at: https://github.com/LongTerm417/AttnDisrupt.
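The abstract's two attention-related ingredients can be illustrated concretely. The sketch below is a minimal NumPy rendering of the standard Grad-CAM weighting formula (channel weights from global-average-pooled gradients, ReLU over the weighted activation sum), plus one *plausible* attention-disruption augmentation that masks the most-attended pixels. This is an illustrative assumption, not the authors' implementation: the function names, the thresholding scheme, and the random placeholder activations are all hypothetical.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Standard Grad-CAM heatmap from one conv layer (illustrative sketch).

    feature_maps: (C, H, W) activations of the chosen layer
    gradients:    (C, H, W) gradients of the target class score
                  with respect to those activations
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k).
    weights = gradients.mean(axis=(1, 2))                      # shape (C,)
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # Normalize to [0, 1] so the map can be thresholded into an attention region.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

def attention_mask_augment(image, cam, threshold=0.6):
    """Hypothetical attention-disruption augmentation: zero out pixels whose
    Grad-CAM score exceeds `threshold` (one simple way to 'interfere with
    the DNN's attention'; not necessarily ALDA's exact strategy)."""
    keep = cam < threshold                 # True where attention is low
    return image * keep[None, :, :]        # broadcast mask over channels

# Toy example: random tensors stand in for a real network's activations.
rng = np.random.default_rng(0)
cam = grad_cam(rng.standard_normal((8, 7, 7)), rng.standard_normal((8, 7, 7)))
augmented = attention_mask_augment(rng.standard_normal((3, 7, 7)), cam)
```

In a real transfer attack, `feature_maps` and `gradients` would come from forward/backward hooks on a surrogate model's last convolutional layer, and the masked image would feed the next perturbation step.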
| Original language | English |
|---|---|
| Article number | 112686 |
| Journal | Pattern Recognition |
| Volume | 172 |
| DOIs | |
| State | Published - Apr 2026 |
Keywords
- Adversarial examples
- Attention disruption
- Attention-guided look-ahead
- Input transformation
Title
ALDA: Enhancing the transferability of adversarial attacks with attention-guided look-ahead and data augmentation