TY - JOUR
T1 - Few pixels attacks with generative model
AU - Li, Yang
AU - Pan, Quan
AU - Feng, Zhaowen
AU - Cambria, Erik
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/12
Y1 - 2023/12
N2 - Adversarial attacks have attracted much attention in recent years, and a number of works have demonstrated the effectiveness of perturbing the entire image when generating adversarial examples. In practice, however, a specially designed perturbation covering the entire image is impractical. Some works craft adversarial samples by perturbing only a few pixels found via advanced search, e.g., SparseFool and OnePixel, but they take considerable time to locate the pixels to perturb. Therefore, to construct adversarial samples with few perturbed pixels in an end-to-end way, we propose a new framework in which a dual-decoder VAE is designed to find perturbations. To make adversarial learning more effective, we propose a new version of the adversarial loss that takes generalization into account. To evaluate the sophistication of the proposed framework, we compare it with more than a dozen existing attack methods. Extensive experimental results verify the effectiveness and efficiency of the proposed framework, and an ablation study validates the model structure.
AB - Adversarial attacks have attracted much attention in recent years, and a number of works have demonstrated the effectiveness of perturbing the entire image when generating adversarial examples. In practice, however, a specially designed perturbation covering the entire image is impractical. Some works craft adversarial samples by perturbing only a few pixels found via advanced search, e.g., SparseFool and OnePixel, but they take considerable time to locate the pixels to perturb. Therefore, to construct adversarial samples with few perturbed pixels in an end-to-end way, we propose a new framework in which a dual-decoder VAE is designed to find perturbations. To make adversarial learning more effective, we propose a new version of the adversarial loss that takes generalization into account. To evaluate the sophistication of the proposed framework, we compare it with more than a dozen existing attack methods. Extensive experimental results verify the effectiveness and efficiency of the proposed framework, and an ablation study validates the model structure.
KW - Adversarial attack
KW - Few pixels attacks
KW - Generative attack
KW - Neural network vulnerability
UR - http://www.scopus.com/inward/record.url?scp=85169937292&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2023.109849
DO - 10.1016/j.patcog.2023.109849
M3 - Article
AN - SCOPUS:85169937292
SN - 0031-3203
VL - 144
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 109849
ER -