
GM-Attack: Improving the Transferability of Adversarial Attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Citations (Scopus)

Abstract

In the real world, black-box attacks are widespread because detailed information about the model under attack is usually unavailable. It is therefore desirable to obtain adversarial examples with high transferability, which facilitates practical adversarial attacks. Instead of adopting traditional input transformation approaches, we propose a mechanism that derives masked images by removing some regions from the initial input images. In this manuscript, the removed regions are spatially uniformly distributed squares. For comparison, several transferable attack methods are adopted as baselines. Extensive empirical evaluations on the standard ImageNet dataset validate the effectiveness of GM-Attack. As indicated, GM-Attack crafts more transferable adversarial examples than other input transformation methods, and its attack success rate on Inc-v4 improves over state-of-the-art methods by 6.5%.
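The abstract describes the masking mechanism only at a high level: squares are removed from the input image in a spatially uniform pattern. The sketch below illustrates one plausible reading of that idea; the `grid` and `ratio` parameters, the centre placement of each square, and zero-filling the removed regions are all illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def grid_mask(image, grid=4, ratio=0.5):
    """Remove spatially uniformly distributed squares from an image.

    The image is divided into a grid x grid layout of cells; in the
    centre of each cell, a square whose side is `ratio` times the cell
    size is zeroed out. Assumed parameterisation, for illustration only.
    """
    h, w = image.shape[:2]
    masked = image.copy()
    cell_h, cell_w = h // grid, w // grid
    sq_h, sq_w = int(cell_h * ratio), int(cell_w * ratio)
    for i in range(grid):
        for j in range(grid):
            # Centre the removed square inside cell (i, j).
            top = i * cell_h + (cell_h - sq_h) // 2
            left = j * cell_w + (cell_w - sq_w) // 2
            masked[top:top + sq_h, left:left + sq_w] = 0
    return masked

# Example: an 8x8 image of ones with a 2x2 grid loses a 2x2 square
# from the centre of each of the four cells.
img = np.ones((8, 8))
out = grid_mask(img, grid=2, ratio=0.5)
```

In a transfer-attack pipeline, such masked copies would typically be fed to the surrogate model alongside (or instead of) the clean input when computing gradients, so the perturbation does not overfit to any single image region.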

Original language: English
Title of host publication: Knowledge Science, Engineering and Management - 15th International Conference, KSEM 2022, Proceedings
Editors: Gerard Memmi, Baijian Yang, Linghe Kong, Tianwei Zhang, Meikang Qiu
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 489-500
Number of pages: 12
ISBN (Print): 9783031109881
DOI
Publication status: Published - 2022
Event: 15th International Conference on Knowledge Science, Engineering and Management, KSEM 2022 - Singapore, Singapore
Duration: 6 Aug 2022 → 8 Aug 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13370 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 15th International Conference on Knowledge Science, Engineering and Management, KSEM 2022
Country/Territory: Singapore
City: Singapore
Period: 6/08/22 → 8/08/22
