TY - JOUR
T1 - On Single-Model Transferable Targeted Attacks: A Closer Look at Decision-Level Optimization
AU - Sun, Xuxiang
AU - Cheng, Gong
AU - Li, Hongda
AU - Pei, Lei
AU - Han, Junwei
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2023
Y1 - 2023
N2 - Known as a hard nut, the single-model transferable targeted attack via decision-level optimization objectives has long attracted much attention. Recent works on this topic have focused on designing new optimization objectives. In contrast, we take a closer look at the intrinsic problems in three commonly adopted optimization objectives and propose two simple yet effective methods to mitigate them. Specifically, inspired by the basic idea of adversarial learning, we propose, for the first time, a unified Adversarial Optimization Scheme (AOS) that relieves both gradient vanishing in the cross-entropy loss and gradient amplification in the Po+Trip loss, and show that AOS, a simple transformation applied to the output logits before they are passed to the objective function, yields considerable improvements in targeted transferability. Besides, we further clarify the preliminary conjecture behind the Vanilla Logit Loss (VLL) and point out its problem of unbalanced optimization, in which the source logit may increase without explicit suppression, leading to low transferability. We then propose the Balanced Logit Loss (BLL), which takes both the source logit and the target logit into account. Comprehensive validations demonstrate the compatibility and effectiveness of the proposed methods across most attack frameworks; their effectiveness also spans two tough cases (i.e., the low-ranked transfer scenario and the transfer to defense methods) and three datasets (i.e., ImageNet, CIFAR-10, and CIFAR-100). Our source code is available at https://github.com/xuxiangsun/DLLTTAA.
KW - Adversarial attacks
KW - adversarial optimization scheme
KW - balanced logit loss
KW - decision-level attack
UR - http://www.scopus.com/inward/record.url?scp=85160230421&partnerID=8YFLogxK
U2 - 10.1109/TIP.2023.3276331
DO - 10.1109/TIP.2023.3276331
M3 - Article
C2 - 37200127
AN - SCOPUS:85160230421
SN - 1057-7149
VL - 32
SP - 2972
EP - 2984
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -