TY - JOUR
T1 - Stateful detection of adversarial reprogramming
AU - Zheng, Yang
AU - Feng, Xiaoyi
AU - Xia, Zhaoqiang
AU - Jiang, Xiaoyue
AU - Pintor, Maura
AU - Demontis, Ambra
AU - Biggio, Battista
AU - Roli, Fabio
N1 - Publisher Copyright:
© 2023 Elsevier Inc.
PY - 2023/9
Y1 - 2023/9
N2 - Adversarial reprogramming allows stealing computational resources by repurposing machine-learning models to perform a different task chosen by the attacker. For example, a model trained to recognize images of animals can be reprogrammed to recognize medical images by embedding an adversarial program in the images provided as inputs. This attack can be perpetrated even if the target model is a black box, provided that the machine-learning model is offered as a service and the attacker can query it and collect its outputs. So far, no defense has been shown to be effective in this scenario. We show for the first time that this attack is detectable using stateful defenses, which store the queries made to the classifier and flag the abnormal cases in which they are similar. Once a malicious query is detected, the account of the user who made it can be blocked. Thus, the attacker must create many accounts to perpetrate the attack. To decrease this number, the attacker could create the adversarial program against a surrogate classifier and then fine-tune it by making a few queries to the target model. In this scenario, the effectiveness of the stateful defense is reduced, but we show that it remains effective.
AB - Adversarial reprogramming allows stealing computational resources by repurposing machine-learning models to perform a different task chosen by the attacker. For example, a model trained to recognize images of animals can be reprogrammed to recognize medical images by embedding an adversarial program in the images provided as inputs. This attack can be perpetrated even if the target model is a black box, provided that the machine-learning model is offered as a service and the attacker can query it and collect its outputs. So far, no defense has been shown to be effective in this scenario. We show for the first time that this attack is detectable using stateful defenses, which store the queries made to the classifier and flag the abnormal cases in which they are similar. Once a malicious query is detected, the account of the user who made it can be blocked. Thus, the attacker must create many accounts to perpetrate the attack. To decrease this number, the attacker could create the adversarial program against a surrogate classifier and then fine-tune it by making a few queries to the target model. In this scenario, the effectiveness of the stateful defense is reduced, but we show that it remains effective.
KW - Adversarial machine learning
KW - Adversarial reprogramming
KW - Neural networks
KW - Stateful defenses
UR - http://www.scopus.com/inward/record.url?scp=85160442784&partnerID=8YFLogxK
U2 - 10.1016/j.ins.2023.119093
DO - 10.1016/j.ins.2023.119093
M3 - Article
AN - SCOPUS:85160442784
SN - 0020-0255
VL - 642
JO - Information Sciences
JF - Information Sciences
M1 - 119093
ER -