TY - JOUR
T1 - Deep neural rejection against adversarial examples
AU - Sotgiu, Angelo
AU - Demontis, Ambra
AU - Melis, Marco
AU - Biggio, Battista
AU - Fumera, Giorgio
AU - Feng, Xiaoyi
AU - Roli, Fabio
N1 - Publisher Copyright:
© 2020, The Author(s).
PY - 2020/12/1
Y1 - 2020/12/1
N2 - Despite the impressive performances reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers. With respect to competing approaches, our method does not require generating adversarial examples at training time, and it is less computationally demanding. To properly evaluate our method, we define an adaptive white-box attack that is aware of the defense mechanism and aims to bypass it. Under this worst-case setting, we empirically show that our approach outperforms previously proposed methods that detect adversarial examples by only analyzing the feature representation provided by the output network layer.
AB - Despite the impressive performances reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time. In this work, we propose a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers. With respect to competing approaches, our method does not require generating adversarial examples at training time, and it is less computationally demanding. To properly evaluate our method, we define an adaptive white-box attack that is aware of the defense mechanism and aims to bypass it. Under this worst-case setting, we empirically show that our approach outperforms previously proposed methods that detect adversarial examples by only analyzing the feature representation provided by the output network layer.
KW - Adversarial examples
KW - Adversarial machine learning
KW - Deep neural networks
UR - http://www.scopus.com/inward/record.url?scp=85083282853&partnerID=8YFLogxK
U2 - 10.1186/s13635-020-00105-y
DO - 10.1186/s13635-020-00105-y
M3 - Article
AN - SCOPUS:85083282853
SN - 2510-523X
VL - 2020
JO - EURASIP Journal on Information Security
JF - EURASIP Journal on Information Security
IS - 1
M1 - 5
ER -