TY - JOUR
T1 - A Tunable Framework for Joint Trade-Off Between Accuracy and Multi-Norm Robustness
AU - Zheng, Haonan
AU - Deng, Xinyang
AU - Jiang, Wen
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2025
Y1 - 2025
N2 - Adversarial training enhances the robustness of deep networks at the cost of reduced natural accuracy. Moreover, fortified networks struggle to defend against sparse and dense perturbations simultaneously. Thus, achieving a better trade-off between natural accuracy and robustness against both types of noise remains an open challenge. Many proposed approaches explore solutions based on network architecture optimization. However, in most cases, the additional parameters introduced are static: once network training is completed, the performance remains unchanged, and retraining is required to explore other potential trade-offs. We propose two dynamic auxiliary modules, CBNI and CCNI, which can fine-tune convolutional layers and BN layers, respectively, during the inference phase, so that the trained network can still adjust its emphasis on natural examples, sparse perturbations, or dense perturbations. This means our network can achieve an appropriate balance to adapt to the operational environment in situ, without retraining. Furthermore, fully exploring the limits of natural capability and robustness is a complex and time-consuming problem. Our method can serve as an efficient research tool to examine the achievable trade-offs with just a single training run. It is worth mentioning that CCNI is a linear adjustment and CBNI does not directly participate in the inference process; therefore, neither introduces redundant parameters or inference latency. Experiments indicate that our network can indeed achieve a complex trade-off between accuracy and adversarial robustness, producing performance that is comparable to or better than existing methods.
AB - Adversarial training enhances the robustness of deep networks at the cost of reduced natural accuracy. Moreover, fortified networks struggle to defend against sparse and dense perturbations simultaneously. Thus, achieving a better trade-off between natural accuracy and robustness against both types of noise remains an open challenge. Many proposed approaches explore solutions based on network architecture optimization. However, in most cases, the additional parameters introduced are static: once network training is completed, the performance remains unchanged, and retraining is required to explore other potential trade-offs. We propose two dynamic auxiliary modules, CBNI and CCNI, which can fine-tune convolutional layers and BN layers, respectively, during the inference phase, so that the trained network can still adjust its emphasis on natural examples, sparse perturbations, or dense perturbations. This means our network can achieve an appropriate balance to adapt to the operational environment in situ, without retraining. Furthermore, fully exploring the limits of natural capability and robustness is a complex and time-consuming problem. Our method can serve as an efficient research tool to examine the achievable trade-offs with just a single training run. It is worth mentioning that CCNI is a linear adjustment and CBNI does not directly participate in the inference process; therefore, neither introduces redundant parameters or inference latency. Experiments indicate that our network can indeed achieve a complex trade-off between accuracy and adversarial robustness, producing performance that is comparable to or better than existing methods.
KW - Adversarial training
KW - image classification
KW - multi-norm attack
KW - multi-norm robustness
KW - noise injection
UR - http://www.scopus.com/inward/record.url?scp=105001649567&partnerID=8YFLogxK
U2 - 10.1109/TETCI.2025.3540419
DO - 10.1109/TETCI.2025.3540419
M3 - Article
AN - SCOPUS:105001649567
SN - 2471-285X
VL - 9
SP - 1490
EP - 1501
JO - IEEE Transactions on Emerging Topics in Computational Intelligence
JF - IEEE Transactions on Emerging Topics in Computational Intelligence
IS - 2
ER -