A Tunable Framework for Joint Trade-Off Between Accuracy and Multi-Norm Robustness

Haonan Zheng, Xinyang Deng, Wen Jiang

Research output: Contribution to journal › Article › peer-review

Abstract

Adversarial training enhances the robustness of deep networks at the cost of reduced natural accuracy. Moreover, fortified networks struggle to defend simultaneously against both sparse and dense perturbations, so achieving a better trade-off between natural accuracy and robustness to both types of noise remains an open challenge. Many proposed approaches explore solutions based on network architecture optimization, but in most cases the additional parameters introduced are static: once network training is completed, the performance remains unchanged, and retraining is required to explore other potential trade-offs. We propose two dynamic auxiliary modules, CBNI and CCNI, which can fine-tune convolutional layers and BN layers, respectively, during the inference phase, so that the trained network can still adjust its emphasis on natural examples, sparse perturbations, or dense perturbations. Our network can therefore reach an appropriate balance adapted to the operational environment in situ, without retraining. Furthermore, fully exploring the limits of natural capability and robustness is a complex and time-consuming problem; our method can serve as an efficient research tool to examine the achievable trade-offs with just a single training run. It is worth mentioning that CCNI is a linear adjustment and CBNI does not directly participate in the inference process, so neither introduces redundant parameters or inference latency. Experiments indicate that our network can indeed achieve a complex trade-off between accuracy and adversarial robustness, producing performance that is comparable to or even better than that of existing methods.
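The listing does not include the paper's definitions of CBNI and CCNI, but the general idea of a linear, inference-time adjustment to batch-norm layers can be sketched as follows. All names, the trade-off coefficient `alpha`, and the offset terms here are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def bn_inference(x, mean, var, gamma, beta,
                 alpha=0.0, delta_gamma=None, delta_beta=None, eps=1e-5):
    """Batch-norm inference with a hypothetical tunable linear adjustment.

    `alpha` scales additive offsets (delta_gamma, delta_beta) applied to the
    frozen affine parameters, so the network's emphasis (e.g. natural accuracy
    vs. robustness) can be shifted at inference time without retraining.
    With alpha=0 this reduces to standard batch-norm inference.
    """
    if delta_gamma is None:
        delta_gamma = np.zeros_like(gamma)
    if delta_beta is None:
        delta_beta = np.zeros_like(beta)
    # Linearly adjusted affine parameters
    g = gamma + alpha * delta_gamma
    b = beta + alpha * delta_beta
    # Standard normalization with running statistics
    x_hat = (x - mean) / np.sqrt(var + eps)
    return g * x_hat + b
```

Because the adjustment only redefines the affine parameters before normalization, it can be folded into the existing batch-norm computation and adds no extra parameters or latency at inference, consistent with the property the abstract claims for CCNI.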

Original language: English
Pages (from-to): 1490-1501
Number of pages: 12
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 9
Issue number: 2
DOIs
State: Published - 2025

Keywords

  • Adversarial training
  • image classification
  • multi-norm attack
  • multi-norm robustness
  • noise injection
