Towards Robust Differential Privacy in Adaptive Federated Learning Architectures

Zengwang Jin, Chenhao Xu, Zhen Wang, Changyin Sun

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The essential issues of data silos and user privacy leakage can be substantially alleviated by the federated learning (FL) architecture. In collaborative multi-user modeling, however, malicious attackers can still exploit user gradient information to infer private user data. To mitigate this privacy leakage, a differential privacy (DP) mechanism is integrated into the federated learning framework to quantify privacy loss and introduce noise into users' local model parameters. In addition, to reduce information leakage and provide tighter privacy accounting, Rényi differential privacy (RDP) is adopted as the privacy metric, improving the balance between model privacy and utility. Because the target model is unknown and communication resources are limited, a client-based adaptive learning algorithm is developed in which each local model parameter is adaptively updated during local iterations to accelerate model convergence and avoid overfitting. The experimental results show that the proposed client-based adaptive federated learning model outperforms the classic model at a fixed communication cost, is more robust to noise and to varying hyperparameter settings, and provides stronger privacy protection during transmission.
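The abstract describes adding calibrated noise to users' local model parameters and tracking privacy loss with Rényi differential privacy. A minimal sketch of this idea, assuming the standard Gaussian mechanism (gradient clipping plus Gaussian noise) and its textbook RDP bound; the paper's exact mechanism, clipping norm, and noise scale are not specified in the abstract, so all parameter names here are illustrative:

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Gaussian mechanism for one local update in DP federated learning:
    clip the gradient to bound its sensitivity, then add Gaussian noise.
    Illustrative sketch only; parameter values are assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)  # bound L2 sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

def rdp_gaussian(alpha, noise_multiplier):
    """RDP of order alpha for the Gaussian mechanism with unit
    sensitivity: eps(alpha) = alpha / (2 * sigma^2)."""
    return alpha / (2.0 * noise_multiplier ** 2)
```

With `noise_multiplier=0` the function reduces to pure clipping, which makes the sensitivity bound easy to verify; the `rdp_gaussian` helper gives the per-step RDP cost that an accountant would compose over local iterations.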

Original language: English
Journal: IEEE Transactions on Consumer Electronics
State: Accepted/In press - 2025

Keywords

  • Adaptive gradient descent
  • Differential privacy
  • Federated learning
  • Privacy computing

