TY - GEN
T1 - Client-based differential privacy federated learning
AU - Jin, Zengwang
AU - Xu, Chenhao
AU - Hu, Yanyan
AU - Zhang, Yanning
AU - Sun, Changyin
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Deep learning provides better personalized services by training task-specific models on massive amounts of data. However, gradient leakage during model training can allow an adversary to reconstruct the original data uploaded by users, causing privacy breaches. To prevent such leakage, this paper introduces a federated learning method that addresses the privacy issues raised by multi-user joint modeling. Instead of training directly on the raw user data, the gradients generated by each user's local model training are uploaded to an aggregation server. Even under this framework, the users' original data still carries some risk of being leaked. To strengthen the protection of users' privacy, the training process is perturbed by combining a differential privacy mechanism with the federated learning system: the model parameters are randomized so that they cannot be recovered by adversaries. By adding noise through the Gaussian mechanism and the Laplace mechanism, the influence of differential privacy on the convergence of the federated learning model is analyzed. The Laplace mechanism satisfies the strict ε-differential privacy definition, whereas the Gaussian mechanism satisfies the relaxed (ε, δ)-definition and therefore allows less noise to be added for privacy protection. The simulation results show that both mechanisms achieve good model convergence and verify that differential privacy can provide stronger privacy protection at lower communication cost.
KW - differential privacy
KW - federated learning
KW - Gaussian mechanism
UR - http://www.scopus.com/inward/record.url?scp=85185573567&partnerID=8YFLogxK
U2 - 10.1109/YAC59482.2023.10401762
DO - 10.1109/YAC59482.2023.10401762
M3 - Conference contribution
AN - SCOPUS:85185573567
T3 - Proceedings - 2023 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023
SP - 701
EP - 706
BT - Proceedings - 2023 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 38th Youth Academic Annual Conference of Chinese Association of Automation, YAC 2023
Y2 - 27 August 2023 through 29 August 2023
ER -