Decentralized federated learning based on bivariate controlled averaging and sharpness aware minimization

Jihao Yang, Wen Jiang, Laisen Nie

Research output: Contribution to journal › Article › peer-review

Abstract

To avoid the single point of failure of the central server in centralized federated learning (CFL) and to mitigate the communication burden, decentralized federated learning (DFL) trains models through direct peer-to-peer communication among clients. However, without a central server to provide coordination and consistency, DFL faces more severe data heterogeneity challenges. In addition, the choice of communication topology also affects the performance of DFL aggregated models. To overcome the data heterogeneity challenge in DFL, we extend the notion of client drift from CFL to DFL and propose a decentralized federated learning algorithm based on bivariate controlled averaging, which improves the model's adaptability and generalization on heterogeneous data by solving the distributed client drift problem caused by data heterogeneity. Furthermore, we propose a variant based on sharpness-aware minimization, which further improves model performance and accelerates convergence by optimizing the gradients used during local model updates. Experimental results show that our approach achieves better model performance and faster convergence under various experimental settings on multiple datasets, and exhibits strong adaptability to different topologies.
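To make the two high-level ingredients of the abstract concrete, the sketch below combines a sharpness-aware-minimization local step with gossip averaging over a doubly stochastic mixing matrix, the standard DFL aggregation pattern. This is a minimal illustrative toy, not the paper's algorithm: the scalar quadratic losses, the 3-client mixing matrix, and all function names are assumptions, and the paper's bivariate control-variate drift correction is not reproduced here.

```python
import numpy as np

def sam_grad(w, grad_fn, rho=0.05):
    """SAM-style gradient: evaluate the gradient at a point perturbed
    toward the locally worst case within a rho-ball (illustrative)."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, normalized
    return grad_fn(w + eps)

def dfl_round(weights, grad_fns, mixing, lr=0.1, rho=0.05):
    """One decentralized round: each client takes a SAM local step on its
    own (heterogeneous) loss, then clients average with their neighbours
    via a doubly stochastic mixing matrix."""
    updated = np.array([w - lr * sam_grad(w, g, rho)
                        for w, g in zip(weights, grad_fns)])
    return mixing @ updated  # gossip averaging over the topology

# Toy heterogeneous setting: scalar models, client i minimises 0.5*(w - c_i)^2,
# so the local optima c_i differ across clients (simulated data heterogeneity).
centers = np.array([0.0, 1.0, 2.0])
grad_fns = [lambda w, c=c: w - c for c in centers]
mixing = np.full((3, 3), 0.25) + 0.25 * np.eye(3)  # fully connected, 3 clients

w = np.zeros(3)
for _ in range(300):
    w = dfl_round(w, grad_fns, mixing)
# Clients reach near-consensus around the mean of the local optima (~1.0).
```

In this toy run the clients' models contract toward a common value near the average of the heterogeneous local optima; with less connected topologies (e.g. a ring with weaker off-diagonal mixing weights) consensus is slower, which mirrors the abstract's observation that topology affects aggregated-model performance.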

Original language: English
Article number: 129438
Journal: Expert Systems with Applications
Volume: 297
DOIs
State: Published - 1 Feb 2026

Keywords

  • Data heterogeneity
  • Decentralized federated learning
  • Distributed client drift
  • Model correction

