TY - GEN
T1 - Efficient Federated Learning with Smooth Aggregation for Non-IID Data from Multiple Edges
AU - Wang, Qianru
AU - Li, Qingyang
AU - Guo, Bin
AU - Cui, Jiangtao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning (FL) learns an optimal global model by aggregating local models trained on distributed data from different devices. Because data distributions are heterogeneous across devices, local models diverge, degrading the global model's performance. Recent studies attempt to balance local models so that the resulting global model can adapt to each device, but they overlook a more challenging problem: redundant local models from devices break this balance, causing the global model to overfit the redundant models. We therefore propose FedSmooth, a novel global aggregation algorithm. FedSmooth first identifies redundant local models without using sensitive local information (e.g., label distributions), and then applies a smooth global aggregation that strengthens the influence of the local models that accelerate convergence to the optimal global model. Experimental results show that our method outperforms four state-of-the-art baseline methods even under higher levels of redundancy.
AB - Federated learning (FL) learns an optimal global model by aggregating local models trained on distributed data from different devices. Because data distributions are heterogeneous across devices, local models diverge, degrading the global model's performance. Recent studies attempt to balance local models so that the resulting global model can adapt to each device, but they overlook a more challenging problem: redundant local models from devices break this balance, causing the global model to overfit the redundant models. We therefore propose FedSmooth, a novel global aggregation algorithm. FedSmooth first identifies redundant local models without using sensitive local information (e.g., label distributions), and then applies a smooth global aggregation that strengthens the influence of the local models that accelerate convergence to the optimal global model. Experimental results show that our method outperforms four state-of-the-art baseline methods even under higher levels of redundancy.
KW - Federated learning
KW - deep neural network
KW - edge-collaborative computing
KW - redundant data
UR - http://www.scopus.com/inward/record.url?scp=85195367316&partnerID=8YFLogxK
U2 - 10.1109/ICASSP48485.2024.10447506
DO - 10.1109/ICASSP48485.2024.10447506
M3 - Conference contribution
AN - SCOPUS:85195367316
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 9006
EP - 9010
BT - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024
Y2 - 14 April 2024 through 19 April 2024
ER -