Grouped Federated Learning: A Decentralized Learning Framework with Low Latency for Heterogeneous Devices

Tong Yin, Lixin Li, Wensheng Lin, Donghui Ma, Zhu Han

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

7 Citations (Scopus)

Abstract

In recent years, federated learning (FL) has played an important role in data-privacy-sensitive scenarios, enabling clients to learn collaboratively without exchanging data. However, because FL relies on centralized model aggregation over heterogeneous devices, convergence is delayed by waiting for the last locally updated model, which increases the economic cost and dampens clients' motivation to participate in FL. In this paper, we propose a decentralized FL framework that groups clients with similar computing and communication performance, named federated averaging-inspired group-based federated learning (FGFL). Specifically, we provide a cost function and a greedy-based grouping strategy that divides the clients into several groups to accelerate the convergence of the FL model. The simulation results verify the effectiveness of FGFL in accelerating the convergence of FL with heterogeneous clients. Besides the exemplified convolutional neural network (CNN), FGFL is also applicable to other learning models.
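
The abstract only outlines the greedy grouping idea; the paper's actual cost function is not given here. The sketch below is an assumed illustration, not the authors' algorithm: it defines a per-client round cost as compute time plus communication time and greedily groups clients whose costs are close, so that clients in a group finish their local rounds at similar times. The names (`Client`, `round_cost`, `greedy_group`) and the `max_gap`/`max_size` parameters are hypothetical.

```python
# Sketch (assumptions noted above): greedy grouping of clients by similar
# per-round latency, in the spirit of the grouping strategy described in FGFL.
from dataclasses import dataclass
from typing import List

@dataclass
class Client:
    cid: int
    compute_time: float   # estimated local training time per round (s)
    comm_time: float      # estimated model upload/download time per round (s)

def round_cost(c: Client) -> float:
    # Assumed cost function: total time a client needs to finish one FL round.
    return c.compute_time + c.comm_time

def greedy_group(clients: List[Client], max_gap: float, max_size: int) -> List[List[Client]]:
    """Sort clients by cost, then greedily start a new group whenever the next
    client's cost differs from the current group's first member by more than
    max_gap, or the group is full."""
    ordered = sorted(clients, key=round_cost)
    groups: List[List[Client]] = []
    for c in ordered:
        if groups and len(groups[-1]) < max_size \
                and round_cost(c) - round_cost(groups[-1][0]) <= max_gap:
            groups[-1].append(c)
        else:
            groups.append([c])
    return groups

if __name__ == "__main__":
    import random
    random.seed(0)
    clients = [Client(i, random.uniform(1, 10), random.uniform(0.5, 5)) for i in range(12)]
    for g, members in enumerate(greedy_group(clients, max_gap=2.0, max_size=4)):
        print(f"group {g}: clients {[m.cid for m in members]}")
```

Because each group aggregates among clients with similar round times, fast clients no longer wait for the slowest device in the whole federation, which is the straggler effect the abstract attributes the convergence delay to.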

Original language: English
Title of host publication: 2022 IEEE International Conference on Communications Workshops, ICC Workshops 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 55-60
Number of pages: 6
ISBN (electronic): 9781665426718
DOI
Publication status: Published - 2022
Event: 2022 IEEE International Conference on Communications Workshops, ICC Workshops 2022 - Seoul, South Korea
Duration: 16 May 2022 - 20 May 2022

Publication series

Name: 2022 IEEE International Conference on Communications Workshops, ICC Workshops 2022

Conference

Conference: 2022 IEEE International Conference on Communications Workshops, ICC Workshops 2022
Country/Territory: South Korea
City: Seoul
Period: 16/05/22 - 20/05/22
