TY - JOUR
T1 - Accelerating Federated Learning via Parameter Selection and Pre-Synchronization in Mobile Edge-Cloud Networks
AU - Zhou, Huan
AU - Li, Mingze
AU - Sun, Peng
AU - Guo, Bin
AU - Yu, Zhiwen
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Federated learning (FL) is a distributed machine learning paradigm that enables collaborative model training among clients without collecting their private training data. Despite its great benefits for privacy protection, FL still faces challenges such as the limited computation capabilities of clients (e.g., end devices) and significant communication overheads when applied to mobile edge-cloud networks. To address these issues, this paper proposes a novel three-layer FL framework with Parameter Selection and Pre-synchronization (PSPFL) to achieve fast and accurate model training in mobile edge-cloud networks. The basic idea of PSPFL is that clients select only a subset of their model parameters for transmission. Base stations then cooperatively aggregate these parameters (i.e., pre-synchronization) and periodically send the aggregated results to the server for the global model update. However, there is an intrinsic trade-off between parameter transmission overhead and model training loss. To strike a desirable balance between the two, we investigate the optimal parameter pre-synchronization round and local training round under PSPFL. Specifically, we propose an Alternating Minimization (AM) algorithm to obtain initial values for the local training round and the parameter pre-synchronization round. Moreover, we integrate a Deep Q-Network with AM (termed DQNAM) to explore and update the optimal solution. Finally, extensive simulations on commonly used datasets are conducted to evaluate the performance of the proposed method. The results show that the proposed method reduces the sum of FL completion time and training loss by an average of 20.72%-69.25% compared to benchmark methods.
KW - deep Q-network
KW - Federated learning
KW - model parameter pre-synchronization
KW - parameter selection
UR - http://www.scopus.com/inward/record.url?scp=85188424712&partnerID=8YFLogxK
DO - 10.1109/TMC.2024.3376636
M3 - Article
AN - SCOPUS:85188424712
SN - 1536-1233
VL - 23
SP - 10313
EP - 10328
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 11
ER -