TY - JOUR
T1 - Incentive-Driven and Energy Efficient Federated Learning in Mobile Edge Networks
AU - Zhou, Huan
AU - Gu, Qiangqiang
AU - Sun, Peng
AU - Zhou, Xiaokang
AU - Leung, Victor C.M.
AU - Fan, Xinggang
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2025
Y1 - 2025
N2 - Federated Learning (FL), as a new distributed learning approach, allows multiple heterogeneous clients to cooperatively train models without disclosing private data. However, selfish clients may be unwilling to participate in FL training without appropriate compensation. In addition, the characteristics of clients in mobile edge networks (e.g., limited available resources) may also reduce the efficiency of FL and increase training cost. To address these challenges, this paper proposes a Cost-Aware FL framework with client incentive and model compression (CAFL), aiming to minimize the training cost while ensuring the accuracy of the global model. In CAFL, we employ a reverse auction for incentive design, where the Base Station (BS) acts as the auctioneer to select clients and determine the local training rounds and model compression rates. Meanwhile, clients act as bidders to train local models and receive payments. We model the process of client selection, local training, and model compression as a Mixed-Integer Non-Linear Programming problem. Accordingly, we propose an improved Soft Actor-Critic-based client selection and model compression algorithm to solve the optimization problem, and design a Vickrey-Clarke-Groves-based payment rule to compensate clients for their costs. Finally, extensive simulation experiments are conducted to evaluate the performance of the proposed method. The results show that the proposed method outperforms other benchmarks in terms of the BS's cost under various scenarios.
AB - Federated Learning (FL), as a new distributed learning approach, allows multiple heterogeneous clients to cooperatively train models without disclosing private data. However, selfish clients may be unwilling to participate in FL training without appropriate compensation. In addition, the characteristics of clients in mobile edge networks (e.g., limited available resources) may also reduce the efficiency of FL and increase training cost. To address these challenges, this paper proposes a Cost-Aware FL framework with client incentive and model compression (CAFL), aiming to minimize the training cost while ensuring the accuracy of the global model. In CAFL, we employ a reverse auction for incentive design, where the Base Station (BS) acts as the auctioneer to select clients and determine the local training rounds and model compression rates. Meanwhile, clients act as bidders to train local models and receive payments. We model the process of client selection, local training, and model compression as a Mixed-Integer Non-Linear Programming problem. Accordingly, we propose an improved Soft Actor-Critic-based client selection and model compression algorithm to solve the optimization problem, and design a Vickrey-Clarke-Groves-based payment rule to compensate clients for their costs. Finally, extensive simulation experiments are conducted to evaluate the performance of the proposed method. The results show that the proposed method outperforms other benchmarks in terms of the BS's cost under various scenarios.
KW - Client selection
KW - Federated learning
KW - Model compression
KW - Reverse auction
KW - Soft Actor-Critic
UR - http://www.scopus.com/inward/record.url?scp=85215933602&partnerID=8YFLogxK
U2 - 10.1109/TCCN.2025.3531464
DO - 10.1109/TCCN.2025.3531464
M3 - Article
AN - SCOPUS:85215933602
SN - 2332-7731
JO - IEEE Transactions on Cognitive Communications and Networking
JF - IEEE Transactions on Cognitive Communications and Networking
ER -