Incentive-Driven and Energy Efficient Federated Learning in Mobile Edge Networks

Huan Zhou, Qiangqiang Gu, Peng Sun, Xiaokang Zhou, Victor C.M. Leung, Xinggang Fan

Research output: Contribution to journal › Article › peer-review


Abstract

Federated Learning (FL), as a new distributed learning approach, allows multiple heterogeneous clients to cooperatively train models without disclosing private data. However, selfish clients may be unwilling to participate in FL training without appropriate compensation. In addition, the characteristics of clients in mobile edge networks (e.g., limited available resources) may also reduce the efficiency of FL and increase the training cost. To address these challenges, this paper proposes a Cost-Aware FL framework with client incentive and model compression (CAFL), aiming to minimize the training cost while ensuring the accuracy of the global model. In CAFL, we employ a reverse auction for incentive design, where the Base Station (BS) acts as the auctioneer that selects clients and determines their local training rounds and model compression rates. Meanwhile, clients act as bidders that train local models and receive payments. We model the process of client selection, local training, and model compression as a Mixed-Integer Non-Linear Programming (MINLP) problem. Accordingly, we propose an improved Soft Actor-Critic-based client selection and model compression algorithm to solve the optimization problem, and design a Vickrey-Clarke-Groves-based payment rule to compensate clients for their costs. Finally, extensive simulation experiments are conducted to evaluate the performance of the proposed method. The results show that the proposed method outperforms other benchmarks in terms of the BS's cost under various scenarios.
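
The abstract mentions a Vickrey-Clarke-Groves (VCG)-based payment rule within a reverse auction. The sketch below illustrates, under simplified assumptions, how such a rule can compute winner payments: each winner is paid the externality it imposes on the auctioneer (the best achievable cost without that winner, minus the cost attributed to the other winners). The selection rule here (picking the k lowest bids), the function names `select_winners` and `vcg_payments`, and the example bid values are hypothetical placeholders; the paper's actual selection is driven by a Soft Actor-Critic policy, which is not reproduced here.

```python
# Minimal sketch of a VCG-style payment rule for a reverse auction.
# Assumption: the auctioneer (BS) selects the k lowest-bid clients; the paper
# instead uses a Soft Actor-Critic-based selection, so this is only illustrative.

from typing import Dict, List


def select_winners(bids: Dict[str, float], k: int) -> List[str]:
    """Toy selection rule: pick the k clients with the lowest bids."""
    return sorted(bids, key=bids.get)[:k]


def total_cost(bids: Dict[str, float], winners: List[str]) -> float:
    """Total payment the BS would face if it paid winners their bids."""
    return sum(bids[c] for c in winners)


def vcg_payments(bids: Dict[str, float], k: int) -> Dict[str, float]:
    """Pay each winner its externality:
    (best total cost without the winner) - (cost of the other winners with it)."""
    winners = select_winners(bids, k)
    cost_with = total_cost(bids, winners)
    payments = {}
    for c in winners:
        others = {cid: b for cid, b in bids.items() if cid != c}
        cost_without = total_cost(others, select_winners(others, k))
        payments[c] = cost_without - (cost_with - bids[c])
    return payments


if __name__ == "__main__":
    # Hypothetical bids (e.g., each client's claimed training cost).
    bids = {"client1": 3.0, "client2": 5.0, "client3": 4.0, "client4": 7.0}
    print(vcg_payments(bids, k=2))  # {'client1': 5.0, 'client3': 5.0}
```

With this k-lowest-bid selection, each winner's VCG payment reduces to the (k+1)-th lowest bid, which is what makes truthful bidding a dominant strategy for the clients.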

Keywords

  • Client selection
  • Federated learning
  • Model compression
  • Reverse auction
  • Soft Actor-Critic
