Joint Task Offloading and Power Control Optimization for IoT-Enabled Smart Cities: An Energy-Efficient Coordination via Deep Reinforcement Learning

Rugui Yao, Lipei Liu, Xiaoya Zuo, Lin Yu, Juan Xu, Ye Fan, Wenhua Li

Research output: Contribution to journal › Article › peer-review

Abstract

Mobile Edge Computing (MEC) enhances computational efficiency by reducing data transmission distance, yet optimizing resource allocation and reducing operational cost remain critical challenges as the number of users grows. This paper investigates a multi-user partial computation offloading system in a time-varying channel environment and proposes a novel deep reinforcement learning-based framework that jointly optimizes the offloading strategy and power control, aiming to minimize the weighted sum of latency and energy consumption. Because the problem is multi-parameter, highly coupled, and non-convex, a deep neural network is first used to generate offloading ratio vectors, which are then discretized using an improved k-Nearest Neighbor (KNN) algorithm. Based on the quantized offloading actions, the Differential Evolution (DE) algorithm is employed to find the optimal power control. Finally, the optimal action and state vectors are stored in an experience replay pool for subsequent network training until convergence, producing the optimal solution. Numerical results demonstrate that the proposed improved quantization method avoids redundant action exploration while accelerating convergence. Furthermore, the proposed algorithm significantly lowers the latency and energy consumption of user devices, outperforming other schemes and providing more efficient edge computing services.
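The sketch below illustrates one decision step of the pipeline the abstract describes: a DNN maps the channel state to continuous offloading ratios, a KNN-style quantizer snaps them to a small set of discrete candidate actions, and differential evolution searches the transmit power for each candidate; the best (state, action) pair would then go to the replay pool. The cost model, network sizes, grid levels, and all hyperparameters are illustrative assumptions, not the paper's system model or settings.

```python
# Minimal sketch of the described pipeline, under assumed models throughout.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import differential_evolution

N_USERS, K = 4, 8           # users; K = candidate actions kept by the KNN step

policy = nn.Sequential(      # DNN: channel state -> offloading ratios in [0, 1]
    nn.Linear(N_USERS, 64), nn.ReLU(),
    nn.Linear(64, N_USERS), nn.Sigmoid())

def knn_quantize(ratio, k=K, levels=4):
    """KNN-style quantization step (assumed form): snap the continuous ratio
    vector to a discrete grid, then keep k nearby candidate actions."""
    grid = np.linspace(0.0, 1.0, levels)
    base = grid[np.abs(grid[None, :] - ratio[:, None]).argmin(axis=1)]
    cands = [base]
    for _ in range(k - 1):   # perturb one user's level to form a neighbor action
        c = base.copy()
        c[np.random.randint(len(c))] = grid[np.random.randint(levels)]
        cands.append(c)
    return np.unique(np.stack(cands), axis=0)

def cost(power, ratio, h):
    """Assumed weighted latency+energy objective standing in for the paper's."""
    rate = np.log2(1.0 + power * h)                          # toy rate model
    latency = ratio / np.maximum(rate, 1e-9) + (1 - ratio)   # offload + local
    energy = power * ratio / np.maximum(rate, 1e-9)
    return float(np.sum(0.5 * latency + 0.5 * energy))

def best_action(h):
    """One step: DNN ratios -> KNN candidates -> DE power control per candidate."""
    with torch.no_grad():
        ratio = policy(torch.tensor(h, dtype=torch.float32)).numpy()
    best = None
    for cand in knn_quantize(ratio):
        res = differential_evolution(cost, bounds=[(0.01, 1.0)] * N_USERS,
                                     args=(cand, h), maxiter=30, tol=1e-6)
        if best is None or res.fun < best[0]:
            best = (res.fun, cand, res.x)
    return best   # (cost, offloading ratios, powers) -> experience replay pool

h = np.random.rand(N_USERS)  # toy time-varying channel gains
c, ratios, powers = best_action(h)
print(f"cost={c:.3f}, ratios={ratios}, powers={np.round(powers, 3)}")
```

In a full training loop, the stored (state, action) pairs would be sampled from the replay pool to update the DNN until convergence, as the abstract outlines.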

Original language: English
Journal: IEEE Transactions on Consumer Electronics
DOIs
State: Accepted/In press - 2025

Keywords

  • Deep reinforcement learning
  • Differential evolution algorithm
  • Mobile edge computing
  • Partial offloading
  • Power control
