Computing Assistance From the Sky: Decentralized Computation Efficiency Optimization for Air-Ground Integrated MEC Networks

Wensheng Lin, Hui Ma, Lixin Li, Zhu Han

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

This letter proposes a multi-agent deep reinforcement learning (MADRL) framework for resource allocation in air-ground integrated multi-access edge computing (MEC) networks, where unmanned aerial vehicles (UAVs) provide computing services alongside ground-computing access points (GCAPs). To maximize computation efficiency, the resource allocation problem is formulated as a mixed-integer programming problem. We then develop a cooperative deep deterministic policy gradient (CODDPG) algorithm that solves the problem by modeling it as an observable Markov game. Simulation results demonstrate that the proposed algorithm outperforms centralized reinforcement learning in terms of computation efficiency.
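To illustrate the structure such a MADRL framework typically takes, the sketch below shows a minimal MADDPG-style setup: each agent runs a decentralized actor on its own local observation, while a centralized critic scores the joint observation-action vector during training. This is not the authors' CODDPG implementation; all network shapes, dimensions, and the linear actor/critic forms are hypothetical placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class Actor:
    """Decentralized policy: maps an agent's local observation to a bounded action."""
    def __init__(self, obs_dim, act_dim):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim))

    def act(self, obs):
        # tanh keeps actions in [-1, 1], e.g. a normalized offloading decision
        return np.tanh(self.W @ obs)

class Critic:
    """Centralized value function: sees all observations and all actions."""
    def __init__(self, joint_dim):
        self.w = rng.normal(scale=0.1, size=joint_dim)

    def q(self, joint):
        return float(self.w @ joint)

n_agents, obs_dim, act_dim = 3, 4, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = Critic(n_agents * (obs_dim + act_dim))

# Decentralized execution: each agent acts only on its local observation.
obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
acts = [actor.act(o) for actor, o in zip(actors, obs)]

# Centralized training signal: the critic evaluates the joint state-action.
joint = np.concatenate(obs + acts)
q_value = critic.q(joint)
```

The centralized-training / decentralized-execution split sketched here is the standard way multi-agent actor-critic methods sidestep the non-stationarity of independent learners, which is presumably why the letter adopts a cooperative variant of DDPG.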

Original language: English
Pages (from-to): 2420-2424
Number of pages: 5
Journal: IEEE Wireless Communications Letters
Volume: 11
Issue number: 11
State: Published - 1 Nov 2022

Keywords

  • Multi-access edge computing
  • computation efficiency
  • cooperative deep deterministic policy gradient
  • multi-agent deep reinforcement learning
  • resource allocation

