Abstract
This letter proposes a multi-agent deep reinforcement learning (MADRL) framework for resource allocation in air-ground integrated multi-access edge computing (MEC) networks, where unmanned aerial vehicles (UAVs) provide computing services in addition to ground-computing access points (GCAPs). To maximize computation efficiency, the resource allocation problem is formulated as a mixed-integer programming problem. We then develop a cooperative deep deterministic policy gradient (CODDPG) algorithm that solves the problem via an observable Markov game. Simulation results demonstrate that the proposed algorithm outperforms centralized reinforcement learning in terms of computation efficiency.
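The letter's CODDPG algorithm follows the common multi-agent actor-critic pattern of centralized training with decentralized execution: each agent acts on its own local observation, while a critic that sees the joint observations and actions is used during training. The sketch below illustrates only that structure, under assumptions of my own (agent count, observation/action dimensions, and simple linear function approximators); it is not the authors' implementation.

```python
# Sketch of the decentralized-actor / centralized-critic structure used by
# MADDPG-style algorithms such as CODDPG. All dimensions, names, and the
# linear approximators below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # e.g. users sharing UAV/GCAP computing resources (assumed)
OBS_DIM = 4    # local observation, e.g. task size and channel gains (assumed)
ACT_DIM = 2    # continuous action, e.g. offloading ratio and tx power (assumed)

class Actor:
    """Decentralized actor: maps an agent's *local* observation to an action."""
    def __init__(self):
        self.W = rng.standard_normal((ACT_DIM, OBS_DIM)) * 0.1
    def act(self, obs):
        # tanh keeps the deterministic action bounded in [-1, 1]
        return np.tanh(self.W @ obs)

class CentralCritic:
    """Centralized critic: scores the *joint* observation-action pair.
    It is only needed during training; at execution time each agent
    uses its actor alone."""
    def __init__(self):
        dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w = rng.standard_normal(dim) * 0.1
    def q_value(self, all_obs, all_acts):
        joint = np.concatenate([*all_obs, *all_acts])
        return float(self.w @ joint)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

observations = [rng.standard_normal(OBS_DIM) for _ in range(N_AGENTS)]
actions = [actor.act(obs) for actor, obs in zip(actors, observations)]
q = critic.q_value(observations, actions)
```

During training, the critic's Q-value would drive deterministic policy-gradient updates of each actor; that loop is omitted here to keep the focus on the information structure (local observations in, joint state-action to the critic).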
| Original language | English |
|---|---|
| Pages (from-to) | 2420-2424 |
| Number of pages | 5 |
| Journal | IEEE Wireless Communications Letters |
| Volume | 11 |
| Issue number | 11 |
| DOIs | |
| State | Published - 1 Nov 2022 |
Keywords
- Multi-access edge computing
- computation efficiency
- cooperative deep deterministic policy gradient
- multi-agent deep reinforcement learning
- resource allocation
Fingerprint
Dive into the research topics of 'Computing Assistance From the Sky: Decentralized Computation Efficiency Optimization for Air-Ground Integrated MEC Networks'. Together they form a unique fingerprint.