TY - GEN
T1 - Energy-Efficient Task Offloading in UAV-Enabled MEC via Multi-agent Reinforcement Learning
AU - Gao, Jiakun
AU - Zhang, Jie
AU - Xu, Xiaolong
AU - Qi, Lianyong
AU - Yuan, Yuan
AU - Li, Zheng
AU - Dou, Wanchun
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd 2024.
PY - 2024
Y1 - 2024
N2 - Nowadays, artificial intelligence-based tasks are imposing increasing demands on computation resources and energy consumption. Unmanned aerial vehicles (UAVs) are widely utilized in mobile edge computing (MEC) due to their maneuverability and the integration of MEC servers, providing computation assistance to ground terminals (GTs). The task offloading process from GTs to UAVs in UAV-enabled MEC faces challenges such as workload imbalance among UAVs due to uneven GT distribution and conflicts arising from the increasing number of GTs and limited communication resources. Additionally, the dynamic nature of communication networks and workloads needs to be considered. To address these challenges, this paper proposes a Multi-Agent Deep Deterministic Policy Gradient-based distributed offloading method, named DMARL, treating each GT as an independent decision-maker responsible for determining task offloading strategies and transmission power. Furthermore, a UAV-enabled MEC architecture with Non-Orthogonal Multiple Access is introduced, incorporating task computation and transmission queue models. In addition, a differential reward function that considers both system-level rewards and individual rewards for each GT is designed. Simulation experiments conducted in three different scenarios demonstrate that the proposed method exhibits superior performance in balancing latency and energy consumption.
AB - Nowadays, artificial intelligence-based tasks are imposing increasing demands on computation resources and energy consumption. Unmanned aerial vehicles (UAVs) are widely utilized in mobile edge computing (MEC) due to their maneuverability and the integration of MEC servers, providing computation assistance to ground terminals (GTs). The task offloading process from GTs to UAVs in UAV-enabled MEC faces challenges such as workload imbalance among UAVs due to uneven GT distribution and conflicts arising from the increasing number of GTs and limited communication resources. Additionally, the dynamic nature of communication networks and workloads needs to be considered. To address these challenges, this paper proposes a Multi-Agent Deep Deterministic Policy Gradient-based distributed offloading method, named DMARL, treating each GT as an independent decision-maker responsible for determining task offloading strategies and transmission power. Furthermore, a UAV-enabled MEC architecture with Non-Orthogonal Multiple Access is introduced, incorporating task computation and transmission queue models. In addition, a differential reward function that considers both system-level rewards and individual rewards for each GT is designed. Simulation experiments conducted in three different scenarios demonstrate that the proposed method exhibits superior performance in balancing latency and energy consumption.
KW - Mobile Edge Computing
KW - multi-agent deep reinforcement learning
KW - NOMA
KW - unmanned aerial vehicles
UR - http://www.scopus.com/inward/record.url?scp=85184298639&partnerID=8YFLogxK
U2 - 10.1007/978-981-99-9896-8_5
DO - 10.1007/978-981-99-9896-8_5
M3 - Conference contribution
AN - SCOPUS:85184298639
SN - 9789819998951
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 63
EP - 80
BT - Green, Pervasive, and Cloud Computing - 18th International Conference, GPC 2023, Proceedings
A2 - Jin, Hai
A2 - Yu, Zhiwen
A2 - Yu, Chen
A2 - Zhou, Xiaokang
A2 - Lu, Zeguang
A2 - Song, Xianhua
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th International Conference on Green, Pervasive, and Cloud Computing, GPC 2023
Y2 - 22 September 2023 through 24 September 2023
ER -