TY - GEN
T1 - Joint computation offloading and resource configuration in ultra-dense edge computing networks
T2 - 90th IEEE Vehicular Technology Conference, VTC 2019 Fall
AU - Lv, Jianfeng
AU - Xiong, Jingyu
AU - Guo, Hongzhi
AU - Liu, Jiajia
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - The rapid development of wireless communication networks and emerging technologies such as the Internet of Things (IoT) and 5G has increased the number of various mobile devices (MDs). To enlarge system capacity and meet the high computation demands of MDs, the integration of ultra-dense heterogeneous networks (UDN) and mobile edge computing (MEC) has been proposed as a promising paradigm. However, when massively deploying edge servers in the UDN scenario, reducing operating expenses becomes an essential issue, which can be addressed by optimizing computation offloading decisions and configuring the edge servers' computing resources. Given the complicated state information and ever-changing environment in UDN, applying reinforcement learning (RL) to such dynamic systems is envisioned as an effective approach. Toward this end, we combine deep learning with RL and propose a deep Q-network based method to address this high-dimensional problem. The experimental results demonstrate the superior performance of our proposed scheme in reducing processing delay and enhancing computing resource utilization.
AB - The rapid development of wireless communication networks and emerging technologies such as the Internet of Things (IoT) and 5G has increased the number of various mobile devices (MDs). To enlarge system capacity and meet the high computation demands of MDs, the integration of ultra-dense heterogeneous networks (UDN) and mobile edge computing (MEC) has been proposed as a promising paradigm. However, when massively deploying edge servers in the UDN scenario, reducing operating expenses becomes an essential issue, which can be addressed by optimizing computation offloading decisions and configuring the edge servers' computing resources. Given the complicated state information and ever-changing environment in UDN, applying reinforcement learning (RL) to such dynamic systems is envisioned as an effective approach. Toward this end, we combine deep learning with RL and propose a deep Q-network based method to address this high-dimensional problem. The experimental results demonstrate the superior performance of our proposed scheme in reducing processing delay and enhancing computing resource utilization.
KW - Computation offloading
KW - Computing resource configuration
KW - Deep reinforcement learning
KW - Mobile edge computing
KW - Ultra-dense network
UR - http://www.scopus.com/inward/record.url?scp=85075247656&partnerID=8YFLogxK
U2 - 10.1109/VTCFall.2019.8891384
DO - 10.1109/VTCFall.2019.8891384
M3 - Conference contribution
AN - SCOPUS:85075247656
T3 - IEEE Vehicular Technology Conference
BT - 2019 IEEE 90th Vehicular Technology Conference, VTC 2019 Fall - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 September 2019 through 25 September 2019
ER -