TY - GEN
T1 - A reinforcement learning based task offloading scheme for vehicular edge computing network
AU - Zhang, Jie
AU - Guo, Hongzhi
AU - Liu, Jiajia
N1 - Publisher Copyright:
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2019 Published by Springer Nature Switzerland AG 2019. All Rights Reserved.
PY - 2019
Y1 - 2019
N2 - Recently, the trends of automation and intelligence in vehicular networks have led to the emergence of intelligent connected vehicles (ICVs), and various intelligent applications such as autonomous driving have developed rapidly. These applications are usually compute-intensive and require large amounts of computation resources, which conflicts with the limited resources of vehicles. This contradiction has become a bottleneck in the development of vehicular networks. To address this challenge, researchers have combined mobile edge computing (MEC) with vehicular networks and proposed vehicular edge computing networks (VECNs). Deploying MEC servers near vehicles allows compute-intensive applications to be offloaded to MEC servers for execution, thereby alleviating the vehicles’ computational burden. However, the highly dynamic nature of vehicular networks, which makes traditional optimization algorithms such as convex/non-convex optimization less suitable, often lacks adequate consideration in existing task offloading schemes. Toward this end, we propose a reinforcement learning based task offloading scheme, i.e., a deep Q-learning algorithm, to solve the delay minimization problem in VECNs. Extensive numerical results corroborate the superior performance of our proposed scheme in reducing the processing delay of vehicles’ computation tasks.
AB - Recently, the trends of automation and intelligence in vehicular networks have led to the emergence of intelligent connected vehicles (ICVs), and various intelligent applications such as autonomous driving have developed rapidly. These applications are usually compute-intensive and require large amounts of computation resources, which conflicts with the limited resources of vehicles. This contradiction has become a bottleneck in the development of vehicular networks. To address this challenge, researchers have combined mobile edge computing (MEC) with vehicular networks and proposed vehicular edge computing networks (VECNs). Deploying MEC servers near vehicles allows compute-intensive applications to be offloaded to MEC servers for execution, thereby alleviating the vehicles’ computational burden. However, the highly dynamic nature of vehicular networks, which makes traditional optimization algorithms such as convex/non-convex optimization less suitable, often lacks adequate consideration in existing task offloading schemes. Toward this end, we propose a reinforcement learning based task offloading scheme, i.e., a deep Q-learning algorithm, to solve the delay minimization problem in VECNs. Extensive numerical results corroborate the superior performance of our proposed scheme in reducing the processing delay of vehicles’ computation tasks.
KW - Mobile edge computing
KW - Reinforcement learning
KW - Vehicular edge computing networks
UR - http://www.scopus.com/inward/record.url?scp=85069166833&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-22971-9_38
DO - 10.1007/978-3-030-22971-9_38
M3 - Conference contribution
AN - SCOPUS:85069166833
SN - 9783030229702
T3 - Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
SP - 438
EP - 449
BT - Artificial Intelligence for Communications and Networks - 1st EAI International Conference, AICON 2019, Proceedings
A2 - Han, Shuai
A2 - Ye, Liang
A2 - Meng, Weixiao
PB - Springer Verlag
T2 - 1st EAI International Conference on Artificial Intelligence for Communications and Networks, AICON 2019
Y2 - 25 May 2019 through 26 May 2019
ER -