TY - JOUR
T1 - Adaptive Task Offloading in Vehicular Edge Computing Networks
T2 - A Reinforcement Learning Based Scheme
AU - Zhang, Jie
AU - Guo, Hongzhi
AU - Liu, Jiajia
N1 - Publisher Copyright:
© 2020, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2020/10/1
Y1 - 2020/10/1
N2 - In recent years, with the rapid development of the Internet of Things (IoT) and artificial intelligence, vehicular networks have evolved from simple interactive systems into smart integrated networks. The accompanying intelligent connected vehicles (ICVs) can communicate with each other and connect to the urban traffic information network to support intelligent applications, e.g., autonomous driving, intelligent navigation, and in-vehicle entertainment services. These applications are usually delay-sensitive and compute-intensive, so the onboard computation resources of vehicles cannot meet their quality-of-service requirements. To solve this problem, vehicular edge computing networks (VECNs), which leverage mobile edge computing offloading technology, are seen as a promising paradigm. However, existing task offloading schemes do not account for the highly dynamic nature of vehicular networks and thus cannot produce time-varying offloading decisions in response to dynamic network changes. Meanwhile, current mobility models cannot faithfully reflect actual road traffic conditions. Toward this end, we study the task offloading problem in VECNs under the synchronized random walk model. We then propose a reinforcement learning-based scheme as our solution and verify its superior performance in reducing processing delay and adapting to dynamic scenes.
AB - In recent years, with the rapid development of the Internet of Things (IoT) and artificial intelligence, vehicular networks have evolved from simple interactive systems into smart integrated networks. The accompanying intelligent connected vehicles (ICVs) can communicate with each other and connect to the urban traffic information network to support intelligent applications, e.g., autonomous driving, intelligent navigation, and in-vehicle entertainment services. These applications are usually delay-sensitive and compute-intensive, so the onboard computation resources of vehicles cannot meet their quality-of-service requirements. To solve this problem, vehicular edge computing networks (VECNs), which leverage mobile edge computing offloading technology, are seen as a promising paradigm. However, existing task offloading schemes do not account for the highly dynamic nature of vehicular networks and thus cannot produce time-varying offloading decisions in response to dynamic network changes. Meanwhile, current mobility models cannot faithfully reflect actual road traffic conditions. Toward this end, we study the task offloading problem in VECNs under the synchronized random walk model. We then propose a reinforcement learning-based scheme as our solution and verify its superior performance in reducing processing delay and adapting to dynamic scenes.
KW - Mobile edge computing
KW - Reinforcement learning
KW - Vehicular networks
UR - http://www.scopus.com/inward/record.url?scp=85087630681&partnerID=8YFLogxK
U2 - 10.1007/s11036-020-01584-6
DO - 10.1007/s11036-020-01584-6
M3 - Article
AN - SCOPUS:85087630681
SN - 1383-469X
VL - 25
SP - 1736
EP - 1745
JO - Mobile Networks and Applications
JF - Mobile Networks and Applications
IS - 5
ER -