TY - JOUR
T1 - Graph convolutional network-based reinforcement learning for tasks offloading in multi-access edge computing
AU - Leng, Lixiong
AU - Li, Jingchen
AU - Shi, Haobin
AU - Zhu, Yi’an
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2021/8
Y1 - 2021/8
N2 - To achieve high quality of service for computation-intensive applications, multi-access edge computing (MEC) is proposed for offloading tasks to MEC servers. Emerging reinforcement learning-based task offloading strategies have attracted the attention of researchers, but their incomplete Markov models result in limited improvements. This work proposes a graph convolutional network-based reinforcement learning (GRL-based) method to enhance reinforcement learning-based task offloading in MEC. A Graph Convolutional Network is introduced to extract features from tasks by regarding the task set as a directed acyclic graph. We then construct a complete Markov model for the offloading strategy. In the proposed GRL-based method, the decision process is deployed in the user layer, while the training process is deployed in the cloud layer. An off-policy reinforcement learning method, soft actor-critic, is used to train the offloading strategy, so that sampling and training can be implemented separately. Several simulation experiments show that the proposed GRL-based method performs better than baseline methods and can make continuous task offloading decisions efficiently.
AB - To achieve high quality of service for computation-intensive applications, multi-access edge computing (MEC) is proposed for offloading tasks to MEC servers. Emerging reinforcement learning-based task offloading strategies have attracted the attention of researchers, but their incomplete Markov models result in limited improvements. This work proposes a graph convolutional network-based reinforcement learning (GRL-based) method to enhance reinforcement learning-based task offloading in MEC. A Graph Convolutional Network is introduced to extract features from tasks by regarding the task set as a directed acyclic graph. We then construct a complete Markov model for the offloading strategy. In the proposed GRL-based method, the decision process is deployed in the user layer, while the training process is deployed in the cloud layer. An off-policy reinforcement learning method, soft actor-critic, is used to train the offloading strategy, so that sampling and training can be implemented separately. Several simulation experiments show that the proposed GRL-based method performs better than baseline methods and can make continuous task offloading decisions efficiently.
KW - Graph convolutional network
KW - Multi-access edge computing
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85113188287&partnerID=8YFLogxK
U2 - 10.1007/s11042-021-11130-5
DO - 10.1007/s11042-021-11130-5
M3 - Article
AN - SCOPUS:85113188287
SN - 1380-7501
VL - 80
SP - 29163
EP - 29175
JO - Multimedia Tools and Applications
JF - Multimedia Tools and Applications
IS - 19
ER -