Graph convolutional network-based reinforcement learning for tasks offloading in multi-access edge computing

Lixiong Leng, Jingchen Li, Haobin Shi, Yi’an Zhu

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

To achieve a high quality of service for computation-intensive applications, multi-access edge computing (MEC) has been proposed for offloading tasks to MEC servers. Emerging reinforcement learning-based task offloading strategies have attracted researchers' attention, but their incomplete Markov models limit the achievable improvements. This work proposes a graph convolutional network-based reinforcement learning (GRL-based) method to enhance reinforcement learning-based task offloading in MEC. A graph convolutional network (GCN) is introduced to extract features from tasks by regarding the task set as a directed acyclic graph, and a complete Markov model is then constructed for the offloading strategy. In the proposed GRL-based method, the decision process is deployed in the user layer, while the training process is deployed in the cloud layer. An off-policy reinforcement learning method, soft actor-critic, is used to train the offloading strategy, so that sampling and training can be carried out separately. Simulation experiments show that the proposed GRL-based method outperforms baseline methods and can make continuous task-offloading decisions efficiently.
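As a rough illustration of the feature-extraction idea described in the abstract (not the authors' implementation), a single graph-convolution layer can be applied to a DAG-shaped task set, with the adjacency matrix encoding task precedence and each row of the feature matrix holding per-task attributes. The task attributes, units, and layer width below are hypothetical:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)   # ReLU activation

# Toy DAG of 4 tasks with precedence edges 0->1, 0->2, 1->3, 2->3.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
# Hypothetical per-task features, e.g. [required CPU cycles, input data size].
X = np.array([[2.0, 1.0],
              [4.0, 0.5],
              [1.0, 2.0],
              [3.0, 1.5]])
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 8))  # learnable weights (random for the sketch)

H = gcn_layer(A, X, W)           # task embeddings, shape (4, 8)
print(H.shape)
```

In a full pipeline along the lines the abstract sketches, embeddings like `H` would feed the state of the soft actor-critic agent that chooses where each task is offloaded; here the layer only demonstrates how DAG structure mixes neighboring tasks' features.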

Original language: English
Pages (from-to): 29163-29175
Number of pages: 13
Journal: Multimedia Tools and Applications
Volume: 80
Issue number: 19
State: Published - Aug 2021

Keywords

  • Graph convolutional network
  • Multi-access edge computing
  • Reinforcement learning

