Hierarchical Deep Reinforcement Learning for Computation Offloading in Autonomous Multi-Robot Systems

Wen Gao, Zhiwen Yu, Liang Wang, Helei Cui, Bin Guo, Hui Xiong

Research output: Journal article › peer-review

Abstract

To ensure system responsiveness, compute-intensive tasks are usually offloaded to cloud or edge computing devices. In environments where no connection to external computing facilities is available, computation offloading among the members of an autonomous multi-robot system (AMRS) becomes a solution. The challenge lies in maximizing the use of other members' idle resources without disrupting their local computation tasks. This study therefore proposes HRL-AMRS, a hierarchical deep reinforcement learning framework designed to distribute computational loads and reduce the processing time of computational tasks within an AMRS. In this framework, the high-level policy must account for the impact on actual processing times of the data-loading scales chosen by the low-level policy under varying computing-device states. In addition, the low-level policy employs Long Short-Term Memory (LSTM) networks to better capture the time-series states of computing devices. Experimental results show that, across various task sizes and numbers of robots, the framework reduces processing times by an average of 4.32% compared with baseline methods.
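The high-level/low-level split described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the state features, candidate loading scales, linear scorer, and toy processing-time model are all hypothetical, and the hand-rolled LSTM cell stands in for the trained networks the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step over a single device-state observation x."""
    z = W @ x + U @ h + b                     # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)                # updated cell state
    h = o * np.tanh(c)                        # updated hidden state
    return h, c

# Hypothetical per-device state: (cpu load, free memory, queue length)
STATE_DIM, HIDDEN = 3, 8
W = rng.normal(0, 0.1, (4*HIDDEN, STATE_DIM))
U = rng.normal(0, 0.1, (4*HIDDEN, HIDDEN))
b = np.zeros(4*HIDDEN)

def encode_device(history):
    """Low-level: summarize a time series of device states with the LSTM."""
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    for x in history:
        h, c = lstm_cell(np.asarray(x, float), h, c, W, U, b)
    return h

SCALES = [0.25, 0.5, 0.75, 1.0]               # candidate data-loading scales

def low_level_scale(history, V):
    """Pick a data-loading scale by scoring the LSTM summary (linear V)."""
    scores = V @ encode_device(history)
    return SCALES[int(np.argmax(scores))]

def high_level_assign(task_size, robots, V):
    """High-level: estimate each robot's processing time, accounting for the
    scale the low-level would choose, and offload to the fastest robot."""
    def est_time(r):
        s = low_level_scale(r["history"], V)
        # Toy cost model: compute time shrinks with idle CPU and with a
        # smaller loaded fraction, but unloaded data must be fetched later.
        return task_size * s / max(r["idle_cpu"], 1e-3) + (1 - s) * task_size
    return min(range(len(robots)), key=lambda i: est_time(robots[i]))
```

The key coupling the abstract highlights is visible here: `high_level_assign` cannot estimate a robot's processing time without first querying `low_level_scale`, i.e., the high-level decision depends on the loading scale the low-level would pick given that device's recent state history.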

Original language: English
Journal: IEEE Robotics and Automation Letters
Publication status: Accepted/In press - 2024
