
Accelerating Deep Learning Inference via Model Parallelism and Partial Computation Offloading

  • Huan Zhou
  • Mingze Li
  • Ning Wang
  • Geyong Min
  • Jie Wu
  • China Three Gorges University
  • Rowan University
  • University of Exeter
  • Temple University

Research output: Contribution to journal › Article › peer-review

69 Citations (Scopus)

Abstract

With the rapid development of the Internet-of-Things (IoT) and the explosive advance of deep learning, there is an urgent need to enable deep learning inference on IoT devices in Mobile Edge Computing (MEC). To address the limited computation capability of IoT devices in processing complex Deep Neural Networks (DNNs), computation offloading has been proposed as a promising approach. Recently, partial computation offloading has been developed to dynamically adjust the task assignment strategy under different channel conditions for better performance. In this paper, we take advantage of intrinsic DNN computation characteristics and propose a novel Fused-Layer-based (FL-based) DNN model parallelism method to accelerate inference. The key idea is that a DNN layer can be converted to several smaller layers to increase partial computation offloading flexibility, and thus enable better computation offloading solutions. However, there is a trade-off between computation offloading flexibility and model parallelism overhead. We then investigate the optimal DNN model parallelism and the corresponding scheduling and offloading strategies in partial computation offloading. In particular, we propose a Particle Swarm Optimization with Minimizing Waiting (PSOMW) method, which explores and updates the FL strategy, path scheduling strategy, and path offloading strategy to reduce time complexity and avoid invalid solutions. Finally, we validate the effectiveness of the proposed method on commonly used DNNs. The results show that the proposed method can reduce the DNN inference time by an average of 12.75 times compared to the legacy No FL (NFL) algorithm, and is very close to the optimal solution achieved by the Brute Force (BF) algorithm, with a difference of less than 0.04%.
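The paper's PSOMW method jointly searches over FL, path scheduling, and path offloading strategies; its specifics are not reproduced here. As a rough illustration of the underlying particle-swarm mechanics only (not the authors' algorithm), the sketch below runs a minimal PSO that minimizes a toy objective standing in for inference latency, where each dimension is a hypothetical per-layer offloading fraction in [0, 1]. All function names and parameters are illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
    """Minimal particle swarm optimization (minimization), illustrative only."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Personal bests and the global best across the swarm.
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to the feasible range [lo, hi].
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "latency" objective: a quadratic minimized when every hypothetical
# per-layer offloading fraction equals 0.3 (purely illustrative).
best, best_val = pso(lambda x: sum((xi - 0.3) ** 2 for xi in x), dim=4)
```

In the real problem, the fitness function would be the end-to-end inference time for a given FL, scheduling, and offloading assignment, and PSOMW additionally prunes invalid solutions, which this generic sketch does not attempt.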

Original language: English
Pages (from-to): 475-488
Number of pages: 14
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 34
Issue number: 2
DOI
Publication status: Published - 1 Feb 2023
Published externally: Yes
