TY - GEN
T1 - AdaOper
T2 - 2024 Workshop on Adaptive AIoT Systems, AdaAIoTSys 2024
AU - Lin, Zheng
AU - Guo, Bin
AU - Liu, Sicong
AU - Zhou, Wentao
AU - Ding, Yasan
AU - Zhang, Yu
AU - Yu, Zhiwen
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/6/3
Y1 - 2024/6/3
N2 - Deep neural networks (DNNs) have driven extensive applications in mobile technology. However, for long-running mobile apps such as voice assistants or video applications on smartphones, energy efficiency is critical on battery-powered devices. The rise of heterogeneous processors in today's mobile devices has introduced new challenges for optimizing energy efficiency. Our key insight is that partitioning computations across different processors for parallelism and speedup does not necessarily reduce energy consumption and may even increase it. To address this, we present AdaOper, an energy-efficient concurrent DNN inference system. It optimizes energy efficiency on mobile heterogeneous processors while maintaining responsiveness. AdaOper includes a runtime energy profiler that dynamically adjusts operator partitioning to optimize energy efficiency based on dynamic device conditions. We conduct preliminary experiments, which show that AdaOper reduces energy consumption by 16.88% compared to the existing concurrent method while ensuring real-time performance.
AB - Deep neural networks (DNNs) have driven extensive applications in mobile technology. However, for long-running mobile apps such as voice assistants or video applications on smartphones, energy efficiency is critical on battery-powered devices. The rise of heterogeneous processors in today's mobile devices has introduced new challenges for optimizing energy efficiency. Our key insight is that partitioning computations across different processors for parallelism and speedup does not necessarily reduce energy consumption and may even increase it. To address this, we present AdaOper, an energy-efficient concurrent DNN inference system. It optimizes energy efficiency on mobile heterogeneous processors while maintaining responsiveness. AdaOper includes a runtime energy profiler that dynamically adjusts operator partitioning to optimize energy efficiency based on dynamic device conditions. We conduct preliminary experiments, which show that AdaOper reduces energy consumption by 16.88% compared to the existing concurrent method while ensuring real-time performance.
KW - Cross-processor DL execution
KW - DNN concurrent inference
KW - Heterogeneous processors
UR - http://www.scopus.com/inward/record.url?scp=85196526508&partnerID=8YFLogxK
U2 - 10.1145/3662007.3663884
DO - 10.1145/3662007.3663884
M3 - Conference contribution
AN - SCOPUS:85196526508
T3 - AdaAIoTSys 2024 - Proceedings of the 2024 Workshop on Adaptive AIoT Systems
SP - 19
EP - 20
BT - AdaAIoTSys 2024 - Proceedings of the 2024 Workshop on Adaptive AIoT Systems
PB - Association for Computing Machinery, Inc
Y2 - 3 June 2024 through 7 June 2024
ER -