TY - GEN
T1 - ACP
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
AU - Zhang, Yuan
AU - Yuan, Yuan
AU - Wang, Qi
N1 - Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
AB - In recent years, deep convolutional neural networks have achieved impressive results on a wide range of tasks. However, these complex models often demand substantial computation and energy, making them difficult to deploy on power-constrained devices such as IoT systems, mobile phones, and embedded platforms. These challenges can be addressed through model compression techniques such as network pruning. In this paper, we propose an adaptive channel pruning module (ACPM) that automatically adjusts the pruning rate for each channel, pruning redundant channel parameters more efficiently while remaining robust across datasets and backbones. With a one-shot pruning strategy, model compression time is reduced significantly. Extensive experiments demonstrate that ACPM substantially improves both pruning rate and accuracy, achieving state-of-the-art results across a range of networks and benchmarks.
KW - channel pruning
KW - Efficient deep learning
KW - model compression
KW - neural network acceleration
UR - http://www.scopus.com/inward/record.url?scp=85134022413&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9747839
DO - 10.1109/ICASSP43922.2022.9747839
M3 - Conference contribution
AN - SCOPUS:85134022413
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 4488
EP - 4492
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 May 2022 through 27 May 2022
ER -
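
Note: the abstract above describes adaptive, per-channel pruning with a one-shot strategy but gives no implementation details. Below is a minimal, hypothetical sketch of one-shot channel pruning with a layer-adaptive keep threshold, written in PyTorch. It is not the paper's ACPM; the importance measure (filter L1 norm), the quantile-based threshold, and all names here are illustrative assumptions.

```python
# Illustrative sketch only: one-shot, magnitude-based channel selection with a
# layer-adaptive threshold. This is NOT the paper's ACPM; the importance proxy
# and the quantile rule are assumptions made for demonstration.
import torch
import torch.nn as nn


def channel_importance(conv: nn.Conv2d) -> torch.Tensor:
    # L1 norm of each output filter, used here as a simple importance proxy.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))


def adaptive_keep_mask(conv: nn.Conv2d, quantile: float = 0.3) -> torch.Tensor:
    # Keep channels whose importance exceeds a threshold derived from this
    # layer's own importance distribution, so the effective pruning rate
    # varies per layer rather than being fixed globally.
    scores = channel_importance(conv)
    threshold = torch.quantile(scores, quantile)
    return scores > threshold


if __name__ == "__main__":
    layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)
    mask = adaptive_keep_mask(layer)
    print(f"keeping {int(mask.sum())}/{mask.numel()} channels")
```

In a one-shot setting, such masks would be computed once for every convolutional layer and the corresponding channels removed before a single fine-tuning pass, in contrast to iterative prune-and-retrain cycles.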