ASKs: Convolution with any-shape kernels for efficient neural networks

Guangzhe Liu, Ke Zhang, Meibo Lv

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Despite their outstanding performance, deep convolutional neural networks (CNNs) are computationally expensive and contain a large number of redundant parameters, hindering their deployment on resource-constrained platforms. To address this issue, many model compression methods have been proposed. However, these methods mainly focus on pruning redundant parameters or designing efficient architectures; the redundancy within convolution kernels has rarely been investigated. In this paper, we find that the contributions of parameters at different locations in the traditional 3×3 kernels are not the same, and that this distribution varies considerably across layers. Motivated by this, we propose to use irregular kernels and present a novel approach to implementing convolution with any-shape kernels (ASKs) efficiently. The proposed ASKs are plug-and-play and can be readily embedded into existing CNNs, providing efficient modules for building compact CNNs. Experiments on benchmarks demonstrate the effectiveness of the proposed method. We improve the accuracy of VGG-16 on the CIFAR-10 dataset from 93.45% to 94.04% simply by replacing the regular 3×3 kernel with a cross-shaped kernel, which takes up only about 5/9 of the original storage and computing resources. Compared to state-of-the-art model compression methods, our ASKs achieve a better trade-off between accuracy and compression ratio.
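The cross-shaped kernel described in the abstract keeps the five "plus"-shaped positions of a 3×3 kernel and discards the four corners, hence the 5/9 storage and compute footprint. The following is a minimal NumPy sketch of this idea (not the paper's implementation, which realizes any-shape kernels efficiently inside CNN layers); the function name `cross_conv2d` and the explicit mask are illustrative assumptions.

```python
import numpy as np

# Cross-shaped mask: keep the 5 "plus" positions of a 3x3 kernel,
# zero out the 4 corners (5/9 of the original parameters remain).
CROSS_MASK = np.array([[0, 1, 0],
                       [1, 1, 1],
                       [0, 1, 0]], dtype=np.float32)

def cross_conv2d(image, kernel):
    """Valid 2D correlation with a 3x3 kernel restricted to a cross shape.

    `image` is a 2D array; `kernel` is a full 3x3 weight array whose
    corner entries are masked out and thus never contribute.
    """
    k = kernel * CROSS_MASK
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8)).astype(np.float32)
ker = rng.standard_normal((3, 3)).astype(np.float32)
print(cross_conv2d(img, ker).shape)  # valid convolution: (6, 6)
```

In a real network the same effect can be obtained by multiplying each 3×3 weight tensor by a fixed binary mask before the convolution, so any kernel shape (not only the cross) can be plugged into existing layers without changing the layer interface.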

Original language: English
Pages (from-to): 32-49
Number of pages: 18
Journal: Neurocomputing
Volume: 446
DOI
Publication status: Published - 25 Jul 2021
