ASKs: Convolution with any-shape kernels for efficient neural networks

Guangzhe Liu, Ke Zhang, Meibo Lv

Research output: Contribution to journal › Article › peer-review


Abstract

Despite their outstanding performance, deep convolutional neural networks (CNNs) are computationally expensive and contain a large number of redundant parameters, hindering their deployment on resource-constrained platforms. To address this issue, many model compression methods have been proposed. However, these methods mainly focus on pruning redundant parameters or designing efficient architectures; the redundancy within convolution kernels themselves has rarely been investigated. In this paper, we find that the contributions of parameters at different locations in the traditional 3×3 kernel are not the same, and that this distribution varies considerably across layers. Motivated by this, we propose to use irregular kernels and present a novel approach to implementing convolution with any-shape kernels (ASKs) efficiently. The proposed ASKs are plug-and-play and can be readily embedded into existing CNNs, providing efficient modules for building compact CNNs. Experiments on benchmarks demonstrate the effectiveness of the proposed method. We improve the accuracy of VGG-16 on the CIFAR-10 dataset from 93.45% to 94.04% simply by replacing the regular 3×3 kernel with a cross-shaped kernel, which requires only about 5/9 of the original storage and computation. Compared to state-of-the-art model compression methods, our ASKs achieve a better trade-off between accuracy and compression ratio.
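
To make the cross-shaped kernel concrete, below is a minimal PyTorch sketch written for this summary, not taken from the paper: it emulates the kernel shape by masking the four corner weights of a standard 3×3 convolution. The class name CrossShapedConv2d and the masking approach are illustrative assumptions; masking demonstrates the shape but does not itself realize the storage and compute savings that the paper's efficient implementation provides.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CrossShapedConv2d(nn.Conv2d):
        """3x3 convolution restricted to a cross shape: only 5 of the 9
        kernel positions (center plus the four edge midpoints) are active."""

        def __init__(self, in_channels, out_channels, **kwargs):
            super().__init__(in_channels, out_channels, kernel_size=3,
                             padding=1, **kwargs)
            # Binary mask: 1 on the cross, 0 on the four corners.
            mask = torch.tensor([[0., 1., 0.],
                                 [1., 1., 1.],
                                 [0., 1., 0.]])
            self.register_buffer("mask", mask)

        def forward(self, x):
            # Masking at every forward pass zeroes the corner weights and,
            # via the chain rule, their gradients, keeping the corners inactive.
            return F.conv2d(x, self.weight * self.mask, self.bias,
                            self.stride, self.padding, self.dilation, self.groups)

Dropping the four corners leaves 5 of the 9 weights per kernel, which matches the roughly 5/9 storage and compute figure quoted in the abstract; swapping such a layer in for every regular 3×3 convolution of a network like VGG-16 is what the plug-and-play claim refers to.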

Original language: English
Pages (from-to): 32-49
Number of pages: 18
Journal: Neurocomputing
Volume: 446
DOIs
State: Published - 25 Jul 2021

Keywords

  • Any-shape kernels
  • Efficient neural networks
  • Irregular convolution
  • Model compression
