RUFP: Reinitializing unimportant filters for soft pruning

Ke Zhang, Guangzhe Liu, Meibo Lv

Research output: Contribution to journal › Article › peer-review

Abstract

Network pruning has become a popular method for reducing the storage and computational cost of deep neural networks. To minimize the performance loss, soft pruning retains a large model capacity by setting unimportant weights to zero while still allowing them to be updated. However, these weights are difficult to reactivate because of their small magnitudes and frequent resets. In this paper, we propose a novel method, termed RUFP, that reinitializes unimportant filters according to the most important one, which not only gives these filters a chance to be reactivated but also introduces more filter forms that may win the initialization lottery. Soft pruning is achieved by gradually increasing the reinitialization ratio and decreasing the values reassigned to the scale factors of the batch normalization layer. Benefiting from the large model capacity and the repeated reinitializations, the compressed model achieves superior performance after fine-tuning. Extensive experiments demonstrate the effectiveness of this method in improving the accuracy of the pruned model: the accuracy of ResNet-56 on CIFAR-10 is improved from 93.05% to 93.17% while reducing computation by 57.7% and parameters by 58.8%. Compared with the traditional soft pruning method and other state-of-the-art methods, RUFP delivers outstanding performance at various compression levels.
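
The abstract describes the procedure only at a high level. Below is a minimal PyTorch sketch of one reinitialization step for a Conv-BN pair, assuming the batch-normalization scale factor serves as the per-filter importance measure and that unimportant filters are re-drawn from the most important one with small added noise; the function name `rufp_reinit` and the parameters `reinit_ratio` and `bn_reassign` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def rufp_reinit(conv: nn.Conv2d, bn: nn.BatchNorm2d,
                reinit_ratio: float, bn_reassign: float) -> None:
    """One RUFP-style reinitialization step for a Conv-BN pair (sketch).

    Filters are ranked by the magnitude of their BN scale factor; the least
    important `reinit_ratio` fraction are reinitialized from the most
    important filter, and their BN factors are set to `bn_reassign`.
    """
    with torch.no_grad():
        importance = bn.weight.detach().abs()      # per-filter importance proxy
        order = torch.argsort(importance)          # ascending: least important first
        n_reinit = int(reinit_ratio * conv.out_channels)
        if n_reinit == 0:
            return
        unimportant = order[:n_reinit]
        most_important = order[-1]

        # Reinitialize unimportant filters from the most important one,
        # adding small noise so the new filters are not identical copies.
        template = conv.weight[most_important]
        for idx in unimportant:
            conv.weight[idx] = template + 0.01 * torch.randn_like(template)
            if conv.bias is not None:
                conv.bias[idx] = conv.bias[most_important]
            # Reassign a small BN scale so these filters start "soft-pruned"
            # but remain trainable and can still be reactivated.
            bn.weight[idx] = bn_reassign
            bn.bias[idx] = 0.0

# During training, such a step would be applied periodically, gradually
# increasing `reinit_ratio` toward the target pruning rate and decreasing
# `bn_reassign` toward zero, before removing the low-importance filters
# and fine-tuning the compressed model.
```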

Original language: English
Pages (from-to): 311-321
Number of pages: 11
Journal: Neurocomputing
Volume: 483
DOIs
State: Published - 28 Apr 2022

Keywords

  • Model compression
  • Network pruning
  • Soft pruning
  • Weight reinitialization
