TY - JOUR
T1 - E2FIF
T2 - Push the Limit of Binarized Deep Imagery Super-Resolution Using End-to-End Full-Precision Information Flow
AU - Song, Chongxing
AU - Lang, Zhiqiang
AU - Wei, Wei
AU - Zhang, Lei
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Binary neural networks (BNNs) provide a promising solution for deploying parameter-intensive deep single image super-resolution (SISR) models onto real devices with limited storage and computational resources. To achieve performance comparable to their full-precision counterparts, most existing BNNs for SISR focus on compensating for the information loss incurred by binarizing weights and activations through better approximations to the binarized convolution. In this study, we revisit the difference between BNNs and their full-precision counterparts and argue that the key to good generalization performance of BNNs lies in preserving a complete full-precision information flow, along with an accurate gradient flow, through each binarized convolution layer. Inspired by this, we propose to introduce a full-precision skip connection, or a variant thereof, over each binarized convolution layer across the entire network, which increases the expressive capability of the forward pass and the accuracy of the back-propagated gradients, thus enhancing generalization performance. More importantly, this scheme can be applied to any existing BNN backbone for SISR without introducing any additional computational cost. To validate the efficacy of the proposed approach, we evaluate it with four different SISR backbones on four benchmark datasets and report clearly superior performance over existing BNNs and even some 4-bit competitors.
AB - Binary neural networks (BNNs) provide a promising solution for deploying parameter-intensive deep single image super-resolution (SISR) models onto real devices with limited storage and computational resources. To achieve performance comparable to their full-precision counterparts, most existing BNNs for SISR focus on compensating for the information loss incurred by binarizing weights and activations through better approximations to the binarized convolution. In this study, we revisit the difference between BNNs and their full-precision counterparts and argue that the key to good generalization performance of BNNs lies in preserving a complete full-precision information flow, along with an accurate gradient flow, through each binarized convolution layer. Inspired by this, we propose to introduce a full-precision skip connection, or a variant thereof, over each binarized convolution layer across the entire network, which increases the expressive capability of the forward pass and the accuracy of the back-propagated gradients, thus enhancing generalization performance. More importantly, this scheme can be applied to any existing BNN backbone for SISR without introducing any additional computational cost. To validate the efficacy of the proposed approach, we evaluate it with four different SISR backbones on four benchmark datasets and report clearly superior performance over existing BNNs and even some 4-bit competitors.
KW - Binary neural network
KW - image super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85173397044&partnerID=8YFLogxK
U2 - 10.1109/TIP.2023.3315540
DO - 10.1109/TIP.2023.3315540
M3 - Article
AN - SCOPUS:85173397044
SN - 1057-7149
VL - 32
SP - 5379
EP - 5393
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -