TY - GEN
T1 - Research on Compression Optimization Algorithm for Super-resolution Reconstruction Network
AU - Zhao, Xiaodong
AU - Fu, Yanfang
AU - Tian, Feng
AU - Zhang, Xunying
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
AB - Given the limited resources of embedded systems, this paper proposes a compression optimization algorithm based on pruning and quantization so that the computational requirements of a super-resolution reconstruction algorithm based on a Convolutional Neural Network (CNN) can be met. First, a multiple-regularization pruning optimization algorithm based on an attention module and a BatchNorm layer is proposed. Then, a coordinated optimization algorithm for INT8 training and quantization targeting an FPGA architecture is proposed. The performance of the pruning optimization algorithm was verified on the Super-Resolution CNN (SRCNN), the Fast Super-Resolution CNN (FSRCNN), and the Very Deep Super-Resolution CNN (VDSRCNN). For SRCNN, the performance of the quantization optimization algorithm was verified on the FPGA EC2 hardware simulation platform. The results show that the proposed compression optimization algorithm achieves a good balance between network accuracy and inference speed.
KW - FPGA
KW - INT8 quantization
KW - multiple regularization term pruning
KW - neural network optimization
KW - super-resolution reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85149118257&partnerID=8YFLogxK
U2 - 10.1109/IFEEA57288.2022.10038052
DO - 10.1109/IFEEA57288.2022.10038052
M3 - Conference contribution
AN - SCOPUS:85149118257
T3 - 2022 9th International Forum on Electrical Engineering and Automation, IFEEA 2022
SP - 1075
EP - 1079
BT - 2022 9th International Forum on Electrical Engineering and Automation, IFEEA 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th International Forum on Electrical Engineering and Automation, IFEEA 2022
Y2 - 4 November 2022 through 6 November 2022
ER -