HFOD: A hardware-friendly quantization method for object detection on embedded FPGAs

Fei Zhang, Ziyang Gao, Jiaming Huang, Peining Zhen, Hai Bao Chen, Jie Yan

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

There are two research hotspots for improving the performance and energy efficiency of the inference phase of convolutional neural networks (CNNs). The first is model compression; the second is hardware accelerator implementation. To overcome the incompatibility between algorithm optimization and hardware design, this paper proposes HFOD, a hardware-friendly quantization method for object detection on embedded FPGAs. We adopt a channel-wise, uniform quantization method to compress the YOLOv3-Tiny model. Weights are quantized to 2-bit and activations to 8-bit for all convolutional layers. To achieve a highly efficient implementation on FPGA, we fuse batch normalization (BN) layers into the quantization process. A flexible, efficient convolutional unit structure is designed to exploit the hardware-friendly quantization, and the accelerator is developed from an automatic synthesis template. Experimental results show that the proposed accelerator design extracts more computing performance from the same FPGA resources than regular 8-bit/16-bit fixed-point quantization. With 2-bit weights and 8-bit activations, the model size and the activation size of the proposed network are reduced by 16× and 4×, respectively, with a small accuracy loss. Our HFOD method achieves 90.6 GOPS on PYNQ-Z2 at 150 MHz, which is 1.4× faster and 2× better in power efficiency than a peer FPGA implementation on the same platform.
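The two compression steps described above (BN folding followed by channel-wise symmetric uniform quantization) can be sketched as follows. This is a minimal NumPy illustration under common conventions, not the authors' exact implementation; the function names, the symmetric clipping range, and the per-output-channel scale choice are assumptions.

```python
import numpy as np

def fuse_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding conv (a standard identity,
    assumed here; the paper's exact fusion formula is not given in the abstract).
    w: (out_ch, in_ch, kh, kw) conv weights, b: (out_ch,) conv bias."""
    s = gamma / np.sqrt(var + eps)
    w_fused = w * s[:, None, None, None]      # scale each output channel
    b_fused = (b - mean) * s + beta           # absorb BN shift into the bias
    return w_fused, b_fused

def quantize_channelwise(w, bits=2):
    """Symmetric per-output-channel uniform quantization.
    For bits=2 the integer codes are {-1, 0, +1}."""
    qmax = 2 ** (bits - 1) - 1                # 1 for 2-bit
    absmax = np.abs(w).reshape(w.shape[0], -1).max(axis=1)
    scale = np.where(absmax > 0, absmax / qmax, 1.0)  # one scale per channel
    s = scale[:, None, None, None]
    q = np.clip(np.round(w / s), -qmax, qmax).astype(np.int8)
    return q, scale                           # dequantize as q * scale
```

In this sketch, BN fusion is applied first so that quantization sees the weights the hardware will actually multiply; the 2-bit codes then reduce each weight to a sign-and-zero value, which is what makes the multipliers cheap on FPGA fabric.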

Original language: English
Journal: IEICE Electronics Express
Volume: 19
Issue number: 8
DOI
Publication status: Published - 25 Apr 2022
