Binarized Encoder-Decoder Network and Binarized Deconvolution Engine for Semantic Segmentation

Hyunwoo Kim, Jeonghoon Kim, Jungwook Choi, Jungkeol Lee, Yong Ho Song.
IEEE Access, vol. 9, 2021
Recently, semantic segmentation based on deep neural networks (DNNs) has attracted attention for its high accuracy, and many studies have been conducted on it. However, DNN-based segmentation studies have focused mainly on improving accuracy, greatly increasing the computational demand and memory footprint of the segmentation network. As a result, segmentation networks require substantial hardware resources and power, and they are difficult to deploy in environments where these are limited, such as embedded systems. In this paper, we propose a binarized encoder-decoder network (BEDN) and a binarized deconvolution engine (BiDE) that accelerates the network to realize low-power, real-time semantic segmentation. BiDE implements a binarized segmentation network in custom hardware, greatly reducing hardware resource usage and greatly increasing the throughput of the network implementation. The deconvolution used for upsampling in a segmentation network includes zero padding. To enable deconvolution in a binarized segmentation network, which cannot express zero, we introduce zero-aware binarized deconvolution, which skips padded zero activations, and a zero-aware batch-normalization-embedded binary activation that accounts for the zero-skipped convolution. BEDN, the binarized segmentation network proposed to be accelerated on BiDE, achieves acceptable accuracy while greatly reducing the computational and memory demands of the segmentation network through full binarization and a simple structure. BEDN has a network size of 0.21 MB, and its maximum memory usage is 1.38 MB. BiDE was implemented on a Xilinx ZU7EV field-programmable gate array (FPGA) operating at 187.5 MHz. It accelerated the proposed BEDN on 480×360 CamVid11 images at 25.89 frames per second (FPS), achieving 1.682 tera operations per second (TOPS) and 824 giga operations per second per watt (GOPS/W).

INDEX TERMS: Binarized neural network, binarized deconvolution, binarized segmentation network, zero-aware deconvolution, zero-skip deconvolution, neural network accelerator.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
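The zero-aware binarized deconvolution described in the abstract can be illustrated with a small model. The sketch below is a minimal 1D Python version (an illustration of the idea, not the authors' hardware implementation; all names are hypothetical): transposed convolution is treated as zero insertion plus convolution, and because a bipolar {+1, -1} encoding has no code for zero, the taps that would land on inserted or padded zeros are simply skipped, since a true zero contributes nothing to the accumulation.

```python
def zero_aware_binary_deconv1d(x_bin, w_bin, stride=2, pad=1):
    """1D transposed convolution over bipolar (+1/-1) activations.

    A standard deconvolution upsamples by inserting stride-1 zeros
    between inputs and zero-padding the borders before convolving.
    A bipolar encoding cannot represent 0, so rather than materialize
    the zeros we skip those taps entirely: a true zero would have
    contributed nothing to the accumulation anyway.
    """
    k = len(w_bin)
    up_len = (len(x_bin) - 1) * stride + 1   # length after zero insertion
    out_len = up_len + 2 * pad - k + 1       # "valid" conv on the padded input
    y = [0] * out_len
    for o in range(out_len):
        for t in range(k):
            u = o + t - pad                  # position in the upsampled input
            if u < 0 or u >= up_len or u % stride:
                continue                     # border pad or inserted zero: skip
            y[o] += x_bin[u // stride] * w_bin[t]  # bipolar multiply-accumulate
    return y
```

In hardware, the bipolar product reduces to an XNOR and the accumulation to a popcount, and the skip condition becomes a fixed address pattern rather than a runtime branch, which is what makes the zero-skipping cheap to implement.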
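The "batch normalization embedded binary activation" builds on a standard trick in binarized networks: batch normalization followed by the sign activation can be folded into a single threshold comparison, so no floating-point normalization is needed at inference time. The sketch below shows that standard folding (the paper's zero-aware variant would additionally adjust the threshold for outputs whose zero taps were skipped; that adjustment is not shown here, and all names are hypothetical).

```python
def fold_bn_into_threshold(gamma, beta, mu, sigma):
    """Fold batch norm + sign into one comparison.

    sign(gamma * (y - mu) / sigma + beta) is +1 exactly when
        y >= mu - beta * sigma / gamma   (gamma > 0)
        y <= mu - beta * sigma / gamma   (gamma < 0)
    so BN followed by binarization reduces to a threshold test,
    with the comparison direction flipped for negative gamma.
    """
    tau = mu - beta * sigma / gamma
    flip = gamma < 0
    return tau, flip

def binarize(y, tau, flip):
    """Apply the folded BN + sign activation to an accumulator value y."""
    hit = (y <= tau) if flip else (y >= tau)
    return 1 if hit else -1
```

Since the accumulator output of a binary convolution is an integer, the threshold can be precomputed offline and rounded once, leaving only an integer compare per output element.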
doi:10.1109/access.2020.3048375