
ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [article]

Zechun Liu and Zhiqiang Shen and Marios Savvides and Kwang-Ting Cheng
2020 arXiv   pre-print
Based on this important observation, we propose to generalize the traditional Sign and PReLU functions, denoted RSign and RPReLU respectively, to enable explicit learning  ...  Through extensive experiments and analysis, we observed that the performance of binary networks is sensitive to activation distribution variations.  ...  The proposed ReActNets significantly outperform other binary neural networks.  ...
arXiv:2003.03488v2 fatcat:6awrbfaqxbgmlmb3y5zktkx574
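The RSign and RPReLU generalizations named above have compact channel-wise definitions in the paper; a minimal PyTorch sketch, with initialization values chosen here for illustration:

```python
import torch
import torch.nn as nn

class RSign(nn.Module):
    """Sign with a channel-wise learnable threshold alpha (ReActNet's RSign).
    Note: in training, the zero gradient of sign() is normally replaced by a
    straight-through estimator; omitted here for brevity."""
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        return torch.sign(x - self.alpha)

class RPReLU(nn.Module):
    """PReLU with learnable input shift gamma, output shift zeta, and
    negative-branch slope beta (ReActNet's RPReLU)."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.zeta = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.beta = nn.Parameter(0.25 * torch.ones(1, channels, 1, 1))

    def forward(self, x):
        shifted = x - self.gamma
        return torch.where(shifted > 0, shifted, self.beta * shifted) + self.zeta
```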

ReActNet: Temporal Localization of Repetitive Activities in Real-World Videos [article]

Giorgos Karvounas, Iason Oikonomidis, Antonis Argyros
2019 arXiv   pre-print
These distances are computed on frame representations obtained with a convolutional neural network.  ...  On top of this representation, we design, implement and evaluate ReActNet, a lightweight convolutional neural network that classifies a given frame as belonging (or not) to a repetitive video segment.  ...  the defined input and target output, we propose, train, and evaluate ReActNet, a custom, lightweight convolutional neural network  ...  (BCE) [18] as the loss function for training  ...
arXiv:1910.06096v1 fatcat:vkp43gxzpzhyxjiuzfhzbbfyoe
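Per the last excerpt, this ReActNet (the repetition-localization network) is trained as a per-frame binary classifier with binary cross-entropy (BCE); a hedged PyTorch sketch in which the feature dimension and the linear head are stand-ins, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Stand-in head over precomputed CNN frame features; feat_dim is illustrative.
class FrameClassifier(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Linear(feat_dim, 1)  # one logit: repetitive segment or not

    def forward(self, frame_features):
        return self.head(frame_features).squeeze(-1)

model = FrameClassifier()
criterion = nn.BCEWithLogitsLoss()          # BCE loss, as in the snippet

features = torch.randn(8, 512)              # dummy features for 8 frames
labels = torch.randint(0, 2, (8,)).float()  # 1 = frame lies in a repetitive segment
loss = criterion(model(features), labels)
loss.backward()
```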

RepBNN: towards a precise Binary Neural Network with Enhanced Feature Map via Repeating [article]

Xulong Shi, Zhi Qi, Jiaxuan Cai, Keqi Fu, Yaru Zhao, Zan Li, Xuanyu Liu, Hao Liu
2022 arXiv   pre-print
Binary neural network (BNN) is an extreme quantization of convolutional neural networks (CNNs), with all features and weights mapped to just 1 bit.  ...  For example, the Top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, which is 1.47%  ...  Rep-AdamBNN-ReActNet-A achieves 71.342%, a state-of-the-art result  ...  The work in [19] shows that a binary neural network can achieve 32× parameter compression and a 58× speedup over its full-precision counterpart.
arXiv:2207.09049v1 fatcat:xs4ugwylizc53emubw4oxvo32y
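The 32× parameter compression quoted from [19] follows directly from storing 1-bit instead of 32-bit weights; a one-line sanity check:

```python
# Storage for a layer with one million weights: float32 vs. packed 1-bit.
n_weights = 1_000_000
fp32_bytes = n_weights * 4        # 4 bytes per float32 weight
binary_bytes = n_weights / 8      # 8 binary weights packed per byte
print(fp32_bytes / binary_bytes)  # 32.0 -> the 32x compression figure
```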

"Ghost" and Attention in Binary Neural Network

Ruimin Sun, Wanbing Zou, Yi Zhan
2022 IEEE Access  
As far as memory footprint and computational scale are concerned, lightweight Binary Neural Networks (BNNs) have great advantages on resource-limited platforms such as AIoT (Artificial Intelligence of Things)  ...  With these three approaches, our improved binarized network outperforms the other state-of-the-art methods.  ...  A Binary Neural Network (BNN) utilizes the most lightweight quantization, {−1, +1}, for its weight and activation values in each layer.
doi:10.1109/access.2022.3181192 fatcat:rxc4ymj6i5co7jxfp2dd35w344
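The {−1, +1} quantization described in the last excerpt is conventionally implemented as a sign function in the forward pass with a clipped straight-through estimator (STE) in the backward pass; a minimal sketch of that common pattern (not necessarily this paper's exact variant):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: map to {-1, +1}. Backward: pass the gradient through
    unchanged where |x| <= 1 (the common clipped straight-through estimator)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()

x = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(x).sum().backward()  # gradients flow only where |x| <= 1
```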

Binary Neural Network for Automated Visual Surface Defect Detection

Wenzhe Liu, Jiehua Zhang, Zhuo Su, Zhongzhu Zhou, Li Liu
2021 Sensors  
To address these issues, this paper introduces binary networks into the area of surface defect detection for the first time, because binary networks prohibitively constrain weights and activations  ...  As is well known, defects directly affect the lifespan and functioning of the machines in which they occur, and can even cause potentially catastrophic casualties.  ...  Generally, Bi-ShuffleNet is on par with both real-valued and binary networks in capability and stability.
doi:10.3390/s21206868 pmid:34696081 pmcid:PMC8541482 fatcat:62x5hiyiyrfsxkagfoabvoiv6m

Distribution-sensitive Information Retention for Accurate Binary Neural Network [article]

Haotong Qin, Xiangguo Zhang, Ruihao Gong, Yifu Ding, Yi Xu, Xianglong Liu
2021 arXiv   pre-print
The empirical study shows that binarization causes a great loss of information in the forward and backward propagation, which harms the performance of binary neural networks (BNNs), and the limited information  ...  Model binarization is an effective method of compressing neural networks and accelerating their inference process, which enables state-of-the-art models to run on resource-limited devices.  ...  binary neural network, respectively.  ...
arXiv:2109.12338v1 fatcat:imylinvr2rfcjlivfbloapm7wu

A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks

HyunJin Kim, Mohammed Alnemari, Nader Bagherzadeh
2022 PeerJ Computer Science  
This paper proposes a storage-efficient ensemble classification to overcome the low inference accuracy of binary neural networks (BNNs).  ...  With binarized ResNet-20 and ReActNet-10 on the CIFAR-100 dataset, the proposed scheme can achieve 56.74% and 70.29% Top-1 accuracies with 10 BNN classifiers, which enhances performance by 7.6% and 3.6%  ...  base classifiers that contain different binary weight files from one high-precision neural network.  ... 
doi:10.7717/peerj-cs.924 pmid:35494815 pmcid:PMC9044348 fatcat:f2fmevcx2bggtlkzozyvgczmmi
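The ensemble described here combines several binarized base classifiers derived from one high-precision network; a minimal sketch of logit averaging over K classifiers (the filter-sharing storage trick itself is omitted, and the stand-in modules are plain linear layers, not BNNs):

```python
import torch
import torch.nn as nn

def ensemble_predict(base_classifiers, x):
    """Average the logits of K base classifiers, then take the argmax."""
    logits = torch.stack([clf(x) for clf in base_classifiers], dim=0)
    return logits.mean(dim=0).argmax(dim=-1)

# Dummy stand-ins for 10 base classifiers over 100 classes (as on CIFAR-100).
classifiers = [nn.Linear(512, 100) for _ in range(10)]
preds = ensemble_predict(classifiers, torch.randn(4, 512))
print(preds.shape)  # torch.Size([4])
```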

Bimodal Distributed Binarized Neural Networks [article]

Tal Rozen, Moshe Kimhi, Brian Chmiel, Avi Mendelson, Chaim Baskin
2022 arXiv   pre-print
Binary Neural Networks (BNNs) are an extremely promising method for massively reducing deep neural networks' complexity and power consumption.  ...  The proposed method consists of a training scheme that we call Weight Distribution Mimicking (WDM), which efficiently imitates the full-precision network's weight distribution in its binary counterpart  ...  Similar to [6], we also train the network in two stages. First, we train with binary activations and FP weights, and then we train the fully binary network.
arXiv:2204.02004v1 fatcat:hbck33udlbfvrolw4nf76d24cu
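The two-stage schedule quoted in the last excerpt (binary activations with full-precision weights first, fully binary second) reduces to a simple training outline; a hedged sketch in which the `binarize_*` flags are hypothetical, not an API from the paper:

```python
def run_one_epoch(model, loader, optimizer, loss_fn):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

def train_two_stage(model, loader, optimizer, loss_fn, epochs_per_stage=60):
    # Stage 1: binary activations, full-precision (FP) weights.
    model.binarize_activations = True   # hypothetical flag
    model.binarize_weights = False      # hypothetical flag
    for _ in range(epochs_per_stage):
        run_one_epoch(model, loader, optimizer, loss_fn)

    # Stage 2: fully binary network (binary weights and activations).
    model.binarize_weights = True
    for _ in range(epochs_per_stage):
        run_one_epoch(model, loader, optimizer, loss_fn)
```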

Structured Binary Neural Networks for Image Recognition [article]

Bohan Zhuang, Chunhua Shen, Mingkui Tan, Peng Chen, Lingqiao Liu, Ian Reid
2022 arXiv   pre-print
We propose methods to train convolutional neural networks (CNNs) with both binarized weights and activations, leading to quantized models that are specifically friendly to mobile devices with limited power  ...  Furthermore, for the first time, we apply binary neural networks to object detection.  ...  Since we employ binary weights and binary activations, we directly compare to the previous state-of-the-art binary approaches, including standard binary neural networks  ...
arXiv:1909.09934v4 fatcat:j7nx2mkcrbbkdobfzwg5l7elwy

ReCU: Reviving the Dead Weights in Binary Neural Networks [article]

Zihan Xu, Mingbao Lin, Jianzhuang Liu, Jie Chen, Ling Shao, Yue Gao, Yonghong Tian, Rongrong Ji
2021 arXiv   pre-print
Binary neural networks (BNNs) have received increasing attention due to their superior reductions of computation and memory.  ...  By considering the "dead weights", our method offers not only faster BNN training, but also state-of-the-art performance on CIFAR-10 and ImageNet, compared with recent methods.  ...  In the extreme case of a 1-bit representation, a binary neural network (BNN) restricts the weights and activations to only two possible values, i.e., -1 and +1.  ... 
arXiv:2103.12369v2 fatcat:rukizlovdnhbjl7ybbzlchxyu4
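The "dead weights" of the title are weights stranded in the tails of the distribution, where sign() almost never flips during training; ReCU revives them by clamping weights back into an inner quantile range before binarization. A hedged sketch of that clamping idea (the quantile parameter here is illustrative, not the paper's exact formulation):

```python
import torch

def recu(w, tau=0.99):
    """Clamp weights into the [Q(1-tau), Q(tau)] quantile range so that
    tail ('dead') weights re-enter the region where binarization responds
    to updates. tau is an illustrative choice."""
    lo = torch.quantile(w, 1 - tau)
    hi = torch.quantile(w, tau)
    return w.clamp(lo, hi)

w = torch.randn(1000) * 3
print(recu(w).abs().max() <= w.abs().max())  # tensor(True): tails pulled in
```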

BoolNet: Minimizing The Energy Consumption of Binary Neural Networks [article]

Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, Yu Wang
2021 arXiv   pre-print
Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap between BNNs and their 32-bit counterparts.  ...  to hardware accelerators with limited memory, energy, and computing resources.  ...  They are closer to mixed-precision neural networks than to the highly efficient binary neural networks one might expect.
arXiv:2106.06991v1 fatcat:dgylasbvv5f7rfoerdsaepcr3m

Enabling Binary Neural Network Training on the Edge [article]

Erwei Wang, James J. Davis, Daniele Moro, Piotr Zielinski, Jia Jie Lim, Claudionor Coelho, Satrajit Chatterjee, Peter Y. K. Cheung, George A. Constantinides
2022 arXiv   pre-print
Binary neural networks are known to be promising candidates for on-device inference due to their extreme compute and memory savings over higher-precision alternatives.  ...  In this article, we demonstrate that the backward propagation operations needed for binary neural network training are strongly robust to quantization, thereby making on-the-edge learning with modern models  ...  Although binary neural networks (BNNs) feature weights and activations with just single-bit precision, many models are able to reach accuracy indistinguishable from that of their higher-precision  ...
arXiv:2102.04270v5 fatcat:typecdnifzborloclmpjp7jwo4

GAAF: Searching Activation Functions for Binary Neural Networks through Genetic Algorithm [article]

Yanfei Li, Tong Geng, Samuel Stein, Ang Li, Huimin Yu
2022 arXiv   pre-print
Binary neural networks (BNNs) show promising utilization in cost- and power-restricted domains such as edge devices and mobile systems.  ...  To close the accuracy gap, in this paper we propose to add a complementary activation function (AF) ahead of the sign-based binarization, and rely on the genetic algorithm (GA) to automatically search  ...  Binary Neural Networks (BNNs) [1], [2] binarize the full-precision inputs and weights of deep neural networks into binary values: {+1, −1}.
arXiv:2206.03291v1 fatcat:xpdqi3a345g3bp2vsya2sjnawm
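The search loop itself is a plain genetic algorithm over candidate activation functions inserted ahead of sign(); a toy illustration in which the candidate set and fitness function are placeholders, not GAAF's actual search space or evaluation:

```python
import random

CANDIDATES = ["identity", "tanh", "hardtanh", "sin"]  # placeholder AF pool

def fitness(af_name):
    # Placeholder: in GAAF this would be the validation accuracy of a BNN
    # trained with `af_name` inserted before the sign binarization.
    return random.random()

def genetic_search(pop_size=8, generations=5, mutate_p=0.2):
    population = [random.choice(CANDIDATES) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                 # selection
        children = [random.choice(parents) for _ in range(pop_size - len(parents))]
        children = [random.choice(CANDIDATES) if random.random() < mutate_p else c
                    for c in children]                    # mutation
        population = parents + children
    return max(population, key=fitness)

print(genetic_search())
```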

How to train accurate BNNs for embedded systems? [article]

Floran de Putter, Henk Corporaal
2022 arXiv   pre-print
A key enabler for deploying convolutional neural networks on resource-constrained embedded systems is the binary neural network (BNN).  ...  To reduce the accuracy gap between binary and full-precision networks, many repair methods have been proposed in the recent past, which we have classified and put into a single overview in this chapter.  ...  In all Teacher-Student approaches used in BNNs, the binary student network trains towards labels generated by the teacher network rather than towards the ground truth.
arXiv:2206.12322v1 fatcat:tchgrujclnhznk2z3xye6lf4nu
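The Teacher-Student setup described in the last excerpt is standard knowledge distillation, with the binary student trained only towards the teacher's outputs; a minimal sketch (the temperature value is illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student distributions.
    Per the snippet, no ground-truth term is included: the student trains
    towards the teacher's labels only."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

s, t = torch.randn(4, 10), torch.randn(4, 10)
print(distillation_loss(s, t))  # scalar loss on dummy logits
```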

Fully Quantized Image Super-Resolution Networks [article]

Hu Wang, Peng Chen, Bohan Zhuang, Chunhua Shen
2021 arXiv   pre-print
Experimental results show that our FQSR using low-bit quantization can achieve performance on par with the full-precision counterparts on five benchmark datasets and surpass state-of-the-art  ...  We further identify training obstacles faced by low-bit SR networks and propose two novel methods accordingly.  ...  Reactnet: Towards precise binary neural network with generalized activation functions. arXiv preprint arXiv:2003.03488, 2020.  ...
arXiv:2011.14265v2 fatcat:h7g4kh42kjbuplvupbimgjqqja
Showing results 1–15 out of 16