
Learning Accurate Low-Bit Deep Neural Networks with Stochastic Quantization [article]

Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su
2017 arXiv   pre-print
This paper proposes the stochastic quantization (SQ) algorithm for learning accurate low-bit DNNs. The motivation stems from the following observation.  ...  Low-bit deep neural networks (DNNs) have become critical for embedded applications due to their low storage requirements and computing efficiency.  ...  Conclusion: In this paper, we propose a Stochastic Quantization (SQ) algorithm to learn accurate low-bit DNNs.  ... 
arXiv:1708.01001v1 fatcat:wqymsfctbrefhejwk7feceiz7a
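A minimal NumPy sketch of the SQ idea summarized above: each iteration, only a sampled subset of filters is quantized while the rest keep full precision, and the quantized fraction is annealed towards 100% over training. The uniform codebook, filter-wise partitioning, and inverse-error selection probabilities are assumptions of this sketch rather than the paper's exact choices.

```python
import numpy as np

def quantize(w, bits=2):
    # Uniform symmetric quantizer standing in for the paper's low-bit codebook.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(w / scale) * scale

def stochastic_quantize_filters(W, sq_ratio, bits=2, rng=np.random):
    # Quantize a randomly chosen subset of filters (rows of W); filters with a
    # larger relative quantization error are less likely to be picked, so they
    # temporarily stay at full precision. sq_ratio is annealed towards 1.0.
    Wq = quantize(W, bits)
    err = np.linalg.norm(W - Wq, axis=1) / (np.linalg.norm(W, axis=1) + 1e-12)
    p = 1.0 / (err + 1e-12)
    p = p / p.sum()                                   # selection probabilities
    n_sel = int(round(sq_ratio * W.shape[0]))
    sel = rng.choice(W.shape[0], size=n_sel, replace=False, p=p)
    W_mix = W.copy()
    W_mix[sel] = Wq[sel]                              # hybrid weights for fwd/bwd pass
    return W_mix
```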

NITI: Training Integer Neural Networks Using Integer-only Arithmetic [article]

Maolin Wang, Seyedramin Rasoulinezhad, Philip H.W. Leong, Hayden K.H. So
2020 arXiv   pre-print
While integer arithmetic has been widely adopted for improved performance in deep quantized neural network inference, training remains a task primarily executed using floating point arithmetic.  ...  In this paper, we present NITI, an efficient deep neural network training framework that stores all parameters and intermediate values as integers, and computes exclusively with integer arithmetic.  ...  As discussed in [6], the use of stochastic rounding with zero rounding bias is crucial to the success of training neural networks with low precision.  ... 
arXiv:2009.13108v1 fatcat:mvegf5mkbrabtkz4qfcbgdnpym
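The abstract's central claim, computing exclusively with integer arithmetic, can be illustrated with a small int8 linear layer that accumulates in int32 and rescales with an arithmetic shift. The per-layer `shift` parameter is an assumption standing in for NITI's actual scaling bookkeeping, which the snippet does not describe.

```python
import numpy as np

def int8_linear(x_q, w_q, shift):
    # Integer-only linear layer: int8 operands, exact int32 accumulation,
    # then an arithmetic right shift back into int8 range (no floats involved).
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
    out = acc >> shift                      # power-of-two rescale
    return np.clip(out, -128, 127).astype(np.int8)

# toy usage with random int8 activations and weights
rng = np.random.default_rng(0)
x = rng.integers(-64, 64, size=(4, 16), dtype=np.int8)
w = rng.integers(-64, 64, size=(16, 8), dtype=np.int8)
y = int8_linear(x, w, shift=7)
```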

Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations [article]

Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen
2021 arXiv   pre-print
This paper tackles the problem of training a deep convolutional neural network with both low-bitwidth weights and activations.  ...  Furthermore, we propose a second progressive quantization scheme which gradually decreases the bit-width from high-precision to low-precision during training.  ...  Output: a low-precision deep model M^k_low with weights W^k_low and activations quantized to k bits. Stage 1: Quantize W^K_low: for epoch = 1, ... do: for i = 1, ..., N do: randomly sample  ... 
arXiv:1908.04680v3 fatcat:vskpva375vg3xfqmuiqikugtpu
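A rough sketch of what a progressive bit-width schedule might look like, assuming a simple uniform quantizer over [0, 1] and an illustrative epoch-to-bit-width table; the paper's concrete schedule and quantizer are not reproduced here.

```python
import numpy as np

def quantize_uniform(x, bits):
    # Uniform quantizer for values assumed to lie in [0, 1] (e.g. clipped
    # activations); weights would first be mapped into this range.
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

# Illustrative (assumed) schedule: start near high precision and step down
# to the target bit-width as training progresses.
bit_schedule = {0: 8, 20: 4, 40: 3, 60: 2}     # epoch -> bit-width

def bits_for_epoch(epoch, schedule=bit_schedule):
    return schedule[max(e for e in schedule if e <= epoch)]
```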

Two-Step Quantization for Low-bit Neural Networks

Peisong Wang, Qinghao Hu, Yifan Zhang, Chunjie Zhang, Yang Liu, Jian Cheng
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Thus, how to train extremely-low-bit neural networks with high accuracy is of central importance.  ...  Every bit matters in the hardware design of quantized neural networks. However, extremely-low-bit representation usually causes a large accuracy drop.  ...  In [5, 4], it is also shown that the internal representations of deep neural networks can be turned into a low-bit format.  ... 
doi:10.1109/cvpr.2018.00460 dblp:conf/cvpr/WangH0ZL018 fatcat:lzmvfesnrfhzzbwhvemqz56vl4
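The two-step structure can be illustrated as follows: first fix an activation quantizer, then fit low-bit weight codes and a scale by alternating least squares. Both functions are stand-ins under stated assumptions (a fixed clip-and-round activation quantizer and a ternary codebook), not the paper's learned sparse quantizer or its non-linear least-squares formulation.

```python
import numpy as np

def quantize_activations(a, bits=2):
    # Step 1 stand-in: a fixed clip-and-round activation quantizer.
    levels = 2 ** bits - 1
    return np.round(np.clip(a, 0.0, 1.0) * levels) / levels

def quantize_weights_least_squares(w, codes=np.array([-1.0, 0.0, 1.0])):
    # Step 2 stand-in: alternate between assigning each weight its nearest
    # code and re-fitting the scale alpha, minimizing ||w - alpha * q||^2.
    alpha = np.abs(w).mean() + 1e-12
    q = np.zeros_like(w)
    for _ in range(10):
        q = codes[np.argmin(np.abs(w[..., None] / alpha - codes), axis=-1)]
        denom = float((q * q).sum())
        if denom == 0.0:
            break
        alpha = float((w * q).sum()) / denom
    return alpha, q
```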

Deep Task-Based Quantization

Nir Shlezinger, Yonina C. Eldar
2021 Entropy  
In this work we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools.  ...  By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it.  ...  Alternatively, a large body of deep-learning-related works consider deep neural network (DNN) model compression [29] [30] [31], where a DNN operates with quantized instead of continuous weights.  ... 
doi:10.3390/e23010104 pmid:33450996 fatcat:g5f5bbabdncadbv55qwkodvofu
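One common way to make a scalar analog-to-digital mapping trainable end to end is to replace the hard staircase with a smooth surrogate; the sum-of-shifted-tanh sketch below is offered only as an assumed illustration of that idea, with illustrative thresholds and spacing rather than anything learned.

```python
import numpy as np

def soft_scalar_quantizer(x, levels=4, width=10.0):
    # Differentiable surrogate for a multi-level scalar ADC: a sum of shifted
    # tanh steps that hardens into a staircase as `width` grows.
    thresholds = np.linspace(-1.0, 1.0, levels - 1)   # assumed decision thresholds
    step = 2.0 / (levels - 1)                         # assumed output spacing
    x = np.asarray(x, dtype=float)
    return 0.5 * step * np.sum(np.tanh(width * (x[..., None] - thresholds)), axis=-1)
```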

Binary Neural Networks: A Survey

Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, Nicu Sebe
2020 Pattern Recognition  
Binary neural networks, which largely save storage and computation, serve as a promising technique for deploying deep models on resource-limited devices.  ...  We also investigate other practical aspects of binary neural networks such as hardware-friendly design and training tricks.  ...  Introduction: With the continuous development of deep learning [1], deep neural networks have made significant progress in various fields, such as computer vision, natural language processing and speech  ... 
doi:10.1016/j.patcog.2020.107281 fatcat:p7ohjigozza5viejq6x7cyf6zi
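The basic binarization recipe surveyed in such works pairs a sign-based forward pass with a straight-through estimator in the backward pass; the XNOR-Net-style scaling by mean(|w|) below is one common convention, assumed here for concreteness.

```python
import numpy as np

def binarize_forward(w):
    # Forward pass: 1-bit weights via sign(), scaled by the mean absolute value.
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def ste_backward(grad_out, w, clip=1.0):
    # Backward pass: the straight-through estimator treats sign() as identity,
    # passing gradients only where |w| <= clip to keep training stable.
    return grad_out * (np.abs(w) <= clip)
```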

Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks [article]

Darryl D. Lin, Sachin S. Talathi
2016 arXiv   pre-print
It is known that training deep neural networks, in particular deep convolutional networks, with aggressively reduced numerical precision is challenging.  ...  One of the well-accepted solutions facilitating the training of low-precision fixed-point networks is stochastic rounding.  ...  In all of these works, stochastic rounding has been the key to improving the convergence properties of the training algorithm, which in turn has enabled training of deep networks with relatively small bit-widths  ... 
arXiv:1607.02241v1 fatcat:rxw2cbqqmbgfnc3lvha2gitcqi
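A minimal sketch of stochastic rounding onto a fixed-point grid, the ingredient this snippet highlights; the fractional-probability rule below is the standard unbiased formulation.

```python
import numpy as np

def stochastic_round_fixed_point(x, frac_bits=4, rng=np.random):
    # Round to a fixed-point grid with step 2**-frac_bits, rounding up with
    # probability equal to the fractional remainder. This keeps the rounding
    # unbiased (E[q(x)] == x), which is what lets small gradient updates
    # survive aggressive precision reduction.
    step = 2.0 ** -frac_bits
    scaled = np.asarray(x, dtype=float) / step
    floor = np.floor(scaled)
    round_up = rng.random(scaled.shape) < (scaled - floor)
    return (floor + round_up) * step
```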

Deep Signal Recovery with One-bit Quantization

Shahin Khobahi, Naveed Naimipour, Mojtaba Soltanalian, Yonina C. Eldar
2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
Namely, we propose a model-based machine learning method and unfold the iterations of an inference optimization algorithm into the layers of a deep neural network for one-bit signal recovery.  ...  Machine learning, and more specifically deep learning, have shown remarkable performance in sensing, communications, and inference.  ...  , in conjunction with the new optimization and learning methods, have paved the way for deep neural networks (DNNs) and machine learning-based models to prove their effectiveness in many engineering areas  ... 
doi:10.1109/icassp.2019.8683876 dblp:conf/icassp/KhobahiNSE19 fatcat:2h24c3dpwbbwvavtlddduzdczi
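Deep unfolding in general turns each iteration of an optimization algorithm into a network layer with trainable parameters. The sketch below unrolls a few gradient-style steps for one-bit measurements, using tanh() as a smooth surrogate for sign(); it is a generic assumed stand-in, not the paper's specific architecture.

```python
import numpy as np

def unfolded_one_bit_recovery(y_sign, A, step_sizes=(0.1, 0.1, 0.1, 0.1, 0.1)):
    # Recover x from one-bit measurements y_sign = sign(A x). Each "layer"
    # is one iteration whose step size would be a trainable parameter.
    x = np.zeros(A.shape[1])
    for mu in step_sizes:
        residual = y_sign - np.tanh(A @ x)   # smooth surrogate of sign(A x)
        x = x + mu * (A.T @ residual)        # gradient-style update
    return x
```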

Hardware-Aware Design for Edge Intelligence

Warren J. Gross, Brett H. Meyer, Arash Ardakani
2020 IEEE Open Journal of Circuits and Systems  
Index terms: Artificial intelligence, deep neural networks, hardware and systems, neural architecture search, quantization and pruning, stochastic computing, surveys and reviews.  ...  With the rapid growth of the number of devices connected to the Internet, there is a trend to move intelligent processing of the generated data with deep neural networks (DNNs) from cloud servers to the  ...  Since the deep learning revolution in 2012, deep neural networks (DNNs) have become the foundation of complex sensing and recognition tasks [4].  ... 
doi:10.1109/ojcas.2020.3047418 fatcat:d5u57awixzgl3au7fk5hh2gezu

A Gradually Distilled CNN for SAR Target Recognition

Rui Min, Hai Lan, Zongjie Cao, Zongyong Cui
2019 IEEE Access  
The proposed MCNN has only two layers, and it is compressed from a deep convolutional neural network (DCNN) with 18 layers by a novel knowledge distillation algorithm called gradual distillation.  ...  with the smaller network.  ...  Deep compression can reduce the model size by tens of times with little loss of accuracy. The quantization bit-width in deep compression is no less than 4 bits, and there is still room for reduction.  ... 
doi:10.1109/access.2019.2906564 fatcat:aqf7wfipm5a7nmx6xuz2mpur3u
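Gradual distillation builds on the standard soft-target distillation loss; a minimal version of that loss is sketched below, while the paper's stage-wise teacher schedule is not reproduced.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax, numerically stabilized.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soft-target KD loss: KL divergence between softened teacher and student
    # distributions, scaled by T^2 as is conventional.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T * T) * kl.mean()
```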

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks [article]

Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung
2020 arXiv   pre-print
The quantization of deep neural networks (QDNNs) has been actively studied for deployment in edge devices.  ...  SPEQ outperforms the existing quantization training methods in various tasks, such as image classification, question-answering, and transfer learning, without the need for cumbersome teacher networks.  ...  Related Works: Quantization of Deep Neural Networks. QDNNs have been studied for a long time.  ... 
arXiv:2009.14502v1 fatcat:5natlf6rjnh6hgqribruuqp6t4

Low-bit Quantization of Neural Networks for Efficient Inference [article]

Yoni Choukroun, Eli Kravchik, Fan Yang, Pavel Kisilev
2019 arXiv   pre-print
Recent machine learning methods use increasingly large deep neural networks to achieve state-of-the-art results in various tasks.  ...  One popular approach to address this challenge is to perform low-bit precision computations via neural network quantization.  ...  Related Work: Neural network acceleration has received increasing attention in the deep learning community, where the need for accurate yet fast and efficient frameworks is crucial for real-world applications  ... 
arXiv:1902.06822v2 fatcat:2nqou5phifhrjhfxpq6epcjhyu
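Post-training low-bit quantization of this kind typically searches, per tensor, for the clipping scale that minimizes the quantization MSE; the grid search below sketches that kernel under the assumption of symmetric uniform quantization, without the paper's per-channel and activation refinements.

```python
import numpy as np

def mse_optimal_scale(w, bits=4, n_grid=100):
    # Search a per-tensor clipping scale minimizing MSE between w and its
    # quantized reconstruction.
    qmax = 2 ** (bits - 1) - 1
    best_scale, best_err = None, np.inf
    for frac in np.linspace(0.2, 1.0, n_grid):
        scale = frac * np.abs(w).max() / qmax + 1e-12
        w_q = np.clip(np.round(w / scale), -qmax - 1, qmax) * scale
        err = np.mean((w - w_q) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale
```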

Relaxed Quantization for Discretized Neural Networks [article]

Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling
2018 arXiv   pre-print
Neural network quantization has become an important research area due to its great impact on the deployment of large models on resource-constrained devices.  ...  We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent.  ...  Discussion: We have introduced Relaxed Quantization (RQ), a powerful and versatile algorithm for learning low-bit neural networks using a uniform quantization scheme.  ... 
arXiv:1810.01875v1 fatcat:voyi2ajybbdtllrpifcsgv6n4a
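The sketch below illustrates the general idea of treating quantization as sampling from a relaxed categorical distribution over grid points, so that both the input and the grid can receive gradients. The distance-based logits and Gumbel-softmax relaxation are assumptions of this sketch; the paper derives its distribution from logistic noise, with stochastic rounding recovered as a special case.

```python
import numpy as np

def relaxed_quantize(x, grid, sigma=0.1, tau=0.5, rng=np.random):
    # Logits fall off with distance from x; a Gumbel-softmax sample over the
    # grid gives a differentiable (soft) quantized value.
    x = np.asarray(x, dtype=float)
    logits = -np.abs(x[..., None] - grid) / sigma
    z = (logits + rng.gumbel(size=logits.shape)) / tau
    z = z - z.max(axis=-1, keepdims=True)
    w = np.exp(z)
    w = w / w.sum(axis=-1, keepdims=True)
    return (w * grid).sum(axis=-1)
```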

Towards Unified INT8 Training for Convolutional Neural Network

Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Recently, low-bit (e.g., 8-bit) network quantization has been extensively studied to accelerate the inference.  ...  Besides inference, low-bit training with quantized gradients can further bring more considerable acceleration, since the backward process is often computation-intensive.  ...  For other neural networks, we use a cosine scheduler [1] with the initial learning rate set to 0.1. The α and β in learning rate scaling are set to 20 and 0.1, respectively.  ... 
doi:10.1109/cvpr42600.2020.00204 dblp:conf/cvpr/ZhuGYLWLYY20 fatcat:7ujbnvuumrbp5ogz7vgxrxukl4
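Two ingredients mentioned in the snippet, 8-bit gradient quantization and learning-rate scaling governed by α and β, are sketched below. The clipped-exponential scaling rule is an assumption consistent with the quoted α/β values, not a verified reproduction of the paper's exact formula.

```python
import numpy as np

def quantize_grad_int8(g):
    # Symmetric per-tensor 8-bit quantization of a gradient tensor.
    scale = np.abs(g).max() / 127.0 + 1e-12
    g_q = np.clip(np.round(g / scale), -127, 127)
    return g_q.astype(np.int8), scale

def lr_scale(deviation, alpha=20.0, beta=0.1):
    # Assumed scaling rule: shrink the step when quantized gradients deviate
    # strongly from their full-precision direction, floored at beta.
    return max(np.exp(-alpha * deviation), beta)
```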

Quantization of Deep Neural Networks for Accurate Edge Computing [article]

Wentao Chen, Hailong Qiu, Jian Zhuang, Chutong Zhang, Yu Hu, Qing Lu, Tianchen Wang, Yiyu Shi, Meiping Huang, Xiaowe Xu
2021 arXiv   pre-print
Deep neural networks (DNNs) have demonstrated their great potential in recent years, exceeding the performance of human experts in a wide range of applications.  ...  with 3.5x-6.4x memory reduction.  ...  Quantized neural networks, binarized neural networks, and XNOR-Net [29] reduced the weights to only 1 bit and the activations to 1-2 bits, resulting in a large reduction in memory and computation cost  ... 
arXiv:2104.12046v2 fatcat:dltil2m2yrgnbp6vgfrf46l6va
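The quoted 3.5x-6.4x memory reduction can be sanity-checked with back-of-envelope arithmetic on weight storage alone, as in the hypothetical helper below; real end-to-end savings sit below the ideal 32/bits ratio because some tensors stay at higher precision and quantization formats add codebook or index overhead.

```python
def weight_memory_mb(n_params, bits):
    # Back-of-envelope storage for the weights alone, ignoring format overhead.
    return n_params * bits / 8 / 1e6

# e.g. a 10M-parameter model: 40 MB at fp32, 5 MB at 4 bits, 1.25 MB at 1 bit
```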
Showing results 1-15 of 2,733