
Deep Attentive Generative Adversarial Network for Photo-Realistic Image De-Quantization [article]

Yang Zhang, Changhui Hu, Xiaobo Lu
2020 arXiv   pre-print
This is the first attempt to apply the Generative Adversarial Network (GAN) framework to image de-quantization.  ...  Moreover, the series connection of sequential DenseResAtt modules forms a deep attentive network with superior discriminative learning ability for image de-quantization, modeling representative feature maps.  ...  In this work, we utilize the GAN framework to realize SR on image intensity resolution, which is orthogonal to the spatial resolution.  ...
arXiv:2004.03150v1 fatcat:plmkqkgjcja7patyq6mi66mxqi
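
The de-quantization task described here inverts bit-depth reduction of image intensities. As a point of reference, below is a minimal NumPy sketch of the forward degradation and a naive inverse (an illustration of the problem setup, not the authors' GAN model):

```python
import numpy as np

def quantize_intensity(img_u8: np.ndarray, bits: int) -> np.ndarray:
    """Reduce an 8-bit image to `bits` of intensity resolution."""
    step = 2 ** (8 - bits)                  # size of each intensity bin
    return (img_u8 // step).astype(np.uint8)

def naive_dequantize(img_q: np.ndarray, bits: int) -> np.ndarray:
    """Map quantized levels back to the 8-bit range (bin centers)."""
    step = 2 ** (8 - bits)
    return (img_q * step + step // 2).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in image
low = quantize_intensity(img, bits=4)    # 16 intensity levels
rec = naive_dequantize(low, bits=4)      # banded reconstruction a GAN would refine
```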

A Generalized Zero-Shot Quantization of Deep Convolutional Neural Networks via Learned Weights Statistics [article]

Prasen Kumar Sharma, Arun Abraham, Vikram Nelvoy Rajendiran
2021 arXiv   pre-print
Importantly, our work is the first attempt towards post-training zero-shot quantization of futuristic unnormalized deep neural networks.  ...  Quantizing the floating-point weights and activations of deep convolutional neural networks to fixed-point representation yields reduced memory footprints and inference time.  ...  Conclusions: In this work, we have presented a novel generalized zero-shot quantization (GZSQ) framework for the post-training quantization of deep CNNs that leverages only the pre-trained weights of the  ...
arXiv:2112.02834v1 fatcat:j5kbnygja5h33cp3rexc2xee2a
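
The key idea in the snippet, calibrating post-training quantization without any data by relying on statistics of the pre-trained weights, can be illustrated with a hypothetical range estimator. The function names and the three-sigma rule below are assumptions for illustration, not the GZSQ procedure:

```python
import numpy as np

def clip_range_from_weights(w: np.ndarray, n_sigma: float = 3.0):
    """Estimate a symmetric clipping range from weight statistics alone
    (no calibration data); the 3-sigma bound is an assumed heuristic."""
    mu, sigma = w.mean(), w.std()
    bound = abs(mu) + n_sigma * sigma
    return -bound, bound

def quantize_uint8(w: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Affine post-training quantization of weights to 8-bit integers."""
    scale = (hi - lo) / 255.0
    return np.clip(np.round((w - lo) / scale), 0, 255).astype(np.uint8)

w = np.random.randn(256, 128).astype(np.float32)  # pre-trained layer weights
lo, hi = clip_range_from_weights(w)
q = quantize_uint8(w, lo, hi)
```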

SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks [article]

Sungho Shin, Yoonho Boo, Wonyong Sung
2020 arXiv   pre-print
Designing a deep neural network (DNN) with good generalization capability is a complex process, especially when the weights are severely quantized.  ...  We present a new quantized neural network optimization approach, stochastic quantized weight averaging (SQWA), to design low-precision DNNs with good generalization capability using model averaging.  ...  Quantization of deep neural networks: the weight vector w of a deep neural network can be quantized to b bits using a symmetric uniform quantizer. [Figure 1: (a) conventional vs. (b) proposed quantization]  ...
arXiv:2002.00343v1 fatcat:4evbb2pwrbe7hk6v3pfvsfhzbe
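
The snippet names the b-bit symmetric uniform quantizer, which in its usual formulation is q(w) = Δ · clip(round(w/Δ), −(2^(b−1)−1), 2^(b−1)−1). A minimal sketch follows; the max-abs step size is an assumption, and SQWA's averaging of SGD iterates is not reproduced here:

```python
import numpy as np

def symmetric_uniform_quantize(w: np.ndarray, b: int) -> np.ndarray:
    """b-bit symmetric uniform quantization of a weight vector w."""
    qmax = 2 ** (b - 1) - 1            # e.g. 7 levels each side for b=4
    delta = np.abs(w).max() / qmax     # step size (max-abs scaling assumed)
    return delta * np.clip(np.round(w / delta), -qmax, qmax)

w = np.random.randn(1000).astype(np.float32)
w_q = symmetric_uniform_quantize(w, b=4)
print(np.unique(w_q).size)             # at most 2**4 - 1 distinct levels
```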

Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations [article]

Xinyu Zhang, Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das
2021 arXiv   pre-print
Quantization and pruning are core techniques used to reduce the inference costs of deep neural networks.  ...  weights and activations of deep neural networks.  ...
arXiv:2110.08271v2 fatcat:gpdguvghxne27cqrkofbubcst4
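
A common way to combine the two techniques named in the abstract is to apply a magnitude-pruning mask and a uniform fake-quantizer inside each forward pass while keeping full-precision master weights. The sketch below is a generic illustration of that pattern, not the authors' specific training schedule:

```python
import numpy as np

def prune_mask(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Keep the largest-magnitude weights; zero the rest."""
    thresh = np.quantile(np.abs(w), sparsity)
    return (np.abs(w) >= thresh).astype(w.dtype)

def fake_quantize(x: np.ndarray, b: int) -> np.ndarray:
    """Uniform quantize-dequantize, as used in quantization-aware training."""
    qmax = 2 ** (b - 1) - 1
    scale = np.abs(x).max() / qmax + 1e-12
    return scale * np.clip(np.round(x / scale), -qmax, qmax)

w = np.random.randn(512, 512).astype(np.float32)  # full-precision master weights
w_eff = fake_quantize(w * prune_mask(w, sparsity=0.9), b=8)  # forward-pass weights
```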

Differentiable Fine-grained Quantization for Deep Neural Network Compression [article]

Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei Huang, Feng Yan, Hai Li, Yiran Chen
2018 arXiv   pre-print
In this work, we propose a fine-grained quantization approach for deep neural network compression by relaxing the search space of quantization bitwidth from a discrete to a continuous domain.  ...  Neural networks have shown great performance in cognitive tasks. When deploying network models on mobile devices with limited resources, weight quantization has been widely adopted.  ...  Once the hyperparameter set α with the best trade-off is obtained, we retrain the quantized network and fine-tune the quantized weights to generate the final model.  ...
arXiv:1810.10351v3 fatcat:sfbjws4turd7fi2izdvgoquxky
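
The relaxation described here treats the bitwidth as a continuous variable so it can be searched by gradient descent; one common trick is to blend the quantizers at the two neighboring integer bitwidths. The interpolation below is a plausible instantiation of that idea, not necessarily the paper's exact formulation:

```python
import numpy as np

def uniform_quantize(x: np.ndarray, b: int) -> np.ndarray:
    qmax = 2 ** (b - 1) - 1
    scale = np.abs(x).max() / qmax + 1e-12
    return scale * np.clip(np.round(x / scale), -qmax, qmax)

def soft_bitwidth_quantize(x: np.ndarray, alpha: float) -> np.ndarray:
    """Continuous bitwidth alpha (e.g. 4.3) as a convex blend of the two
    neighboring integer-bit quantizers, so d(output)/d(alpha) exists."""
    lo, hi = int(np.floor(alpha)), int(np.ceil(alpha))
    t = alpha - lo
    return (1 - t) * uniform_quantize(x, lo) + t * uniform_quantize(x, hi)

x = np.random.randn(64).astype(np.float32)
y = soft_bitwidth_quantize(x, alpha=4.3)   # between 4-bit and 5-bit behavior
```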

Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning [article]

Lijun Zhao, Huihui Bai, Anhong Wang, Yao Zhao
2020 arXiv   pre-print
First, the MD multi-scale-dilated encoder network generates multiple description tensors, which are discretized by scalar quantizers, while these quantized tensors are decompressed by the MD cascaded-ResBlock  ...  Third, considering the variation in the image spatial distribution, each scalar quantizer is accompanied by an importance-indicator map to generate the MD tensors, rather than using direct quantization.  ...  Our contributions are listed below: (1) We design a general deep optimized MD coding framework based on artificial neural networks.  ...
arXiv:2001.03851v1 fatcat:dowcpdodmzalrapjdz25ntsv2q
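
The importance-indicator idea, allocating finer quantization where image content matters more, can be sketched as a per-position step size modulated by an importance map. This is a generic illustration; in the paper both the quantizers and the maps are learned:

```python
import numpy as np

def importance_weighted_quantize(t: np.ndarray, imp: np.ndarray,
                                 base_step: float = 0.5) -> np.ndarray:
    """Scalar-quantize tensor t with a finer step where importance is high.
    imp lies in (0, 1]; the step shrinks as importance grows."""
    step = base_step / np.maximum(imp, 1e-3)   # high importance -> small step
    return np.round(t / step) * step

t = np.random.randn(8, 8).astype(np.float32)    # latent description tensor
imp = np.random.rand(8, 8).astype(np.float32)   # importance-indicator map
t_q = importance_weighted_quantize(t, imp)
```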

Deploy Large-Scale Deep Neural Networks in Resource Constrained IoT Devices with Local Quantization Region [article]

Yi Yang, Andy Chen, Xiaoming Chen, Jiang Ji, Zhenyang Chen, Yan Dai
2018 arXiv   pre-print
This disjunction makes state-of-the-art deep learning algorithms, i.e. CNNs (Convolutional Neural Networks), incompatible with the IoT world.  ...  Implementing large-scale deep neural networks with high computational complexity on low-cost IoT devices may inevitably be constrained by limited computation resources, making the devices hard to respond  ...  [Figure: typical deep neural network structure]  ...
arXiv:1805.09473v1 fatcat:na7bq5lcozfrtgjgii2wsbvxua

Generalized Product Quantization Network for Semi-supervised Image Retrieval [article]

Young Kyun Jang, Nam Ik Cho
2020 arXiv   pre-print
To resolve this issue, we propose the first quantization-based semi-supervised image retrieval scheme: the Generalized Product Quantization (GPQ) network.  ...  Our solution increases the generalization capacity of the quantization network, which allows us to overcome previous limitations in the retrieval community.  ...  Conclusion: In this paper, we have proposed the first quantization-based deep semi-supervised image retrieval technique, named the Generalized Product Quantization (GPQ) network.  ...
arXiv:2002.11281v3 fatcat:v27lks4kvffbrkviy7ubveuebi
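
Product quantization, the building block named in the title, splits an embedding into M sub-vectors and encodes each against its own small codebook. A minimal encode/decode sketch follows; the codebooks here are random placeholders, whereas GPQ learns them end-to-end:

```python
import numpy as np

D, M, K = 128, 8, 256       # dim, sub-quantizers, codewords per sub-quantizer
codebooks = np.random.randn(M, K, D // M).astype(np.float32)  # learned in GPQ

def pq_encode(x: np.ndarray) -> np.ndarray:
    """Assign each sub-vector to its nearest codeword; returns M uint8 codes."""
    subs = x.reshape(M, D // M)
    dists = ((subs[:, None, :] - codebooks) ** 2).sum(-1)     # (M, K)
    return dists.argmin(-1).astype(np.uint8)

def pq_decode(codes: np.ndarray) -> np.ndarray:
    """Reconstruct the vector from its compact 8-byte code (M=8 here)."""
    return np.concatenate([codebooks[m, codes[m]] for m in range(M)])

x = np.random.randn(D).astype(np.float32)
codes = pq_encode(x)
x_hat = pq_decode(codes)
```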

Deep Visual-Semantic Quantization for Efficient Image Retrieval

Yue Cao, Mingsheng Long, Jianmin Wang, Shichen Liu
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We propose Deep Visual-Semantic Quantization (DVSQ), which is the first approach to learning deep quantization models from labeled image data as well as the semantic information underlying general text  ...  The main contribution lies in jointly learning deep visual-semantic embeddings and visual-semantic quantizers using carefully-designed hybrid networks and well-specified loss functions.  ...  To the best of our knowledge, Deep Quantization Network (DQN) [8] is the only prior work on deep learning to quantization.  ...
doi:10.1109/cvpr.2017.104 dblp:conf/cvpr/CaoL0L17 fatcat:jgzhlmcoeraqblejcdeoeovh6i
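
A joint objective of the kind the abstract sketches couples an embedding loss with the error of quantizing each embedding onto the codebooks. The composition below is only a schematic of that coupling; both terms and the trade-off weight are assumptions, not the paper's exact losses:

```python
import numpy as np

def quantization_error(z: np.ndarray, z_q: np.ndarray) -> float:
    """Squared error between an embedding and its quantized approximation."""
    return float(((z - z_q) ** 2).sum())

def joint_objective(sim_loss: float, z: np.ndarray, z_q: np.ndarray,
                    lam: float = 0.1) -> float:
    """Schematic joint objective: a visual-semantic similarity loss plus a
    weighted quantization term (lam is an assumed trade-off weight)."""
    return sim_loss + lam * quantization_error(z, z_q)

z = np.random.randn(300).astype(np.float32)   # visual-semantic embedding
z_q = np.round(z * 4) / 4                     # stand-in quantizer output
loss = joint_objective(sim_loss=0.8, z=z, z_q=z_q)
```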

CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization

Frederick Tung, Greg Mori
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Deep neural networks enable state-of-the-art accuracy on visual recognition tasks such as image classification and object detection.  ...  In this paper, we combine network pruning and weight quantization in a single learning framework that performs pruning and quantization jointly, and in parallel with fine-tuning.  ...
doi:10.1109/cvpr.2018.00821 dblp:conf/cvpr/TungM18 fatcat:ooq2o22m7badzn5ch2fik35j6i
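
In-parallel pruning-quantization of the kind described here keeps full-precision weights for fine-tuning while the deployed weights are simultaneously masked and snapped to a few shared values. The quantile bucketing below is an illustrative stand-in for the paper's learned partition points:

```python
import numpy as np

def prune_and_bucket(w: np.ndarray, sparsity: float, n_levels: int) -> np.ndarray:
    """Zero small weights, then snap survivors to per-bucket means
    (quantile buckets are an assumed stand-in for learned partitions)."""
    keep = np.abs(w) >= np.quantile(np.abs(w), sparsity)
    out = np.zeros_like(w)
    survivors = w[keep]
    edges = np.quantile(survivors, np.linspace(0, 1, n_levels + 1))
    idx = np.clip(np.searchsorted(edges, survivors) - 1, 0, n_levels - 1)
    levels = np.array([survivors[idx == i].mean() for i in range(n_levels)])
    out[keep] = levels[idx]
    return out

w = np.random.randn(4096).astype(np.float32)  # fp32 weights kept for fine-tuning
w_deploy = prune_and_bucket(w, sparsity=0.8, n_levels=8)  # sparse, 3-bit-codable
```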

Deep Multiple Description Coding by Learning Scalar Quantization [article]

Lijun Zhao, Huihui Bai, Anhong Wang, Yao Zhao
2019 arXiv   pre-print
In this paper, we propose a deep multiple description coding framework whose quantizers are adaptively learned via the minimization of a multiple description compressive loss.  ...  Secondly, two entropy estimation networks are learned to estimate the informative amounts of the quantized tensors, which can further supervise the learning of the multiple description encoder network to represent  ...  Our contributions are listed below: (1) a general deep multiple description coding framework is built upon convolutional neural networks; (2) a pair of scalar quantization operators is automatically learned  ...
arXiv:1811.01504v3 fatcat:27siiv6javantau7eto2dxwqv4
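
The entropy-estimation idea, supervising the encoder with an estimate of how many bits the quantized tensors will cost, can be approximated with a simple empirical-histogram entropy. The paper uses learned networks rather than the plug-in estimate sketched here:

```python
import numpy as np

def empirical_entropy_bits(q: np.ndarray) -> float:
    """Plug-in estimate of bits per symbol for a quantized integer tensor."""
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

t = np.random.randn(16, 16)
q = np.round(t / 0.5).astype(np.int32)                   # scalar quantization, step 0.5
print(f"~{empirical_entropy_bits(q):.2f} bits/symbol")   # rate proxy for training
```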

Hardware-friendly Deep Learning by Network Quantization and Binarization [article]

Haotong Qin
2021 arXiv   pre-print
Quantization is emerging as an efficient approach to promote hardware-friendly deep learning and run deep neural networks on resource-limited hardware.  ...  Our studies focus mainly on applying quantization to various architectures and scenes and pushing the limit of quantization to extremely compress and accelerate networks.  ...  Background: With the continuous development of deep learning, deep neural networks (DNNs) have made significant progress in various fields, such as computer vision, natural language processing, and speech  ...
arXiv:2112.00737v1 fatcat:trqqn4tncvghvh6adqh7q2xug4
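
Binarization pushes quantization to its 1-bit limit: weights reduce to sign bits with a scale, and training typically backpropagates through the sign with a straight-through estimator. The sketch follows the common BinaryNet/XNOR-style recipe, which may differ from the exact variants studied in this thesis:

```python
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    """1-bit weights: sign(w) scaled by the mean absolute value,
    so the binary tensor roughly preserves the weight magnitude."""
    alpha = np.abs(w).mean()           # per-tensor scale
    return alpha * np.sign(w)

# Straight-through estimator: the backward pass would treat d binarize / d w
# as identity inside [-1, 1], letting gradients reach the latent fp32 weights.
w = np.random.randn(128, 64).astype(np.float32)
w_bin = binarize(w)
assert np.unique(np.abs(w_bin[w_bin != 0])).size == 1  # single magnitude level
```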

Understanding Unconventional Preprocessors in Deep Convolutional Neural Networks for Face Identification [article]

Chollette C. Olisah, Lyndon Smith
2019 arXiv   pre-print
Deep networks have achieved huge successes in application domains like object and face recognition.  ...  The experiments show that the discriminative capability of the deep networks can be improved by preprocessing RGB data with the HE, full-based and plane-based quantization, rgbGELog, and YCbCr preprocessors  ...  preprocessing strategy in deep networks: the plane-based quantization preprocessing.  ...
arXiv:1904.00815v2 fatcat:bemcyxjyube7fmbtnqhcgdxpvy
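
Of the preprocessors listed, histogram equalization (HE) is easy to reproduce exactly; below is a self-contained NumPy version for a single 8-bit channel. The plane-based quantization and rgbGELog transforms are defined in the paper and not reproduced here:

```python
import numpy as np

def histogram_equalize(channel: np.ndarray) -> np.ndarray:
    """Classic HE for one uint8 channel: remap intensities through the
    normalized cumulative histogram to flatten the distribution."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[channel]

img = np.random.randint(0, 128, (32, 32), dtype=np.uint8)  # low-contrast input
eq = histogram_equalize(img)                               # spans 0..255 after HE
```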

Accurate Deep Representation Quantization with Gradient Snapping Layer for Similarity Search [article]

Shicong Liu, Hongtao Lu
2016 arXiv   pre-print
Joint deep representation and vector quantization learning can be easily performed by alternately optimizing the quantization codebook and the deep neural network.  ...  However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task.  ...  Network settings: We implement GSL on the open-source Caffe deep learning framework. We employ the AlexNet architecture (Krizhevsky, Sutskever, and Hinton 2012) and use pre-trained weights  ...
arXiv:1610.09645v1 fatcat:txz33dcwabfcpmbdw4rjrma7ru
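
The alternating scheme mentioned in the snippet fixes one component while updating the other: with network features frozen, refreshing the codebook is a k-means step; with the codebook frozen, the network trains toward its assigned codewords. Below is a schematic of the codebook half of that loop; the gradient snapping layer itself is the paper's contribution and is not reproduced:

```python
import numpy as np

def assign(z: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Nearest-codeword index for each feature vector (the quantization step)."""
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

def update_codebook(z: np.ndarray, codes: np.ndarray, K: int) -> np.ndarray:
    """k-means style refresh: each codeword becomes the mean of its cluster;
    empty clusters are re-seeded from a random feature."""
    new = np.empty((K, z.shape[1]), dtype=z.dtype)
    for k in range(K):
        members = z[codes == k]
        new[k] = members.mean(0) if len(members) else z[np.random.randint(len(z))]
    return new

z = np.random.randn(1000, 32).astype(np.float32)   # deep representations
codebook = z[np.random.choice(len(z), 64, replace=False)].copy()
for _ in range(5):                                 # alternate assign / update
    codebook = update_codebook(z, assign(z, codebook), K=64)
```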

Reduced Precision Strategies for Deep Learning: A High Energy Physics Generative Adversarial Network Use Case [article]

Florian Rehm, Sofia Vallecorsa, Vikram Saletore, Hans Pabst, Adel Chaibi, Valeriu Codreanu, Kerstin Borras, Dirk Krücker
2021 arXiv   pre-print
In this paper we analyse the effects of low-precision inference on a complex deep generative adversarial network model.  ...  A promising approach to making deep learning more efficient is to quantize the parameters of the neural networks to reduced precision.  ...  They belong to the group of unsupervised learning methods and are nowadays used for a wide variety of generative tasks. The whole GAN model consists of two deep neural networks.  ...
arXiv:2103.10142v1 fatcat:yl7ddmqdszfrphvoe25qzr4ipy
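
Reduced-precision inference in its simplest form casts trained fp32 parameters to a narrower float type and checks the induced deviation. A toy sketch of that first step follows; the paper studies far more aggressive integer precisions on a full GAN:

```python
import numpy as np

def to_reduced_precision(params: dict, dtype=np.float16) -> dict:
    """Cast all model parameters to a reduced-precision dtype for inference."""
    return {name: w.astype(dtype) for name, w in params.items()}

params = {"conv1": np.random.randn(64, 3, 3, 3).astype(np.float32)}
params16 = to_reduced_precision(params)   # halves the parameter memory footprint
err = np.abs(params["conv1"] - params16["conv1"].astype(np.float32)).max()
print(f"max rounding error: {err:.2e}")   # expected to be small for fp16
```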
Showing results 1 — 15 out of 30,841