3,726 Hits in 4.6 sec

Unified Signal Compression Using Generative Adversarial Networks [article]

Bowen Liu, Ang Cao, Hun-seok Kim
2019 arXiv   pre-print
To efficiently quantize the compressed signal, non-uniformly quantized optimal latent vectors are identified by iterative back-propagation, with ADMM optimization performed at each iteration. ... We propose a unified compression framework that uses generative adversarial networks (GANs) to compress image and speech signals. ... The gain of ADMM-based non-uniform quantization is also shown in the same figure. ...
arXiv:1912.03734v1 fatcat:iddflnglxrb53enimypzzhubpm
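The ADMM-based quantization the snippet mentions can be illustrated with a generic ADMM alternation that projects a weight vector onto a fixed codebook. This is a minimal sketch under simplifying assumptions (the data-fidelity term is just closeness to the original weights, and the codebook is given), not the paper's exact algorithm:

```python
import numpy as np

def admm_quantize(w, codebook, rho=1.0, n_iter=20):
    """ADMM alternation for quantization: x is the continuous variable,
    z its quantized copy (projected onto the codebook), u the scaled dual.
    Here the x-update has a closed form because the loss is ||x - w||^2;
    in a training setting it would be a gradient/back-propagation step."""
    x, z, u = w.copy(), w.copy(), np.zeros_like(w)

    def project(v):
        # Nearest-codeword projection onto the (non-uniform) codebook.
        return codebook[np.abs(v[:, None] - codebook[None, :]).argmin(axis=1)]

    for _ in range(n_iter):
        x = (w + rho * (z - u)) / (1.0 + rho)  # proximal x-update
        z = project(x + u)                     # projection onto quantized set
        u = u + x - z                          # dual ascent
    return z

w = np.array([0.9, -0.1, -1.2, 0.4])
cb = np.array([-1.0, 0.0, 1.0])
q = admm_quantize(w, cb)
```

Every entry of the returned vector lies on the codebook, while the dual variable accumulates the quantization residual across iterations.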

A Deep Learning Framework of Quantized Compressed Sensing for Wireless Neural Recording

Biao Sun, Hui Feng, Kefan Chen, Xinshan Zhu
2016 IEEE Access  
INDEX TERMS Wireless neural recording, quantized compressive sensing, non-uniform quantization, deep learning. ... In this paper, a deep learning framework of quantized CS, termed BW-NQ-DNN, is proposed, which consists of a binary measurement matrix, a non-uniform quantizer, and a non-iterative recovery solver. ... To the best of our knowledge, this is the first time that a deep neural network has been used for the task of non-uniform quantizer optimization. 3) A non-iterative recovery solver is learned for QCS, leading ...
doi:10.1109/access.2016.2604397 fatcat:lsdgockgtnhaze5xs4hbhq57je

Towards the Limit of Network Quantization [article]

Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
2017 arXiv   pre-print
Network quantization is one of the network compression techniques used to reduce the redundancy of deep neural networks. ... and consequently propose two solutions of ECSQ for network quantization, i.e., uniform quantization and an iterative solution similar to Lloyd's algorithm. ... A.2 EXPERIMENT RESULTS FOR UNIFORM QUANTIZATION We compare uniform quantization with non-weighted mean and uniform quantization with Hessian-weighted mean in Figure 3, which shows that uniform quantization ...
arXiv:1612.01543v2 fatcat:5tg2ycyxyrhizadbz2yytgd5l4
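The "iterative solution similar to Lloyd's algorithm" in the snippet refers to the classic alternation between nearest-codeword assignment and centroid update. A minimal scalar version (unweighted; the Hessian-weighted variant would use weighted centroids) might look like:

```python
import numpy as np

def lloyd_quantizer(weights, n_levels, n_iter=50):
    """Plain Lloyd iteration for a scalar non-uniform quantizer."""
    # Initialize the codebook at evenly spaced quantiles of the data.
    codebook = np.quantile(weights, np.linspace(0.0, 1.0, n_levels))
    for _ in range(n_iter):
        # Assignment step: map every weight to its nearest codeword.
        idx = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
        # Update step: move each codeword to the centroid of its cell.
        for k in range(n_levels):
            cell = weights[idx == k]
            if cell.size:
                codebook[k] = cell.mean()
    # Final assignment with the converged codebook.
    idx = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, codebook[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)
codebook, w_q = lloyd_quantizer(w, n_levels=16)
```

For bell-shaped weight distributions the converged codebook is denser near zero, which is exactly what distinguishes it from uniform quantization.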

Low-Complexity Vector Quantized Compressed Sensing via Deep Neural Networks [article]

Markus Leinonen, Marian Codreanu
2020 arXiv   pre-print
Simulation results show that the proposed non-iterative DNN-based QCS method achieves higher rate-distortion performance with lower algorithm complexity as compared to standard QCS methods, conducive to ... We propose a deep encoder-decoder architecture, consisting of an encoder deep neural network (DNN), a quantizer, and a decoder DNN, that realizes low-complexity vector quantization aiming at minimizing ... All the above methods consider non-quantized CS. In a non-CS setup [53], compressive autoencoders were proposed for lossy image compression. ...
arXiv:2005.08385v3 fatcat:hmt4ofci7fcpjajinpim2wvni4

Low-Complexity Vector Quantized Compressed Sensing via Deep Neural Networks

Markus Leinonen, Marian Codreanu
2020 IEEE Open Journal of the Communications Society  
Simulation results show that the proposed non-iterative DNN-based QCS method achieves higher rate-distortion performance with lower algorithm complexity as compared to standard QCS methods, conducive to ... We propose a deep encoder-decoder architecture, consisting of an encoder deep neural network (DNN), a quantizer, and a decoder DNN, that realizes low-complexity vector quantization aiming at minimizing ... All the above methods consider non-quantized CS. In a non-CS setup [53], compressive autoencoders were proposed for lossy image compression. ...
doi:10.1109/ojcoms.2020.3020131 fatcat:ewhlt7nnijafdpbjekqnzkh2oa

Learning a Single Tucker Decomposition Network for Lossy Image Compression with Multiple Bits-Per-Pixel Rates [article]

Jianrui Cai, Zisheng Cao, Lei Zhang
2018 arXiv   pre-print
Furthermore, an iterative non-uniform quantization scheme is presented to optimize the quantizer, and a coarse-to-fine training strategy is introduced to reconstruct the decompressed images. ... Lossy image compression (LIC), which aims to utilize inexact approximations to represent an image more compactly, is a classical problem in image processing. ...
arXiv:1807.03470v1 fatcat:ozqw3wnytfazdhvcl6flgwlrx4

Variable-Rate Deep Image Compression through Spatially-Adaptive Feature Transform [article]

Myungseo Song, Jinyoung Choi, Bohyung Han
2021 arXiv   pre-print
... image with variable rates. ... We propose a versatile deep image compression network based on Spatial Feature Transform (SFT, arXiv:1804.02815), which takes a source image and a corresponding quality map as inputs and produces a compressed ... Deep image compression models learn to minimize distortion between a pair of a source image and a reconstructed image while maximizing the likelihood of the quantized latent representation ...
arXiv:2108.09551v1 fatcat:xqsa3lqmvzdlfjuaxfqvrgyc6q

AI Enlightens Wireless Communication: Analyses, Solutions and Opportunities on CSI Feedback [article]

Han Xiao, Zhiqin Wang, Wenqiang Tian, Xiaofeng Liu, Wendong Liu, Shi Jin, Jia Shen, Zhi Zhang, Ning Yang
2021 arXiv   pre-print
Then the enhancing schemes for DL-based F-CSI feedback, including i) channel data analysis and preprocessing, ii) neural network design and iii) quantization enhancement, are elaborated. ... Uniform and non-uniform quantization use uniform and non-uniform quantization intervals, respectively. ... Since the distribution of element amplitudes in the feature vector is not uniform, non-uniform quantization such as μ-law and A-law can be applied, which can cope with the non-uniform distribution to ...
arXiv:2106.06759v2 fatcat:zbijlccjsjg3ledzz7qbbtqece
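The μ-law companding mentioned in the snippet makes a uniform quantizer effectively non-uniform: the signal is compressed through a logarithmic curve, quantized uniformly, then expanded. A sketch with the standard μ = 255 (the pairing with a simple mid-tread uniform quantizer is an illustrative choice, not the paper's exact scheme):

```python
import numpy as np

MU = 255.0  # standard mu-law companding parameter

def mu_law_compress(x):
    # Expands resolution near zero, where amplitudes cluster; x in [-1, 1].
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_expand(y):
    # Exact inverse of the compression curve.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def nonuniform_quantize(x, bits=8):
    """Uniform quantization in the companded domain, i.e. non-uniform in x."""
    levels = 2 ** bits
    y = mu_law_compress(x)
    y_q = np.round((y + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0
    return mu_law_expand(y_q)

x = np.array([0.0, 0.01, -0.5, 1.0])
xq = nonuniform_quantize(x, bits=8)
```

Small amplitudes are reproduced with far finer effective step size than a plain uniform quantizer of the same bit width would allow.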

DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression [article]

Enmao Diao, Jie Ding, Vahid Tarokh
2019 arXiv   pre-print
We propose a new architecture for distributed image compression from a group of distributed data sources. ... To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning. ... [2] replaced the non-differentiable quantization step with a continuous relaxation by adding uniform noise. [1], on the other hand, used a stochastic form of binarization. ...
arXiv:1903.09887v3 fatcat:pggenmvw65cvvinu5fo2eh4wmy
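The uniform-noise relaxation mentioned in the snippet replaces rounding with additive noise from U(-0.5, 0.5) during training, so the operation stays differentiable (identity gradient) while matching the marginal effect of rounding. A minimal sketch:

```python
import numpy as np

def quantize(z, training):
    """Differentiable rounding proxy for end-to-end compression training.

    Train time: add uniform noise in (-0.5, 0.5) instead of rounding,
    which keeps a well-defined (identity) gradient through this step.
    Test time: apply true round-to-nearest-integer quantization.
    """
    if training:
        return z + np.random.uniform(-0.5, 0.5, size=z.shape)
    return np.round(z)

z = np.array([1.2, -0.7, 0.49])
z_train = quantize(z, training=True)
z_test = quantize(z, training=False)
```

In a real autoencoder this sits between the encoder output and the entropy model; here it is isolated to show the train/test asymmetry.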

CLIP-Q: Deep Network Compression Learning by In-parallel Pruning-Quantization

Frederick Tung, Greg Mori
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Deep neural networks enable state-of-the-art accuracy on visual recognition tasks such as image classification and object detection. ... Our proposed CLIP-Q method (Compression Learning by In-Parallel Pruning-Quantization) compresses AlexNet by 51-fold, GoogLeNet by 10-fold, and ResNet-50 by 15-fold, while preserving the uncompressed network ... Network compression performance compared with state-of-the-art algorithms. ... This result extends the previous best compressed AlexNet result, Deep Compression + Weighted-Entropy Quantization, by 1.7 MB while obtaining ...
doi:10.1109/cvpr.2018.00821 dblp:conf/cvpr/TungM18 fatcat:ooq2o22m7badzn5ch2fik35j6i

Reducing the Model Order of Deep Neural Networks Using Information Theory [article]

Ming Tu, Visar Berisha, Yu Cao, Jae-sun Seo
2016 arXiv   pre-print
We first remove unimportant parameters and then use non-uniform fixed-point quantization to assign more bits to parameters with higher Fisher information estimates. ... In this paper, we propose a method to compress deep neural networks by using the Fisher information metric, which we estimate through a stochastic optimization method that keeps track of second-order information ... This shows that on this classification task, non-uniform quantization based on the Fisher information achieves the highest compression ratio compared to non-uniform quantization based on magnitude ranking ...
arXiv:1605.04859v1 fatcat:7vecjvztqbhnjazdtlalbbu6bi
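The bit-allocation idea in the snippet, giving more bits to parameters with higher Fisher information, can be caricatured as a proportional allocation on a log scale. The scoring function, the [1, 8]-bit clipping range, and the budget handling below are all illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def allocate_bits(fisher_info, total_bits):
    """Toy bit allocation: parameters (or parameter groups) with higher
    estimated Fisher information receive more quantization bits.
    Allocation is proportional on a log scale and clipped to [1, 8] bits,
    so the budget is only approximately respected."""
    score = np.log1p(fisher_info)              # compress the dynamic range
    raw = score / score.sum() * total_bits     # proportional split of budget
    return np.clip(np.round(raw), 1, 8).astype(int)

fi = np.array([0.1, 1.0, 10.0, 100.0])   # hypothetical Fisher estimates
bits = allocate_bits(fi, total_bits=16)
```

The allocation is monotone in the Fisher estimates: more informative parameters never get fewer bits than less informative ones.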

Utilizing Explainable AI for Quantization and Pruning of Deep Neural Networks [article]

Muhammad Sabih, Frank Hannig, Juergen Teich
2020 arXiv   pre-print
We use typical image classification datasets with common deep learning image classification models for evaluation. ... We use these methods for (1) pruning of DNNs, including structured and unstructured pruning of CNN filters as well as pruning of fully connected layer weights, and (2) non-uniform quantization ... This is an example of non-uniform quantization (with non-uniformly spaced quantization points), and the idea is also known as weight sharing [26]. Han et al. ...
arXiv:2008.09072v1 fatcat:r7ypi5tgrrdtxmetn3iyqufdxy

Efficient Weights Quantization of Convolutional Neural Networks Using Kernel Density Estimation based Non-uniform Quantizer

Sanghyun Seo, Juntae Kim
2019 Applied Sciences  
In this paper, we propose a kernel density estimation based non-uniform quantization methodology that can perform compression efficiently. ... Four-bit quantization experiments on the classification of the ImageNet dataset with various CNN architectures show that the proposed methodology can perform weight quantization efficiently in terms of ... However, recent deep learning models have a very large number of weights, so it is inefficient to perform non-uniform quantization on all the weights in the model. ...
doi:10.3390/app9122559 fatcat:lfbow6uym5f3lgv6yysgytooq4
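One plausible reading of a density-aware codebook is to place codewords at equal-probability-mass quantiles of a kernel density estimate of the weights, so dense weight regions get more levels. This is a sketch of that idea under assumed parameters (Gaussian kernel, fixed bandwidth), not the paper's exact procedure:

```python
import numpy as np

def kde_codebook(weights, n_levels, bandwidth=0.05, grid_size=512):
    """Place codewords so each carries roughly equal probability mass
    under a Gaussian-kernel density estimate of the weight distribution."""
    grid = np.linspace(weights.min(), weights.max(), grid_size)
    # Gaussian KDE evaluated on the grid (unnormalized is fine here,
    # since we only need the shape of the CDF).
    diffs = (grid[:, None] - weights[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs**2).sum(axis=1)
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    # Invert the empirical CDF at mid-bin probabilities.
    targets = (np.arange(n_levels) + 0.5) / n_levels
    return np.interp(targets, cdf, grid)

rng = np.random.default_rng(1)
w = rng.normal(size=2_000)
cb = kde_codebook(w, n_levels=16)
```

For roughly Gaussian weights the resulting codewords cluster near zero, i.e. the spacing in the middle of the codebook is finer than at the tails.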

Variable Rate Deep Image Compression With a Conditional Autoencoder [article]

Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
2019 arXiv   pre-print
In this paper, we propose a novel variable-rate learned image compression framework with a conditional autoencoder. ... In contrast, we train and deploy only one variable-rate image compression network implemented with a conditional autoencoder. ... In particular, non-linear transform coding designed with deep neural networks has advanced to outperform the classical image compression codecs painstakingly designed and optimized by domain experts, ...
arXiv:1909.04802v1 fatcat:baseugdguberbjutpowggql3r4

Learned Neural Iterative Decoding for Lossy Image Compression Systems [article]

Alexander G. Ororbia, Ankur Mali, Jian Wu, Scott O'Connell, David Miller, C. Lee Giles
2018 arXiv   pre-print
For lossy image compression systems, we develop an algorithm, iterative refinement, to improve the decoder's reconstruction compared to standard decoding techniques. ... We experiment with variants of our estimator and find that iterative refinement consistently creates lower-distortion images of higher perceptual quality compared to other approaches. ... We experimented with 6 different test sets: 1) the Kodak Lossless True Color Image Suite (Kodak) with 24 true-color 24-bit uncompressed images, 2) the image compression benchmark (CB 8-Bit) with 14 ...
arXiv:1803.05863v3 fatcat:elkklf37ejcn5exgoeldieggtq
Showing results 1 — 15 out of 3,726 results