17,080 Hits in 3.1 sec

Additive Quantization for Extreme Vector Compression

Artem Babenko, Victor Lempitsky
2014 IEEE Conference on Computer Vision and Pattern Recognition
We introduce a new compression scheme for high-dimensional vectors that approximates the vectors using sums of M codewords coming from M different codebooks.  ...  In the experiments, we demonstrate that the proposed compression can be used instead of or together with product quantization.  ...  Additive product quantization: The complexity of the Beam Search algorithm grows cubically with M. While for the extreme compression (e.g.  ...
doi:10.1109/cvpr.2014.124 dblp:conf/cvpr/BabenkoL14 fatcat:x4azagnsmnhevlcodyojn54euq
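
A minimal numerical sketch of the additive model described in the entry above, assuming M toy codebooks of K codewords each: a vector is stored as one index per codebook and reconstructed as the sum of the selected codewords. The greedy residual encoder below is only a stand-in for the paper's beam search (whose cost grows cubically with M); all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 16, 4, 32                      # vector dim, number of codebooks, codewords per codebook
codebooks = rng.normal(size=(M, K, D))   # toy codebooks; in practice these are learned from data

def aq_encode(x, codebooks):
    """Greedy residual assignment: pick one codeword per codebook to approximate x.
    (The paper uses beam search instead of this greedy pass.)"""
    codes, residual = [], x.copy()
    for m in range(codebooks.shape[0]):
        dists = np.linalg.norm(residual - codebooks[m], axis=1)   # distance to every codeword
        k = int(np.argmin(dists))
        codes.append(k)
        residual = residual - codebooks[m, k]
    return codes

def aq_decode(codes, codebooks):
    """A stored vector is reconstructed as the sum of M codewords, one from each codebook."""
    return sum(codebooks[m, k] for m, k in enumerate(codes))

x = rng.normal(size=D)
codes = aq_encode(x, codebooks)          # M * log2(K) bits per vector, here 4 * 5 = 20 bits
x_hat = aq_decode(codes, codebooks)
print(codes, np.linalg.norm(x - x_hat))
```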

Training with Quantization Noise for Extreme Model Compression [article]

Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Herve Jegou, Armand Joulin
2021 arXiv   pre-print
Controlling the amount of noise and its form allows for extreme compression rates while maintaining the performance of the original model.  ...  In this paper, we extend this approach to work beyond int8 fixed-point quantization with extreme compression methods where the approximations introduced by STE are severe, such as Product Quantization.  ...  For language modeling, we train for 10 additional epochs. For RoBERTa, we train for 25k additional updates.  ... 
arXiv:2004.07320v3 fatcat:lexa56jkmfffvgnyl3f7dfonkq
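
A hedged sketch of the training-with-quantization-noise idea from the entry above: on each forward pass only a random subset of weights is fake-quantized, and a straight-through estimator lets gradients reach every weight. The int8-style rounding below is a simple stand-in for the Product Quantization used for extreme compression in the paper; function and tensor names are illustrative.

```python
import torch

def quant_noise(weight: torch.Tensor, p: float = 0.5, bits: int = 8) -> torch.Tensor:
    """Fake-quantize a random fraction p of the entries of `weight` on this forward pass,
    with a straight-through estimator so gradients still flow to all weights."""
    scale = weight.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    q = torch.clamp(torch.round(weight / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
    mask = (torch.rand_like(weight) < p).float()        # which entries receive the noise this step
    # Forward sees (1 - mask) * weight + mask * q; backward sees the identity (STE).
    return weight + (mask * (q - weight)).detach()

w = torch.randn(64, 64, requires_grad=True)
loss = quant_noise(w, p=0.5).sum()
loss.backward()                                          # gradients reach all entries of w
print(w.grad.abs().mean())
```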

Extremely Low Bit-Rate Nearest Neighbor Search Using a Set Compression Tree

Relja Arandjelovic, Andrew Zisserman
2014 IEEE Transactions on Pattern Analysis and Machine Intelligence  
... and Iterative Quantization.  ...  The goal of this work is a data structure to support approximate nearest neighbor search on very large scale sets of vector descriptors.  ...  We are grateful for financial support from ERC grant VisRec no. 228180.  ...
doi:10.1109/tpami.2014.2339821 pmid:26353147 fatcat:qnt6vtbt35dddohgoryora7h6e

CosSGD: Communication-Efficient Federated Learning with a Simple Cosine-Based Quantization [article]

Yang He and Hui-Po Wang and Maximilian Zenk and Mario Fritz
2022 arXiv   pre-print
Further, our approach is highly suitable for federated learning problems since it has low computational complexity and requires only a little additional data to recover the compressed information.  ...  when quantization is applied in both directions to compress model weights and gradients.  ...  In addition to setting the exact b_θ for a vector, we also clip the top dimensions alternatively. Sometimes, there is one dimension dominating the gradient or weight vector.  ...
arXiv:2012.08241v2 fatcat:pes6bfrkxveohcouipxa4uwdga
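
The entry above describes low-bit quantization of weights and gradients with clipping of a dominating dimension. The sketch below is a generic per-vector b-bit min-max quantizer with percentile clipping, shown only to illustrate that kind of gradient compression; it is not the paper's cosine-based mapping.

```python
import numpy as np

def quantize_grad(v, bits=4, clip_pct=99.0):
    """Generic per-vector b-bit min-max quantizer with percentile clipping
    (an illustration only; the paper uses a cosine-based mapping instead)."""
    hi = np.percentile(np.abs(v), clip_pct)        # tame a single dominating dimension
    v_c = np.clip(v, -hi, hi)
    lo, step = v_c.min(), (v_c.max() - v_c.min()) / (2 ** bits - 1) + 1e-12
    codes = np.round((v_c - lo) / step).astype(np.uint8)
    return codes, lo, step                         # b-bit codes plus two floats to decode

def dequantize_grad(codes, lo, step):
    return lo + codes.astype(np.float32) * step

g = np.random.default_rng(1).normal(size=1000)     # toy gradient vector
codes, lo, step = quantize_grad(g, bits=4)
g_hat = dequantize_grad(codes, lo, step)
print(codes.nbytes, "bytes of codes; max reconstruction error", np.abs(g - g_hat).max())
```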

Compressing Deep Convolutional Networks using Vector Quantization [article]

Yunchao Gong and Liu Liu and Ming Yang and Lubomir Bourdev
2014 arXiv   pre-print
In this paper, we tackle this model storage issue by investigating information-theoretical vector quantization methods for compressing the parameters of CNNs.  ...  In particular, we have found that, in terms of compressing the most storage-demanding densely connected layers, vector quantization methods have a clear gain over existing matrix factorization methods.  ...  RQ works extremely poorly for such a task, which probably means there are few global structures in these weight vectors.  ...
arXiv:1412.6115v1 fatcat:qmfcwljfjjaubmfjw3mxgegn2y
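
A minimal sketch of vector-quantizing a dense layer's weights, assuming plain k-means over short sub-vectors of the weight matrix (the paper compares several variants, including k-means, product quantization, and residual quantization); the layer size and sub-vector length are arbitrary toy choices.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means; the centroids become a shared codebook for the weight sub-vectors."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # squared distances via the expansion ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2
        d = (X ** 2).sum(1, keepdims=True) - 2 * X @ C.T + (C ** 2).sum(1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(axis=0)
    return C, assign

# Toy "dense layer": 512x256 weights, split into sub-vectors of length 4.
W = np.random.default_rng(0).normal(size=(512, 256)).astype(np.float32)
sub = W.reshape(-1, 4)
C, codes = kmeans(sub, k=256)                    # 256 centroids -> 1 byte of index per 4 weights
W_hat = C[codes].reshape(W.shape)
ratio = W.nbytes / (codes.astype(np.uint8).nbytes + C.nbytes)
print(f"compression ~{ratio:.1f}x, relative error {np.linalg.norm(W - W_hat) / np.linalg.norm(W):.3f}")
```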

Feature Vector Compression Based on Least Error Quantization

Tomokazu Kawahara, Osamu Yamaguchi
2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
We propose a distinctive feature vector compression method based on least error quantization.  ...  In this paper, we prove that minimizing the quantization error between the compressed and original vectors is the most effective way to control performance in face recognition.  ...  Therefore, instead of a common quantizer, we efficiently compress each vector with a non-uniform quantizer optimized for that vector, although we have to add another table to each quantized vector.  ...
doi:10.1109/cvprw.2016.18 dblp:conf/cvpr/KawaharaY16 fatcat:5ox2aeusp5gyrfduhlyk2kvzxq
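
A rough illustration of the per-vector idea in the snippet above: each feature vector gets its own non-uniform level table (found here by simple 1-D Lloyd/k-means iterations over that vector's entries), which is the extra table that must be stored alongside the quantized indices. The fitting procedure is an assumption for illustration, not the paper's optimization.

```python
import numpy as np

def per_vector_quantize(x, n_levels=8, iters=30):
    """Fit a non-uniform scalar quantizer (1-D Lloyd/k-means) to a single vector.
    Returns the per-vector level table plus one index per entry."""
    levels = np.quantile(x, np.linspace(0, 1, n_levels))         # reasonable initialization
    for _ in range(iters):
        idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
        for j in range(n_levels):
            if np.any(idx == j):
                levels[j] = x[idx == j].mean()
    return levels, idx

x = np.random.default_rng(2).normal(size=512) ** 3               # deliberately non-uniform values
levels, idx = per_vector_quantize(x, n_levels=8)                 # 3 bits per entry + an 8-entry table
x_hat = levels[idx]
print(np.abs(x - x_hat).mean())
```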

SDR: Efficient Neural Re-ranking using Succinct Document Representation [article]

Nachshon Cohen, Amit Portnoy, Besnik Fetahu, Amir Ingber
2021 arXiv   pre-print
better compression rates for the same ranking quality.  ...  After this token encoding step, we further reduce the size of entire document representations using a modern quantization technique.  ...  Depending on the use case, such tradeoffs are highly desirable, allowing for extreme compression rates that minimize the costs of deploying Q&A systems.  ... 
arXiv:2110.02065v1 fatcat:objlfg2zlbgdphmnghimoxfyj4

Survey Paper on Fractal Image Compression using Block Truncation Coding Technique

Anshu Agrawal, Pushpraj Singh
2018 International Journal of Computer Applications  
BTC algorithm as well as a vector quantization method for the purpose of a multi-level technique for gray and color images.  ...  Designing an efficient compression scheme is more critical with the recent growth of computer applications. Modern applications, in addition to a high compression ratio, also demand efficient encoding  ...  [3], Block Truncation Coding (BTC) has been considered a highly efficient compression technique for many years.  ...
doi:10.5120/ijca2018917486 fatcat:2tjh73dqnnavhefnstskfmzwni
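
For reference, a minimal sketch of classic Block Truncation Coding on a single 4x4 grayscale block, the primitive the survey above builds on: keep the block mean and standard deviation plus a one-bit-per-pixel map, and reconstruct two levels that preserve those two moments.

```python
import numpy as np

def btc_encode(block):
    """Classic BTC of one grayscale block: store the mean, the standard deviation,
    and a 1-bit map of which pixels lie at or above the mean."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    return mean, std, bitmap

def btc_decode(mean, std, bitmap):
    """Reconstruct two levels a, b that preserve the block's first two moments."""
    n, q = bitmap.size, int(bitmap.sum())
    if q in (0, n):
        return np.full(bitmap.shape, mean)
    a = mean - std * np.sqrt(q / (n - q))        # value for pixels below the mean
    b = mean + std * np.sqrt((n - q) / q)        # value for pixels at/above the mean
    return np.where(bitmap, b, a)

block = np.random.default_rng(3).integers(0, 256, size=(4, 4)).astype(float)
rec = btc_decode(*btc_encode(block))
print(np.abs(block - rec).mean())                # cost: a 16-bit map plus two scalars per 4x4 block
```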

Compression of SAR images using KLT, VQ and mixture of principal components

R.D. Dony, S. Haykin
1997 IEE Proceedings - Radar Sonar and Navigation  
Two common methods for compressing images are linear block transform coding such as the Karhunen-Loève transform (KLT) and vector quantization (VQ).  ...  Like vector quantization, it partitions the input space into a number of non-overlapping regions, while each region is represented by a number of basis vectors in the manner of transform coding.  ...  Vector Quantization: At the other extreme, VQ is a purely discrete representation of the data.  ...
doi:10.1049/ip-rsn:19971175 fatcat:ganmocvyo5h23kxe2xu62epbhq
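
A small sketch of the transform-coding side of the comparison above: estimate the Karhunen-Loève (PCA) basis from image blocks, keep only the leading coefficients, and reconstruct. Quantization of the kept coefficients and the VQ / mixture-of-principal-components alternatives are omitted; the toy data is synthetic low-rank blocks.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy 8x8 "image blocks" (flattened) with low-dimensional structure plus a little noise.
blocks = rng.normal(size=(2000, 6)) @ rng.normal(size=(6, 64)) + 0.05 * rng.normal(size=(2000, 64))

# Karhunen-Loeve transform = eigenbasis of the block covariance (i.e. PCA).
mean = blocks.mean(axis=0)
cov = np.cov(blocks - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh returns eigenvalues in ascending order
basis = eigvecs[:, ::-1][:, :8]                 # keep the 8 leading principal components

coeffs = (blocks - mean) @ basis                # 64 -> 8 coefficients per block
blocks_hat = coeffs @ basis.T + mean            # reconstruct from the kept coefficients only
print(np.linalg.norm(blocks - blocks_hat) / np.linalg.norm(blocks))
```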

A SYSTEMATIC IMAGE COMPRESSION IN THE COMBINATION OF LINEAR VECTOR QUANTISATION AND DISCRETE WAVELET TRANSFORM

K. Kalaivani
2014 International Journal of Research in Engineering and Technology  
Vector quantisation (VQ) is a novel technique for image compression. VQ is a lossy compression scheme, used to compress images in both the spatial and frequency domains.  ...  The proposed algorithm uses the most effective and simple methods like self-organizing maps and linear vector quantization together with the discrete wavelet transform in order to reduce the loss of information  ...  Their reciprocal can be implemented using only integer addition and bit shifts, which are extremely fast operations.  ...
doi:10.15623/ijret.2014.0304044 fatcat:js377di43jeargbpmopx24t4li
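
The snippet above mentions wavelet filters whose reciprocals reduce to integer additions and bit shifts. Below is a toy one-level integer Haar (S-)transform in that spirit, followed by a crude coarsening of the detail band as a stand-in for the SOM/LVQ codebook step; it illustrates the DWT-plus-quantization combination, not the paper's exact pipeline.

```python
import numpy as np

def haar_forward(x):
    """One level of the integer Haar (S) transform: only additions and bit shifts."""
    a, b = x[0::2], x[1::2]
    s = (a + b) >> 1               # approximation (low-pass) band
    d = a - b                      # detail (high-pass) band
    return s, d

def haar_inverse(s, d):
    """Exact integer inverse of the S transform."""
    a = s + ((d + 1) >> 1)
    b = a - d
    out = np.empty(s.size * 2, dtype=s.dtype)
    out[0::2], out[1::2] = a, b
    return out

x = np.random.default_rng(5).integers(0, 256, size=64).astype(np.int32)
s, d = haar_forward(x)
d_q = (d >> 2) << 2                # drop the two least-significant detail bits (stand-in for SOM/LVQ)
x_hat = haar_inverse(s, d_q)
print(np.abs(x - x_hat).max())
```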

Combined compression and denoising of images using vector quantization

Kannan Panchapakesan, Ali Bilgin, David G. Sheppard, Michael W. Marcellin, Bobby R. Hunt, Andrew G. Tescher
1998 Applications of Digital Image Processing XXI  
optimal quantizer for the estimate.  ...  What we present in this paper is a simple but sub-optimal vector quantization (VQ) strategy that combines estimation and compression in one efficient step.  ...  This paper introduces a joint compression and denoising technique based on non-linear interpolative vector quantization (NLIVQ).  ...
doi:10.1117/12.323206 fatcat:fsjfdiqejbfy7ofd3bemd3oaxq
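
One plausible reading of the NLIVQ idea in the entry above, shown as a sketch under explicit assumptions: the encoder codebook is trained on noisy vectors while the decoder codebook stores the conditional mean of the corresponding clean vectors, so a single table lookup both decompresses and denoises. The synthetic data, codebook size, and Lloyd-style fitting are illustrative, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(2, 8))                                   # clean vectors lie near a 2-D subspace
clean = rng.normal(size=(4000, 2)) @ A
noisy = clean + 0.5 * rng.normal(size=clean.shape)

# Encoder codebook: Lloyd/k-means centroids of the NOISY training vectors.
K = 64
enc = noisy[rng.choice(len(noisy), K, replace=False)].copy()
for _ in range(15):
    d = (noisy ** 2).sum(1, keepdims=True) - 2 * noisy @ enc.T + (enc ** 2).sum(1)
    assign = d.argmin(1)
    for j in range(K):
        if np.any(assign == j):
            enc[j] = noisy[assign == j].mean(0)

# Decoder codebook: conditional mean of the CLEAN vectors in each encoder cell,
# so decoding a code both decompresses and denoises.
dec = np.array([clean[assign == j].mean(0) if np.any(assign == j) else enc[j] for j in range(K)])

test_clean = rng.normal(size=(500, 2)) @ A
test_noisy = test_clean + 0.5 * rng.normal(size=test_clean.shape)
codes = ((test_noisy ** 2).sum(1, keepdims=True) - 2 * test_noisy @ enc.T + (enc ** 2).sum(1)).argmin(1)
print("decoded MSE:", np.mean((dec[codes] - test_clean) ** 2),
      "noisy-input MSE:", np.mean((test_noisy - test_clean) ** 2))
```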

Bit-level Optimized Neural Network for Multi-antenna Channel Quantization [article]

Chao Lu, Wei Xu, Shi Jin, Kezhi Wang
2019 arXiv   pre-print
extraction and recovery from the perspective of bit-level quantization performance.  ...  Quantized channel state information (CSI) plays a critical role in precoding design which helps reap the merits of multiple-input multiple-output (MIMO) technology.  ...  CsiNet-Q(L) denotes the CsiNet version which is trained without considering the quantization. For testing CsiNet-Q(L), each entry of the compressed CSI vector is quantized to L bits.  ... 
arXiv:1909.10730v1 fatcat:rss7xwx7kjc7tkwdaxpbpueeji
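
The snippet above states that for CsiNet-Q(L) each entry of the compressed CSI vector is quantized to L bits; below is a minimal per-entry uniform quantizer illustrating just that step (the autoencoder itself is not sketched, and the min-max scaling is an assumption).

```python
import numpy as np

def quantize_entries(v, L=6):
    """Uniformly quantize each entry of a compressed vector to L bits."""
    lo, hi = v.min(), v.max()
    step = (hi - lo) / (2 ** L - 1) + 1e-12
    codes = np.round((v - lo) / step).astype(np.uint16)
    return codes, lo, step

def dequantize_entries(codes, lo, step):
    return lo + codes.astype(np.float32) * step

csi = np.random.default_rng(7).normal(size=32)        # toy compressed CSI vector
codes, lo, step = quantize_entries(csi, L=6)
csi_hat = dequantize_entries(codes, lo, step)
print(np.abs(csi - csi_hat).max(), "<= step/2 =", step / 2)
```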

Lossy parametric motion model compression for global motion temporal filtering

M. Tok, A. Krutz, A. Glantz, T. Sikora
2012 Picture Coding Symposium
For that, we propose a compression scheme for perspective motion models using transformation before quantization and temporal redundancy reduction and integrate this scheme into a video coding environment  ...  A critical issue is the transmission of accurate higher-order motion parameters with as few additional bits as possible to maximize the compression gain of the whole system.  ...  This compression scheme transforms a given model to global motion vectors at the corners of each frame. Subsequently, these vectors are quantized.  ...
doi:10.1109/pcs.2012.6213354 dblp:conf/pcs/TokKGS12 fatcat:o2r644wdyjgzxmtwxv6oovbfda
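
The snippet above says the scheme maps a perspective motion model to global motion vectors at the frame corners and then quantizes them. A small sketch of that mapping follows; the quarter-pel uniform quantization at the end is an assumed placeholder, not the paper's quantizer.

```python
import numpy as np

def model_to_corner_vectors(H, width, height):
    """Map the four frame corners through a perspective (homography) motion model and
    return their displacements, i.e. the global motion vectors to be transmitted."""
    corners = np.array([[0, 0], [width, 0], [0, height], [width, height]], dtype=float)
    homog = np.hstack([corners, np.ones((4, 1))])         # homogeneous coordinates
    mapped = (H @ homog.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]               # back to Cartesian coordinates
    return mapped - corners                               # corner motion vectors

H = np.array([[1.01, 0.002, 3.0],
              [-0.001, 0.99, -2.0],
              [1e-5, 2e-5, 1.0]])                         # toy perspective model
mv = model_to_corner_vectors(H, width=1280, height=720)
mv_q = np.round(mv * 4) / 4                               # quarter-pel uniform quantization (assumed)
print(mv, mv_q, sep="\n")
```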

Position-based Scaled Gradient for Model Quantization and Pruning [article]

Jangho Kim, KiYoon Yoo, Nojun Kwak
2020 arXiv   pre-print
Second, we empirically show that PSG acting as a regularizer to a weight vector is favorable for model compression domains such as quantization and pruning.  ...  The experimental results on CIFAR-10/100 and ImageNet datasets show the effectiveness of the proposed PSG in both domains of pruning and quantization even for extremely low bits.  ...  • We apply PSG in quantization and pruning and verify the effectiveness of PSG on CIFAR and ImageNet datasets. We also show that PSGD is very effective for extremely low bit quantization.  ... 
arXiv:2005.11035v4 fatcat:ftahgiooqnfcbe34jmqnzppc4u

An Efficient Coding Method for Teleconferencing Video and Confocal Microscopic Image Sequences

Vinay Arya, Ankush Mittal, Amit Pande, Ramesh C. Joshi
2008 Journal of Computing and Information Technology  
The algorithm uses a 3D vector quantization pyramidal codebook-based model with adaptive pyramidal codebook for compression.  ...  The adaptive vector quantization algorithm is used to train the codebook for optimal performance with time.  ...  The vector quantization and adaptive codebook procedures are explained in Section 2. 3D vector quantization used for compression and encoding of teleconferencing videos and confocal microscopic image sequences  ... 
doi:10.2498/cit.1000892 fatcat:wsknuacp4bfbll6igui5flhpze
Showing results 1 — 15 out of 17,080 results