26,430 Hits in 1.0 sec

Deep Learning Vector Quantization

Harm de Vries, Roland Memisevic, Aaron C. Courville
2016 The European Symposium on Artificial Neural Networks  
Motivated by the observation that such fooling examples might be caused by the extrapolating nature of the log-softmax, we propose to combine neural networks with Learning Vector Quantization (LVQ).  ...  Our proposed method, called Deep LVQ (DLVQ), achieves comparable performance on MNIST while being more robust against fooling and adversarial examples.  ...  Note that a linear projection f(x) = Ax boils down to Generalized Matrix Learning Vector Quantization (GMLVQ) [9, 10].  ... 
dblp:conf/esann/VriesMC16 fatcat:m5ugpcfl3bb4dobajs3sa65noe
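The GMLVQ connection mentioned in the snippet — a learned linear projection f(x) = Ax combined with nearest-prototype classification — can be sketched in a few lines. The prototypes, labels, and matrix below are toy values for illustration, not taken from the paper:

```python
import numpy as np

def gmlvq_distance(x, w, A):
    """Squared distance in the space projected by A:
    d(x, w) = ||A(x - w)||^2 = (x - w)^T A^T A (x - w)."""
    diff = A @ (x - w)
    return float(diff @ diff)

def classify(x, prototypes, labels, A):
    """Assign x the label of the nearest prototype under the learned metric."""
    dists = [gmlvq_distance(x, w, A) for w in prototypes]
    return labels[int(np.argmin(dists))]

# Toy example: two prototypes; with A = I this reduces to plain LVQ.
prototypes = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
labels = ["a", "b"]
A = np.eye(2)
print(classify(np.array([0.5, 0.2]), prototypes, labels, A))  # nearest is "a"
```

GMLVQ additionally learns A (and the prototypes) by gradient descent on a classification cost; DLVQ, per the snippet, generalizes the projection to a deep network.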

Pyramid Vector Quantization for Deep Learning [article]

Vincenzo Liguori
2017 arXiv   pre-print
This paper explores the use of Pyramid Vector Quantization (PVQ) to reduce the computational cost for a variety of neural networks (NNs) while, at the same time, compressing the weights that describe them  ...  This is based on the fact that the dot product between an N dimensional vector of real numbers and an N dimensional PVQ vector can be calculated with only additions and subtractions and one multiplication  ...  PYRAMID VECTOR QUANTIZATION A pyramid vector quantizer [8] (PVQ) is based on the cubic lattice points that lie on the surface of an N-dimensional pyramid.  ... 
arXiv:1704.02681v1 fatcat:dmq4nchxlvgmhfkzr2pjixx4t4
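The dot-product claim in the snippet can be illustrated directly: a PVQ codeword has integer entries whose absolute values sum to K, so the accumulation needs only additions and subtractions, with a single multiplication at the end for the gain. A minimal sketch (the vectors and gain are made up for illustration):

```python
def pvq_dot(x, code, gain):
    """Dot product of a real vector x with a scaled PVQ vector gain*code,
    where code has integer entries with sum(|code_i|) == K.
    The accumulation uses only additions and subtractions; the single
    multiplication is the final scaling by the gain."""
    acc = 0.0
    for xi, ci in zip(x, code):
        step = xi if ci > 0 else -xi   # add or subtract xi
        for _ in range(abs(ci)):       # |ci| repeated additions
            acc += step
        # ci == 0 contributes nothing
    return gain * acc                  # the one multiplication

x = [0.5, -1.0, 2.0]
code = [1, 0, -2]   # K = sum(|c_i|) = 3
print(pvq_dot(x, code, 0.25))  # 0.25 * (0.5 - 4.0) = -0.875
```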

Aggregated Learning: A Deep Learning Framework Based on Information-Bottleneck Vector Quantization [article]

Hongyu Guo, Yongyi Mao, Ali Al-Bashabsheh, Richong Zhang
2019 arXiv   pre-print
Such a deficiency then inspires us to develop a novel learning framework, AgrLearn, that corresponds to vector IB quantizers for learning with neural networks.  ...  It is known, from conventional rate-distortion theory, that scalar quantizers are inferior to vector (multi-sample) quantizers.  ...  Recognizing the theoretical inferiority of scalar quantizers to vector quantizers, we devise a novel neural-network learning framework, AgrLearn, that is equivalent to vector IB quantizers. We empirically  ... 
arXiv:1807.10251v3 fatcat:7opuatzfknfh5ld2qat5y4qzuq

Deep learning vector quantization for acoustic information retrieval

Zhen Huang, Chao Weng, Kehuang Li, You-Chi Cheng, Chin-Hui Lee
2014 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
We propose a novel deep learning vector quantization (DLVQ) algorithm based on deep neural networks (DNNs).  ...  Utilizing a strong representation power of this deep learning framework, with any vector quantization (VQ) method as an initializer, the proposed DLVQ technique is capable of learning a code-constrained  ...  We refer to this deep structured LVQ as deep learning vector quantization (DLVQ). The proposed DLVQ method is tested on an audio information retrieval task.  ... 
doi:10.1109/icassp.2014.6853817 dblp:conf/icassp/HuangWLCL14 fatcat:6ojc5ripwrfbxpc2wz4gphcfrm

Learning a Deep Vector Quantization Network for Image Compression

Xiaotong Lu, Heng Wang, Weisheng Dong, Fangfang Wu, Zhonglong Zheng, Guangming Shi
2019 IEEE Access  
INDEX TERMS Image compression, deep learning, vector quantization, fine tune.  ...  Specifically, a fully convolutional vector quantization network (VQNet) has been proposed to quantize the feature vectors of the image representation, where the representative vectors of the VQNet are  ...  Since the source codes of the other deep learning based methods are not available, we cannot obtain the running time of other deep learning based methods.  ... 
doi:10.1109/access.2019.2934731 fatcat:qnrr7qblvfbbdoyntkqz4ucb3u

Can Learning Vector Quantization be an Alternative to SVM and Deep Learning? - Recent Trends and Advanced Variants of Learning Vector Quantization for Classification Learning

Thomas Villmann, Andrea Bohnsack, Marika Kaden
2017 Journal of Artificial Intelligence and Soft Computing Research  
Learning vector quantization (LVQ) is one of the most powerful approaches for prototype based classification of vector data, intuitively introduced by Kohonen.  ...  Although deep learning architectures and support vector classifiers frequently achieve comparable or even better results, LVQ models are smart alternatives with low complexity and computational costs making  ...  CAN LEARNING VECTOR QUANTIZATION BE AN ALTERNATIVE TO . . .  ... 
doi:10.1515/jaiscr-2017-0005 fatcat:a3bnek56tfgy5e2umpb62d4vae

Multipose Face Recognition-Based Combined Adaptive Deep Learning Vector Quantization

Shahenda Sarhan, Aida A. Nasr, Mahmoud Y. Shams
2020 Computational Intelligence and Neuroscience  
In this paper, a combined adaptive deep learning vector quantization (CADLVQ) classifier is proposed.  ...  The proposed classifier compensates for the weakness of adaptive deep learning vector quantization classifiers by using the majority voting algorithm with the speeded-up robust feature (SURF) extractor.  ...  Combined Adaptive Deep Learning Vector Quantization: as mentioned in Section 1, here we introduce a system for multipose face recognition based on combined adaptive deep learning vector quantization.  ... 
doi:10.1155/2020/8821868 pmid:33029115 pmcid:PMC7532404 fatcat:4qtv5zifabg4nbcqtwcrjk7ozm

Accelerating SLIDE Deep Learning on Modern CPUs: Vectorization, Quantizations, Memory Optimizations, and More [article]

Shabnam Daghaghi, Nicholas Meisburger, Mengnan Zhao, Yong Wu, Sameh Gobriel, Charlie Tai, Anshumali Shrivastava
2021 arXiv   pre-print
Deep learning implementations on CPUs (Central Processing Units) are gaining more traction.  ...  Our work highlights several novel perspectives and opportunities for implementing randomized algorithms for deep learning on modern CPUs.  ...  We enable SLIDE to take advantage of vectorization, quantizations, and several memory optimizations in modern CPUs.  ... 
arXiv:2103.10891v1 fatcat:kvi4fszq4vampgsztwo52omc34

Vector Quantization of Deep Convolutional Neural Networks with Learned Codebook [article]

Siyuan Yang
2022 Thesis
In this thesis, we focus on compressing deep CNNs based on vector quantization techniques.  ...  In the second part of this thesis, we propose a novel vector quantization approach, which we refer to as Vector Quantization with Learned Codebook, or VQLC, for CNNs.  ...  Chapter 4 Vector Quantization with Learned Codebook In this chapter, we develop a novel vector quantization approach to the compression of convolutional neural networks, which we refer to as Vector Quantization  ... 
doi:10.20381/ruor-27521 fatcat:psly6vcw7ze5rgtqzevf2lvvwm

Accurate Deep Representation Quantization with Gradient Snapping Layer for Similarity Search [article]

Shicong Liu, Hongtao Lu
2016 arXiv   pre-print
Joint deep representation and vector quantization learning can be easily performed by alternately optimizing the quantization codebook and the deep neural network.  ...  However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task.  ...  We use Hamming ranking for hashing-based methods and asymmetric distance computation for vector quantization to retrieve the image rankings of the dataset.  ... 
arXiv:1610.09645v1 fatcat:txz33dcwabfcpmbdw4rjrma7ru
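The asymmetric distance computation (ADC) the snippet refers to keeps the query unquantized and compares it against database vectors stored as codeword indices, using one lookup table of distances per subspace. A generic sketch of the idea, not this paper's exact pipeline; the codebook shapes and sample data are illustrative:

```python
import numpy as np

def build_lut(query, codebooks):
    """Per-subspace table of squared distances from the (unquantized) query
    sub-vector to every codeword.  codebooks has shape (M, K, d_sub):
    M subspaces, K codewords each, d_sub dimensions per subspace."""
    M, K, d = codebooks.shape
    q = query.reshape(M, d)
    return ((codebooks - q[:, None, :]) ** 2).sum(axis=2)  # shape (M, K)

def adc_distance(lut, code):
    """Approximate squared distance to a database vector stored as the
    codeword indices `code`: just M table lookups and additions."""
    return float(sum(lut[m, c] for m, c in enumerate(code)))

rng = np.random.default_rng(0)
codebooks = rng.normal(size=(2, 4, 3))   # M=2 subspaces, K=4 codewords
query = rng.normal(size=6)
lut = build_lut(query, codebooks)
print(adc_distance(lut, [1, 3]))
```

Because the query side stays real-valued, ADC is typically more accurate than comparing two quantized vectors, at the same storage cost for the database.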

Generalized Product Quantization Network for Semi-supervised Image Retrieval [article]

Young Kyun Jang, Nam Ik Cho
2020 arXiv   pre-print
Image retrieval methods that employ hashing or vector quantization have achieved great success by taking advantage of deep learning.  ...  To resolve this issue, we propose the first quantization-based semi-supervised image retrieval scheme: Generalized Product Quantization (GPQ) network.  ...  Precisely, Deep Quantization Network (DQN) [2] simultaneously optimizes a pairwise cosine loss on semantic similarity pairs to learn feature representations and a product quantization loss to learn the  ... 
arXiv:2002.11281v3 fatcat:v27lks4kvffbrkviy7ubveuebi
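Differentiable quantization of the kind such networks rely on replaces the hard nearest-codeword assignment with a softmax-weighted average of codewords, so gradients reach both the features and the codebook. The sketch below is a generic soft-assignment formulation, not necessarily GPQ's exact scheme; `alpha` and the toy codebook are illustrative assumptions:

```python
import numpy as np

def soft_quantize(z, codebook, alpha=10.0):
    """Differentiable quantization: return a softmax-weighted convex
    combination of codewords instead of the single nearest one.
    Larger alpha makes the soft assignment approach hard assignment."""
    d2 = ((codebook - z) ** 2).sum(axis=1)  # squared distance to each codeword
    w = np.exp(-alpha * d2)
    w /= w.sum()                            # soft assignment weights
    return w @ codebook                     # convex combination of codewords

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z = np.array([0.9, 0.9])
print(soft_quantize(z, codebook, alpha=50.0))  # close to codeword [1, 1]
```

At inference time the soft assignment is usually replaced by the hard argmin, recovering a standard (product) quantizer.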

Supervised Contrastive Vehicle Quantization for Efficient Vehicle Retrieval

Yongbiao Chen, Kaicheng Guo, Fangxin Liu, Yusheng Huang, Zhengwei Qi
2022 Proceedings of the 2022 International Conference on Multimedia Retrieval  
Specifically, we integrate the product quantization process into deep supervised learning by designing a differentiable quantization network.  ...  In addition, we propose a novel supervised cross-quantized contrastive quantization (SCQC) loss for similarity-preserving learning, which is tailored for the asymmetric retrieval in the product quantization  ...  supervised deep feature learning process.  ... 
doi:10.1145/3512527.3531432 fatcat:bel34thof5ghljsc7dazosml7q

Generalized Product Quantization Network for Semi-Supervised Image Retrieval

Young Kyun Jang, Nam Ik Cho
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Image retrieval methods that employ hashing or vector quantization have achieved great success by taking advantage of deep learning.  ...  To resolve this issue, we propose the first quantization-based semi-supervised image retrieval scheme: Generalized Product Quantization (GPQ) network.  ...  Precisely, Deep Quantization Network (DQN) [2] simultaneously optimizes a pairwise cosine loss on semantic similarity pairs to learn feature representations and a product quantization loss to learn the  ... 
doi:10.1109/cvpr42600.2020.00348 dblp:conf/cvpr/JangC20 fatcat:c2sbj5vevngzfhfe5h6trnqrhe

Word2Bits - Quantized Word Vectors [article]

Maximilian Lam
2018 arXiv   pre-print
We show that high quality quantized word vectors using 1-2 bits per parameter can be learned by introducing a quantization function into Word2Vec.  ...  Our quantized word vectors not only take 8-16x less space than full precision (32 bit) word vectors but also outperform them on word similarity tasks and question answering.  ...  Introduction Word vectors are extensively used in deep learning models for natural language processing.  ... 
arXiv:1803.05651v3 fatcat:dmgsmqudhraz5huc4fzp2lvnuy
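A quantization function of the kind described — mapping each component to one bit while keeping a per-vector scale — can be sketched as follows. The scale choice here (mean absolute value) is an illustrative assumption, not Word2Bits' exact constant:

```python
import numpy as np

def quantize_1bit(v):
    """Quantize each component to one bit: keep only the sign, rescaled so
    the quantized vector preserves the original mean magnitude.  Only the
    sign pattern plus one scale per vector needs to be stored, giving the
    ~32x reduction from full-precision floats that motivates the approach."""
    scale = np.mean(np.abs(v))
    return np.where(v >= 0, scale, -scale)

v = np.array([0.2, -0.4, 0.1, -0.5])
print(quantize_1bit(v))  # [ 0.3 -0.3  0.3 -0.3]
```

In Word2Vec-style training, such a function is applied during the forward pass while full-precision parameters are kept for the gradient updates.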

Similarity Preserving Deep Asymmetric Quantization for Image Retrieval

Junjie Chen, William K. Cheung
2019 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-19)  
To alleviate this problem, we propose a novel model called Similarity Preserving Deep Asymmetric Quantization (SPDAQ) which can directly learn the compact binary codes and quantization codebooks for all  ...  Deep quantization models have been demonstrated to achieve state-of-the-art retrieval accuracy.  ...  Most of the deep quantization methods use a deep convolutional network as the backbone for image representation learning.  ... 
doi:10.1609/aaai.v33i01.33018183 fatcat:jepdc4y5uvclrhllnibtlaa4z4
Showing results 1 — 15 out of 26,430 results