Structured Bayesian Compression for Deep models in mobile enabled devices for connected healthcare
[article]
2019
arXiv
pre-print
Deep models, typically deep neural networks, have millions of parameters and analyze medical data accurately, yet in a time-consuming manner. ...
However, energy cost-effectiveness and computational efficiency are important prerequisites for developing and deploying mobile-enabled devices, the mainstream trend in connected healthcare. ...
OVERVIEW OF THE STRUCTURED BAYESIAN COMPRESSION FRAMEWORK Following the idea of sparse learning based on Bayesian learning, a Bayesian sparse learning method constructed with mixture sparsity inducing ...
arXiv:1902.05429v1
fatcat:j2jdl4y5gfaanpbmpp4m7nyjby
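For orientation, a generic sketch of the kind of group-wise scale-mixture (sparsity-inducing) prior that structured Bayesian compression methods build on, not the paper's specific construction:
\[
p(\mathbf{w}_g \mid z_g) = \mathcal{N}(\mathbf{w}_g;\, \mathbf{0},\, z_g^2 \mathbf{I}),
\qquad
p(\mathbf{w}_g) = \int \mathcal{N}(\mathbf{w}_g;\, \mathbf{0},\, z_g^2 \mathbf{I})\, p(z_g)\, dz_g .
\]
A heavy-tailed choice of p(z_g) makes the marginal prior sparsity-inducing at the level of whole groups (e.g. neurons or channels), so groups whose posterior scale concentrates near zero can be pruned after training.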
Neural Sparse Topical Coding
2018
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Topic models with sparsity enhancement have been proven to be effective at learning discriminative and coherent latent topics of short texts, which is critical to many scientific and engineering applications ...
We propose a novel sparsity-enhanced topic model, Neural Sparse Topical Coding (NSTC), based on a sparsity-enhanced topic model called Sparse Topical Coding (STC). ...
Acknowledgments This work is supported by the National Science Foundation of China, under grant No.61472291 and grant No.61272110. ...
doi:10.18653/v1/p18-1217
dblp:conf/acl/WangPXZZHT18
fatcat:6apsb767wvc2vjvjt6sjosnpd4
Supervised Bayesian sparse coding for classification
2014
2014 International Joint Conference on Neural Networks (IJCNN)
The sparse coding with Laplacian scale mixture prior is formulated as a weighted l1 minimization problem. ...
Classification of a test sample is done using the MAP estimate of the sparse codes. We have tested the model on different recognition tasks and demonstrated the effectiveness of the model. ...
as random variables and sparse coding with Laplacian scale mixture prior is formulated. ...
doi:10.1109/ijcnn.2014.6889402
dblp:conf/ijcnn/XuDS14
fatcat:ahaar4s6g5g2phut6cfhmdwr3i
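The "weighted l1 minimization" view of MAP sparse coding mentioned in this abstract can be sketched as follows; this is a minimal illustration with a generic ISTA solver and assumed argument names, not the authors' algorithm.

```python
import numpy as np

def weighted_l1_sparse_code(D, x, weights, lam=0.1, n_iter=200):
    """MAP sparse coding written as a weighted l1 problem:
        min_a 0.5*||x - D @ a||^2 + lam * sum_i weights[i] * |a[i]|
    solved with ISTA (proximal gradient). D: (d, k) dictionary, x: (d,)
    signal, weights: (k,) per-coefficient weights (e.g. scale estimates
    from a Laplacian-scale-mixture-style prior). Illustrative sketch only.
    """
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)               # gradient of the quadratic data term
        u = a - step * grad
        thr = step * lam * weights             # per-coefficient soft threshold
        a = np.sign(u) * np.maximum(np.abs(u) - thr, 0.0)
    return a
```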
Efficient Noisy Sound-Event Mixture Classification Using Adaptive-Sparse Complex-Valued Matrix Factorization and OvsO SVM
2020
Sensors
The traditional complex nonnegative matrix factorization (CMF) is extended by cooperation with the optimal adaptive L1 sparsity to decompose a noisy single-channel mixture. ...
The proposed adaptive L1 sparsity CMF algorithm encodes the spectra pattern and estimates the phase of the original signals in time-frequency representation. ...
aims to facilitate spectral dictionaries with adaptive sparse coding. ...
doi:10.3390/s20164368
pmid:32764362
pmcid:PMC7472059
fatcat:3jl5cmk6ezatrjshtwsrogzmaa
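A generic way to write a complex matrix factorization objective with an L1 sparsity term on the activations (a sketch for orientation; the paper's adaptive choice of the sparsity weight is not reproduced here):
\[
\min_{W \ge 0,\; H \ge 0,\; \phi}\;
\sum_{f,t} \Big| X_{ft} - \sum_{k} W_{fk}\, H_{kt}\, e^{j\phi_{kft}} \Big|^{2}
\;+\; \lambda \sum_{k,t} H_{kt},
\]
where X is the complex spectrogram of the noisy mixture, the columns of W are spectral dictionary atoms, H holds the activations, and φ the estimated phases.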
Learning Sparse Sentence Encoding without Supervision: An Exploration of Sparsity in Variational Autoencoders
[article]
2021
arXiv
pre-print
It has been long known that sparsity is an effective inductive bias for learning efficient representation of data in vectors with fixed dimensionality, and it has been explored in many areas of representation ...
Additionally, NLP is also lagging behind in terms of learning sparse representations of large units of text, e.g., sentences. ...
have a negative correlation, while giving up task performance and having sparse codes helps with the analysis of the representations, (III) presence/absence of task related signal in the sparsity codes ...
arXiv:2009.12421v2
fatcat:4ym6vjyfnnb7vop322nfrn4ogu
A Sparsity-promoting Dictionary Model for Variational Autoencoders
[article]
2022
arXiv
pre-print
In this paper, we propose a simple yet effective methodology to structure the latent space via a sparsity-promoting dictionary model, which assumes that each latent code can be written as a sparse linear ...
Experiments on speech generative modeling demonstrate the advantage of the proposed approach over competing techniques, since it promotes sparsity while not deteriorating the output speech quality. ...
Acknowledgements Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities ...
arXiv:2203.15758v1
fatcat:ptxrp3uuirguxitythzwxyajuy
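A generic statement of the idea in this abstract (a sketch of the general construction, not the paper's exact model): each latent code is a sparse linear combination of learned dictionary atoms,
\[
z_n = D\,\alpha_n, \qquad p(\alpha_n) \propto \exp\big(-\lambda \lVert \alpha_n \rVert_1\big),
\]
so the columns of D structure the latent space and the L1-type prior keeps each coefficient vector α_n sparse.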
Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning
[article]
2022
arXiv
pre-print
Derived from the perspective of sparse dictionary learning and mixture models, MixMate comprises several auto-encoders, each tasked with reconstructing data in a distinct cluster, while enforcing sparsity ...
Many of these networks require a large number of parameters and suffer from a lack of interpretability, due to the black-box nature of the auto-encoders. ...
It is derived from a mixture of sparse dictionary learning models, inspired by the wealth of evidence for the sparsity of natural images with respect to suitable dictionaries [12, 13]. ...
arXiv:2110.04683v2
fatcat:a6zkzbgm6bctrpwxbwsvqhsy34
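The cluster-assignment principle described in this excerpt (several auto-encoders, each responsible for one cluster) can be illustrated with a minimal sketch; the callables and names below are assumptions, not the MixMate implementation.

```python
import numpy as np

def assign_clusters(x_batch, autoencoders):
    """Assign each sample to the auto-encoder that reconstructs it best.
    `autoencoders` is assumed to be a list of callables x -> x_hat operating
    on a (N, d) batch; this illustrates the mixture-of-auto-encoders idea only.
    """
    errs = np.stack(
        [np.sum((x_batch - ae(x_batch)) ** 2, axis=1) for ae in autoencoders],
        axis=1,                                # (N, K) reconstruction errors
    )
    return np.argmin(errs, axis=1)             # hard cluster label per sample
```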
A Probabilistic Model-Based Method with Nonlocal Filtering for Robust Magnetic Resonance Imaging Reconstruction
2020
IEEE Access
Sparse coefficients of similar packed patches are modeled with an LSM distribution to exploit the nonlocal self-similarity prior of the MR image, and a maximum a posteriori estimation problem for sparse coding ...
It is shown that both hidden scale parameters i.e. variances of sparse coefficients and location parameters can be jointly estimated along with the unknown sparse coefficients via the method of alternating ...
The distribution over α_i is therefore a continuous mixture of Laplacian distributions with different scales. ...
doi:10.1109/access.2020.2991442
fatcat:rcz7t7okmbfydbhqj3ozhb6vja
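The Laplacian scale mixture (LSM) prior referred to in this excerpt has the generic form (a standard formulation given for context, not quoted from the paper):
\[
p(\alpha_i) = \int_0^{\infty} \mathrm{Laplace}(\alpha_i;\, 0,\, s_i)\, p(s_i)\, ds_i,
\qquad
\mathrm{Laplace}(\alpha;\, 0,\, s) = \frac{1}{2s}\exp\!\Big(-\frac{|\alpha|}{s}\Big),
\]
so each sparse coefficient α_i is Laplacian given its hidden scale s_i, and marginally follows the continuous mixture of Laplacians that the excerpt describes.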
Disentangled VAE Representations for Multi-Aspect and Missing Data
[article]
2018
arXiv
pre-print
For example, sampling from the distribution of English sentences conditioned on a given French sentence or sampling audio waveforms conditioned on a given piece of text. ...
missing observations, and with a prior that encourages disentanglement between the groups and the latent dimensions. ...
The Appendix of [4] notes that the JMVAE model is equivalent to a uniform mixture distribution over the individual approximate posteriors: q(z|x) = (1/|O|) Σ_{g∈O} q_g(z|x^(g)). ...
arXiv:1806.09060v1
fatcat:bi4hckerjnbnnc7wqa762hcac4
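Sampling from the uniform mixture of per-group approximate posteriors quoted above reduces to picking a group at random and sampling from that group's posterior; the sketch below assumes diagonal Gaussian posteriors and hypothetical argument names.

```python
import numpy as np

def sample_mixture_posterior(mus, sigmas, rng=None):
    """Draw z ~ q(z|x) = (1/|O|) * sum_g q_g(z|x^(g)), assuming each q_g is a
    diagonal Gaussian with mean mus[g] and std sigmas[g] (numpy arrays).
    Illustrative only; the Gaussian form and the names are assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = rng.integers(len(mus))                     # choose a group uniformly
    return mus[g] + sigmas[g] * rng.standard_normal(mus[g].shape)
```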
Efficient learning of sparse, distributed, convolutional feature representations for object recognition
2011
2011 International Conference on Computer Vision
Along with this efficient training, we evaluate the importance of convolutional training that can capture a larger spatial context with less redundancy, as compared to nonconvolutional training. ...
To the best of our knowledge, this is the first work showing that RBMs can be trained with almost no hyperparameter tuning to provide classification performance similar to or significantly better than ...
Sparsity (α/K): The performance of sparse models, such as sparse coding and sparse RBMs, can vary significantly as a function of sparsity level. ...
doi:10.1109/iccv.2011.6126554
dblp:conf/iccv/SohnJLH11
fatcat:bgo67xqpbrfdvnnh3zgemibrom
Bayesian and L1 Approaches to Sparse Unsupervised Learning
[article]
2012
arXiv
pre-print
The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative ...
distributions. ...
The set of sparsity-favouring distributions includes the Normal-Gamma, Normal Inverse-Gaussian, Laplace (or double Exponential), Exponential, or generally the class of scale-mixtures of Gaussian distributions ...
arXiv:1106.1157v3
fatcat:kf2snogsejc4nmym4jyd2zrc7u
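The "scale-mixtures of Gaussian distributions" mentioned in this excerpt can be written generically as (a standard identity, not additional detail from the paper):
\[
p(w) = \int_0^{\infty} \mathcal{N}(w;\, 0,\, \sigma^2)\, p(\sigma^2)\, d\sigma^2 .
\]
For example, an exponential mixing density on σ² yields the Laplace (double-exponential) prior, the Bayesian counterpart of L1 regularisation.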
The optimal odor-receptor interaction network is sparse in olfactory systems: Compressed sensing by nonlinear neurons with a finite dynamic range
[article]
2018
bioRxiv
pre-print
Here, we investigate possible optimal olfactory coding strategies by maximizing mutual information between odor mixtures and ORNs' responses with respect to the bipartite odor-receptor interaction network ...
We find that the optimal ORIN is sparse: a finite fraction of the sensitivities are zero, and the nonzero sensitivities follow a broad distribution that depends on the odor statistics. ...
We trained the ANN with a training set of sparse odor mixtures drawn from the odor distribution P env (c), and tested its performance by using new odor mixtures randomly drawn from the same odor distribution ...
doi:10.1101/464875
fatcat:pbsel3lya5ev7itwmejysyvjsi
Scalable Neural Network Compression and Pruning Using Hard Clustering and L1 Regularization
[article]
2018
arXiv
pre-print
We propose a simple and easy to implement neural network compression algorithm that achieves results competitive with more complicated state-of-the-art methods. ...
Unlike many existing quantization-based methods, our method uses hard clustering assignments of network parameters, which adds minimal change or overhead to standard network training. ...
As Huffman coding performs best with non-uniform distributions, the primary difference between the sparse APT and the BC solutions is that the BC solutions do not return many equal sized clusters. ...
arXiv:1806.05355v1
fatcat:5nylxle2wjc3nocgvmo2i3o66u
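The "hard clustering assignments of network parameters" described in this abstract can be illustrated with a minimal scalar k-means quantizer; this is a generic sketch, not the authors' training procedure (which combines such assignments with L1 regularization during training).

```python
import numpy as np

def hard_cluster_quantize(weights, n_clusters=16, n_iter=10):
    """Quantize a weight tensor with hard cluster assignments: run 1-D k-means
    over the scalar weights and replace every weight by its cluster centroid.
    Returns the quantized tensor and the per-weight cluster indices.
    """
    w = weights.ravel()
    centroids = np.linspace(w.min(), w.max(), n_clusters)        # initial codebook
    for _ in range(n_iter):
        # hard assignment: index of the nearest centroid for every weight
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = w[assign == k].mean()             # update codebook entry
    quantized = centroids[assign].reshape(weights.shape)
    return quantized, assign.reshape(weights.shape)
```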
Efficient Highly Over-Complete Sparse Coding Using a Mixture Model
[chapter]
2010
Lecture Notes in Computer Science
Therefore, the feature learned by the mixture sparse coding model works pretty well with linear classifiers. ...
Sparse coding of sensory data has recently attracted notable attention in research of learning useful features from the unlabeled data. ...
The main part of this work was done when the first author was a summer intern at NEC Laboratories America, Cupertino, CA. The work is also supported in part by the U.S. ...
doi:10.1007/978-3-642-15555-0_9
fatcat:m53yz35dgjc7ff75tnrkyvhqla
A Discriminative Gaussian Mixture Model with Sparsity
[article]
2021
arXiv
pre-print
Using this sparse learning framework, we can simultaneously remove redundant Gaussian components and reduce the number of parameters used in the remaining components during learning; this learning method ...
The mixture model can address this issue, although it leads to an increase in the number of parameters. ...
Hence, the posterior distribution of w can be approximated by a Gaussian distribution with a mean of ŵ and a covariance matrix of Λ, where Λ = −(∇∇E_z[ln P(ŵ | T, z, X, π, α)])⁻¹ (Eq. 16). Because the evidence ...
arXiv:1911.06028v2
fatcat:p56ubno5vjgdfctt7rtm4qxoia
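The expression in the excerpt is the usual Laplace approximation: expand the log-posterior to second order around its mode ŵ (a generic statement of the technique, not further detail from the paper),
\[
\ln P(w \mid \cdot) \;\approx\; \ln P(\hat{w} \mid \cdot) \;-\; \tfrac{1}{2}\,(w - \hat{w})^{\top} \Lambda^{-1} (w - \hat{w}),
\qquad
\Lambda = -\big(\nabla\nabla \ln P(\hat{w} \mid \cdot)\big)^{-1},
\]
so the posterior over w is approximated by the Gaussian N(w; ŵ, Λ).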
Showing results 1 — 15 out of 4,689 results