Sample Complexity of Learning Mixtures of Sparse Linear Regressions [article] (2019, arXiv pre-print)
In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated ...
sparse, we still learn a good sparse approximation of the signals. ...
Introduction Learning mixtures of linear regressions is a natural generalization of the basic linear regression problem. ...
arXiv:1910.14106v1
fatcat:fvnirpe7k5hkjkbr2n4c3arhsi
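The abstract above describes the mixture-of-sparse-linear-regressions setting: each measurement is generated by one of several sparse signal vectors. A minimal alternating-minimization sketch under illustrative assumptions (two components, hard thresholding for sparsity; this is a generic baseline, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 400, 20, 3  # measurements, ambient dimension, sparsity level

# Two k-sparse ground-truth signals; each measurement uses one at random.
beta_true = np.zeros((2, d))
for b in beta_true:
    b[rng.choice(d, size=k, replace=False)] = rng.normal(0.0, 1.0, size=k)
X = rng.normal(size=(n, d))
z = rng.integers(0, 2, size=n)                  # hidden component labels
y = np.einsum("ij,ij->i", X, beta_true[z]) + 0.01 * rng.normal(size=n)

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    top = np.argsort(np.abs(v))[-k:]
    out[top] = v[top]
    return out

# Alternating minimization: assign, refit, re-sparsify.
beta = np.stack([hard_threshold(rng.normal(size=d), k) for _ in range(2)])
for _ in range(50):
    # Assignment step: each sample goes to the better-fitting component.
    assign = ((y[:, None] - X @ beta.T) ** 2).argmin(axis=1)
    # Refit step: least squares per component, projected back to k-sparse.
    for c in range(2):
        mask = assign == c
        if mask.sum() >= d:
            sol, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
            beta[c] = hard_threshold(sol, k)
```

Even when the refit step is unconstrained least squares, the hard-thresholding projection keeps each component estimate k-sparse throughout.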
Mixtures of Sparse Autoregressive Networks [article] (2016, arXiv pre-print)
By combining the concepts of sparsity, mixtures and parameter sharing we obtain a simple model which is fast to train and which achieves state-of-the-art or better results on several standard benchmark ...
Specifically, we use an L1-penalty to regularize the conditional distributions and introduce a procedure for automatic parameter sharing between mixture components. ...
Samples from a sparse linear autoregressive network with constant conditional variance are presented in Figure 4. ...
arXiv:1511.04776v4
fatcat:dhb3f2das5av3acfivfgiba65m
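The L1-penalty mentioned in the snippet above is the standard lasso device. As a generic illustration (not the paper's autoregressive model), proximal gradient descent (ISTA) with soft thresholding produces a sparse weight vector:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))
w_true = np.zeros(30)
w_true[:4] = [2.0, -1.5, 1.0, 0.5]              # a 4-sparse ground truth
y = X @ w_true + 0.05 * rng.normal(size=100)
w_hat = ista(X, y, lam=2.0)
```

The soft-thresholding step is what drives most coefficients exactly to zero, which is the sparsity mechanism the L1 penalty provides.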
Flexible Modeling of Latent Task Structures in Multitask Learning [article] (2012, arXiv pre-print)
We present a flexible, nonparametric Bayesian model that posits a mixture of factor analyzers structure on the tasks. ...
The nonparametric aspect makes the model expressive enough to subsume many existing models of latent task structures (e.g., mean-regularized tasks, clustered tasks, low-rank or linear/non-linear subspace ...
Any opinions, findings, and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors. ...
arXiv:1206.6486v1
fatcat:va2tqlbefbgyldcufobmpvqiva
Machine learning based hyperspectral image analysis: A survey [article] (2019, arXiv pre-print)
The machine learning algorithms covered are Gaussian models, linear regression, logistic regression, support vector machines, Gaussian mixture model, latent linear models, sparse linear models, Gaussian mixture models, ensemble learning, directed graphical models, undirected graphical models, clustering, Gaussian processes, Dirichlet processes, and deep learning. ...
The machine learning algorithms covered are Gaussian models [293], linear regression [223], logistic regression [215], support vector machines [271], Gaussian mixture models [32], latent linear ...
arXiv:1802.08701v2
fatcat:bfi6qkpx2bf6bowhyloj2duugu
Efficient subset selection via the kernelized Rényi distance (2009, IEEE 12th International Conference on Computer Vision)
The algorithm is first validated and then applied to two sample applications where machine learning and data pruning are used. ...
With improved sensors, the amount of data available in many vision problems has increased dramatically and allows the use of sophisticated learning algorithms to perform inference on the data. ...
Sparse approaches fall into three classes: (1) learning from a subset of the original data, as in [12, 5, 23]; (2) a low-rank approximation (chapter 8 in [19]); (3) using a mixture of experts. ...
doi:10.1109/iccv.2009.5459395
dblp:conf/iccv/SrinivasanD09
fatcat:65tumgf35nasnfwjvyvqoq5ne4
Sparse regression mixture modeling with the multi-kernel relevance vector machine (2013, Knowledge and Information Systems)
A regression mixture model is proposed where each mixture component is a multi-kernel version of the Relevance Vector Machine (RVM). ...
proper number of mixture components. ...
-This manuscript is dedicated to the memory of our friend and colleague Professor Nikolaos P. Galatsanos who contributed significantly to the research and preparation of this work. ...
doi:10.1007/s10115-013-0704-0
fatcat:akgsnbplhffalhdzhgelhiwzxi
Online Sparse Matrix Gaussian Process Regression and Vision Applications [chapter] (2008, Lecture Notes in Computer Science)
We demonstrate that, using these matrix downdates, online hyperparameter estimation can be included without affecting the linear runtime complexity of the algorithm. ...
Maintaining and updating the sparse Cholesky factor of the Gram matrix can be done efficiently using Givens rotations. ...
In this paper, we propose a new Gaussian Process (GP) regression algorithm, called Online Sparse Matrix Gaussian Process (OSMGP) regression, that is exact and allows fast online updates in linear time ...
doi:10.1007/978-3-540-88682-2_36
fatcat:hjsel5t3vbcylhbnnxgmw5wncy
Context-GMM: Incremental learning of sparse priors for Gaussian mixture regression (2012, IEEE International Conference on Robotics and Biomimetics (ROBIO))
In this paper we introduce the Context-GMM, a method to learn sparse priors over the mixture components. ...
Such priors are stable over large amounts of time and provide a way of selecting very small subsets of mixture components without significant loss in accuracy and with huge computational savings. ...
This motivated the learning of sparse priors induced by different behaviours being executed by the robot. ...
doi:10.1109/robio.2012.6491172
dblp:conf/robio/RibesBDM12
fatcat:lej6rsnunfdsdizzdm5ur3hhii
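Gaussian mixture regression, the setting of the Context-GMM entry above, conditions a joint GMM over input and output to get a prediction. A toy two-component sketch (all parameters are made up for illustration, not taken from the paper):

```python
import numpy as np

# A toy 2-component joint GMM over (x, y); parameters are illustrative.
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [4.0, 2.0]])            # [mu_x, mu_y] per component
covs = np.array([[[1.0, 0.8], [0.8, 1.0]],
                 [[1.0, -0.5], [-0.5, 1.0]]])

def gmr_predict(x):
    """Gaussian mixture regression: E[y | x] under the joint GMM."""
    # Responsibility of each component for this x (1-D marginal densities).
    var_x = covs[:, 0, 0]
    px = weights * np.exp(-0.5 * (x - means[:, 0]) ** 2 / var_x) \
         / np.sqrt(2 * np.pi * var_x)
    resp = px / px.sum()
    # Conditional mean of y given x within each component.
    cond = means[:, 1] + covs[:, 0, 1] / var_x * (x - means[:, 0])
    return float(resp @ cond)
```

Near x = 0 the first component dominates and the prediction follows its regression line; near x = 4 the second takes over. Sparse priors over components, as in the paper, would simply restrict which terms enter the responsibility sum.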
Cadre Modeling: Simultaneously Discovering Subpopulations and Predictive Models [article] (2018, arXiv pre-print)
We learn models using adaptive step size stochastic gradient descent, and we assess cadre quality with bootstrapped sample analysis. ...
Further experimental results show that cadre methods have generalization that is competitive with linear and nonlinear regression models and can identify robust subpopulations. ...
Fig. 6: Distribution of cadre regression weights w^m_p. The different cadres learned different linear models.
Generalization error and model complexity on the TG dataset. ...
arXiv:1802.02500v1
fatcat:ff7xvdcprnf6nillay7ngmnota
Learning Sparse Mixture Models [article] (2022, arXiv pre-print)
This work approximates high-dimensional density functions with an ANOVA-like sparse structure by the mixture of wrapped Gaussian and von Mises distributions. ...
The learning procedure considerably reduces the algorithm's complexity for the input dimension d and increases the model's accuracy for the given samples, as the numerical examples show. ...
Determination of Active Variables Assuming that the above assumptions are fulfilled, we can considerably reduce the complexity of learning the parameters of the sparse mixture models by removing the independent ...
arXiv:2203.15615v1
fatcat:fww5szt4jzbrle7mxyuwvnvjqa
Recovery of Sparse Signals from a Mixture of Linear Samples [article] (2020, arXiv pre-print)
Mixture of linear regressions is a popular learning theoretic model that is used widely to represent heterogeneous data. ...
When queried, an oracle randomly selects one of the two different sparse linear models and generates a label accordingly. ...
Mixture of linear regressions is a natural synthesis of mixture models and linear regression; a generalization of the basic linear regression problem of learning the best linear relationship between the ...
arXiv:2006.16406v2
fatcat:pv34aa5vkjd3reyzk3vavybts4
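The query model in the snippet above (an oracle that picks one of two sparse linear models at random per query) is easy to simulate: repeating the same query makes the labels concentrate around two inner products, which a 1-D two-means step can separate. All constants below are illustrative, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 10, 2
betas = np.zeros((2, d))
betas[0, :k] = [1.0, -2.0]     # first sparse model:  <1, beta_0> = -1.0
betas[1, -k:] = [3.0, 0.5]     # second sparse model: <1, beta_1> =  3.5

def oracle(x):
    """Pick one of the two sparse models at random, return a noisy label."""
    b = betas[rng.integers(0, 2)]
    return float(x @ b) + 0.01 * rng.normal()

# Repeated queries with the same x: labels cluster around the two
# inner products <x, beta_0> and <x, beta_1>.
x = np.ones(d)
labels = np.array([oracle(x) for _ in range(200)])

# Tiny 1-D two-means to recover the two cluster centers.
centers = np.array([labels.min(), labels.max()])
for _ in range(20):
    assign = np.abs(labels[:, None] - centers).argmin(axis=1)
    centers = np.array([labels[assign == c].mean() for c in (0, 1)])
```

Querying with many such vectors x yields, for each, the pair of inner products; assembling these into per-model linear systems is the high-level route back to the sparse vectors themselves.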
A Discriminative Gaussian Mixture Model with Sparsity [article] (2021, arXiv pre-print)
Using this sparse learning framework, we can simultaneously remove redundant Gaussian components and reduce the number of parameters used in the remaining components during learning; this learning method ...
The mixture model can address this issue, although it leads to an increase in the number of parameters. ...
Figure 5: Relationship of our study with other studies (sparse Gaussian models, sparse GMM (Gaiffas, 2014), discriminative GMM (Klautau, 2003), mixture models, discriminative sparse Bayes approaches). ...
arXiv:1911.06028v2
fatcat:p56ubno5vjgdfctt7rtm4qxoia
Efficient Sparse Clustering of High-Dimensional Non-spherical Gaussian Mixtures [article] (2014, arXiv pre-print)
The method we propose is a combination of a recent approach for learning parameters of a Gaussian mixture model and sparse linear discriminant analysis (LDA). ...
Our results indicate that the sample complexity of clustering depends on the sparsity of the relevant feature set, while only scaling logarithmically with the ambient dimension. ...
Assuming mixtures of equal weight spherical components with sparse mean separation, [19] provide some minimax bounds for the problem with sample complexity that scales with the number of relevant features ...
arXiv:1406.2206v1
fatcat:fuhe65ojezbqvbr7dwe7zesdyq
Gas Distribution Modeling using Sparse Gaussian Process Mixture Models [chapter] (2009, Robotics)
We present an approach that formulates this task as a regression problem. To deal with the specific properties of typical gas distributions, we propose a sparse Gaussian process mixture model. ...
We integrate the sparsification of the training data into an EM procedure used for learning the mixture components and the gating function. ...
However, several methods for learning sparse GP models [18, 19] have been presented that overcome this limitation and lead to near-linear complexity [19]. ...
doi:10.7551/mitpress/8344.003.0044
fatcat:phhemp35pvgy7a2rfn6nntjari
Gas Distribution Modeling using Sparse Gaussian Process Mixture Models (2008, Robotics: Science and Systems IV)
We present an approach that formulates this task as a regression problem. To deal with the specific properties of typical gas distributions, we propose a sparse Gaussian process mixture model. ...
We integrate the sparsification of the training data into an EM procedure used for learning the mixture components and the gating function. ...
However, several methods for learning sparse GP models [18, 19] have been presented that overcome this limitation and lead to near-linear complexity [19]. ...
doi:10.15607/rss.2008.iv.040
dblp:conf/rss/StachnissPLB08
fatcat:wjmkiibojvh25dgvemiywllmfq
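Both versions of the gas-distribution paper above sparsify the GP training data to escape cubic cost. The simplest such device, subset-of-data, fits an exact GP on a random subsample; a sketch with an assumed squared-exponential kernel (nothing here reproduces the paper's EM procedure or gating function):

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sod_gp_predict(X, y, Xs, m=50, noise=0.1, seed=0):
    """Subset-of-data GP regression: exact GP mean on m random training points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    Xm, ym = X[idx], y[idx]
    K = rbf(Xm, Xm) + noise**2 * np.eye(len(Xm))   # regularized Gram matrix
    alpha = np.linalg.solve(K, ym)
    return rbf(Xs, Xm) @ alpha                      # posterior mean at Xs

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
Xs = np.linspace(-3, 3, 7)[:, None]
mu = sod_gp_predict(X, y, Xs)
```

Training cost drops from O(n^3) to O(m^3); more refined sparsifications (inducing points, or the EM-integrated selection in the paper) choose the subset rather than sampling it.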
Showing results 1 — 15 out of 18,557 results