9,513 Hits in 4.4 sec

ICA mixture models for unsupervised classification of non-Gaussian classes and automatic context switching in blind signal separation

Te-Won Lee, M.S. Lewicki, T.J. Sejnowski
2000 IEEE Transactions on Pattern Analysis and Machine Intelligence  
The algorithm estimates the density of each class and is able to model class distributions with non-Gaussian structure.  ...  The algorithm can learn efficient codes for images containing both natural scenes and text.  ...  ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for their detailed comments and questions which improved the quality of the presentation of this paper. T.-W.  ... 
doi:10.1109/34.879789 fatcat:lqkvyynrsbaphnz7s6ybqvkmki

Advances in speech transcription at IBM under the DARPA EARS program

S.F. Chen, B. Kingsbury, Lidia Mangu, D. Povey, G. Saon, H. Soltau, G. Zweig
2006 IEEE Transactions on Audio, Speech, and Language Processing  
models, the use of septaphone acoustic context in static decoding graphs, and improvements in basic decoding algorithms.  ...  At a technical level, these advances include the development of a new form of feature-based minimum phone error training (fMPE), the use of large-scale discriminatively trained full-covariance Gaussian  ...  First, the Gaussians in the baseline HMM set were clustered using a maximum likelihood technique to obtain 100,000 Gaussians.  ... 
doi:10.1109/tasl.2006.879814 fatcat:6g2dqr5qsnbehcyf5g7cu4x6ca

Investigation of Parameter Uncertainty in Clustering Using a Gaussian Mixture Model Via Jackknife, Bootstrap and Weighted Likelihood Bootstrap [article]

Adrian O'Hagan, Thomas Brendan Murphy, Luca Scrucca, Isobel Claire Gormley
2019 arXiv   pre-print
This paper provides an empirical comparison of these methods (along with the jackknife method) for producing standard errors and confidence intervals for mixture parameters.  ...  Mixture models are a popular tool in model-based clustering. Such a model is often fitted by a procedure that maximizes the likelihood, such as the EM algorithm.  ...  computationally more efficient than the WLBS approach because the latter requires the computation of the log-likelihood as a weighted sum of densities for each data point.  ... 
arXiv:1510.00551v5 fatcat:h2u36rpktrch3pnxwwb47mhpcm
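The entry above compares bootstrap and jackknife standard errors for Gaussian mixture parameters. A minimal sketch of the nonparametric bootstrap idea, using scikit-learn's `GaussianMixture` on synthetic 1-D data (the data, component count, and number of bootstrap replicates are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic two-component 1-D mixture: 300 points near -2, 200 points near 3.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 200)])[:, None]

def fit_sorted_means(data):
    """Fit a 2-component GMM; sort the means to avoid label switching across refits."""
    gm = GaussianMixture(n_components=2, n_init=3, random_state=0).fit(data)
    return np.sort(gm.means_.ravel())

point_est = fit_sorted_means(x)

# Nonparametric bootstrap: resample rows with replacement, refit, collect estimates.
B = 50
boot = np.array([fit_sorted_means(x[rng.integers(0, len(x), len(x))])
                 for _ in range(B)])
se = boot.std(axis=0, ddof=1)  # bootstrap standard errors of the two component means
```

Sorting the means is a crude identifiability fix; the paper's weighted likelihood bootstrap reweights the likelihood instead of resampling, which avoids refitting on duplicated points.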

Regularized Parameter Estimation in High-Dimensional Gaussian Mixture Models

Lingyan Ruan, Ming Yuan, Hui Zou
2011 Neural Computation  
To illustrate the practical merits of the proposed method, we consider its applications in model-based clustering and mixture discriminant analysis.  ...  However, parameter estimation for gaussian mixture models with high dimensionality can be challenging because of the large number of parameters that need to be estimated.  ...  Applications The proposed method for estimating high-dimensional gaussian mixture models could be useful for a variety of applications.  ... 
doi:10.1162/neco_a_00128 pmid:21395439 pmcid:PMC5638044 fatcat:tsgewcb3dzahdhrwbb4zh3unla
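The entry above concerns regularized covariance estimation for high-dimensional Gaussian mixtures. A simple stand-in for the idea, using scikit-learn's diagonal covariances plus the ridge-style `reg_covar` stabilizer (the data and settings are illustrative assumptions; the paper's penalized-likelihood method is more elaborate than this):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# High-dimensional, small-sample setting: 60 points in 50 dimensions, two groups.
X = np.vstack([rng.normal(0.0, 1.0, (30, 50)),
               rng.normal(2.0, 1.0, (30, 50))])

# With so few samples per component, unconstrained covariances are singular.
# Restricting to diagonal covariances and adding reg_covar to each diagonal
# keeps the M-step well-posed (a crude form of regularization).
gm = GaussianMixture(n_components=2, covariance_type="diag",
                     reg_covar=1e-2, random_state=0).fit(X)
labels = gm.predict(X)
```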

[Title unavailable]

Chris S. Wallace, David L. Dowe
2012 Statistics and Computing  
We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob, uses the message lengths from various parameter estimates to enable it to combine  ...  Additionally, Snob can do fully-parameterised mixture modelling, estimating the latent class assignments in addition to estimating the number of components, the relative abundances of the components, and  ...  The authors wish to thank and do thank the referees and Dr Graham Farr for helpful feedback. Notes  ... 
doi:10.1023/a:1008992619036 fatcat:jkojbq4wqfhyppwfylparj2xxy

Semi-supervised Feature Extraction Using Independent Factor Analysis

L. Oukhellou, E. Come, P. Aknin, T. Denoeux
2011 2011 10th International Conference on Machine Learning and Applications and Workshops  
Both the mapping matrix (assumed to be linear) and the latent variable densities (assumed to be mutually independent and generated according to mixtures of Gaussians) are learned from observed  ...  We propose to learn this model within a semi-supervised framework where the likelihood of both labeled and unlabeled samples is maximized by a generalized expectation-maximization (GEM) algorithm.  ...  Furthermore, it considers that each individual latent variable has its own distribution, modeled by a mixture of Gaussians (MOG).  ... 
doi:10.1109/icmla.2011.183 dblp:conf/icmla/OukhellouCAD11 fatcat:s2zdqpykijgdldeyf7rlejhv4m

Improved demapping for channels with data-dependent noise

Kelvin J. Layton, Azam Mehboob, William G. Cowley, Gottfried Lechner
2018 EURASIP Journal on Wireless Communications and Networking  
A new demapper is presented for communication channels that can be modeled with data-dependent noise on the received symbols.  ...  This includes optical and satellite channels with various types of distortion.  ...  It is important to consider that our ultimate goal of modeling the PDF is to efficiently compute likelihoods to provide to the decoder.  ... 
doi:10.1186/s13638-018-1136-z fatcat:a7c2r2i7abg2tn7dz654kvg7h4

Adaptive Independent Metropolis–Hastings by Fast Estimation of Mixtures of Normals

Paolo Giordani, Robert Kohn
2010 Journal of Computational And Graphical Statistics  
To take full advantage of the potential of adaptive samplers it is often desirable to update the mixture of normals frequently and starting early in the chain.  ...  The sampler performance is evaluated with simulated examples and with applications to time-varying-parameter, semi-parametric, and stochastic volatility models.  ...  B Appendix: Computing the marginal likelihood for the semiparametric Gaussian model In the semiparametric example of section 6.2 we wish to efficiently compute the likelihood y | θ ∼ N(0, σ²I + ZV_γ(τ  ... 
doi:10.1198/jcgs.2009.07174 fatcat:zg7vjj4ojjg65jrss6mpuqooxe
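The entry above uses a fitted mixture of normals as the proposal in an independent Metropolis-Hastings sampler. A minimal sketch of that accept/reject step with a fixed (non-adaptive) two-component normal proposal and a standard-normal target (all densities and parameter values here are illustrative assumptions, not the paper's adaptive scheme):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # Stand-in posterior: standard normal log-density (up to a constant).
    return -0.5 * x ** 2

# Fixed independent proposal: mixture 0.7*N(0,1) + 0.3*N(0,9), heavier-tailed
# than the target so the importance ratio stays bounded.
w, mu, sd = np.array([0.7, 0.3]), np.array([0.0, 0.0]), np.array([1.0, 3.0])

def propose(n):
    k = rng.choice(2, size=n, p=w)
    return rng.normal(mu[k], sd[k])

def log_proposal(x):
    dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2)
            / (np.sqrt(2 * np.pi) * sd)).sum(axis=1)
    return np.log(dens)

# Independent MH: accept with ratio of target/proposal importance weights.
n = 20000
props = propose(n)
lw = log_target(props) - log_proposal(props)          # log importance weights
x = 0.0
cur = (log_target(np.array([x])) - log_proposal(np.array([x])))[0]
samples = np.empty(n)
for i in range(n):
    if np.log(rng.random()) < lw[i] - cur:
        x, cur = props[i], lw[i]
    samples[i] = x
```

In the adaptive version, the proposal mixture is re-estimated from the accumulated draws as the chain runs, which is why fast mixture fitting matters.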

Adaptive Independent Metropolis-Hastings by Fast Estimation of Mixtures of Normals

Paolo Giordani, Robert Kohn
2008 Social Science Research Network  
To take full advantage of the potential of adaptive samplers it is often desirable to update the mixture of normals frequently and starting early in the chain.  ...  The sampler performance is evaluated with simulated examples and with applications to time-varying-parameter, semi-parametric, and stochastic volatility models.  ...  B Appendix: Computing the marginal likelihood for the semiparametric Gaussian model In the semiparametric example of section 6.2 we wish to efficiently compute the likelihood y | θ ∼ N(0, σ²I + ZV_γ(τ  ... 
doi:10.2139/ssrn.1082955 fatcat:pbct6iq665epfkwttk3n2wigla

CONSTRUCTION AND VERIFICATION OF MATHEMATICAL MODEL OF MASS SPECTROMETRY DATA

Małgorzata Plechawska-Wójcik
2013 Informatyka Automatyka Pomiary w Gospodarce i Ochronie Środowiska  
This task is essential to the analysis and it needs specification of many parameters of the model.  ...  The article presents issues concerning construction, adjustment and implementation of a mass spectrometry mathematical model based on Gaussian mixture models and the mean spectrum.  ...  Therefore, likelihood maximisation of the data fit to the Gaussian mixture model can be performed with the Expectation-Maximization (EM) algorithm [12].  ... 
doi:10.35784/iapgos.1430 fatcat:wfbvpceqzzfungm5r6jgzh7jze
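The snippet above refers to EM-based likelihood maximisation for a Gaussian mixture. A minimal self-contained EM loop in NumPy for a 1-D two-component mixture (the synthetic data and initial values are illustrative assumptions, not the paper's spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 1-D signal drawn from two Gaussian peaks (a stand-in for spectra).
x = np.concatenate([rng.normal(0.0, 0.5, 400), rng.normal(4.0, 0.8, 600)])

# Initial guesses for mixing weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 5.0])
var = np.array([1.0, 1.0])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    r = w * normal_pdf(x, mu, var)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form weighted updates of the mixture parameters.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
```

Each iteration increases the data log-likelihood, which is the maximisation the snippet describes.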

Directional Statistics in Machine Learning: a Brief Review [article]

Suvrit Sra
2016 arXiv   pre-print
For such data, we briefly review common mathematical models prevalent in machine learning, while also outlining some technical aspects, software, applications, and open mathematical challenges.  ...  Consequently, statistical and machine learning models tailored to different data encodings are important.  ...  For both of these distributions, we recapped maximum likelihood parameter estimation as well as mixture modeling using the EM algorithm.  ... 
arXiv:1605.00316v1 fatcat:3oie3gz3wnbibddajfdpoangfa

Fuzzy Subspace Hidden Markov Models for Pattern Recognition

Dat Tran, Wanli Ma, Dharmendra Sharma
2009 2009 IEEE-RIVF International Conference on Computing and Communication Technologies  
Weights can be computed if a learning estimation method such as maximum likelihood is given. Experimental results in network intrusion detection based on the proposed approach show promising results.  ...  We propose to consider subspaces in the feature space and assign a weight to each feature to determine the contribution of that feature in different subspaces to modeling and recognizing patterns.  ...  For training, the number of feature vectors for training the normal model was set to 5000.  ... 
doi:10.1109/rivf.2009.5174640 dblp:conf/rivf/TranMS09 fatcat:7lkl7h2nmrg6xhjgtyfc5cghoe

Analysis of Persistent Motion Patterns Using the 3D Structure Tensor

John Wright, Robert Pless
2005 2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05) - Volume 1  
In scenes with multiple global motion patterns, a mixture model (of these global distributions) automatically factors background motion into a set of flow fields corresponding to the different motions.  ...  Capturing statistics of the spatiotemporal derivatives at each pixel can efficiently model surprisingly complicated motion patterns.  ...  We have shown how a set of tensor fields can be viewed as a single Gaussian mixture model, leading to compact and computationally efficient representation of inter-pixel dependencies.  ... 
doi:10.1109/acvmot.2005.21 dblp:conf/wacv/WrightP05 fatcat:pcjyxsjlibbu7lsqtkwv6fnp3y

Fast Nonparametric Clustering of Structured Time-Series

James Hensman, Magnus Rattray, Neil D. Lawrence
2015 IEEE Transactions on Pattern Analysis and Machine Intelligence  
In a biological time series application we show how our model better captures salient features of the data, leading to better consistency with existing biological classifications, while the associated  ...  In this publication, we combine two Bayesian nonparametric models: the Gaussian Process (GP) and the Dirichlet Process (DP).  ...  The Gaussian mixture model also fails to infer the correct structure: without prior knowledge of signal correlations, it is unable to separate the groups.  ... 
doi:10.1109/tpami.2014.2318711 pmid:26353249 fatcat:6t5vjqyfrjavjhhkk43pans7mu

On-Line Color Calibration in Non-stationary Environments [chapter]

Federico Anzani, Daniele Bosisio, Matteo Matteucci, Domenico G. Sorrenti
2006 Lecture Notes in Computer Science  
Our goal is to cope with changing illumination conditions by on-line adapting both the parametric color model and its structure/complexity.  ...  Our approach is able to on-line adapt also the complexity of the model, to cope with large variations in the scene illumination and color temperature.  ...  Adaptation of Model Complexity In modeling with mixtures of Gaussians, the main problem is to choose the right structure for the model, i.e., the right number of components of the Gaussian mixture.  ... 
doi:10.1007/11780519_35 fatcat:76huc2hmajbzfnm5fa5dirtb3q
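The snippet above flags choosing the number of mixture components as the central difficulty. One common off-line approach is to sweep candidate component counts and score each fit with BIC; a sketch using scikit-learn on synthetic 2-D "color" samples (the data and candidate range are illustrative assumptions, and the paper's on-line structure adaptation works differently):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical 2-D samples from three well-separated clusters.
X = np.vstack([rng.normal(c, 0.3, (150, 2))
               for c in ([0.0, 0.0], [3.0, 0.0], [0.0, 3.0])])

# Fit a GMM for each candidate component count and score it with BIC
# (lower is better; BIC trades fit quality against model complexity).
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
```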
Showing results 1 — 15 out of 9,513 results