14,468 Hits in 4.7 sec

Estimating conditional density of missing values using deep Gaussian mixture model [article]

Marcin Przewięźlikowski, Marek Śmieja, Łukasz Struski
2020 arXiv   pre-print
We consider the problem of estimating the conditional probability distribution of missing values given the observed ones.  ...  We propose an approach that combines the flexibility of deep neural networks with the simplicity of Gaussian mixture models (GMMs).  ... 
arXiv:2010.02183v2 fatcat:mowle3tuazeivpnjg46jvx6fj4
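As a quick illustration of the building block behind GMM-based approaches like the one in this abstract: for a single Gaussian component, the conditional density of a missing coordinate given an observed one has a closed form. The sketch below shows the 2-D case only; the numbers are hypothetical, and a full GMM method would combine such conditionals across components.

```python
def conditional_gaussian(mu, cov, x_obs):
    """For x = (x_miss, x_obs) ~ N(mu, cov), return the mean and variance
    of x_miss | x_obs via the standard Gaussian conditioning formulas:
    mean = mu_m + s_mo/s_oo * (x_obs - mu_o), var = s_mm - s_mo^2/s_oo."""
    mu_m, mu_o = mu
    s_mm, s_mo = cov[0][0], cov[0][1]
    s_oo = cov[1][1]
    cond_mean = mu_m + s_mo / s_oo * (x_obs - mu_o)
    cond_var = s_mm - s_mo ** 2 / s_oo
    return cond_mean, cond_var

# Correlated 2-D Gaussian: observing x_obs = 1.0 shifts the missing
# coordinate's mean toward it and shrinks its variance.
mean, var = conditional_gaussian((0.0, 0.0), [[1.0, 0.8], [0.8, 1.0]], 1.0)
# mean = 0.8, var = 0.36
```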

Bayesian Nonparametric Classification for Incomplete Data With a High Missing Rate: an Application to Semiconductor Manufacturing Data [article]

Sewon Park, Kyeongwon Lee, Da-Eun Jeong, Heung-Kook Ko, Jaeyong Lee
2021 arXiv   pre-print
We propose the Dirichlet process-naive Bayes model (DPNB), a classification method based on a mixture of the Dirichlet process and the naive Bayes model.  ...  missing values increases.  ...  First, the class-conditional distribution is an infinite Gaussian mixture model instead of the single Gaussian used in a standard naive Bayes classifier.  ... 
arXiv:2107.14409v2 fatcat:ia4zwljygjezzcufepdnc3xy5y

Leveraging the Exact Likelihood of Deep Latent Variable Models [article]

Pierre-Alexandre Mattei, Jes Frellsen
2018 arXiv   pre-print
Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model.  ...  We also show how to ensure the existence of maximum likelihood estimates, and draw useful connections with nonparametric mixture models.  ...  However, under the conditions of boundedness of the likelihood of deep Gaussian models, the bound is finite and attained for a finite mixture model with at most n components. Theorem 2.  ... 
arXiv:1802.04826v4 fatcat:skj22sajfndahftqhxaamrqcpy

MisConv: Convolutional Neural Networks for Missing Data [article]

Marcin Przewięźlikowski, Marek Śmieja, Łukasz Struski, Jacek Tabor
2021 arXiv   pre-print
By modeling the distribution of missing values with a Mixture of Factor Analyzers, we cover the spectrum of possible replacements and find an analytical formula for the expected value of the convolution operator  ...  While imputation-based techniques are still among the most popular solutions, they frequently introduce unreliable information into the data and do not take into account the uncertainty of the estimation, which  ... 
arXiv:2110.14010v2 fatcat:5l6kll3qevhyzdi4oklrnkfjmu

Processing of missing data by neural networks [article]

Marek Śmieja, Łukasz Struski, Jacek Tabor, Bartosz Zieliński, Przemysław Spurek
2019 arXiv   pre-print
Our idea is to replace a typical neuron's response in the first hidden layer by its expected value. This approach can be applied to various types of networks at minimal modification cost.  ...  We propose a general, theoretically justified mechanism for processing missing data by neural networks.  ...  Learning missing data density: to run our model we need to define an initial mixture of Gaussians.  ... 
arXiv:1805.07405v3 fatcat:m6jujnfc65h5pot2hv7mjk6i2u
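The idea of replacing a neuron's response by its expected value can be made concrete for a ReLU unit whose (missing) input is modeled as Gaussian: E[max(0, z)] for z ~ N(m, s²) has a closed form. This is our simplified sketch of the general mechanism, not the paper's exact multi-dimensional formula.

```python
import math

def expected_relu(m, s):
    """E[max(0, z)] for z ~ N(m, s^2) = s*phi(m/s) + m*Phi(m/s),
    where phi/Phi are the standard normal pdf/cdf."""
    t = m / s
    phi = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return s * phi + m * Phi

# For a standard normal input, the expected ReLU response is
# 1/sqrt(2*pi) ~ 0.3989 rather than ReLU(E[z]) = 0.
val = expected_relu(0.0, 1.0)
```

Note the design point the abstract makes: the layer consumes the density of the missing input rather than a single imputed value, so no modification is needed past the first hidden layer.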

Semi-supervised Learning with Missing Values Imputation [article]

Buliao Huang and Yunhui Zhu and Muhammad Usman and Huanhuan Chen
2021 arXiv   pre-print
Missing values imputation methods are often employed to replace the missing values with substitute values.  ...  SSCFlow explicitly utilizes the label information to facilitate the imputation and classification simultaneously by estimating the conditional distribution of incomplete instances with a novel semi-supervised  ...  denoising autoencoder to approximate the true distribution of missing values for imputation. proposed FlowGMM, which models the density in the latent space as a Gaussian mixture with each mixture component  ... 
arXiv:2106.01708v2 fatcat:4fr74ttuljgmxiexjrva2too7e

Uncertainty estimation with deep learning for rainfall–runoff modeling

Daniel Klotz, Frederik Kratzert, Martin Gauch, Alden Keefe Sampson, Johannes Brandstetter, Günter Klambauer, Sepp Hochreiter, Grey Nearing
2022 Hydrology and Earth System Sciences  
We establish an uncertainty estimation benchmarking procedure and present four deep learning baselines. Three baselines are based on mixture density networks, and one is based on Monte Carlo dropout.  ...  Uncertainty estimations are critical for actionable hydrological prediction, and while standardized community benchmarks are becoming an increasingly important part of hydrological model development and  ...  The output of an MDN is an estimation of a conditional density, since the mixture directly depends on a given input (Fig. 4 ).  ... 
doi:10.5194/hess-26-1673-2022 fatcat:2zl6wz4cqveytmqy36ucaxfkom
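To illustrate the mixture density network (MDN) output described here: the network emits mixture weights, means, and scales conditioned on the input, and the predictive density is the corresponding Gaussian mixture. The sketch below evaluates that density for hypothetical network outputs; it is not tied to the paper's benchmark models.

```python
import math

def mixture_density(y, weights, means, sigmas):
    """Evaluate p(y | x) = sum_k w_k * N(y; mu_k, sigma_k^2), where
    weights/means/sigmas stand in for an MDN's output layer for one
    input x (values here are made up for illustration)."""
    p = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        z = (y - mu) / s
        p += w * math.exp(-0.5 * z * z) / (s * math.sqrt(2.0 * math.pi))
    return p

# A symmetric two-component mixture evaluated at its midpoint.
density = mixture_density(0.0, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```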

Noisy Expectation-Maximization: Applications and Generalizations [article]

Osonde Osoba, Bart Kosko
2018 arXiv   pre-print
We demonstrate these noise benefits on EM algorithms for the Gaussian mixture model (GMM) with both additive and multiplicative NEM noise injection.  ...  The NEM algorithm uses noise to speed up the convergence of the EM algorithm.  ...  The task is threefold: estimate the unknown means of the two mixed Gaussian densities, the unknown variances of the mixed densities, and the unknown mixture weights.  ... 
arXiv:1801.04053v1 fatcat:6hhtfixbrnfhjetm7hysp3b7hi
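The baseline that NEM accelerates is plain EM for a GMM, which estimates exactly the three unknowns the abstract lists: means, variances, and mixture weights. Below is a minimal plain-EM sketch for a 1-D two-component mixture on made-up data; NEM would additionally perturb the samples with (decaying) noise satisfying its positivity condition before each E-step, which we do not reproduce here.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_step(data, w, mu, var):
    """One EM iteration for a two-component 1-D GMM."""
    # E-step: responsibility of component 0 for each point.
    r = []
    for x in data:
        p0 = w * normal_pdf(x, mu[0], var[0])
        p1 = (1.0 - w) * normal_pdf(x, mu[1], var[1])
        r.append(p0 / (p0 + p1))
    # M-step: re-estimate the mixture weight, means, and variances.
    n0 = sum(r)
    n1 = len(data) - n0
    w_new = n0 / len(data)
    mu_new = [sum(ri * x for ri, x in zip(r, data)) / n0,
              sum((1.0 - ri) * x for ri, x in zip(r, data)) / n1]
    var_new = [max(1e-6, sum(ri * (x - mu_new[0]) ** 2 for ri, x in zip(r, data)) / n0),
               max(1e-6, sum((1.0 - ri) * (x - mu_new[1]) ** 2 for ri, x in zip(r, data)) / n1)]
    return w_new, mu_new, var_new

# Two well-separated clusters around -2 and +2 (toy data).
data = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.1]
w, mu, var = 0.5, [-1.0, 1.0], [1.0, 1.0]
for _ in range(20):
    w, mu, var = em_step(data, w, mu, var)
# mu converges near [-2.0, 2.0] with w near 0.5
```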

Uncertainty Estimation with Deep Learning for Rainfall-Runoff Modelling [article]

Daniel Klotz, Frederik Kratzert, Martin Gauch, Alden Keefe Sampson, Günter Klambauer, Sepp Hochreiter, Grey Nearing
2020 arXiv   pre-print
We establish an uncertainty estimation benchmarking procedure and present four Deep Learning baselines, out of which three are based on Mixture Density Networks and one is based on Monte Carlo dropout.  ...  Additionally, we provide a post-hoc model analysis to put forward some qualitative understanding of the resulting models.  ...  Figure 3 . 3 Illustration of the concept of a mixture density using Gaussian distributions.  ... 
arXiv:2012.14295v1 fatcat:xwuips3bjjhcbcwsefqpjypeva

Generating Multiple Hypotheses for 3D Human Pose Estimation with Mixture Density Network [article]

Chen Li, Gim Hee Lee
2019 arXiv   pre-print
on an unimodal Gaussian distribution, our method is able to generate multiple feasible hypotheses of the 3D pose based on a multimodal mixture density network.  ...  Furthermore, we show state-of-the-art performance on the Human3.6M dataset in both best-hypothesis and multi-view settings, and we demonstrate the generalization capacity of our model by testing on the  ...  They first learned a 3D Gaussian mixture model (GMM) [22] from a uniformly sampled set of Human3.6M poses, and then used conditional sampling to get samples of the 3D poses with reprojected joints  ... 
arXiv:1904.05547v1 fatcat:tuzgyexckferpjlymdatrslfgi

Generating Multiple Hypotheses for 3D Human Pose Estimation With Mixture Density Network

Chen Li, Gim Hee Lee
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
on a multimodal mixture density network.  ...  In contrast to existing deep learning approaches, which minimize a mean square error based on an unimodal Gaussian distribution, our method is able to generate multiple feasible hypotheses of the 3D pose based  ...  They first learned a 3D Gaussian mixture model (GMM) [22] from a uniformly sampled set of Human3.6M poses, and then used conditional sampling to get samples of the 3D poses with reprojected joints  ... 
doi:10.1109/cvpr.2019.01012 dblp:conf/cvpr/LiL19 fatcat:sse76hrma5dzhktsqqmws7ouk4

Handling Missing Observations with an RNN-based Prediction-Update Cycle [article]

Stefan Becker, Ronny Hug, Wolfgang Hübner, Michael Arens, Brendan T. Morris
2021 arXiv   pre-print
By providing the model with masking information, binary-encoded missing events, the model can overcome limitations of standard techniques for dealing with missing input values.  ...  Towards this end, this paper introduces an RNN-based approach that provides a full temporal filtering cycle for motion state estimation.  ...  The model can be trained by maximizing the likelihood of the data given the output Gaussian mixture parameters.  ... 
arXiv:2103.11747v1 fatcat:2njopruapvcnfbrimjsfu4aywu
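The masking idea this abstract mentions (binary-encoded missing events alongside the observations) can be sketched simply: each time step becomes a (value, mask) pair, with missing values filled so the RNN input stays dense. The carry-forward fill below is our simplification for illustration; the paper's prediction-update cycle replaces it with model predictions.

```python
def mask_and_fill(sequence, missing=None):
    """Encode a partially observed sequence as (value, mask) pairs.
    mask = 1 for observed steps, 0 for missing ones; missing values are
    carried forward from the last observation (illustrative choice)."""
    filled, mask = [], []
    last = 0.0
    for v in sequence:
        if v is missing:
            filled.append(last)
            mask.append(0)
        else:
            filled.append(v)
            mask.append(1)
            last = v
    return list(zip(filled, mask))

pairs = mask_and_fill([1.0, None, None, 4.0])
# [(1.0, 1), (1.0, 0), (1.0, 0), (4.0, 1)]
```

The mask channel is what lets the model distinguish a genuinely observed value from a filled-in one.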

PriorGAN: Real Data Prior for Generative Adversarial Nets [article]

Shuyang Gu, Jianmin Bao, Dong Chen, Fang Wen
2020 arXiv   pre-print
To be specific, we adopt a simple yet elegant Gaussian Mixture Model (GMM) to build an explicit probability distribution on the feature level for the whole real data.  ...  Second, the missing modes problem where the learned distribution misses some certain regions of the real data distribution.  ...  The complete Gaussian mixture model is parameterized by the mean vectors, covariance matrices and mixture weights from all component densities.  ... 
arXiv:2006.16990v1 fatcat:khd2xanzxrb43eptpygi6ojdqu

Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction

Yu Zhao, Rennong Yang, Guillaume Chevalier, Rajiv C. Shah, Rob Romijnders
2018 Optik (Stuttgart)  
Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach.  ...  In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy.  ...  They modeled visual attention with a mixture of Gaussians at each frame.  ... 
doi:10.1016/j.ijleo.2017.12.038 fatcat:nsviefmu2zh3rnryjctjba6nhq

Normalizing Flows for Knockoff-free Controlled Feature Selection [article]

Derek Hansen, Brian Manzo, Jeffrey Regier
2021 arXiv   pre-print
To more accurately model the features, FlowSelect uses normalizing flows, the state-of-the-art method for density estimation.  ...  Recently, multiple deep-learning-based methods have been proposed to perform controlled feature selection through the Model-X knockoff framework.  ...  The HRT, introduced in Section 2, uses separate mixture density networks to model each feature's complete conditional distribution.  ... 
arXiv:2106.01528v2 fatcat:qkyc7jkqczakvdxn63htzwoevi
Showing results 1 — 15 out of 14,468 results