376,049 Hits in 5.8 sec

Optimal approximate sampling from discrete probability distributions

Feras A. Saad, Cameron E. Freer, Martin C. Rinard, Vikash K. Mansinghka
2019 Proceedings of the ACM on Programming Languages (PACMPL)  
from a discrete probability distribution (p_1, ..., p_n), where the probabilities of the output distribution (p̂_1, ..., p̂_n) of the sampling algorithm must be specified using at most k bits of precision  ...  same distribution (i.e., is "entropy-optimal") and whose output distribution (p̂_1, ..., p̂_n) is a closest approximation to the target distribution (p_1, ..., p_n) among all entropy-optimal sampling  ... 
doi:10.1145/3371104 fatcat:i6id26wdvfh2dik6pcvn3okij4
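The k-bit precision constraint in this abstract is easy to make concrete: every output probability p̂_i must be an integer multiple of 2^-k. The sketch below (hypothetical helper `approx_with_k_bits`) shows a naive rounding that satisfies the constraint; it is not the paper's entropy-optimal construction, which additionally minimises the distance to the target among all entropy-optimal samplers.

```python
from fractions import Fraction

def approx_with_k_bits(p, k):
    """Round each probability to the nearest multiple of 2^-k, then
    repair the largest entry so the weights sum to exactly 1.
    Illustrative only: the paper's algorithm instead finds the closest
    *entropy-optimal* sampler under the same k-bit constraint."""
    denom = 2 ** k
    q = [round(pi * denom) for pi in p]
    q[q.index(max(q))] += denom - sum(q)  # restore total mass of 2^k
    return [Fraction(qi, denom) for qi in q]

approx = approx_with_k_bits([0.5, 0.3, 0.2], 4)
# -> [Fraction(1, 2), Fraction(5, 16), Fraction(3, 16)]
```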

Approximating discrete probability distributions with dependence trees

C. Chow, C. Liu
1968 IEEE Transactions on Information Theory  
Abstract: A method is presented to approximate optimally an n-dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence  ...  It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate of the distribution.  ... 
doi:10.1109/tit.1968.1054142 fatcat:y7idq3rkrze45orcnxvpgeq3k4
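The Chow-Liu construction summarised above reduces to two steps: estimate pairwise mutual information from data, then take a maximum-weight spanning tree over the variables. A self-contained sketch (illustrative helper names, not the authors' code):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(data, i, j):
    """Empirical mutual information I(X_i; X_j) from a list of tuples."""
    n = len(data)
    ci = Counter(row[i] for row in data)
    cj = Counter(row[j] for row in data)
    cij = Counter((row[i], row[j]) for row in data)
    # sum of p_ij * log(p_ij / (p_i * p_j)) over observed pairs
    return sum((c / n) * math.log(c * n / (ci[a] * cj[b]))
               for (a, b), c in cij.items())

def chow_liu_tree(data, d):
    """Kruskal over edges sorted by decreasing mutual information:
    the maximum-weight spanning tree is the optimal first-order
    dependence tree in the Chow-Liu sense."""
    edges = sorted(((mutual_information(data, i, j), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# toy data: X0 and X1 always agree, X2 is independent of both
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 25
tree = chow_liu_tree(data, 3)
# the strongly dependent pair (0, 1) is always an edge of the tree
```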

Latent Variable Modelling Using Variational Autoencoders: A survey [article]

Vasanth Kalingeri
2022 arXiv   pre-print
The focus of this report is on Variational autoencoders, a method to learn the probability distribution of large complex datasets.  ...  A probability distribution allows practitioners to uncover hidden structure in the data and build models to solve supervised learning problems using limited data.  ...  The operation of sampling in discrete latent states involves mapping the continuous parameters of the discrete distribution to discrete samples, any function that maps from continuous space to a discrete  ... 
arXiv:2206.09891v1 fatcat:f7zjh3t7srbadf7wld6p2n2n7u

Efficient Combinatorial Optimization under Uncertainty. 1. Algorithmic Development

Ki-Joo Kim, Urmila M. Diwekar
2002 Industrial & Engineering Chemistry Research  
The Hammersley sequence sampling technique is used for updating discrete combinations, reducing the Markov chain length, determining the number of samples automatically, and embedding better confidence  ...  intervals of the samples.  ...  A common method is to propagate N_samp samples generated from the random values of and optimize the following approximated problem: Similarly, the optimal solution and optimal value of this approximation  ... 
doi:10.1021/ie0101689 fatcat:4s5lmj53mvgpjadmclfmx6h6s4

Brenier approach for optimal transportation between a quasi-discrete measure and a discrete measure [article]

Ying Lu, Liming Chen, Alexandre Saidi, Xianfeng Gu
2018 arXiv   pre-print
Recently, Cuturi proposed the Sinkhorn distance, which makes use of an approximate Optimal Transport cost between two distributions as a distance to describe distribution discrepancy.  ...  In this paper, we introduce a new Brenier approach for calculating a more accurate Wasserstein distance between two discrete distributions; this approach successfully avoids the two limitations shown above  ...  Apart from the small hypercubes around source samples, probability is uniformly distributed in Ω, and the total mass in the rest of the volume is p_0.  ... 
arXiv:1801.05574v1 fatcat:5qb4pnxnfvdbbbcz5qzm7hp7gu

Approximated maximum likelihood estimation of parameters of discrete stable family

Lenka Slámová, Lev B. Klebanov
2015 Kybernetika (Praha)  
First we show the asymptotic behaviour of the AML estimator on simulated samples from the SDS distribution and prove that the optimal choice of z significantly improves the results of the estimation.  ...  where x_1, ..., x_n is the observed sample. The characteristic function can be obtained from the probability generating function as f_θ(t) = P(e^{it}).  ... 
doi:10.14736/kyb-2014-6-1065 fatcat:l57iu4mq2zg6zpcdkf355ryc2a

Discrete flow posteriors for variational inference in discrete dynamical systems [article]

Laurence Aitchison, Vincent Adam, Srinivas C. Turaga
2018 arXiv   pre-print
To optimize the variational bound, we considered two ways to evaluate probabilities: inserting the relaxed samples directly into the pmf for the discrete distribution, or converting to continuous logistic  ...  Each training step for a variational autoencoder (VAE) requires us to sample from the approximate posterior, so we usually choose simple (e.g. factorised) approximate posteriors in which sampling is an  ...  Instead of sampling z directly from a Bernoulli, we can obtain samples of z from the same distribution by first sampling from a Logistic, and thresholding that sample  ... 
arXiv:1805.10958v1 fatcat:2mgiyebewvgyjav73j7fyvjw4y
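The Logistic-thresholding trick mentioned in the last fragment can be sketched directly; the helper below illustrates the standard identity, not the authors' code:

```python
import math
import random

def bernoulli_via_logistic(p, rng):
    """Draw z ~ Bernoulli(p) by thresholding a Logistic(0, 1) sample:
    since P(logit(p) + L > 0) = sigmoid(logit(p)) = p, the hard
    threshold reproduces the Bernoulli exactly. The relaxation in the
    abstract replaces the threshold with a sigmoid so that gradients
    can flow through the (now continuous) sample."""
    u = rng.random()
    logistic = math.log(u) - math.log(1.0 - u)  # inverse-CDF Logistic sample
    logit_p = math.log(p) - math.log(1.0 - p)
    return 1 if logit_p + logistic > 0 else 0

rng = random.Random(0)
draws = [bernoulli_via_logistic(0.3, rng) for _ in range(20000)]
# the empirical mean of draws is close to 0.3
```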

Multilevel Sequential Importance Sampling for Rare Event Estimation [article]

Fabian Wagner, Jonas Latz, Iason Papaioannou, Elisabeth Ullmann
2020 arXiv   pre-print
Instead of the popular adaptive conditional sampling method, we propose a new algorithm that uses independent proposals from an adaptively constructed von Mises-Fisher-Nakagami distribution.  ...  Since numerical evaluations of PDEs are computationally expensive, estimating such probabilities of failure by Monte Carlo sampling is intractable.  ...  In contrast, SIS achieves an approximation of p_opt by approximating the optimal IS distribution in a sequential manner while starting from a known prior density p_0.  ... 
arXiv:1909.07680v2 fatcat:bauvqewxijcfphee4o52z6gpve
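The idea of approximating the optimal IS distribution can be illustrated in its simplest single-level form, estimating a Gaussian tail probability with a shifted proposal (hypothetical sketch; the paper's method is sequential and multilevel):

```python
import math
import random

def tail_probability_is(threshold, n, rng):
    """Estimate p = P(Z > threshold) for Z ~ N(0, 1) by importance
    sampling from the shifted proposal N(threshold, 1). The weight is
    the density ratio phi(x) / phi(x - t) = exp(-t*x + t^2/2).
    A single-level sketch: the paper instead builds a *sequence* of
    intermediate distributions approaching the optimal IS density."""
    t = threshold
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)  # proposal centred at the rare-event boundary
        if x > t:
            total += math.exp(-t * x + 0.5 * t * t)
    return total / n

p_hat = tail_probability_is(4.0, 20000, random.Random(3))
# true value: 1 - Phi(4) is about 3.17e-5; crude Monte Carlo with the
# same budget would typically see no hits at all
```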

An empirical analysis of scenario generation methods for stochastic optimization

Nils Löhndorf
2016 European Journal of Operational Research  
The empirical analysis identifies Voronoi cell sampling as the method that provides the lowest errors, with particularly good results for heavy-tailed distributions.  ...  This work presents an empirical analysis of popular scenario generation methods for stochastic optimization, including quasi-Monte Carlo, moment matching, and methods based on probability metrics, as well  ...  The most widely used technique to solve real-world stochastic optimization problems is sample average approximation, where the probability distribution is approximated by a set of discrete scenarios (  ... 
doi:10.1016/j.ejor.2016.05.021 fatcat:cvlbpb6yfrfndk2zqjszuco7q4
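The sample average approximation technique named in the last fragment can be illustrated with a toy sketch (hypothetical function and problem, not from the paper):

```python
import random

def sample_average_approximation(objective, sample_xi, n_samples, candidates, rng):
    """Replace the expectation E_xi[f(x, xi)] by an average over drawn
    scenarios and minimise over a finite candidate set. A minimal
    sketch of SAA with hypothetical argument names; real solvers pass
    the approximated problem to an LP/MIP solver instead of enumerating."""
    scenarios = [sample_xi(rng) for _ in range(n_samples)]

    def approx_objective(x):
        return sum(objective(x, xi) for xi in scenarios) / len(scenarios)

    return min(candidates, key=approx_objective)

# toy problem: pick an integer stock level x to track random demand xi
best = sample_average_approximation(
    objective=lambda x, xi: (x - xi) ** 2,
    sample_xi=lambda r: r.gauss(5.0, 1.0),
    n_samples=2000,
    candidates=range(11),
    rng=random.Random(1),
)
# the minimiser lands at the integer closest to the mean demand, 5
```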

Monte Carlo POMDPs

Sebastian Thrun
1999 Neural Information Processing Systems  
Our approach uses importance sampling for representing beliefs, and Monte Carlo approximation for belief propagation.  ...  Our approach represents all belief distributions using samples drawn from these distributions.  ...  Reinforcement learning in belief space is applied to learn optimal policies, using a sample-based version of nearest neighbor for generalization. Backups are performed using Monte Carlo sampling.  ... 
dblp:conf/nips/Thrun99 fatcat:yxijlmjzvrczfkigykxxdwmubq
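The sample-based belief representation described above can be sketched as one importance-sampling belief update (illustrative helper, not Thrun's implementation):

```python
import math
import random

def belief_update(particles, transition, likelihood, observation, rng):
    """One Monte Carlo belief update: propagate particles through the
    transition model, weight each by the observation likelihood, and
    resample in proportion to the weights (importance sampling).
    A minimal particle-filter sketch, not the full MC-POMDP learner."""
    propagated = [transition(s, rng) for s in particles]
    weights = [likelihood(observation, s) for s in propagated]
    return rng.choices(propagated, weights=weights, k=len(particles))

rng = random.Random(2)
prior = [rng.uniform(-10.0, 10.0) for _ in range(5000)]
posterior = belief_update(
    prior,
    transition=lambda s, r: s + r.gauss(0.0, 0.1),
    likelihood=lambda o, s: math.exp(-0.5 * (o - s) ** 2),
    observation=3.0,
    rng=rng,
)
# the resampled belief concentrates around the observation at 3.0
```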

Reliable Categorical Variational Inference with Mixture of Discrete Normalizing Flows [article]

Tomasz Kuśmierczyk, Arto Klami
2021 arXiv   pre-print
Variational approximations are increasingly based on gradient-based optimization of expectations estimated by sampling.  ...  Continuous relaxations, such as the Gumbel-Softmax for the categorical distribution, enable gradient-based optimization, but do not define a valid probability mass for discrete observations.  ...  MDNF can be trained by gradient optimization, provides discrete samples following the true distribution, and, in contrast to the GS relaxation, which fallaciously employs a continuous probability density, enables  ... 
arXiv:2006.15568v2 fatcat:delvtnvefrflno5tyt7bzpd2ei
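The Gumbel-Softmax relaxation criticised in this abstract can be written in a few lines, which also makes the criticism visible: the output is a dense point on the simplex, never a discrete sample (sketch with illustrative names, not the authors' code):

```python
import math
import random

def gumbel_softmax(logits, temperature, rng):
    """Continuous relaxation of a categorical sample: perturb each logit
    with Gumbel(0, 1) noise and apply a temperature-scaled softmax.
    As temperature -> 0 the output approaches one-hot, but for any
    temperature > 0 it lies strictly inside the simplex rather than
    being a valid discrete sample."""
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    scaled = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

relaxed = gumbel_softmax([1.0, 2.0, 0.5], temperature=0.5, rng=random.Random(0))
# a strictly positive vector summing to 1, not a one-hot sample
```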

Multilevel weighted least squares polynomial approximation

Abdul-Lateef Haji-Ali, Fabio Nobile, Sören Wolfers, Raúl Tempone
2019 Mathematical Modelling and Numerical Analysis  
Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions.  ...  It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic  ...  By the upper bound, we may use samples from the -dimensional arcsine distribution instead of the optimal distribution.  ... 
doi:10.1051/m2an/2019045 fatcat:edwl4tjo2bfzdlu2bdl7xb6p3a
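The last fragment mentions sampling from the arcsine distribution as a computable alternative to the optimal one; in one dimension this is a one-line inverse-CDF sampler (illustrative sketch, not the authors' algorithm):

```python
import math
import random

def sample_arcsine(n, rng):
    """Draw n samples from the arcsine (Chebyshev) density
    1 / (pi * sqrt(1 - x^2)) on [-1, 1] by inverse-CDF sampling:
    X = cos(pi * U) with U uniform on (0, 1). Products of such
    one-dimensional densities serve as a computationally convenient
    stand-in for the optimal sampling distribution."""
    return [math.cos(math.pi * rng.random()) for _ in range(n)]

xs = sample_arcsine(10000, random.Random(4))
# mass piles up near the endpoints -1 and +1, unlike a uniform sample
```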

A discretization procedure for rare events in Bayesian networks

Kilian Zwirglmaier, Daniel Straub
2016 Reliability Engineering & System Safety  
approximation error of the discretization is associated with the change from the prior to the posterior distribution of the basic random variables.  ...  present the optimal discretization for the FORM approximation!  ... 
doi:10.1016/j.ress.2016.04.008 fatcat:t6whxjobijdyhmfvcjq2nh7kdm

Embed and Project: Discrete Sampling with Universal Hashing

Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman
2013 Neural Information Processing Systems  
We consider the problem of sampling from a probability distribution defined over a high-dimensional discrete set, specified for instance by a graphical model.  ...  Our scheme can leverage fast combinatorial optimization tools as a blackbox and, unlike MCMC methods, samples produced are guaranteed to be within an (arbitrarily small) constant factor of the true probability  ...  samples are produced from an approximately correct distribution.  ... 
dblp:conf/nips/ErmonGSS13 fatcat:3sfucsgamzfrjick63y4cyeemy

Maneuvering Target Tracking in Dense Clutter Based on Particle Filtering

Xiaojun YANG, Keyi XING, Xingle FENG
2011 Chinese Journal of Aeronautics  
Within the framework of a hybrid state estimation, each particle samples a discrete mode from its posterior distribution and the continuous state variables are approximated by a multivariate Gaussian mixture  ...  The uncertainty of measurement origin is solved by the Monte Carlo probabilistic data association method, where the distribution of interest is approximated by particle filtering and the UKF.  ...  Instead of sampling from the transition prior probability distribution p(r_t | r_{t-1}^{(i)}), we sample from the optimal importance probability distribution, which is also the true posterior distribution of  ... 
doi:10.1016/s1000-9361(11)60021-6 fatcat:fxwc2f6wz5ag7ptdi5nnq2gtwu
Showing results 1 — 15 out of 376,049 results