
Lower Bounds on the Total Variation Distance Between Mixtures of Two Gaussians [article]

Sami Davies, Arya Mazumdar, Soumyabrata Pal, Cyrus Rashtchian
2022 arXiv   pre-print
This enables us to derive new lower bounds on the total variation distance between pairs of two-component Gaussian mixtures that have a shared covariance matrix.  ...  While the total variation distance appears naturally in the sample complexity of distribution learning, it is analytically difficult to obtain tight lower bounds for mixtures.  ...  In the one-dimensional setting, our next theorem shows a novel lower bound on the total variation distance between any two distinct two-component one-dimensional Gaussian mixtures from F (Theorem 2).  ...
arXiv:2109.01064v2
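
For orientation, the quantity bounded here can be checked numerically in one dimension. A minimal sketch computing TV(f, g) = (1/2) ∫ |f(x) − g(x)| dx by quadrature; the mixtures below are illustrative placeholders, not the paper's family F:

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Total variation between two univariate Gaussian mixtures,
# TV(f, g) = (1/2) * integral of |f(x) - g(x)| dx, via numerical quadrature.

def mixture_pdf(x, weights, means, stds):
    return sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, stds))

def tv_distance(f, g, lo=-20.0, hi=20.0):
    val, _ = quad(lambda x: abs(f(x) - g(x)), lo, hi, limit=200)
    return 0.5 * val

f = lambda x: mixture_pdf(x, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
g = lambda x: mixture_pdf(x, [0.5, 0.5], [-1.5, 1.5], [1.0, 1.0])
print(f"TV(f, g) ~= {tv_distance(f, g):.4f}")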

Guaranteed Deterministic Bounds on the Total Variation Distance between Univariate Mixtures [article]

Frank Nielsen, Ke Sun
2018 arXiv   pre-print
In this work, we consider two methods for bounding the total variation of univariate mixture models: The first method is based on the information monotonicity property of the total variation to design  ...  Since the total variation distance does not admit closed-form expressions for statistical mixtures (like Gaussian mixture models), one often has to rely in practice on costly numerical integrations or  ...  Conclusion and discussion We described novel deterministic lower and upper bounds on the total variation distance between univariate mixtures, and demonstrated their effectiveness for Gaussian, Gamma and  ... 
arXiv:1806.11311v1
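
The information-monotonicity idea in the snippet admits a compact illustration: binning the real line is a coarse-graining, and coarse-graining can only shrink the total variation, so the TV between the binned masses is a guaranteed deterministic lower bound. A minimal sketch with illustrative mixtures, not the paper's benchmarks:

import numpy as np
from scipy.stats import norm

# Deterministic TV lower bound via information monotonicity: the TV between
# binned probability masses lower-bounds the true TV of the mixtures.

def mixture_cdf(x, weights, means, stds):
    return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, stds))

def binned_tv_lower_bound(cdf1, cdf2, edges):
    # bin masses under each mixture, including the two unbounded tail bins
    m1 = np.diff(np.concatenate(([0.0], cdf1(edges), [1.0])))
    m2 = np.diff(np.concatenate(([0.0], cdf2(edges), [1.0])))
    return 0.5 * np.abs(m1 - m2).sum()

F = lambda x: mixture_cdf(x, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
G = lambda x: mixture_cdf(x, [0.5, 0.5], [-1.5, 1.5], [1.0, 1.0])

edges = np.linspace(-10.0, 10.0, 201)   # refining the grid tightens the bound
print("guaranteed lower bound:", binned_tv_lower_bound(F, G, edges))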

Some techniques in density estimation [article]

Hassan Ashtiani, Abbas Mehrabian
2018 arXiv   pre-print
We review some old and new techniques for bounding the sample complexity of estimating densities of continuous distributions, focusing on the class of mixtures of Gaussians and its subclasses.  ...  In particular, we review the main techniques used to prove the new sample complexity bounds for mixtures of Gaussians by Ashtiani, Ben-David, Harvey, Liaw, Mehrabian, and Plan arXiv:1710.05209.  ...  Our next goal is to give a lower bound on the total variation distance between f a and f b .  ... 
arXiv:1801.04003v2

Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes [article]

Hassan Ashtiani and Shai Ben-David and Nick Harvey and Christopher Liaw and Abbas Mehrabian and Yaniv Plan
2020 arXiv   pre-print
We prove that Θ̃(k d^2 / ε^2) samples are necessary and sufficient for learning a mixture of k Gaussians in R^d, up to error ε in total variation distance.  ...  This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that Õ(k d / ε^2) samples suffice, matching a known lower bound.  ...  ACKNOWLEDGMENTS We thank the anonymous referees of the Journal of the ACM for their valuable comments which have substantially improved the presentation, and for proposing the new proof of Lemma 5.6 which  ... 
arXiv:1710.05209v5
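
To make the rate concrete (an illustrative calculation, ignoring constants and the hidden logarithmic factors): learning a mixture of k = 10 Gaussians in R^100 to accuracy ε = 0.1 requires on the order of k d^2 / ε^2 = 10 · 100^2 / 0.1^2 = 10^7 samples, while the axis-aligned bound k d / ε^2 reduces this to about 10^5.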

Differentially Private Assouad, Fano, and Le Cam [article]

Jayadev Acharya, Ziteng Sun, Huanyu Zhang
2020 arXiv   pre-print
We establish the optimal sample complexity of discrete distribution estimation under total variation distance and ℓ_2 distance.  ...  We also provide lower bounds for several other distribution classes, including product distributions and Gaussian mixtures that are tight up to logarithmic factors.  ...  Acknowledgements The authors thank Gautam Kamath and Ananda Theertha Suresh for their thoughtful comments and insights that helped improve the paper.  ... 
arXiv:2004.06830v3
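
For context, the classical non-private form of Le Cam's two-point method that the paper privatizes can be stated as follows: if two distributions P_0, P_1 have parameters separated by Δ = d(θ(P_0), θ(P_1)), then any estimator θ̂ built from n i.i.d. samples satisfies max_{i∈{0,1}} E_{P_i^n}[d(θ̂, θ(P_i))] ≥ (Δ/2) · (1 − d_TV(P_0^n, P_1^n)). A pair that is statistically close (small TV) yet far apart in parameter distance therefore forces a large estimation error.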

On The Chain Rule Optimal Transport Distance [article]

Frank Nielsen, Ke Sun
2020 arXiv   pre-print
We experimentally evaluate our new family of distances by quantifying the upper bounds of several jointly convex distances between statistical mixtures, and by proposing a novel efficient method to learn  ...  between statistical mixtures, and provide an upper bound for jointly convex distances between statistical mixtures.  ...  The authors are grateful to Professor Patrick Forré (University of Amsterdam) for letting us know of an earlier error in the definition of CROT, and to Professor Rüschendorf for sending us his work Rüschendorf  ... 
arXiv:1812.08113v3

Tight bounds for learning a mixture of two gaussians [article]

Moritz Hardt, Eric Price
2015 arXiv   pre-print
We consider the problem of identifying the parameters of an unknown mixture of two arbitrary d-dimensional gaussians from a sequence of independent random samples.  ...  Our results also apply to learning each component of the mixture up to small error in total variation distance, where our algorithm gives strong improvements in sample complexity over previous work.  ...  Lower bound for mixtures of k gaussians We can extend these lower bounds to mixtures of k gaussians.  ... 
arXiv:1404.4997v3

Modeling images as mixtures of reference images

Florent Perronnin, Yan Liu
2009 2009 IEEE Conference on Computer Vision and Pattern Recognition  
We propose two approximate optimization algorithms: the first one based on traditional sampling methods, the second one based on a variational bound approximation of the true objective function.  ...  A state-of-the-art approach to measure the similarity of two images is to model each image by a continuous distribution, generally a Gaussian mixture model (GMM), and to compute a probabilistic similarity  ...  As the direct optimization is difficult, we propose two possible approximations: the first one based on sampling, the second one based on a variational bound of the objective function.  ... 
doi:10.1109/cvpr.2009.5206781 dblp:conf/cvpr/PerronninL09
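
The "variational bound" in question is, at its core, Jensen's inequality applied to the log of a mixture. A minimal sketch of that generic bound, with illustrative variable names rather than the authors' notation or objective:

import numpy as np
from scipy.special import logsumexp

# Jensen/variational lower bound on the log-likelihood of a mixture:
# log sum_i w_i p_i(x) >= sum_i q_i * (log w_i + log p_i(x) - log q_i)
# for any distribution q over the mixture components.

def mixture_loglik(log_w, log_p):
    # exact value: log sum_i exp(log w_i + log p_i(x))
    return logsumexp(log_w + log_p)

def variational_bound(log_w, log_p, q):
    # lower bound for any q on the simplex; tight when q is the posterior
    return np.sum(q * (log_w + log_p - np.log(q)))

log_w = np.log(np.array([0.3, 0.7]))   # mixture weights (illustrative)
log_p = np.array([-2.0, -1.0])         # component log-densities at some x

q_post = np.exp(log_w + log_p - mixture_loglik(log_w, log_p))
print(variational_bound(log_w, log_p, q_post))   # matches the exact value
print(mixture_loglik(log_w, log_p))

The bound is tight exactly when q equals the posterior responsibilities, which is what makes EM-style coordinate-ascent optimization of such bounds effective.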

The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures [article]

Joseph Anderson, Mikhail Belkin, Navin Goyal, Luis Rademacher, James Voss
2014 arXiv   pre-print
In contrast, much of the existing work on Gaussian Mixtures relies on low-dimensional projections and thus hits an artificial barrier.  ...  thus establishing exponential information-theoretic lower bounds for underdetermined ICA in low dimension.  ...  In words, the total variation between two measures is the largest difference between the measures on a single event. Clearly, this distance is bounded above by 1.  ...
arXiv:1311.2891v3
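
Spelled out, the definition quoted in the snippet is d_TV(P, Q) = sup_A |P(A) − Q(A)| = (1/2) ∫ |p(x) − q(x)| dx, where the supremum ranges over measurable events A. Both forms make the upper bound of 1 immediate, with equality exactly when P and Q have disjoint supports.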

Robustly Learning Mixtures of k Arbitrary Gaussians [article]

Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala
2021 arXiv   pre-print
This resolves the main open problem in several previous works on algorithmic robust statistics, which addressed the special cases of robustly estimating (a) a single Gaussian, (b) a mixture of TV-distance separated Gaussians, and (c) a uniform mixture of two Gaussians.  ...  claimed a bound on the variance of a more general class of distributions than what is needed in our approach.  ...
arXiv:2012.02119v3

Finite Blocklength Analysis of Gaussian Random Coding in AWGN Channels under Covert Constraints II: Viewpoints of Total Variation Distance [article]

Xinchun Yu, Shuangqin Wei, Yuan Luo
2020 arXiv   pre-print
The results will be very helpful for understanding the behavior of the total variation distance and practical covert communication.  ...  Moreover, the convergence rate of the total variation with different SNR is analyzed as the block length tends to infinity.  ...  The error probabilities of the optimal test are related to the total variation distance (TVD) V_T(P_1, P_0) as [26]: 1 − (α + β) = V_T(P_1, P_0), where the total variation distance between  ...
arXiv:1901.03123v5
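
A concrete reading of this identity (illustrative numbers): since the best possible test achieves α + β = 1 − V_T(P_1, P_0), a covert scheme that keeps V_T(P_1, P_0) ≤ 0.01 forces every detector, no matter how powerful, to incur α + β ≥ 0.99, barely better than the blind strategy that always declares "no transmission" and achieves α + β = 1.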

Document Hashing with Mixture-Prior Generative Models [article]

Wei Dong, Qinliang Su, Dinghan Shen, Changyou Chen
2019 arXiv   pre-print
Hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes.  ...  Specifically, a Gaussian mixture prior is first imposed onto the variational auto-encoder (VAE), followed by a separate step to cast the continuous latent representation of the VAE into binary code.  ...  Taking the supervised objective into account, the total loss is defined as L_total = −L + α·L_dis(z, y), where L is the lower bound arising in the GMSH or BMSH model and α controls the relative weight of  ...
arXiv:1908.11078v1

Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures of Gaussians [article]

Constantinos Daskalakis, Gautam Kamath
2014 arXiv   pre-print
Given Õ(1/ε^2) samples from an unknown mixture, our algorithm outputs a mixture that is ε-close in total variation distance, in time Õ(1/ε^5).  ...  We provide an algorithm for properly learning mixtures of two single-dimensional Gaussians without any separability assumptions.  ...  The following proposition, whose proof is deferred to the appendix, provides a bound on the total variation distance between two GMMs in terms of the distance between the constituent Gaussians.  ... 
arXiv:1312.1054v3
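
One standard bound of this flavor (stated here for two components, with no claim that it matches the paper's proposition exactly): for f = w N_1 + (1 − w) N_2 and g = w' N'_1 + (1 − w') N'_2, inserting the intermediate mixture w N'_1 + (1 − w) N'_2 and applying the triangle inequality gives d_TV(f, g) ≤ |w − w'| + w · d_TV(N_1, N'_1) + (1 − w) · d_TV(N_2, N'_2), so closeness of the weights and of the constituent Gaussians implies closeness of the mixtures.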

Algebraic and Analytic Approaches for Parameter Learning in Mixture Models [article]

Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, Soumyabrata Pal
2020 arXiv   pre-print
We present two different approaches for parameter learning in several mixture models in one dimension.  ...  An example result is that O(N^(1/3)) samples suffice to exactly learn a mixture of k < N Poisson distributions, each with integral rate parameters bounded by N.  ...  Acknowledgements The work was partially supported by NSF grants CCF-1909046, CCF-1934846, CCF-1908849, and CCF-1637536.  ...
arXiv:2001.06776v1
Showing results 1–15 of 43,503