1,197 Hits in 2.1 sec

Bayesian Entropy Estimation for Countable Discrete Distributions [article]

Evan Archer and Il Memming Park and Jonathan Pillow
2014 arXiv   pre-print
The Pitman-Yor process, a generalization of the Dirichlet process, provides a tractable prior distribution over the space of countably infinite discrete distributions, and has found major applications in Bayesian ... We consider the problem of estimating Shannon's entropy H from discrete data, in cases where the number of possible symbols is unknown or even countably infinite. ... Shlens for retinal data, and Y. W. Teh and A. Cerquetti for helpful comments on the manuscript. ...
arXiv:1302.0328v3 fatcat:dmquqh6p2fh7hbki3wbk5x24z4
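
As background for the estimator family this entry describes: on a fixed finite alphabet, the posterior mean of Shannon entropy under a symmetric Dirichlet prior has a classical closed form (Wolpert and Wolf). A minimal Python sketch of that finite-K building block (not the paper's Pitman-Yor estimator; the function name and default alpha are illustrative):

```python
# Sketch: posterior-mean Shannon entropy under a symmetric Dirichlet(alpha)
# prior on a fixed alphabet of K symbols. This is the finite-K building
# block only, not the paper's Pitman-Yor extension to countable supports.
import numpy as np
from scipy.special import digamma

def dirichlet_mean_entropy(counts, alpha=1.0):
    """E[H | counts] in nats, via the closed form
    psi(N + K*a + 1) - sum_i (n_i + a)/(N + K*a) * psi(n_i + a + 1)."""
    n = np.asarray(counts, dtype=float)
    K, N = n.size, n.sum()
    A = N + K * alpha                     # total posterior concentration
    return digamma(A + 1) - np.sum((n + alpha) / A * digamma(n + alpha + 1))

print(dirichlet_mean_entropy([12, 7, 3, 0, 0]))   # toy counts, 5 bins
```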

Bayesian entropy estimators for spike trains

Il Park, Evan Archer, Jonathan Pillow
2013 BMC Neuroscience  
Our approach follows that of Nemenman et al. [2], who formulated a Bayesian entropy estimator using a mixture-of-Dirichlet prior over the space of discrete distributions on K bins. ... For the first estimator, we design a novel mixture prior over countable distributions using the Pitman-Yor (PY) process [3]. ...
doi:10.1186/1471-2202-14-s1-p316 pmcid:PMC3704886 fatcat:svy3sb4bhzbubjt5rhkzxer26q
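
The mixture-of-Dirichlet (NSB) construction mentioned here can be sketched numerically: average the fixed-alpha posterior-mean entropy over the concentration alpha, weighting by the evidence and by the NSB prior dE[H|alpha]/dalpha. A quadrature sketch under an assumed alpha grid (production implementations integrate over a transformed variable with proper limits):

```python
# Sketch of an NSB-style estimate; the alpha grid below is an assumption.
import numpy as np
from scipy.special import digamma, polygamma, gammaln
from scipy.integrate import trapezoid

def nsb_entropy(counts, alphas=None):
    """Average the Dirichlet posterior-mean entropy over alpha, weighted by
    the evidence P(n|alpha) times the NSB prior dE[H|alpha]/dalpha."""
    n = np.asarray(counts, dtype=float)   # include zero counts: K = len(n)
    K, N = n.size, n.sum()
    if alphas is None:
        alphas = np.logspace(-4, 2, 400)
    a = alphas[:, None]                   # broadcast over bins
    A = N + K * a                         # posterior totals, per alpha
    # log evidence of the counts under a symmetric Dirichlet(alpha)
    log_ev = (gammaln(K * alphas) - gammaln(N + K * alphas)
              + np.sum(gammaln(n + a), axis=1) - K * gammaln(alphas))
    # NSB weight: flat prior on the prior-mean entropy E[H|alpha]
    dH = K * polygamma(1, K * alphas + 1) - polygamma(1, alphas + 1)
    h_post = (digamma(N + K * alphas + 1)
              - np.sum((n + a) / A * digamma(n + a + 1), axis=1))
    w = np.exp(log_ev - log_ev.max()) * dH   # stabilized, unnormalized
    return trapezoid(w * h_post, alphas) / trapezoid(w, alphas)
```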

Page 443 of Mathematical Reviews, Issue 2004a [page]

2004 Mathematical Reviews  
Existence, uniqueness and characterisation theorems are presented for the maxentropic distribution in the case of countable support, under some assumptions. ... for models for discrete data defined on the cylinder and lattice. ...

Consistency of discrete Bayesian learning

Jan Poland
2008 Theoretical Computer Science  
We prove corresponding results for stochastic model selection, for both discrete and continuous observation spaces.  ...  We introduce the entropy potential of a hypothesis class as its worst-case entropy, with regard to the true distribution.  ...  Acknowledgments Thanks to Marcus Hutter, Thomas Zeugmann, and the anonymous reviewers for their valuable comments. This work was supported by JSPS 21st century COE program C01.  ... 
doi:10.1016/j.tcs.2008.06.038 fatcat:fesrb5p7wbhojndnbvippza7yq
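
The simplest instance of the setting is a finite hypothesis class whose posterior is updated one observation at a time; the toy below illustrates posterior concentration (it is not the paper's stochastic model selection scheme):

```python
# Toy illustration of discrete Bayesian learning: the posterior over a
# finite hypothesis class concentrates on the best hypothesis as data arrive.
import numpy as np

def bayes_update(prior, likelihoods):
    """One Bayes step: prior weights times per-hypothesis likelihoods."""
    post = prior * likelihoods
    return post / post.sum()

thetas = np.array([0.2, 0.5, 0.8])        # three Bernoulli hypotheses
w = np.ones(3) / 3                        # uniform prior
for x in [1, 1, 0, 1]:                    # observed coin flips
    w = bayes_update(w, np.where(x, thetas, 1 - thetas))
print(w)                                  # posterior now favours theta = 0.8
```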

Page 8575 of Mathematical Reviews, Issue 2003k [page]

2003 Mathematical Reviews  
(English and French summaries) [On entropy estimation for distributions with countable support] C. R. Math. Acad. Sci. Paris 335 (2002), no. 9, 763-766. ... Let X_1, X_2, ... be a sequence of discrete i.i.d. random variables with finite entropy. ...

From ε-entropy to KL-entropy: Analysis of minimum information complexity density estimation

Tong Zhang
2006 Annals of Statistics  
We consider an extension of ε-entropy to a KL-divergence based complexity measure for randomized density estimation methods. ... In particular, we show that this technique can lead to improvements of some classical results concerning the convergence of minimum description length and Bayesian posterior distributions. ... The concept of ε-entropy can be regarded as a notion to measure the complexity of an explicit discretization, usually for a deterministic estimator on a discrete net. ...
doi:10.1214/009053606000000704 fatcat:tsnllilqwjcuhgjtgk4cwcbewi

Page 3389 of Mathematical Reviews Vol. , Issue 89F [page]

1989 Mathematical Reviews  
Gull, Bayesian inductive inference and maximum entropy (pp. 53-74); G. Larry Bretthorst, Excerpts from Bayesian spectrum analysis and parameter estimation (pp. 75-145); E. T. ... Fougere, Maximum entropy calculations on a discrete probability space (pp. 205-234); R. Blankenbecler and M. H. Partovi, Quantum density matrix and entropic uncertainty (pp. 235-244); A. J. M. ...

Redundancy of Exchangeable Estimators

Narayana Santhanam, Anand Sarwate, Jae Woo
2014 Entropy  
Exchangeable random partition processes are the basis for Bayesian approaches to statistical inference in large alphabet settings.  ...  This provides an understanding of these estimators in the setting of unknown discrete alphabets from the perspective of universal compression.  ...  Acknowledgments The authors thank the American Institute of Mathematics and NSF for sponsoring a workshop on probability estimation, as well as A. Orlitsky and K.  ... 
doi:10.3390/e16105339 fatcat:4y2xb2cgu5a3dgg2l47qug355a
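
The exchangeable random partition processes in question admit a simple sequential description, for instance the two-parameter Pitman-Yor predictive rule, sketched below with illustrative parameter names theta (concentration) and d (discount):

```python
# Sketch: Pitman-Yor (two-parameter CRP) predictive rule. Requires
# 0 <= d < 1 and theta > -d; parameter names here are illustrative.
def py_predictive(counts, theta=1.0, d=0.5):
    """Given multiplicities of the symbols seen so far, return
    (per-symbol predictive probs, probability of a brand-new symbol)."""
    n, K = sum(counts.values()), len(counts)
    probs = {s: (c - d) / (theta + n) for s, c in counts.items()}
    return probs, (theta + d * K) / (theta + n)

probs, p_new = py_predictive({"a": 3, "b": 1})
assert abs(sum(probs.values()) + p_new - 1.0) < 1e-12   # rule is normalized
```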

Bayesian Inference in Auditing With Partial Prior Information Using Maximum Entropy Priors

María Martel-Escobar, Francisco-José Vázquez-Polo, Agustín Hernández-Bastida
2018 Entropy  
entropy prior to incorporate limited auditor information. ... for θ. ... The authors are grateful to two reviewers for their valuable comments that improved the manuscript. Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/e20120919 pmid:33266643 fatcat:bboeew23jjbwtlyo3hlsusgkxe
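
To illustrate the maximum entropy prior construction under one assumed form of partial information (a known prior mean for θ on [0, 1]; the paper's auditing constraints may differ), the maxent solution is exponential-family, f(θ) ∝ exp(λθ), with λ solved from the mean constraint:

```python
# Sketch: maximum-entropy prior on [0, 1] with a fixed mean. The maxent
# density is f(t) = exp(lam * t) / Z; solve for lam numerically.
# Names (maxent_multiplier, target_mean) are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def maxent_multiplier(target_mean):
    def mean_of(lam):
        Z, _ = quad(lambda t: np.exp(lam * t), 0.0, 1.0)
        m, _ = quad(lambda t: t * np.exp(lam * t), 0.0, 1.0)
        return m / Z
    return brentq(lambda l: mean_of(l) - target_mean, -50.0, 50.0)

print(maxent_multiplier(0.3))   # lam < 0: density tilted toward small theta
```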

Predictive Information in Gaussian Processes with Application to Music Analysis [chapter]

Samer Abdallah, Mark Plumbley
2013 Lecture Notes in Computer Science  
to non-stationary processes, using an online Bayesian spectral estimation method to compute the Bayesian surprise. ... Information measures for stationary random processes: For an infinite stationary discrete-time random process (X_t)_{t∈Z}, the predictive information rate (PIR), as defined in [4], is a global measure of ... AR estimation and Bayesian surprise: Our method for spectral estimation is based on Kitagawa and Gersch's [12] 'spectral smoothness prior'; they consider autoregressive Gaussian processes and introduce ...
doi:10.1007/978-3-642-40020-9_72 fatcat:sntq77l76zfu5icwrapxwmeqgu
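
Bayesian surprise is the KL divergence from prior to posterior incurred by an observation. A scalar-Gaussian sketch of one update-plus-surprise step (the paper applies this to online AR spectral estimation, which this toy does not reproduce):

```python
# Sketch: Bayesian surprise for a conjugate Gaussian mean estimate.
import numpy as np

def gaussian_kl(mu1, var1, mu0, var0):
    """KL( N(mu1, var1) || N(mu0, var0) ) in nats."""
    return 0.5 * (np.log(var0 / var1) + (var1 + (mu1 - mu0) ** 2) / var0 - 1.0)

def surprise_step(mu, var, x, noise_var):
    """Update N(mu, var) by one observation x with known noise variance,
    returning the posterior and the surprise KL(posterior || prior)."""
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    post_mu = post_var * (mu / var + x / noise_var)
    return post_mu, post_var, gaussian_kl(post_mu, post_var, mu, var)

print(surprise_step(0.0, 1.0, x=2.5, noise_var=0.5))  # a surprising datum
```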

Redundancy of exchangeable estimators

N. P. Santhanam, M. M. Madiman, A. D. Sarwate
2010 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton)  
Exchangeable random partition processes are the basis for Bayesian approaches to statistical inference in large alphabet settings.  ...  This provides an understanding of these estimators in the setting of unknown discrete alphabets from the perspective of universal compression.  ...  Acknowledgments The authors thank the American Institute of Mathematics and NSF for sponsoring a workshop on probability estimation, as well as A. Orlitsky and K.  ... 
doi:10.1109/allerton.2010.5707041 fatcat:eed65ujaa5cjjfxc6fi7czhtkm

Bayesian nonparametric estimation of Tsallis diversity indices under Gnedin-Pitman priors [article]

Annalisa Cerquetti
2014 arXiv   pre-print
Here we present a fully general Bayesian nonparametric estimation of the whole class of Tsallis diversity indices under Gnedin-Pitman priors, a large family of random discrete distributions recently deeply ... Bayesian nonparametric estimation of Shannon entropy and Simpson's diversity under uniform and symmetric Dirichlet priors has already been advocated as an alternative to maximum likelihood estimation based ... The author wishes to thank Leopoldo Catania for his kind assistance in the development of the R code used in the paper, Mauro Bernardi for providing the R function to obtain highest posterior ...
arXiv:1404.3441v2 fatcat:h7olw45g75arpdalu44hk3g2e4
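
For orientation, the Tsallis index of order q of a discrete distribution is H_q = (1 − Σ_i p_i^q)/(q − 1), with Shannon entropy recovered as q → 1. A plug-in (maximum likelihood) baseline in Python; the paper's Bayesian nonparametric estimator under Gnedin-Pitman priors is not reproduced here:

```python
# Sketch: plug-in (ML) Tsallis entropy of order q from counts. This is
# the frequentist baseline, not the paper's Bayesian estimator.
import numpy as np

def tsallis_plugin(counts, q):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    if np.isclose(q, 1.0):                # q -> 1 recovers Shannon entropy
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

print(tsallis_plugin([12, 7, 3, 1], q=2.0))   # q = 2: Gini-Simpson index
```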

Coincidences and Estimation of Entropies of Random Variables with Large Cardinalities

Ilya Nemenman
2011 Entropy  
We examine the recently introduced NSB estimator of entropies of severely undersampled discrete variables and devise a procedure for calculating the involved integrals.  ...  Thus one can estimate entropies with no a priori assumptions about these cardinalities, and a closed form solution for such estimates is given.  ...  Conclusions We have calculated various asymptotic properties of the NSB estimator for estimation of entropies of discrete random variables.  ... 
doi:10.3390/e13122013 fatcat:77stortn2jdnpd3lr2jp3h4et4

On the Convergence of MDL Density Estimation [chapter]

Tong Zhang
2004 Lecture Notes in Computer Science  
We present a general information exponential inequality that measures the statistical complexity of some deterministic and randomized density estimators. ... Therefore bounds obtained for (1) can also be applied to Bayesian posterior distributions. ... measures, then the estimator leads to the Bayesian posterior distribution with λ = 1 (see [11]). ...
doi:10.1007/978-3-540-27819-1_22 fatcat:pdrh6b64jrdn3h2kfjfhrzs32m

Unsupervised Track Classification Based On Hierarchical Dirichlet Processes

Paolo Braca, Kevin LePage, Jüri Sildam, Peter Willett
2013 Zenodo  
Discretizing the results of MMD mapping using a small dictionary, and estimating the entropy of the resulting sequence, we end up associating each detection with a discrete scalar that has only a limited number ... The entropy at constant bearing can then be estimated as: h_b = − Σ_{i=1}^{N} p_b(i) log(p_b(i)). (2.6) Similarly, the entropy at constant range can then be estimated as: h_r = − Σ_{i=1}^{N} p_r(i) log(p_r(i)). (2.7) Finally, the entropy difference, which ...
doi:10.5281/zenodo.43744 fatcat:nngtbjzwxbcpthaafoyxpc5mnm
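
Equations (2.6) and (2.7) are ordinary Shannon entropies of the discretized detections; a minimal sketch of the entropy-difference feature (the array names symbols_b and symbols_r are hypothetical):

```python
# Sketch: Shannon entropy of discretized detections, as in (2.6)/(2.7),
# and the entropy difference used as a classification feature.
import numpy as np

def discrete_entropy(labels):
    """Entropy in nats of a sequence of discrete symbols."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

# symbols_b, symbols_r: hypothetical dictionary indices at constant
# bearing and constant range, respectively.
# dh = discrete_entropy(symbols_b) - discrete_entropy(symbols_r)
```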
Showing results 1 — 15 out of 1,197 results