
Consistency Analysis of an Empirical Minimum Error Entropy Algorithm [article]

Jun Fan and Ting Hu and Qiang Wu and Ding-Xuan Zhou
2014 arXiv   pre-print
In this paper we study the consistency of an empirical minimum error entropy (MEE) algorithm in a regression setting. We introduce two types of consistency.  ...  The error entropy consistency, which requires the error entropy of the learned function to approximate the minimum error entropy, is shown to be always true if the bandwidth parameter tends to 0 at an  ...  Note that the error entropy consistency ensures the learnability of the minimum error entropy, as is expected from the motivation of empirical MEE algorithms.  ... 
arXiv:1412.5272v1 fatcat:ymvswvbsxvf4dl7h4v5wbfcvva
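To make the MEE idea concrete, here is a minimal sketch (not the paper's exact algorithm) of the Parzen-window plug-in estimate of the error entropy that empirical MEE minimizes, with a Gaussian kernel and bandwidth h; the function name and the toy regression data are illustrative.

import numpy as np

def empirical_error_entropy(errors, h):
    # Parzen-window (Gaussian kernel) plug-in estimate of the Shannon
    # entropy of the error distribution: H_hat = -(1/m) sum_i log p_hat(e_i).
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]                      # e_i - e_j
    kernel = np.exp(-diffs**2 / (2 * h**2)) / (np.sqrt(2 * np.pi) * h)
    density = kernel.mean(axis=1)                        # p_hat(e_i)
    return -np.log(density).mean()

# Toy regression: the residuals of the true slope have minimal entropy.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * x + 0.1 * rng.standard_normal(200)
for slope in (1.5, 2.0, 2.5):
    print(slope, empirical_error_entropy(y - slope * x, h=0.1))

With the bandwidth fixed, the true slope yields the lowest estimated error entropy; the paper's error entropy consistency concerns the regime where the bandwidth tends to 0 with the sample size.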

Cloud Transform Algorithm Based Model for Hydrological Variable Frequency Analysis

Xia Bai, Juliang Jin, Shaowei Ning, Chengguo Wu, Yuliang Zhou, Libing Zhang, Yi Cui
2021 Remote Sensing  
basically consistent with the result obtained through the traditional empirical frequency formula.  ...  the empirical frequency formula and Pearson type III function-based curve fitting method, the normal cloud transform algorithm-based model for hydrological variable frequency analysis was proposed through  ...  methods was consistent, and the cloud transform algorithm was also suitable for the determination of the empirical frequency of hydrological variables.  ... 
doi:10.3390/rs13183586 fatcat:idla5ubkqfg3ljwgs3gkbkqsvu
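For reference, one common form of the "traditional empirical frequency formula" in hydrological frequency analysis is the Weibull plotting position P_m = m / (n + 1); whether this is the exact variant the paper uses is an assumption, and the flow values below are made up.

import numpy as np

def empirical_frequency(samples):
    # Weibull plotting position P_m = m / (n + 1): the empirical
    # exceedance frequency assigned to the m-th largest observation.
    x = np.sort(np.asarray(samples, dtype=float))[::-1]  # descending
    n = len(x)
    return x, np.arange(1, n + 1) / (n + 1)

annual_peaks = [812, 640, 955, 701, 1120, 590, 870]  # hypothetical flows, m^3/s
for q, p in zip(*empirical_frequency(annual_peaks)):
    print(f"{q:7.0f}   P = {p:.3f}")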

Empirical Estimation of Information Measures: A Literature Guide

Sergio Verdú
2019 Entropy  
We give a brief survey of the literature on the empirical estimation of entropy, differential entropy, relative entropy, mutual information and related information measures.  ...  While those quantities are of central importance in information theory, universal algorithms for their estimation are increasingly important in data science, machine learning, biology, neuroscience, economics  ...  Conflicts of Interest: The author declares no conflict of interest.  ... 
doi:10.3390/e21080720 pmid:33267434 fatcat:f3ifrqgomfe5xa4nl5vduaqmr4
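The simplest estimator such surveys cover is the plug-in (empirical) entropy estimate; a minimal sketch, with the caveat much of the literature centers on, namely its downward bias when the alphabet is large relative to the sample size.

from collections import Counter
from math import log

def plugin_entropy(samples):
    # Plug-in (empirical / maximum likelihood) entropy estimate in nats.
    # Simple, but biased downward on large alphabets -- the bias that
    # the refined estimators surveyed in the paper aim to correct.
    counts = Counter(samples)
    n = sum(counts.values())
    return -sum((c / n) * log(c / n) for c in counts.values())

print(plugin_entropy("abracadabra"))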

Optimal Estimation of Wavelet Decomposition Level for a Matching Pursuit Algorithm

Dmitry Kaplun, Alexander Voznesenskiy, Sergei Romanov, Erivelton Nepomuceno, Denis Butusov
2019 Entropy  
We provide an example of entropy-based estimation for optimal decomposition level in spectral analysis of seismic signals.  ...  We explicitly show that the optimal decomposition level, defined as a level with minimum entropy, in DWT and PWD provides the minimum approximation error and the smallest execution time when applied in  ...  We discuss a new way to optimize the vocabulary by automatically determining an optimal level of wavelet decomposition based on the entropy analysis.  ... 
doi:10.3390/e21090843 fatcat:6pcwt55ryvcudf5m5vrl4fqy4m
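A minimal sketch of the minimum-entropy level selection described above, assuming PyWavelets and one common "wavelet entropy" definition (Shannon entropy of normalized coefficient energies); the exact entropy functional and signals in the paper may differ.

import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(signal, wavelet="db4", level=1):
    # Shannon entropy of the normalized coefficient energies of a
    # level-`level` DWT -- one common "wavelet entropy" definition.
    coeffs = np.concatenate(pywt.wavedec(signal, wavelet, level=level))
    p = coeffs**2 / np.sum(coeffs**2)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def minimum_entropy_level(signal, wavelet="db4"):
    # Scan all admissible levels and keep the one with minimal entropy.
    levels = range(1, pywt.dwt_max_level(len(signal), wavelet) + 1)
    return min(levels, key=lambda L: wavelet_entropy(signal, wavelet, L))

t = np.linspace(0.0, 1.0, 1024)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 12 * t) + 0.2 * rng.standard_normal(1024)
print("minimum-entropy level:", minimum_entropy_level(sig))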

Learning Theory Approach to Minimum Error Entropy Criterion [article]

Ting Hu, Jun Fan, Qiang Wu, Ding-Xuan Zhou
2013 arXiv   pre-print
We consider the minimum error entropy (MEE) criterion and an empirical risk minimization learning algorithm in a regression setting.  ...  A learning theory approach is presented for this MEE algorithm and explicit error bounds are provided in terms of the approximation ability and capacity of the involved hypothesis space when the MEE scaling  ...  Minimum error entropy (MEE) is a principle of information theoretical learning and provides a family of supervised learning algorithms.  ... 
arXiv:1208.0848v2 fatcat:5wpyyztxizhevp2nwdu4fi7ex4
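For concreteness, one standard formulation of the empirical MEE risk from information-theoretic learning, assuming a Parzen window with kernel K and bandwidth h playing the role of the scaling parameter in the abstract (the paper's setup may differ in details):

\hat{H}(f) \;=\; -\,\frac{1}{m}\sum_{i=1}^{m}
  \log\!\Bigg( \frac{1}{mh}\sum_{j=1}^{m}
  K\!\left(\frac{e_i - e_j}{h}\right) \Bigg),
\qquad e_i = y_i - f(x_i),

minimized over f in the hypothesis space. This is the quantity the code sketch after the first entry above evaluates.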

The MEE Principle in Data Classification: A Perceptron-Based Analysis

Luís M. Silva, J. Marques de Sá, Luís A. Alexandre
2010 Neural Computation  
The analysis of this so-called minimization of error entropy (MEE) principle is carried out in a single perceptron with continuous activation functions, yielding continuous error distributions.  ...  Our study also clarifies the role of the kernel density estimator of the error density in achieving the minimum probability of error in practice.  ...  The Minimum of the KL Divergence. An important result concerning the Shannon entropy of the error minimum was presented by .  ... 
doi:10.1162/neco_a_00013 pmid:20569178 fatcat:5y6ihdzp6zcqtpybjbqtxpahqi
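Since the snippet stresses the kernel density estimator of the error density, here is a minimal sketch of that ingredient using SciPy's Gaussian KDE; the bimodal toy errors are hypothetical, standing in for a perceptron that classifies most points well and a minority badly.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
errors = np.concatenate([rng.normal(-1.0, 0.2, 150), rng.normal(1.0, 0.2, 50)])

kde = gaussian_kde(errors)                 # Gaussian-kernel estimate of p(e)
grid = np.linspace(-2.0, 2.0, 401)
p = kde(grid)
step = grid[1] - grid[0]
entropy = -np.sum(p * np.log(np.clip(p, 1e-300, None))) * step  # Riemann sum
print("estimated error entropy (nats):", round(float(entropy), 3))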

A high-throughput hardware accelerator for network entropy estimation using sketches

Javier E. Soto, Paulo Ubisse, Yaime Fernandez, Cecilia Hernandez, Miguel Figueroa
2021 IEEE Access  
Tested on real network traces of up to 120 million packets and more than 5 million flows, the accelerator estimates the empirical entropy with less than 1.5% mean relative error and 21 µs latency, and  ...  supports a minimum throughput of 204 gigabits per second.  ...  They compute a bound for the estimation error and use it to construct confidence intervals of empirical entropy.  ... 
doi:10.1109/access.2021.3088500 fatcat:txcsvtqe5rhabpnpi7ipjkpf5e
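The quantity the accelerator approximates is the empirical entropy of the flow-size distribution. An exact (non-sketched) reference computation, using the identity H = log N - (1/N) * sum_f c_f log c_f over per-flow packet counts c_f; the flow identifiers below are hypothetical.

from collections import Counter
from math import log

def flow_entropy(packets):
    # Exact empirical entropy of a packet trace in nats. Sketch-based
    # accelerators approximate this without per-flow counters.
    counts = Counter(packets)            # flow id -> packet count
    n = sum(counts.values())
    return log(n) - sum(c * log(c) for c in counts.values()) / n

trace = ["10.0.0.1>10.0.0.9", "10.0.0.2>10.0.0.9",
         "10.0.0.1>10.0.0.9", "10.0.0.3>10.0.0.9"]
print(flow_entropy(trace))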

Page 5346 of Mathematical Reviews Vol. , Issue 89I [page]

1989 Mathematical Reviews  
A comparison of stochastic gradient and minimum entropy deconvolution algorithms. (French and German summaries) Signal Process. 15 (1988), no. 2, 203-211.  ...  the sense that it yields an error exponent which is arbitrarily close to that of the optimal discriminant function.”  ... 

The Minimum Information Principle for Discriminative Learning [article]

Amir Globerson, Naftali Tishby
2012 arXiv   pre-print
We show how the principle of minimum mutual information generalizes that of maximum entropy, and provides a comprehensive framework for building discriminative classifiers.  ...  It is well known that they can be interpreted as maximum entropy models under empirical expectation constraints.  ...  This work is partially supported by a grant from the Israeli Academy of Science (ISF). We wish to thank the University of Pennsylvania for its hospitality during the writing of this paper.  ... 
arXiv:1207.4110v1 fatcat:znefx44di5d53hcjrkpcrafxg4
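The maximum-entropy construction the snippet refers to, written out as the standard result (not specific to this paper): maximizing entropy subject to empirical expectation constraints on features phi_j yields an exponential-family model,

\max_{p} \; H(p)
\quad \text{s.t.} \quad
\mathbb{E}_{p}[\phi_j(x)] = \hat{\mathbb{E}}[\phi_j(x)], \;\; j = 1,\dots,k,
\qquad\Longrightarrow\qquad
p_{\lambda}(x) \;=\; \frac{1}{Z(\lambda)} \exp\!\Big(\sum_{j=1}^{k} \lambda_j \phi_j(x)\Big),

and the minimum mutual information principle replaces the entropy objective while keeping the same constraint structure.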

Rigorous learning curve bounds from statistical mechanics

David Haussler, Michael Kearns, H. Sebastian Seung, Naftali Tishby
1997 Machine Learning  
The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior of learning  ...  The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes.  ...  Recall that in the realizable case, we focused on bounding the error of any consistent algorithm. In the unrealizable case, we analyze an empirical error minimization algorithm.  ... 
doi:10.1007/bf00114010 fatcat:oies5gcckrezdgc2s3sxx5eos4
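For reference, the empirical error minimization algorithm analyzed in the unrealizable case is the standard one (notation assumed here, not taken from the paper):

\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \; \hat{\varepsilon}(f),
\qquad
\hat{\varepsilon}(f) = \frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\{ f(x_i) \neq y_i \},

over a finite-cardinality class F, which is the setting the theory is limited to so far.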

Beyond Maximum Likelihood: from Theory to Practice [article]

Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman
2014 arXiv   pre-print
which is an implementation of the MLE, to achieve the same accuracy.  ...  The key step in improving the Chow--Liu algorithm is to replace the empirical mutual information with the estimator for mutual information proposed by the authors.  ...  Minimum classification errors for each dataset are emphasized in bold.  ... 
arXiv:1409.7458v1 fatcat:yq6l6hdqnneavjdgwt3oh2fxcm
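A minimal sketch of the Chow-Liu pipeline the snippet refers to, with the plain plug-in mutual information estimator in the slot the paper proposes to replace; variable names and the toy binary data are illustrative, and SciPy's minimum spanning tree is applied to negated weights to obtain a maximum-MI tree.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def empirical_mi(x, y):
    # Plug-in mutual information (nats) of two discrete columns -- the
    # estimator the improved Chow--Liu algorithm swaps out; the tree
    # construction below is unchanged either way.
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x.tolist(), y.tolist()):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum((c / n) * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def chow_liu_edges(data):
    # Maximum-weight spanning tree on pairwise MI (SciPy provides a
    # *minimum* spanning tree, hence the negated weights).
    d = data.shape[1]
    w = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            w[i, j] = empirical_mi(data[:, i], data[:, j])
    tree = minimum_spanning_tree(-w)
    return sorted((int(i), int(j)) for i, j in zip(*tree.nonzero()))

rng = np.random.default_rng(3)
flip = lambda v, p: v ^ (rng.random(len(v)) < p)
a = rng.integers(0, 2, 500)
b = flip(a, 0.1)                 # Markov chain a -> b -> c
c = flip(b, 0.1)
print(chow_liu_edges(np.column_stack([a, b, c])))   # expect [(0, 1), (1, 2)]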

Bootstrapping the empirical bounds on the variability of sample entropy in 24-hour ECG recordings for 1 hour segments

Sebastian Zurek, Waldemar Grabowski, Marcin Kosmider, Szymon Jurga, Przemyslaw Guzik, Jaroslaw Piskorski
2018 Journal of Applied Mathematics and Computational Mechanics  
We investigate the variability of one of the most often used complexity measures in the analysis of the time series of RR intervals, i.e. Sample Entropy.  ...  The analysis is carried out for a dense matrix of possible r thresholds in 79 24h recordings, for segments consisting of 5000 consecutive beats, randomly selected from the whole recording.  ...  Our aim is to show how to construct bootstrapped standard errors with the use of NCM for a series over an extended amount of time.  ... 
doi:10.17512/jamcm.2018.2.09 fatcat:73fa7lncxrafxo5f2vpxnwclc4
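A minimal reference implementation of Sample Entropy as commonly defined, SampEn(m, r) = -ln(A/B) with Chebyshev-distance template matching and self-matches excluded; the synthetic RR series is illustrative, not the paper's data.

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    # B counts template pairs of length m within Chebyshev distance r,
    # A the same for length m + 1, both over the first N - m templates
    # and excluding self-matches; SampEn = -ln(A / B).
    x = np.asarray(x, dtype=float)
    n = len(x)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(n - m)])
        c = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(4)
rr = rng.normal(800.0, 50.0, 1000)          # synthetic RR intervals, ms
print(sample_entropy(rr, m=2, r=0.2 * rr.std()))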

Learning Theory and Approximation

Kurt Jetter, Steve Smale, Ding-Xuan Zhou
2012 Oberwolfach Reports  
This workshop, the second one of this type at the MFO, has concentrated on the following recent topics: Learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis  ...  and algorithmic aspects, including kernel based methods for regression and classification; application of multiscale aspects and of refinement algorithms to learning. Mathematics Subject Classification (2000): 68Q32, 41A35, 41A63, 62Jxx  ...  Wu gave a learning theory perspective on the empirical minimum error entropy (MEE) principle developed in the fields of signal processing and data mining, and he provided a rigorous consistency analysis  ... 
doi:10.4171/owr/2012/31 fatcat:6obnt34cizfvrmpsdlr4b4vnva

Learning Theory and Approximation

Kurt Jetter, Steve Smale, Ding-Xuan Zhou
2008 Oberwolfach Reports  
This workshop, the second one of this type at the MFO, has concentrated on the following recent topics: Learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis  ...  and algorithmic aspects, including kernel based methods for regression and classification; application of multiscale aspects and of refinement algorithms to learning. Mathematics Subject Classification (2000): 68Q32, 41A35, 41A63, 62Jxx  ...  Wu gave a learning theory perspective on the empirical minimum error entropy (MEE) principle developed in the fields of signal processing and data mining, and he provided a rigorous consistency analysis  ... 
doi:10.4171/owr/2008/30 fatcat:wxkujx44g5aebor4ojsv4dbjbi

Page 4687 of Mathematical Reviews Vol. , Issue 84k [page]

1984 Mathematical Reviews  
They derive a sufficient condition for the difference of the mean square error matrices of the minimum conditional mean square error estimator and the minimum average risk linear estimator to be positive definite.  ...  The results of such a study are submitted to statistical analysis to determine the two important variance components in the problem: replication error and laboratory bias.  ... 