22,146 Hits in 6.0 sec

Rényi entropy yields artificial biases not in the data and incorrect updating due to the finite-size data

Thomas Oikonomou, G. Baris Bagci
2019 Physical review. E  
It is demonstrated that this is so because it does not conform to the system and subset independence axioms of Shore and Johnson.  ...  We show that the Rényi entropy implies artificial biases not warranted by the data and incorrect updating information due to the finite-size of the data despite being additive.  ...  T.O. acknowledges the state-targeted program "Center of Excellence for Fundamental and Applied Physics" (BR05236454) by the Ministry of Education and Science of the Republic of Kazakhstan.  ... 
doi:10.1103/physreve.99.032134 fatcat:dmrhxb4mezb7zocpuktm2frx7e
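The additivity the abstract refers to is easy to check numerically: for independent systems, the Rényi entropy of the product distribution equals the sum of the marginal entropies. A minimal sketch (the function name and example distributions are illustrative, not from the paper):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

p = [0.6, 0.4]
q = [0.2, 0.3, 0.5]
# The joint distribution of two independent systems is the outer product.
joint = [pi * qj for pi in p for qj in q]

alpha = 0.5
# Additivity on independent systems: H_alpha(p x q) = H_alpha(p) + H_alpha(q)
assert abs(renyi_entropy(joint, alpha)
           - (renyi_entropy(p, alpha) + renyi_entropy(q, alpha))) < 1e-12
```

The paper's point is that this additivity alone does not guarantee consistent inference: the maximum-entropy updating it induces can still violate the Shore–Johnson independence axioms.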

A maximum quadratic entropy principle for capacity identification

Ivan Kojadinovic
2005 European Society for Fuzzy Logic and Technology  
In the framework of multicriteria decision making based on Choquet integrals, we present a maximum entropy-like method for determining, if it exists, the least specific capacity compatible with the  ...  Then, among all the feasible (admissible) Choquet integrals, if any, choosing the Choquet integral w.r.t. the maximum entropy capacity amounts to choosing the Choquet integral that will have the highest  ...  The monotonicity of µ means that the weight of a subset of criteria can only increase when new criteria are added to it.  ... 
dblp:conf/eusflat/Kojadinovic05 fatcat:si2xax6n6bfqnns5nglotdg7ya

Experimental design schemes for learning Boolean network models

N. Atias, M. Gershenzon, K. Labazin, R. Sharan
2014 Bioinformatics  
Unique to our maximum difference approach is the ability to account for all (possibly exponential number of) Boolean models displaying high fit to the available data.  ...  Results: We developed novel design approaches that greedily select an experiment to be performed so as to maximize the difference or the entropy in the results it induces with respect to current best-fit  ...  Safra Center for Bioinformatics at Tel Aviv University (to N.A.) and the I-CORE Program of the Planning and Budgeting Committee and The Israel Science Foundation (757/12 to R.S.).  ... 
doi:10.1093/bioinformatics/btu451 pmid:25161232 pmcid:PMC4147904 fatcat:lsbap47wtnhytodoh5bobro7ei

An asymptotic property of model selection criteria

Yuhong Yang, A.R. Barron
1998 IEEE Transactions on Information Theory  
Probability models are estimated by use of penalized log-likelihood criteria related to AIC and MDL.  ...  The asymptotic risk is determined under conditions on the penalty term, and is shown to be minimax optimal for some cases.  ...  ACKNOWLEDGMENT The authors wish to thank the referees for their many valuable suggestions, which led to a significant improvement on presentation of the results.  ... 
doi:10.1109/18.650993 fatcat:kl6gcyrxqnbihbuywlb57i6jsi

Entropy-based active learning for object recognition

Alex Holub, Pietro Perona, Michael C. Burl
2008 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops  
Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor.  ...  Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images.  ...  (Top) Images the user needs to label in passive learning (randomly choosing images for the user to label) in order to achieve 82% of maximum performance.  ... 
doi:10.1109/cvprw.2008.4563068 dblp:conf/cvpr/HolubPB08 fatcat:aahah5ugqbdobhd6v67q4f2cea

Research on the Application of Multimedia Entropy Method in Data Mining of Retail Business

Ting Li, Can Zhang, Ahmed Farouk
2022 Scientific Programming  
This paper proposes a maximum-entropy-based framework for filtering high-quality customers, which expresses customer data as a feature vector for feature selection and feature smoothing.  ...  Secondly, in the face of these massive data, people pay more attention to how to dig out the important information hidden in these data, rather than the data itself.  ...  The maximum entropy principle is the criterion for choosing the statistical characteristics of random variables which are most suitable for objective conditions.  ... 
doi:10.1155/2022/2520087 fatcat:kfmo76bx6vdffiwuumcibrgrym

Unifying Decision Trees Split Criteria Using Tsallis Entropy [article]

Yisen Wang, Chaobing Song, Shu-Tao Xia
2016 arXiv pre-print
All the split criteria seem to be independent; in fact, they can be unified in a Tsallis entropy framework.  ...  In this paper, a Tsallis Entropy Criterion (TEC) algorithm is proposed to unify Shannon entropy, Gain Ratio and Gini index, which generalizes the split criteria of decision trees.  ...  The optimal q for Tsallis entropy is obtained by cross-validation, which is usually not equal to 1 or 2. This implies better performance than the traditional split criteria.  ... 
arXiv:1511.08136v5 fatcat:cojmvzwncrbwdeg6hec3eaeku4
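The unification described in the abstract is concrete: Tsallis entropy S_q(p) = (1 − Σ p_i^q)/(q − 1) recovers Shannon entropy in the limit q → 1 and the Gini index at q = 2. A minimal sketch verifying both special cases (illustrative code, not the authors' implementation):

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1)."""
    if abs(q - 1.0) < 1e-12:
        # q -> 1 limit recovers Shannon entropy (natural log)
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
shannon = -sum(pi * math.log(pi) for pi in p)
gini = 1.0 - sum(pi * pi for pi in p)

# Near q = 1 the formula approaches Shannon entropy; at q = 2 it equals Gini.
assert abs(tsallis_entropy(p, 1.0 + 1e-8) - shannon) < 1e-4
assert abs(tsallis_entropy(p, 2.0) - gini) < 1e-12
```

Cross-validating q, as the abstract suggests, then amounts to searching this one-parameter family for the split criterion best suited to the data.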

From Blind Signal Extraction to Blind Instantaneous Signal Separation: Criteria, Algorithms, and Stability

S.A. Cruces-Alvarez, A. Cichocki, S. Amari
2004 IEEE Transactions on Neural Networks  
Then, our contribution fills the theoretical gap that exists between extraction and separation by presenting tools that extend these criteria to allow the simultaneous blind extraction of subsets with  ...  This paper first presents a general overview and unification of several information theoretic criteria for the extraction of a single independent component.  ...  The maximum likelihood and infomax criteria can relatively easily incorporate this additional information in the BSS case. In this section, we will extend these criteria to the case of BSE.  ... 
doi:10.1109/tnn.2004.828764 pmid:15461079 fatcat:3gkrmacfevbnhha7w3q6v3eyxq

A multi-objective artificial bee colony approach to feature selection using fuzzy mutual information

Emrah Hancer, Bing Xue, Mengjie Zhang, Dervis Karaboga, Bahriye Akay
2015 2015 IEEE Congress on Evolutionary Computation (CEC)  
In this paper, a multi-objective artificial bee colony (MOABC) framework is developed for feature selection in classification, and a new fuzzy mutual information based criterion is proposed to evaluate the relevance of feature subsets.  ...  Comparisons between Different Criteria To compare the three different evaluation criteria, Figs. 1, 2 and 3 show that ABC-IFMI is able to obtain higher classification accuracy than ABC-MI and ABC-FMI.  ... 
doi:10.1109/cec.2015.7257185 dblp:conf/cec/HancerXZKA15 fatcat:6jyc6ow2obhb3gy2efd4lwnina

Near-optimal sensor placements in Gaussian processes

Carlos Guestrin, Andreas Krause, Ajit Paul Singh
2005 Proceedings of the 22nd international conference on Machine learning - ICML '05  
A common strategy is to place sensors at the points of highest entropy (variance) in the GP model. We propose a mutual information criterion, and show that it produces better placements.  ...  When monitoring spatial phenomena, which are often modeled as Gaussian Processes (GPs), choosing sensor locations is a fundamental task.  ...  entropy of all such subsets if and only if the parents Γ_A ⊂ U of A have maximum entropy among all such subsets of U.  ... 
doi:10.1145/1102351.1102385 dblp:conf/icml/GuestrinKS05 fatcat:5bqunmlkjra2rpk65lmgqabmpe
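The "highest entropy" baseline that this paper argues against can be sketched as greedy selection of the grid point with maximal conditional GP variance given the sensors already placed. This is illustrative only (the paper's contribution is the mutual information criterion, which differs); the kernel and function names are assumptions:

```python
import numpy as np

def greedy_entropy_placement(K, k):
    """Greedily pick k sensor indices maximizing GP conditional variance
    (the classic 'highest entropy' heuristic)."""
    n = K.shape[0]
    chosen = []
    for _ in range(k):
        best, best_var = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            if chosen:
                # Conditional variance: K_ii - k_iA K_AA^{-1} k_Ai
                KA = K[np.ix_(chosen, chosen)]
                kiA = K[np.ix_([i], chosen)]
                var = K[i, i] - (kiA @ np.linalg.solve(KA, kiA.T))[0, 0]
            else:
                var = K[i, i]
            if var > best_var:
                best, best_var = i, var
        chosen.append(best)
    return chosen

x = np.arange(5.0)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # RBF kernel on a 1-D grid
picks = greedy_entropy_placement(K, 3)
# The entropy heuristic spreads sensors out, favoring the grid boundary.
```

This tendency to push sensors toward the boundary, where they waste sensed information, is exactly the weakness the mutual information criterion is designed to fix.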

A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation

M. G. B. Blum, M. A. Nunes, D. Prangle, S. A. Sisson
2013 Statistical Science  
The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure.  ...  The methods are split into three nonmutually exclusive classes consisting of best subset selection methods, projection techniques and regularization.  ...  In contrast, for ridge regression, the relative errors corresponding to κ > 10^8 are not larger than the errors obtained for nonextreme condition numbers.  ... 
doi:10.1214/12-sts406 fatcat:5jw7eozqyjdmxk2toiw5jfevgm

A Review of Feature Selection Techniques for Clustering High Dimensional Structured Data

Bhagyashri A. Kelkar, Dr. S.F. Rodd
2016 Bonfring International Journal of Software Engineering and Soft Computing  
Also in some applications, the cluster structure in the dataset is often limited to a subset of features rather than the entire feature set.  ...  Hence feature selection has become an important preprocessing task for effective application of data mining techniques in real-world high dimensional datasets.  ...  Other Feature Selection Techniques Ensemble feature selection [33] is a relatively new technique used to obtain a stable feature subset.  ... 
doi:10.9756/bijsesc.8270 fatcat:wymiuublczhgvfaxkqllw67imu

Negation of Pythagorean Fuzzy Number Based on a New Uncertainty Measure Applied in a Service Supplier Selection System

Haiyi Mao, Rui Cai
2020 Entropy  
In this paper, a new entropy of PFN is proposed based on the revised relative closeness index of the technique for order of preference by similarity to ideal solution (TOPSIS) method.  ...  The newly proposed method is suitable for systematically evaluating the uncertainty of PFN in TOPSIS. Nowadays, there are no uniform criteria for measuring service quality.  ...  obtaining the revised relative closeness index, which may lead to failure in obtaining PFN entropy.  ... 
doi:10.3390/e22020195 pmid:33285970 pmcid:PMC7516624 fatcat:2h2fahymdbhelo7theoor2v23a

Quantization of Continuous Input Variables for Binary Classification [chapter]

Michał Skubacz, Jaakko Hollmén
2000 Lecture Notes in Computer Science  
In this paper, quantization methods based on equal width interval, maximum entropy, maximum mutual information and the novel approach based on maximum mutual information combined with entropy are considered  ...  Often, the discretization is based on the distribution of the input variables only whereas additional information, for example in form of class membership is frequently present and could be used to improve  ...  Mutual Information with Entropy By combining the maximum entropy and the mutual information approaches, one hopes to obtain a solution with the merits of both.  ... 
doi:10.1007/3-540-44491-2_7 fatcat:a4jqqbaf25czlhb45smmvwcnje

Improvement of the Accuracy of Prediction Using Unsupervised Discretization Method: Educational Data Set Case Study

2018 Tehnički Vjesnik  
The research goal was to transform numeric features into maximum independent discrete values with minimum loss of information and reduction of classification error.  ...  Naïve Bayes classifier was trained for each discretized data set and comparative analysis of prediction models was conducted.  ...  Choosing an adequate discretization method implies obtaining a satisfactory compromise between these two objectives.  ... 
doi:10.17559/tv-20170220135853 fatcat:zy3erqw3hve2zjohs7rmeo2t44
Showing results 1 — 15 out of 22,146 results