
The Feature Importance Ranking Measure [article]

Alexander Zien, Nicole Kraemer, Soeren Sonnenburg, Gunnar Raetsch
2009 arXiv   pre-print
Here, we introduce the Feature Importance Ranking Measure (FIRM), which, by retrospective analysis of arbitrary learning machines, allows both excellent predictive performance and superior interpretation to be achieved  ...  In contrast to standard raw feature weighting, FIRM takes the underlying correlation structure of the features into account.  ...  Acknowledgements: This work was supported in part by the FP7-ICT Programme of the European Community under the PASCAL2 Network of Excellence (ICT-216886), by the Learning and Inference Platform of the Max  ... 
arXiv:0906.4258v1 fatcat:4zomf2msqrc5hpzaphzgkdagl4
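
FIRM, as described in the snippet above, scores a feature by how strongly the model's expected output varies with that feature's value. Below is a minimal Python sketch of that idea, assuming a fitted model exposing a scikit-learn style predict method; the quantile binning and the names firm_importance and n_bins are illustrative choices, not the authors' estimator.

    import numpy as np

    def firm_importance(model, X, feature_idx, n_bins=10):
        """Approximate a FIRM-style score for one feature.

        Empirically estimates q_j(t) = E[s(x) | x_j = t] by quantile-binning
        the feature and averaging the model's predictions within each bin,
        then returns the standard deviation of those conditional means.
        """
        scores = model.predict(X)                  # s(x) for every sample
        values = X[:, feature_idx]
        edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
        bin_ids = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
        cond_means = np.array([scores[bin_ids == b].mean()
                               for b in range(n_bins) if np.any(bin_ids == b)])
        return cond_means.std()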

The Feature Importance Ranking Measure [chapter]

Alexander Zien, Nicole Krämer, Sören Sonnenburg, Gunnar Rätsch
2009 Lecture Notes in Computer Science  
Here, we introduce the Feature Importance Ranking Measure (FIRM), which, by retrospective analysis of arbitrary learning machines, allows both excellent predictive performance and superior interpretation to be achieved  ...  In contrast to standard raw feature weighting, FIRM takes the underlying correlation structure of the features into account.  ...  Acknowledgements: This work was supported in part by the FP7-ICT Programme of the European Community under the PASCAL2 Network of Excellence (ICT-216886), by the Learning and Inference Platform of the Max  ... 
doi:10.1007/978-3-642-04174-7_45 fatcat:kvtdygfl45fd5lpvmwjwsw4l54

Machine learning classifiers provide insight into the relationship between microbial communities and bacterial vaginosis

Daniel Beck, James A. Foster
2015 BioData Mining  
This difference appears to be the result of using different feature importance measures. It is not clear whether machine learning classifiers are capturing patterns different from simple correlations.  ...  We use subsets of the microbial community features in order to determine which features are important to the classification models.  ...  Funding for this project was provided by the NIH INBRE award P20GM016454 and by the NSF STC award DBI0939454. Computational support provided by NIH COBRE award P20GM16448.  ... 
doi:10.1186/s13040-015-0055-3 pmid:26294933 pmcid:PMC4542107 fatcat:w34bq4uokvgsnlcc4xxhlqx3hm

The impact of feature importance methods on the interpretation of defect classifiers

Gopi Krishnan Rajbahadur, Shaowei Wang, Gustavo Ansaldi, Yasutaka Kamei, Ahmed E. Hassan
2021 IEEE Transactions on Software Engineering  
We find that: 1) The feature importance ranks computed by CA and CS methods do not always strongly agree with each other. 2) The feature importance ranks computed by the studied CA methods exhibit a strong  ...  However, different feature importance methods are likely to compute different feature importance ranks even for the same dataset and classifier.  ...  Evaluation Metrics: We measure the difference between the different feature importance rank lists by how much they agree with each other.  ... 
doi:10.1109/tse.2021.3056941 fatcat:tu5rac7t3baehnltgseagsmbnq
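
The entry above is about how strongly the feature importance rank lists produced by different methods agree. As a small illustration (not the paper's exact protocol), one common agreement statistic is Kendall's tau; the helper name rank_agreement and the toy feature scores below are hypothetical.

    from scipy.stats import kendalltau

    def rank_agreement(importances_a, importances_b):
        """Kendall's tau between two feature-importance score dictionaries."""
        features = sorted(importances_a)            # shared feature order
        a = [importances_a[f] for f in features]
        b = [importances_b[f] for f in features]
        tau, p_value = kendalltau(a, b)
        return tau, p_value

    # Toy example: two methods that broadly agree on three defect features.
    print(rank_agreement({"loc": 0.9, "churn": 0.5, "age": 0.1},
                         {"loc": 0.8, "churn": 0.6, "age": 0.2}))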

Feature selection for ranking

Xiubo Geng, Tie-Yan Liu, Tao Qin, Hang Li
2007 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '07  
Specifically, for each feature we use its value to rank the training instances, and define the ranking accuracy in terms of a performance measure or a loss function as the importance of the feature.  ...  While algorithms for learning ranking models have been intensively studied, this is not the case for feature selection, despite its importance.  ...  NDCG, MAP) to calculate the importance score of each feature. GAS-L In GAS-L we use the empirical loss of the ranking model to measure the importance of each feature.  ... 
doi:10.1145/1277741.1277811 dblp:conf/sigir/GengLQL07 fatcat:sbet6naxnvhsfm3cdnmsjpyqpy
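
The Geng et al. snippet above scores each feature by how well the training instances are ranked when sorted by that feature alone, using a measure such as NDCG or MAP. Below is a rough Python sketch of that idea with a simple NDCG; the function names and the cutoff k are illustrative, not the paper's GAS implementation.

    import numpy as np

    def ndcg(relevances, k=10):
        """NDCG@k for graded relevance labels given in ranked order."""
        rel = np.asarray(relevances, dtype=float)[:k]
        dcg = np.sum((2.0 ** rel - 1) / np.log2(np.arange(2, rel.size + 2)))
        ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
        idcg = np.sum((2.0 ** ideal - 1) / np.log2(np.arange(2, ideal.size + 2)))
        return dcg / idcg if idcg > 0 else 0.0

    def feature_importance_by_ranking(X, relevance, k=10):
        """Score each column of X by the NDCG obtained when sorting on it alone."""
        relevance = np.asarray(relevance)
        importance = []
        for j in range(X.shape[1]):
            order = np.argsort(-X[:, j])            # rank instances by this feature
            importance.append(ndcg(relevance[order], k))
        return np.array(importance)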

A Rank Aggregation Algorithm for Ensemble of Multiple Feature Selection Techniques in Credit Risk Evaluation

Shashi Dahiya, S.S. Handa, N.P. Singh
2016 International Journal of Advanced Research in Artificial Intelligence (IJARAI)  
Feature selection is one way of improving the accuracy of a classifier. It provides the classifier with important and relevant features for model development.  ...  This algorithm uses the rank order along with the rank score of the features in the ranked list of each feature selection method for rank aggregation.  ...  But the rank score alone cannot capture the importance of a feature in the ranked list. The order of the feature in the ranked list is also crucial for considering the importance of a feature.  ... 
doi:10.14569/ijarai.2016.050901 fatcat:lgykg57nfbe2bh7djvibflx7iq
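
The entry above aggregates the ranked feature lists produced by several feature selection methods using both rank order and rank score. The sketch below shows only a simple Borda-style average of rank positions as an illustration; the score-weighting step described in the paper is omitted, and the credit-risk feature names are made up.

    import numpy as np

    def aggregate_ranks(rank_lists):
        """Combine several ranked feature lists (most important first)
        into one aggregate ranking by averaging rank positions."""
        features = set().union(*rank_lists)
        avg_rank = {f: np.mean([lst.index(f) if f in lst else len(lst)
                                for lst in rank_lists])
                    for f in features}
        return sorted(features, key=lambda f: avg_rank[f])

    # Three hypothetical rankers over four credit-risk features.
    print(aggregate_ranks([["income", "age", "debt", "history"],
                           ["income", "debt", "age", "history"],
                           ["debt", "income", "history", "age"]]))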

Study of Feature Importance for Quantum Machine Learning Models [article]

Aaron Baughman, Kavitha Yogaraj, Raja Hebbar, Sudeep Ghosh, Rukhsan Ul Haq, Yoshika Chhabra
2022 arXiv   pre-print
Notably, the feature importance magnitudes from the quantum models had a much higher variation when contrasted to classical models.  ...  This work presents the first study of its kind in which feature importance for QML models has been explored and contrasted against their classical machine learning (CML) equivalents.  ...  Arjun Kashyap, Daniel Fry, Nicolas Robles, Noelle Ibrahim, Bruce D'Amora, Heather Higgins, Noah Syken, Elizabeth O'Brien, John Kent, Tyler Sidell, Stephen Hammer, Micah Forster, Eduardo Morales, and the  ... 
arXiv:2202.11204v4 fatcat:hhjenxzy5fbbpfxuiabkam5oba

Fuzzified MCDM Consistent Ranking Feature Selection with Hybrid Algorithm for Credit Risk Assessment

Y. Beulah Jeba Jaya, J. Jebamalar Tamilselvi
2015 Research Journal of Applied Sciences Engineering and Technology  
results in enabling Consistent Ranking Feature Selection (CRFS) and a significant improvement in classification performance measures.  ...  In contrast, the Multiple Criteria Decision Making (MCDM) with Fuzzified Feature Selection methodology brings consistency to feature selection ranking with optimal features and improves the classification  ...  Ranking the features shows the importance of each individual feature (Ramaswami and Bhaskaran, 2009).  ... 
doi:10.19026/rjaset.11.2246 fatcat:4xqfooudhzc7njwxjjjt7ikwtq

A new approach for interpreting Random Forest models and its application to the biology of ageing

Fabio Fabris, Aoife Doherty, Daniel Palmer, João Pedro de Magalhães, Alex A Freitas, Jonathan Wren
2018 Bioinformatics  
Table 3 shows the most important GO terms based on the ranking by the Intervention in Prediction measure (Section 2.2), a state-of-the-art measure of feature importance.  ... 
doi:10.1093/bioinformatics/bty087 pmid:29462247 pmcid:PMC6041990 fatcat:psb7taokxjckvewd2mwumpxn2q

MIIB: A Metric to Identify Top Influential Bloggers in a Community

Hikmat Ullah Khan, Ali Daud, Tahir Afzal Malik, Peter Csermely
2015 PLoS ONE  
Evaluation has been performed using the standard ranking performance measures of Osim, Kendall Rank-Order Correlation and Spearman Rank Correlation.  ...  Existing approaches consider some basic features, but fail to consider other features such as the importance of the blog on which the post has been created.  ...  This shows that the baseline gives too much importance to the inlinks feature while MIIB gives importance to all the other features.  ... 
doi:10.1371/journal.pone.0138359 pmid:26414063 pmcid:PMC4587377 fatcat:654dllxpobefnpcux6yaac4g4i

A New Noisy Random Forest Based Method for Feature Selection

Yassine Akhiat, Youness Manzali, Mohamed Chahhou, Ahmed Zinedine
2021 Cybernetics and Information Technologies  
variable importance measures.  ...  Whereas, by eliminating and preventing those noisy features first, the low-ranked features may become more important.  ...  The result displayed in Fig. 4 shows the ranking of features according to their importance. The high-ranked features are more important than the noisy features.  ... 
doi:10.2478/cait-2021-0016 fatcat:4vtfk4p2kbebver4vzugsgez6u
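
For context on the random forest variable importance measures mentioned above, the snippet below ranks features with scikit-learn's impurity-based importances on synthetic data; it illustrates only the ranking step, not the paper's noise-handling procedure.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=10,
                               n_informative=4, random_state=0)
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    ranking = np.argsort(forest.feature_importances_)[::-1]
    for rank, j in enumerate(ranking, start=1):
        print(f"rank {rank}: feature {j} "
              f"(importance {forest.feature_importances_[j]:.3f})")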

Aspect Ranking Technique for Efficient Opinion Mining using Sentiment Analysis: Review

Sonali D. Borase, Prasad P. Mahale
2019 International Journal of Scientific Research in Computer Science Engineering and Information Technology  
Opinions expressed in blogs and social networks are playing an important role in influencing everything from the products people buy to the presidential candidate they support.  ...  This paper reviews works that have been designed for opinion mining using classification and ranking techniques.  ...  The second is Chi-squared feature ranking, which evaluates the merit of each feature individually with the chi-squared statistical measure.  ... 
doi:10.32628/cseit183812 fatcat:nee3ulw6ubacbivh7k6wkjl6ay
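
The review above mentions Chi-squared feature ranking, which scores each feature individually against the class labels. A small illustration with scikit-learn follows; the toy documents and labels are invented for the example.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import chi2

    docs = ["great battery and screen", "terrible battery life",
            "great camera", "screen is terrible"]
    labels = [1, 0, 1, 0]                           # 1 = positive, 0 = negative

    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    scores, _ = chi2(X, labels)

    terms = vec.get_feature_names_out()
    for j in np.argsort(scores)[::-1]:              # highest chi-squared first
        print(f"{terms[j]}: {scores[j]:.2f}")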

Feature Selection for Clustering [chapter]

Manoranjan Dash, Huan Liu
2000 Lecture Notes in Computer Science  
Our approach: first, features are ranked according to their importance for clustering, and then a subset of important features is selected. For large data we use a scalable method based on sampling.  ...  Different features affect clusters differently: some are important for clusters while others may hinder the clustering task.  ...  The results for ranking are shown in Table 1. Our method is able to rank the important features in the top ranks for all data. For CorrAL our method ranks feature F6 higher.  ... 
doi:10.1007/3-540-45571-x_13 fatcat:gc7w7uh4v5chrig57gzw2e7qcm
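
The snippet above ranks features by their importance for clustering before selecting a subset, but the ranking criterion is not spelled out here. As an illustrative stand-in (not necessarily the paper's measure), the sketch below ranks each feature by how much the silhouette score of a k-means clustering drops when that feature is removed.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)

    def clustering_quality(data, k=3):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
        return silhouette_score(data, labels)

    # Importance of feature j = drop in silhouette when column j is removed.
    baseline = clustering_quality(X)
    importance = [baseline - clustering_quality(np.delete(X, j, axis=1))
                  for j in range(X.shape[1])]
    print(np.argsort(importance)[::-1], np.round(importance, 3))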

Feature Selection for Clustering [chapter]

Manoranjan Dash, Poon Wei Koot
2016 Encyclopedia of Database Systems  
Our approach: first, features are ranked according to their importance for clustering, and then a subset of important features is selected. For large data we use a scalable method based on sampling.  ...  Different features affect clusters differently: some are important for clusters while others may hinder the clustering task.  ...  The results for ranking are shown in Table 1. Our method is able to rank the important features in the top ranks for all data. For CorrAL our method ranks feature F6 higher.  ... 
doi:10.1007/978-1-4899-7993-3_613-2 fatcat:ntjx73g3avbd7cmmbaxz2f65c4

Decision Tree-Based Feature Ranking Using Manhattan Hierarchical Cluster Criterion

Yasmin Mohd Yacob, Harsa A. Mat Sakim, Nor Ashidi Mat Isa
2012 Zenodo  
This paper proposed a threshold measure using the Manhattan Hierarchical Cluster distance to be utilized in feature ranking in order to choose relevant features as part of the feature selection process.  ...  In decision tree-based feature selection, some studies used the decision tree as a feature ranker with a direct threshold measure, while others retained the decision tree but utilized a pruning condition that  ...  The computed mutual information values generated the ranking of importance. Battiti selected relevant features using MI ranking with a specified threshold value [2].  ... 
doi:10.5281/zenodo.1056236 fatcat:wwqjxxroizdqzbdx7drzaovfkm
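
The entry above discusses feature ranking with a threshold, citing Battiti's mutual-information-based selection. A brief scikit-learn sketch of MI ranking with a cutoff follows; the dataset choice and the 0.05 threshold are arbitrary illustrations, not values from the paper.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import mutual_info_classif

    data = load_breast_cancer()
    mi = mutual_info_classif(data.data, data.target, random_state=0)

    ranking = np.argsort(mi)[::-1]                  # most informative first
    selected = [data.feature_names[j] for j in ranking if mi[j] > 0.05]
    print(f"{len(selected)} features above threshold, top 5: {selected[:5]}")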
Showing results 1 — 15 out of 1,127,966 results