Bayesian instance selection for the nearest neighbor rule

Sylvain Ferrandiz, Marc Boullé
Machine Learning, 2010
Nearest neighbor rules are commonly used in pattern recognition and statistics. The performance of these methods relies on three crucial choices: a distance metric, a set of prototypes, and a classification scheme. In this paper, we focus on the second, challenging issue: instance selection. We apply a maximum a posteriori criterion to the evaluation of sets of instances and propose a new optimization algorithm. This gives rise to Eva, a new instance selection method. We benchmark this method on real datasets and perform a multi-criteria analysis, evaluating the compression rate, the predictive accuracy, the reliability, and the computational time. We also carry out experiments on synthetic datasets in order to discriminate the respective contributions of the criterion and the algorithm, and to illustrate the advantages of Eva over state-of-the-art algorithms. The study shows that Eva outputs smaller and more reliable sets of instances, in competitive time, while preserving the predictive accuracy of the related classifier.

1 Classification by the nearest neighbor

Supervised algorithms that learn classifiers take as input a finite sample {x_n, y_n} of N instances {x_n} and their respective labels {y_n}. The nearest neighbor rule (Fix and Hodges 1951; Cover and Hart 1967) classifies any previously unseen instance according to a vote among its nearest neighbor(s) in a set of prototypes derived from the sample. Building such a rule relies on the choice of a set of prototypes, the definition of a distance metric, and the choice of a classification scheme.
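The 1-NN rule over a prototype set can be sketched as follows. This is a minimal illustrative example, not the paper's method: the toy prototypes, the Euclidean metric, and the function name are assumptions, and the classification scheme is a plain nearest-prototype vote with k = 1.

```python
# Minimal sketch of the nearest neighbor rule applied to a set of prototypes.
# The prototype set and query points below are illustrative toy data.
from math import dist  # Euclidean distance (Python 3.8+)

def nearest_neighbor_classify(x, prototypes):
    """Classify x by the label of its nearest prototype (1-NN rule).

    prototypes: list of (point, label) pairs, where point is a tuple of floats.
    """
    _, label = min(prototypes, key=lambda p: dist(x, p[0]))
    return label

# Toy prototype set: two classes in the plane.
prototypes = [((0.0, 0.0), "A"), ((1.0, 0.0), "A"),
              ((5.0, 5.0), "B"), ((6.0, 5.0), "B")]

print(nearest_neighbor_classify((0.5, 0.2), prototypes))  # -> A
print(nearest_neighbor_classify((5.5, 4.8), prototypes))  # -> B
```

Instance selection methods such as Eva aim to shrink the `prototypes` list while preserving the accuracy of this classifier.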
doi:10.1007/s10994-010-5170-2