
A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting

Yoav Freund, Robert E. Schapire
1997 Journal of Computer and System Sciences  
The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting.  ...  We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.  ...  Acknowledgments Thanks to Corinna Cortes, Harris Drucker, David Helmbold, Keith Messer, Volodya Vovk and Manfred Warmuth for helpful discussions.  ... 
doi:10.1006/jcss.1997.1504 fatcat:kivo24eueve2jo6o2rnr2hznry
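The Freund–Schapire paper above introduced AdaBoost, which maintains a distribution of weights over training examples and, after each round, up-weights the examples the current weak hypothesis misclassified. A minimal sketch in Python, assuming decision stumps as the weak learner and labels in {-1, +1} (function and variable names are my own, not from the paper):

```python
import numpy as np

def adaboost(X, y, n_rounds=20):
    """Minimal AdaBoost sketch (Freund & Schapire, 1997 style).
    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Weak learner: exhaustive decision stumps."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # distribution over examples
    stumps, alphas = [], []
    for _ in range(n_rounds):
        best = None                      # (weighted error, feature, threshold, polarity)
        for j in range(d):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)        # keep alpha finite
        alpha = 0.5 * np.log((1 - err) / err)        # hypothesis weight
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)               # up-weight mistakes
        w /= w.sum()
        stumps.append((j, t, s))
        alphas.append(alpha)

    def predict(Xq):
        agg = sum(a * s * np.where(Xq[:, j] <= t, 1, -1)
                  for a, (j, t, s) in zip(alphas, stumps))
        return np.sign(agg)
    return predict
```

The clamping of `err` keeps `alpha` finite when a stump fits the weighted sample perfectly; the paper's general decision-theoretic formulation admits any weak learner whose weighted error stays below 1/2.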

Page 4957 of Mathematical Reviews, Issue 99g [page]

1999 Mathematical Reviews  
A decision-theoretic generalization of on-line learning and an application to boosting.  ...  The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting.  ... 

A new classifier based on information theoretic learning with unlabeled data

Kyu-Hwa Jeong, Jian-Wu Xu, Deniz Erdogmus, Jose C. Principe
2005 Neural Networks  
In this paper, we present an information theoretic learning (ITL) approach based on density divergence minimization to obtain an extended training algorithm using unlabeled data during the testing.  ...  The method uses a boosting-like algorithm with an ITL based cost function.  ...  Acknowledgements This work was supported in part by the National Science Foundation under grant ECS-0300340, Graduate Alumni Fellowship from University of Florida and Korea Science and Engineering Foundation  ... 
doi:10.1016/j.neunet.2005.06.018 pmid:16102941 fatcat:vgco272jrzdv5oxwvehd4n5ibi

A Brief Introduction to Boosting

Robert E. Schapire
1999 International Joint Conference on Artificial Intelligence  
Boosting is a general method for improving the accuracy of any given learning algorithm.  ...  This short paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting.  ...  Background Boosting is a general method which attempts to "boost" the accuracy of any given learning algorithm.  ... 
dblp:conf/ijcai/Schapire99 fatcat:g2oxwn7xxbawbpnyh3pynhplq4

Theoretical Views of Boosting and Applications [chapter]

Robert E. Schapire
1999 Lecture Notes in Computer Science  
Boosting is a general method for improving the accuracy of any given learning algorithm.  ...  Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting including analyses of AdaBoost's training error and generalization error, connections between boosting and game  ...  In other work, Freund and Mason [19] showed how to apply boosting to learn a generalization of decision trees called "alternating trees."  ... 
doi:10.1007/3-540-46769-6_2 fatcat:jqgoqtfgovcdtm4lxzj36qf33u

On robustness of on-line boosting - a competitive study

Christian Leistner, Amir Saffari, Peter M. Roth, Horst Bischof
2009 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops  
We evaluate various on-line boosting algorithms in form of a competitive study on standard machine learning problems as well as on common computer vision applications such as tracking and autonomous training  ...  However, even though boosting, in general, is well known to be susceptible to class-label noise, on-line boosting is mostly applied to self-learning applications such as visual object tracking, where label-noise  ...  is another typical application of on-line boosting.  ... 
doi:10.1109/iccvw.2009.5457451 dblp:conf/iccvw/LeistnerSRB09 fatcat:itandu4rvnb6bel23grk7ovrue

Algorithmic Considerations of Boosting Techniques

Osamu Watanabe (渡辺 治)
2002 The Brain & Neural Networks  
E. (1997): A decision-theoretic generalization of on-line learning and an application to boosting, J. Comput. Syst.  ...  G. (1998): An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting and randomization, Machine Learning, Vol. 32, pp. 1-22 14) Freund, Y. (1999): An adaptive  ... 
doi:10.3902/jnns.9.196 fatcat:xeovtbwuuvfczb5jw6fvf3cg3q

Performance of Resampling Methods Based on Decision Trees, Parametric and Nonparametric Bayesian Classifiers for Three Medical Datasets

Małgorzata M. Ćwiklińska-Jurkowska
2013 Studies in Logic, Grammar and Rhetoric  
Diversity, important to the success of boosting and bagging, may be assessed by the concordance of base classifiers with the learning vector.  ...  boosting combined models and confirm some theoretical outcomes suggested by other authors.  ...  Acknowledgments The author is grateful to Prof. Wiktor Dróżdż from the Department of Psychiatry at Nicolaus Copernicus University for the schizophrenic patients data set.  ... 
doi:10.2478/slgr-2013-0045 fatcat:p6rutkot55auvgy5gbke4qbrzm

On-line Random Forests

Amir Saffari, Christian Leistner, Jakob Santner, Martin Godec, Horst Bischof
2009 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops  
In this paper, we propose a novel on-line random forest algorithm. We combine ideas from on-line bagging and extremely randomized forests, and propose an on-line decision-tree growing procedure.  ...  The experiments on common machine learning data sets show that our algorithm converges to the performance of the off-line RF.  ...  Hence, for an on-line version one has to combine on-line bagging [15] and on-line decision trees with random feature-selection.  ... 
doi:10.1109/iccvw.2009.5457447 dblp:conf/iccvw/SaffariLSGB09 fatcat:dmpj2jya4bg4loowyj6qfv6nce
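The on-line random forest entry above builds on Oza and Russell's on-line bagging, in which each base learner processes each incoming example k ~ Poisson(1) times, approximating a bootstrap sample as the stream grows. A sketch of that re-weighting idea (the base learners here are hypothetical majority-class counters, chosen only to keep the example self-contained and runnable; the actual algorithm pairs this with incremental decision trees):

```python
import numpy as np
from collections import Counter

class OnlineBagging:
    """Sketch of Oza & Russell-style on-line bagging.
    Each example is presented to each base model k ~ Poisson(1) times,
    mimicking sampling with replacement over a growing stream."""

    def __init__(self, n_models=10, seed=0):
        self.rng = np.random.default_rng(seed)
        # Stand-in base learners: per-model weighted label counts.
        self.models = [Counter() for _ in range(n_models)]

    def update(self, x, y):
        for m in self.models:
            k = self.rng.poisson(1.0)   # number of bootstrap repetitions
            if k:
                m[y] += k               # stand-in for k incremental updates

    def predict(self, x):
        # Majority vote over the base models' individual predictions.
        votes = Counter(m.most_common(1)[0][0]
                        for m in self.models if m)
        return votes.most_common(1)[0][0]
```

The Poisson(1) trick is what makes the method single-pass: no example is stored, yet in expectation each base model sees the same multiset of examples a bootstrap replicate would.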

Boosting-Based Learning Agents for Experience Classification

Po-chun Chen, Xiaocong Fan, Shizhuo Zhu, John Yen
2006 2006 IEEE/WIC/ACM International Conference on Intelligent Agent Technology  
In this paper, we introduce a hierarchical learning approach, aiming to support hierarchical group decision making where the decision makers at lower levels only have partial view of the whole picture.  ...  The boosting-based learning agents were then used in our experiments to classify experience instances.  ...  Taking a group of distributed decision makers as an example, as a team, all the decision makers share the same goal and decision making context; however, they can propose different decisions due to their  ... 
doi:10.1109/iat.2006.44 dblp:conf/iat/ChenFZY06 fatcat:qmsjual7kveipobr7fixmfkslm

Efficient Boosting-Based Active Learning For Specific Object Detection Problems

Thuy Thi Nguyen, Nguyen Dang Binh, Horst Bischof
2008 Zenodo  
In the core is a combination of a bootstrap procedure and a semi-automatic learning process based on the on-line boosting procedure.  ...  Our system is composed of an active learning mechanism as a wrapper around a sub-algorithm which implements an on-line boosting-based object detector.  ...  On-line boosting for feature selection is based on introducing "selectors" and performing on-line boosting on these selectors.  ... 
doi:10.5281/zenodo.1073015 fatcat:5vu6a2gkxzdybpnvypgexvbqzq

Improving Part based Object Detection by Unsupervised, Online Boosting

Bo Wu, Ram Nevatia
2007 2007 IEEE Conference on Computer Vision and Pattern Recognition  
We propose an unsupervised, incremental learning approach based on online boosting to improve the performance on special applications of a set of general part detectors, which are learned from a small  ...  Our oracle for unsupervised learning, which has high precision, is based on a combination of a set of shape-based part detectors learned by off-line boosting.  ...  Oza and Russell [14] propose an online version of the boosting algorithm to learn an ensemble classifier in an incremental way.  ... 
doi:10.1109/cvpr.2007.383148 dblp:conf/cvpr/WuN07a fatcat:pmk5j5e25veppl3t4dv7nwow6u

Generating highly accurate prediction hypotheses through collaborative ensemble learning

Nino Arsov, Martin Pavlovski, Lasko Basnarkov, Ljupco Kocarev
2017 Scientific Reports  
that outperforms best-case boosting/bagging for a broad range of applications and under a variety of scenarios.  ...  Applied among a crowd of Gentle Boost ensembles, the ability of the two suggested algorithms to generalize is inspected by comparing them against Subbagging and Gentle Boost on various real-world datasets  ...  Acknowledgements This work was supported by the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje, Macedonia.  ... 
doi:10.1038/srep44649 pmid:28304378 pmcid:PMC5356335 fatcat:kymz6d7fofcnnfs3gtjwxjse5u

AdaBoosting neural networks: Application to on-line character recognition [chapter]

Holger Schwenk, Yoshua Bengio
1997 Lecture Notes in Computer Science  
In this paper we use AdaBoost to improve the performance of a strong learning algorithm: a neural network based on-line character recognition system.  ...  "Boosting" is a general method for improving the performance of any weak learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing.  ...  AdaBoost has been applied to rather weak learning algorithms (with low capacity) [3] and to decision trees [1, 2, 5], and not yet, until now, to the best of our knowledge, to artificial neural networks  ... 
doi:10.1007/bfb0020278 fatcat:bxpailw2zvgc7lkyg5l6jlofp4

The Boosting Approach to Machine Learning: An Overview [chapter]

Robert E. Schapire
2003 Nonlinear Estimation and Classification  
Boosting is a general method for improving the accuracy of any given learning algorithm.  ...  Focusing primarily on the AdaBoost algorithm, this chapter overviews some of the recent work on boosting including analyses of AdaBoost's training error and generalization error; boosting's connection  ...  We also have discussed a few of the growing number of applications of Ada-Boost to practical machine learning problems, such as text and speech categorization.  ... 
doi:10.1007/978-0-387-21579-2_9 fatcat:qz5deyxarrgcrpz4skcokx44ky
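This overview and the earlier Schapire surveys in the list both discuss AdaBoost's training error; the standard bound they analyze can be stated as follows (a sketch of the well-known result in my own notation, where the weighted error of round t is written as ε_t = 1/2 − γ_t):

```latex
\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\!\left[H(x_i)\neq y_i\right]
\;\le\; \prod_{t=1}^{T} 2\sqrt{\epsilon_t\,(1-\epsilon_t)}
\;=\; \prod_{t=1}^{T} \sqrt{1-4\gamma_t^{2}}
\;\le\; \exp\!\Bigl(-2\sum_{t=1}^{T}\gamma_t^{2}\Bigr)
```

So any uniform edge γ_t ≥ γ > 0 over random guessing drives the training error down exponentially fast in the number of rounds T, which is the formal sense in which weak learners can be "boosted."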
Showing results 1 — 15 of 73,811