
Post-Learning Optimization of Tree Ensembles for Efficient Ranking

Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Fabrizio Silvestri, Salvatore Trani
2016 Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval - SIGIR '16  
In this paper we propose a new framework, named CLEaVER, for optimizing machine-learned ranking models based on ensembles of regression trees.  ...  Learning to Rank (LtR) is the machine learning method of choice for producing high quality document ranking functions from a ground-truth of training examples.  ...  We present a framework, named CLEaVER, for the optimization of tree ensemble ranking models after the learning phase has completed.  ... 
doi:10.1145/2911451.2914763 dblp:conf/sigir/LuccheseNOPST16 fatcat:wzt4avnpbbfjbdbu5gnssemryq

Training Efficient Tree-Based Models for Document Ranking [chapter]

Nima Asadi, Jimmy Lin
2013 Lecture Notes in Computer Science  
Gradient-boosted regression trees (GBRTs) have proven to be an effective solution to the learning-to-rank problem.  ...  This work proposes and evaluates techniques for training GBRTs that have efficient runtime characteristics.  ...  Any opinions, findings, or conclusions are the authors' and do not necessarily reflect those of the sponsor.  ... 
doi:10.1007/978-3-642-36973-5_13 fatcat:ffyx22fkzjasrdibey25j6bxee

Auto-sklearn: Efficient and Robust Automated Machine Learning [chapter]

Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Tobias Springenberg, Manuel Blum, Frank Hutter
2019 Automated Machine Learning  
Recent work has started to tackle this automated machine learning (AutoML) problem with the help of efficient Bayesian optimization methods.  ...  The success of machine learning in a broad range of applications has led to an ever-growing demand for machine learning systems that can be used off the shelf by non-experts.  ...  Rather than discarding these models, we propose to store them and to use an efficient post-processing method (which can be run in a second process on-the-fly) to construct an ensemble out of them.  ... 
doi:10.1007/978-3-030-05318-5_6 fatcat:wmhwwjuva5cwdmldqhxukjzpuq

autoBagging: Learning to Rank Bagging Workflows with Metalearning [article]

Fábio Pinto, Vítor Cerqueira, Carlos Soares, João Mendes-Moreira
2017 arXiv   pre-print
One of the techniques behind most of these successful applications is Ensemble Learning (EL), the field of ML that gave birth to methods such as Random Forests or Boosting.  ...  Our approach differs from these systems by making use of the most recent advances in metalearning and a learning-to-rank approach to learn from metadata.  ...  We used the XGBoost learning to rank implementation for gradient boosting of decision trees (Chen and Guestrin, 2016) to learn the metamodel as described in Section 4.  ... 
arXiv:1706.09367v1 fatcat:laizsl6lvnc3neujjw3eyv67um

Early exit optimizations for additive machine learned ranking systems

B. Barla Cambazoglu, Hugo Zaragoza, Olivier Chapelle, Jiang Chen, Ciya Liao, Zhaohui Zheng, Jon Degenhardt
2010 Proceedings of the third ACM international conference on Web search and data mining - WSDM '10  
Some commercial web search engines rely on sophisticated machine learning systems for ranking web documents.  ...  Due to very large collection sizes and tight constraints on query response times, online efficiency of these learning systems forms a bottleneck.  ...  We presented and evaluated four different early exit optimization strategies for improving performance of additive machine learning ensembles, which are used by some search engines for ranking  ... 
doi:10.1145/1718487.1718538 dblp:conf/wsdm/CambazogluZCCLZD10 fatcat:s52wewestvde5j4igfc3x66mwu
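The core idea behind early-exit scoring, as summarized in the entry above, can be sketched in a few lines: stop evaluating trees once the partial score plus an upper bound on the remaining trees' contribution can no longer reach the score needed to enter the top-k. The sketch below is illustrative only; the helper names (`make_stump`, `suffix_max_sums`, `early_exit_score`) and the decision-stump representation are assumptions, not the paper's four strategies or its API.

```python
def make_stump(feat, thr, left, right):
    """A one-split regression tree ('stump'): left value if x[feat] <= thr."""
    def predict(x):
        return left if x[feat] <= thr else right
    predict.max_leaf = max(left, right)  # largest value this tree can add
    return predict

def suffix_max_sums(trees):
    """remaining[i] = sum of max leaf values of trees[i:].
    This is an admissible upper bound on the not-yet-evaluated contribution,
    precomputed once per ensemble."""
    remaining = [0.0] * (len(trees) + 1)
    for i in range(len(trees) - 1, -1, -1):
        remaining[i] = remaining[i + 1] + trees[i].max_leaf
    return remaining

def early_exit_score(x, trees, remaining, threshold):
    """Score one document; return None as soon as the final score provably
    cannot reach `threshold` (e.g. the current k-th best score)."""
    score = 0.0
    for i, t in enumerate(trees):
        score += t(x)
        if score + remaining[i + 1] < threshold:
            return None  # early exit: document cannot enter the top-k
    return score
```

Documents pruned this way skip the bulk of the ensemble, which is where the runtime savings come from; the trade-off explored in the paper is between such exactness-preserving bounds and more aggressive approximate exit rules.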

Fast Ranking with Additive Ensembles of Oblivious and Non-Oblivious Regression Trees

Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini
2016 ACM Transactions on Information Systems  
Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines.  ...  This paper is an extension of ; it adds an additional scoring algorithm for ensembles of oblivious trees, a blockwise version of the scoring algorithm, a new large-scale learning to rank dataset as well  ...  Post-learning strategies aimed at simplifying a given tree ensemble were proposed by .  ... 
doi:10.1145/2987380 fatcat:ku3cfzwjhfbebnsexnh7xyjy74
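Oblivious trees, mentioned in the entry above, are what makes very fast scoring possible: every node at the same depth tests the same (feature, threshold) pair, so a depth-d tree needs no per-node structure at all; the d test outcomes form a d-bit leaf index. The following sketch shows only this generic oblivious-tree property, not the paper's QuickScorer-style bitvector algorithms; the function and argument names are illustrative assumptions.

```python
def oblivious_tree_score(x, tests, leaves):
    """Score one feature vector with an oblivious tree of depth d.
    `tests` holds one (feature, threshold) pair per level; `leaves` holds
    2**d leaf values.  Each level contributes one bit to the leaf index,
    so evaluation is d comparisons plus one array lookup."""
    idx = 0
    for feat, thr in tests:
        idx = (idx << 1) | (x[feat] > thr)  # bool is 0/1 in Python
    return leaves[idx]

def ensemble_score(x, trees):
    """An additive ensemble is just the sum of its trees' outputs."""
    return sum(oblivious_tree_score(x, tests, leaves) for tests, leaves in trees)
```

Because the per-level tests are shared, the same comparisons can be batched across many documents or vectorized, which is the property the paper's scoring algorithms exploit.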

Runtime Optimizations for Tree-Based Machine Learning Models

Nima Asadi, Jimmy Lin, Arjen P. de Vries
2014 IEEE Transactions on Knowledge and Data Engineering  
This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression trees for learning to rank.  ...  Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains.  ...  [34]: in the context of additive ensembles (of which boosted trees are an example) for learning to rank, they explored early-exit optimizations for top k ranking.  ... 
doi:10.1109/tkde.2013.73 fatcat:wktx6krlhrb2petdm4q3755hi4
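A recurring theme in this line of work is that pointer-chasing, object-based tree implementations waste modern superscalar hardware, and that flattening each tree into parallel arrays gives a tight, cache-friendly traversal loop. The sketch below illustrates that layout idea in plain Python under stated assumptions: the array names, the `left[i] == -1` leaf convention, and the function signature are all illustrative, not the paper's actual (C-level) implementation.

```python
def predict_flat(x, feature, threshold, left, right, value):
    """Traverse one tree stored as parallel arrays.
    Node i tests x[feature[i]] <= threshold[i]; left[i]/right[i] are child
    indices, with left[i] == -1 marking a leaf whose output is value[i].
    No node objects, no pointer dereferencing: just array indexing."""
    i = 0
    while left[i] != -1:          # internal node
        if x[feature[i]] <= threshold[i]:
            i = left[i]
        else:
            i = right[i]
    return value[i]
```

In a compiled language the same layout keeps each tree contiguous in memory, so an ensemble of thousands of small trees can be evaluated with predictable, prefetch-friendly access patterns.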

Heterogeneous Ensemble Structure based Universal Spam Profile Detection System for Social Media Networks

2020 International Journal of Recent Technology and Engineering  
retaining optimal feature sets for further classification.  ...  Subsequently, applying an array of machine learning methods, including Logistic regression, decision tree, Support Vector Machine variants with Linear, Polynomial and RBF kernels, Least Square SVM with  ...  To alleviate such problems and to reach a zero-error condition, an ANN requires optimal weight estimation and a corresponding learning efficiency.  ... 
doi:10.35940/ijrte.a2179.059120 fatcat:7c6tp4n2y5audc2bewtottxk3i

X-CLEaVER: Learning Ranking Ensembles by Growing and Pruning Trees

Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Fabrizio Silvestri, Salvatore Trani
2018 Zenodo  
First, redundant trees are removed from the given ensemble, then the weights of the remaining trees are fine-tuned by optimizing the desired ranking quality metric.  ...  In this paper, we propose X-CLEaVER, an iterative meta-algorithm able to build more efficient and effective ranking ensembles.  ...  as optimal in [6] ), while for Ω λ MART we obtained the best results by using trees with a maximum depth of 5 and a learning rate of 0.1.  ... 
doi:10.5281/zenodo.2668361 fatcat:q7bixjdugjbtlnrs26yhungf5u
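The prune-then-reweight step described in the entry above can be sketched compactly. Note the assumptions: X-CLEaVER selects trees and tunes weights by directly optimizing a ranking metric such as NDCG, whereas this illustrative stand-in ranks trees by average absolute contribution and re-fits weights by least squares; the function name and arguments are hypothetical.

```python
import numpy as np

def prune_and_reweight(tree_preds, targets, keep):
    """Given per-tree predictions (n_docs x n_trees), keep the `keep`
    trees with the largest mean |contribution|, then re-fit the weights
    of the survivors by least squares against `targets`.
    (A simple stand-in for X-CLEaVER's metric-driven pruning and
    line-search weight tuning.)"""
    importance = np.abs(tree_preds).mean(axis=0)
    kept = np.argsort(importance)[-keep:]          # indices of surviving trees
    w, *_ = np.linalg.lstsq(tree_preds[:, kept], targets, rcond=None)
    return kept, w
```

The final model scores a document as the weighted sum of the surviving trees, so pruning reduces scoring cost roughly in proportion to the number of trees removed, while the re-tuned weights recover (or improve) quality.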

Decision Jungles: Compact and Rich Models for Classification

Jamie Shotton, Toby Sharp, Pushmeet Kohli, Sebastian Nowozin, John M. Winn, Antonio Criminisi
2013 Neural Information Processing Systems  
We present and compare two new node merging algorithms that jointly optimize both the features and the structure of the DAGs efficiently.  ...  Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision.  ...  The authors would like to thank Albert Montillo for initial investigation of related ideas.  ... 
dblp:conf/nips/ShottonSKNWC13 fatcat:gdkrtgwkfjealf7y35ooadxedm

Ensemble‐ and distance‐based feature ranking for unsupervised learning

Matej Petković, Dragi Kocev, Blaž Škrlj, Sašo Džeroski
2021 International Journal of Intelligent Systems  
KEYWORDS: extra trees, feature ranking, relief, tree ensembles, unsupervised learning. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which  ...  The first group includes feature ranking scores (Genie3 score, RandomForest score) that are computed from ensembles of predictive clustering trees.  ...  The computational experiments presented here were executed on a computing infrastructure from the Slovenian Grid (SLING) initiative, and we thank the administrators Barbara Krašovec and Janez Srakar for  ... 
doi:10.1002/int.22390 fatcat:33t63h5hvbe3fnug5jk3lwk4bq

Feature Ranking for Hierarchical Multi-Label Classification with Tree Ensemble Methods

Matej Petković, Sašo Džeroski, Dragi Kocev
2020 Acta Polytechnica Hungarica  
Here, we propose a group of feature ranking methods based on three established ensemble methods of predictive clustering trees: Bagging, Random Forests and Extra Trees.  ...  In this work, we address the task of feature ranking for hierarchical multi-label classification (HMLC).  ...  Acknowledgement We acknowledge the financial support of the Slovenian Research Agency (grant P2-0103 and a young researcher grant to MP), the European Commission (the grants MAESTRA (Learning from Massive  ... 
doi:10.12700/aph.17.10.2020.10.8 fatcat:5kdgkh3qpjbajif7ffo2scajwe

Ensemble of heterogeneous flexible neural trees using multiobjective genetic programming

Varun Kumar Ojha, Ajith Abraham, Václav Snášel
2017 Applied Soft Computing  
MOGP guided an initial HFNT population towards Pareto-optimal solutions, where the final population was used for making an ensemble system.  ...  Moreover, the heterogeneous creation of HFNT proved to be efficient in making ensemble system from the final population.  ...  Structure and parameter learning (near-optimal tree): A tree that offers the lowest approximation error and the simplest structure is a near-optimal tree, which can be obtained by using an evolutionary  ... 
doi:10.1016/j.asoc.2016.09.035 fatcat:y23x4opvifcareeddzmtqkfcfa

Runtime Optimizations for Prediction with Tree-Based Models [article]

Nima Asadi, Jimmy Lin, Arjen P. de Vries
2013 arXiv   pre-print
Tree-based models have proven to be an effective solution for web ranking as well as other problems in diverse domains.  ...  Although exceedingly simple conceptually, most implementations of tree-based models do not efficiently utilize modern superscalar processor architectures.  ...  GBRTs are ensembles of regression trees that yield state-of-the-art effectiveness on learning-to-rank tasks.  ... 
arXiv:1212.2287v2 fatcat:k6xw6a54bvaqvjli7d4xn7wcke

Option predictive clustering trees for multi-target regression

Tomaz Stepisnik, Aljaz Osojnik, Saso Dzeroski, Dragi Kocev
2020 Computer Science and Information Systems  
Finally, we demonstrate the potential of OPCTs for multifaceted interpretability and illustrate the potential for inclusion of domain knowledge in the tree learning process.  ...  Considering all of this, an option tree can also be regarded as a condensed representation of an ensemble.  ...  ., greediness, of the tree construction process can lead to learning sub-optimal models.  ... 
doi:10.2298/csis190928006s fatcat:i43dg327ajgupjnnn3j7nrqv6m