Learning Ranking Functions For Information Retrieval Using Layered Multi-Population Genetic Programming

Jen-Yuan Yeh, Jung-Yi Lin
2017 Malaysian Journal of Computer Science  
Ranking plays a key role in many applications, such as document retrieval, recommendation, question answering, and machine translation. In practice, a ranking function (or model) is exploited to determine the rank-order relations between objects with respect to a particular criterion. In this paper, a layered multi-population genetic programming based method, known as RankMGP, is proposed to learn ranking functions for document retrieval by incorporating various types of retrieval models into a single one with high effectiveness. RankMGP represents a potential solution (i.e., a ranking function) as an individual in a population of genetic programming and aims to directly optimize information retrieval evaluation measures in the evolution process. Overall, RankMGP consists of a set of layers and a sequential workflow running through the layers. In one layer, multiple populations evolve independently to generate a set of best individuals. When the evolution process is completed, a new training dataset is created using the best individuals and the input training set of the layer. Then, the populations in the next layer evolve with the new training dataset. In the final layer, the best individual is obtained as the output ranking function. The proposed method is evaluated using the LETOR datasets and is found to be superior to classical information retrieval models, such as Okapi BM25. It is also statistically competitive with state-of-the-art methods, including Ranking SVM, ListNet, AdaRank and RankBoost.

Keywords: learning to rank for information retrieval, ranking function, supervised learning, layered multi-population genetic programming, LAGEP, LETOR

1.0. Introduction

Traditional IR models, including the Boolean model, the vector space model, and the probabilistic model, are developed based on the bag-of-words model. In short, a document is decomposed into keywords (i.e., index terms) and a ranking function (or retrieval function) is defined to associate a relevance degree with the document given a query [3]. The aforementioned models are typically realized in an unsupervised manner, and thus the parameters of the underlying ranking functions are usually tuned empirically. However, manual tuning incurs high costs and sometimes leads to over-fitting, especially when the functions are carefully tuned to fit particular needs [30].
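As a concrete illustration of such a ranking function, the following sketch implements Okapi BM25 scoring over bag-of-words representations. This is the standard textbook formulation, not code from the paper; the parameter values k1 = 1.2 and b = 0.75 are common defaults and would normally be tuned empirically, which is exactly the manual-tuning burden discussed above.

```python
import math

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len,
               k1=1.2, b=0.75):
    """Score one document against a query with Okapi BM25."""
    score = 0.0
    doc_len = len(doc_terms)
    for term in query_terms:
        tf = doc_terms.count(term)          # term frequency in the document
        if tf == 0:
            continue
        df = doc_freq.get(term, 0)          # number of documents containing term
        idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
        # saturated tf component, normalized by document length
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return score
```

A document with more occurrences of a query term receives a higher (but saturating) score, while documents lacking the term contribute nothing.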
Nowadays, as increasingly many IR results are accompanied by relevance judgments (e.g., query and clickthrough logs), supervised learning-based methods, referred to as "learning to rank" (LTR) methods, e.g., [13], Ranking SVM [23][26], ListNet [8], AdaRank [56], RankBoost [21] and RankNet [7], have been devoted to automatically learning an effective ranking function from training data, either for tuning parameters or for incorporating distinct retrieval models into a single one with high effectiveness. Since the performance of IR systems is generally evaluated in terms of measures such as Mean Average Precision (MAP) [3] and Normalized Discounted Cumulative Gain (NDCG) [25], LTR methods are typically designed to optimize loss functions only loosely related to IR evaluation measures [57]. A straightforward way of efficiently finding a solution by directly optimizing evaluation measures is to use genetic programming (GP).

This paper proposes a GP-based LTR method, known as RankMGP, to learn ranking functions for document retrieval by incorporating various types of IR evidence, such as classical content features, structure features and query-independent features. RankMGP represents a potential solution (i.e., a ranking function) as an individual in a population of GP. Instead of using traditional GP, which works with only a single population, RankMGP utilizes multi-population GP and a layered architecture, which has proven effective in [29], to arrange multiple populations. Overall, RankMGP consists of a set of layers and a sequential workflow running through the layers. In one layer, multiple populations evolve independently to generate a set of best individuals. In each generation of evolution, a novel fitness function, modelled as the weighted average of NDCG scores, is exploited to measure the performance of each individual in each population.
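NDCG, the measure underlying RankMGP's fitness function, can be computed as in the following minimal sketch. It uses the common exponential-gain formulation (gain 2^rel - 1 with a log2 rank discount); the paper's specific weighting of NDCG scores across rank positions is not reproduced here.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k items of a ranked list."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

A ranking that already lists documents in decreasing order of relevance scores 1.0; any misordering is penalized more heavily near the top of the list, which is why NDCG is a natural target for document retrieval.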
When the evolution process is completed, a new training dataset is created using the best individuals and the input training set of the layer. Then, the populations in the next layer evolve with the new training dataset. In the final layer, the best individual is obtained as the output ranking function.

The main contributions of this study are summarized as follows:

1. The use of layered multi-population GP in the context of LTR is investigated, and a novel learning method, known as RankMGP, is proposed. In addition, a new fitness function, modelled as the weighted average of NDCG scores, is introduced.
2. RankMGP is evaluated in a case study using the LETOR datasets. The results show that RankMGP is superior to classical IR models, such as Okapi BM25 [40] and LMIR [62]. It is also shown that RankMGP obtains statistically competitive results compared to the state-of-the-art methods, including Ranking SVM [23][26], ListNet [8], AdaRank [56] and RankBoost [21].
3. In-depth discussions are given from various perspectives on the design and effectiveness of RankMGP, e.g., its pros and cons and its learning behaviors over layers.

The rest of this paper is organized as follows. Section 1.1 elaborates the general paradigm of LTR for IR. Section 2.0 provides a brief review of related work. Section 3.0 introduces the proposed learning method, RankMGP. The experimental results and discussions are provided in Sections 4.0 and 5.0, respectively. Finally, Section 6.0 concludes this paper and points out possible directions for further research.

1.1. The general paradigm of LTR for IR

Fig. 1. The general paradigm of LTR for IR [58].
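The layered workflow summarized above can be sketched as a toy implementation. Everything here is illustrative rather than the paper's method: individuals are simple arithmetic expression trees over document features, evolution uses plain truncation selection with random replacement instead of LAGEP's actual crossover and mutation operators, the fitness is an unweighted NDCG, and the step that augments the training data with the best individuals' scores is a simplified reading of the layer-to-layer handoff.

```python
import math
import random

random.seed(0)  # deterministic toy run

OPS = {'add': lambda a, b: a + b,
       'sub': lambda a, b: a - b,
       'mul': lambda a, b: a * b}

def random_tree(num_feats, depth=3):
    """Build a random expression tree over feature indices and constants."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.7:
            return ('feat', random.randrange(num_feats))
        return ('const', random.uniform(-1.0, 1.0))
    op = random.choice(sorted(OPS))
    return (op, random_tree(num_feats, depth - 1), random_tree(num_feats, depth - 1))

def evaluate(tree, feats):
    """Score one document's feature vector with an expression tree."""
    kind = tree[0]
    if kind == 'feat':
        return feats[tree[1]]
    if kind == 'const':
        return tree[1]
    return OPS[kind](evaluate(tree[1], feats), evaluate(tree[2], feats))

def ndcg(tree, docs):
    """Fitness: NDCG of the ranking the tree induces over (features, relevance) pairs."""
    ranked = sorted(docs, key=lambda d: evaluate(tree, d[0]), reverse=True)
    dcg = sum((2 ** rel - 1) / math.log2(i + 2) for i, (_, rel) in enumerate(ranked))
    ideal = sum((2 ** rel - 1) / math.log2(i + 2)
                for i, rel in enumerate(sorted((r for _, r in docs), reverse=True)))
    return dcg / ideal if ideal else 0.0

def evolve_population(docs, num_feats, pop_size=20, generations=10):
    """Evolve one population independently; return its best individual."""
    pop = [random_tree(num_feats) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: ndcg(t, docs), reverse=True)
        elite = pop[:pop_size // 2]                           # truncation selection
        pop = elite + [random_tree(num_feats) for _ in elite]  # refill the rest
    return max(pop, key=lambda t: ndcg(t, docs))

def rank_mgp(docs, num_feats, layers=2, pops=3):
    """Layered workflow: each layer's best individuals feed the next layer as features."""
    for layer in range(layers):
        bests = [evolve_population(docs, num_feats) for _ in range(pops)]
        if layer < layers - 1:
            # best individuals' scores become extra features in the new training set
            docs = [(fv + [evaluate(t, fv) for t in bests], rel) for fv, rel in docs]
            num_feats += pops
    return max(bests, key=lambda t: ndcg(t, docs)), docs
```

The key structural idea survives the simplification: populations within a layer never interact during evolution, and the only communication between layers is the rebuilt training set carrying the previous layer's best outputs.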
doi:10.22452/mjcs.vol30no1.3