SoftRank
2008
Proceedings of the international conference on Web search and web data mining - WSDM '08
However, most ranking functions generate document scores, which are sorted to produce a ranking. Hence IR metrics are innately non-smooth with respect to the scores, due to the sort. ...
Unfortunately, many machine learning algorithms require the gradient of a training objective in order to perform the optimization of the model parameters, and because IR metrics are non-smooth, we need ...
We believe that SoftRank represents a general and powerful new approach for direct optimization of non-smooth ranking metrics. ...
doi:10.1145/1341531.1341544
dblp:conf/wsdm/TaylorGRM08
fatcat:xe6q2uud65hqvewkaalisb3r4m
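To make the excerpts above concrete, the following is a minimal NumPy sketch of the SoftRank idea as summarised there: scores are treated as Gaussian random variables, pairwise win probabilities induce a distribution over each document's rank, and NDCG is evaluated in expectation over those rank distributions. The function name soft_ndcg, the toy inputs, and the simple recursion are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def soft_ndcg(scores, gains, sigma=1.0):
    """Illustrative SoftRank-style smoothed NDCG (not the authors' code).

    Each score is the mean of a Gaussian with shared std sigma; pairwise
    "j outranks i" probabilities induce a distribution over each document's
    rank, and NDCG is evaluated in expectation over those ranks."""
    n = len(scores)
    # P(doc j is ranked above doc i) under the Gaussian (Thurstonian) score model.
    diff = scores[None, :] - scores[:, None]           # diff[i, j] = s_j - s_i
    p_beat = norm.cdf(diff / (np.sqrt(2.0) * sigma))

    # Rank distribution of each document: add one competitor at a time,
    # shifting probability mass to lower ranks whenever the competitor wins.
    rank_dist = np.zeros((n, n))
    for i in range(n):
        dist = np.zeros(n)
        dist[0] = 1.0
        for j in range(n):
            if j == i:
                continue
            shifted = np.zeros(n)
            shifted[1:] = dist[:-1]
            dist = shifted * p_beat[i, j] + dist * (1.0 - p_beat[i, j])
        rank_dist[i] = dist

    discounts = 1.0 / np.log2(np.arange(n) + 2.0)       # 1/log2(rank + 2), 0-based rank
    ideal_dcg = np.sum(np.sort(gains)[::-1] * discounts)
    expected_dcg = np.sum(gains * (rank_dist @ discounts))
    return expected_dcg / ideal_dcg

# Toy example: three documents with relevance gains 3, 1, 0.
print(soft_ndcg(np.array([2.0, 1.0, 0.5]), np.array([3.0, 1.0, 0.0])))
```

Because this expected NDCG is a smooth function of the scores, its gradient with respect to the scores (and hence the model parameters) exists everywhere, which is the property the paper exploits for direct optimization.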
Learning to rank with SoftRank and Gaussian processes
2008
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '08
The SoftRank mechanism is a general one; it can be applied to different IR metrics, and make use of different underlying models. ...
Recently Taylor et al. presented a method called SoftRank which allows the direct gradient optimisation of a smoothed version of NDCG using a Thurstonian model. ...
Taylor et al. [17] have developed a general method called SoftRank to smooth IR metrics, giving differentiable objective functions suitable for gradient optimization. ...
doi:10.1145/1390334.1390380
dblp:conf/sigir/GuiverS08
fatcat:zkprqowsozgcth5w3m7rr5w7pi
Gradient descent optimization of smoothed information retrieval metrics
2009
Information retrieval (Boston)
Most ranking algorithms are based on the optimization of some loss functions, such as the pairwise loss. ...
The basic idea is to minimize a smooth approximation of these measures with gradient descent. Crucial to this kind of approach is the choice of the smoothing factor. ...
Conclusion: In this paper we have shown how to optimize complex information retrieval metrics such as NDCG by performing gradient descent optimization on a smooth approximation of these metrics. ...
doi:10.1007/s10791-009-9110-3
fatcat:4duopbsj6fce3cxoqkx2gv2s4i
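As a rough illustration of the kind of smoothing discussed above (a simplified sketch with assumed names, not the paper's exact formulation), the hard assignment of documents to rank positions can be replaced by a softmax over score proximity; the smoothing factor sigma is the knob the excerpt calls crucial.

```python
import numpy as np

def smoothed_dcg(scores, gains, sigma=0.1):
    """Simplified sketch of a softmax-smoothed DCG (illustrative only).

    h[i, j] softly assigns document j to rank position i based on how close
    its score is to the score of the document currently at rank i.  Small
    sigma tracks true DCG closely but yields vanishing gradients; large
    sigma smooths more at the cost of approximation error."""
    order = np.argsort(-scores)                          # current hard ranking
    discounts = 1.0 / np.log2(np.arange(len(scores)) + 2.0)
    d = -((scores[None, :] - scores[order][:, None]) ** 2) / sigma
    h = np.exp(d - d.max(axis=1, keepdims=True))
    h /= h.sum(axis=1, keepdims=True)                    # each row: soft assignment over docs
    return float(discounts @ (h @ gains))

scores = np.array([1.2, 0.3, 0.8])
gains = np.array([3.0, 0.0, 1.0])
print(smoothed_dcg(scores, gains, sigma=0.1), smoothed_dcg(scores, gains, sigma=10.0))
```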
Extending average precision to graded relevance judgments
2010
Proceeding of the 33rd international ACM SIGIR conference on Research and development in information retrieval - SIGIR '10
Finally, we show that GAP can reliably be used as an objective metric in learning to rank by illustrating that optimizing for GAP using SoftRank and LambdaRank leads to better performing ranking functions ...
Evaluation metrics play a critical role both in the context of comparative evaluation of the performance of retrieval systems and in the context of learning-to-rank (LTR) as objective functions to be optimized ...
Since most IR metrics are non-smooth, as they depend on the ranks of documents, the main idea used in SoftRank to overcome the problem of optimizing non-smooth IR metrics is based on defining smooth ...
doi:10.1145/1835449.1835550
dblp:conf/sigir/RobertsonKY10
fatcat:yc3hf7j3ubdrpdlivzlmnmisyy
StochasticRank: Global Optimization of Scale-Free Discrete Functions
2020
arXiv pre-print
In this paper, we introduce a powerful and efficient framework for direct optimization of ranking metrics. ...
In addition to ranking metrics, our framework applies to any scale-free discrete loss function. ...
Moreover, smoothed ranking loss functions are non-convex, and existing algorithms can guarantee only local optima. ...
arXiv:2003.02122v2
fatcat:w7k6vkcx6jhkbc7r3ciri22cq4
A scale invariant ranking function for learning-to-rank: a real-world use case
2021
arXiv pre-print
However, the features' scale does not necessarily stay the same in the real-world production environment, which could lead to unexpected ranking order. ...
To address this issue, in this paper we propose a novel scale-invariant ranking function (dubbed as SIR) which is accomplished by combining a deep and a wide neural network. ...
However, since NDCG is non-smooth with respect to the ranking scores, a smoothed approximation to NDCG was proposed and then used to obtain the best rank. ...
arXiv:2110.11259v1
fatcat:c5i67dmwmrbu7oticj3zcnljna
BoltzRank
2009
Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09
Methods that learn ranking functions are difficult to optimize, as ranking performance is typically judged by metrics that are not smooth. ...
Our method creates a conditional probability distribution over rankings assigned to documents for a given query, which permits gradient ascent optimization of the expected value of some performance measure ...
In this paper we present an expectation-based method which allows direct optimization of such non-smooth evaluation metrics frequently used in information retrieval. ...
doi:10.1145/1553374.1553513
dblp:conf/icml/VolkovsZ09
fatcat:f5ocq6pv6fcntavs6ekgmu7m4m
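The expectation-based idea in these excerpts can be sketched with a simpler stand-in for the paper's distribution over rankings: below, rankings are drawn from a Plackett-Luce model on the scores (not BoltzRank's energy-based distribution), and the gradient of expected NDCG is estimated with the score-function (REINFORCE) trick. All function names and constants are illustrative.

```python
import numpy as np

def sample_ranking(scores, rng):
    """Sample one ranking from a Plackett-Luce model on exp(scores)."""
    remaining = list(range(len(scores)))
    perm = []
    while remaining:
        logits = scores[remaining]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        k = rng.choice(len(remaining), p=p)
        perm.append(remaining.pop(k))
    return perm

def grad_log_prob(scores, perm):
    """Gradient of log P(perm) with respect to the scores under the same model."""
    g = np.zeros_like(scores)
    remaining = list(range(len(scores)))
    for doc in perm:
        logits = scores[remaining]
        p = np.exp(logits - logits.max())
        p /= p.sum()
        g[remaining] -= p          # expected one-hot over the remaining documents
        g[doc] += 1.0              # the document actually chosen at this step
        remaining.remove(doc)
    return g

def expected_ndcg_gradient(scores, gains, n_samples=500, seed=0):
    """Monte-Carlo estimate of d E[NDCG] / d scores via the score-function trick."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    discounts = 1.0 / np.log2(np.arange(n) + 2.0)
    ideal = np.sum(np.sort(gains)[::-1] * discounts)
    grad = np.zeros_like(scores)
    for _ in range(n_samples):
        perm = sample_ranking(scores, rng)
        ndcg = np.sum(gains[perm] * discounts) / ideal
        grad += ndcg * grad_log_prob(scores, perm)
    return grad / n_samples

print(expected_ndcg_gradient(np.array([0.5, 0.1, 0.3]), np.array([3.0, 0.0, 1.0])))
```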
Smoothing DCG for learning to rank
2009
Proceeding of the 18th ACM conference on Information and knowledge management - CIKM '09
However, DCG is non-smooth, rendering gradient-based optimization algorithms inapplicable. To remedy this, smoothed versions of DCG have been proposed but with only partial success. ...
Discounted cumulative gain (DCG) is widely used for evaluating ranking functions. It is therefore natural to learn a ranking function that directly optimizes DCG. ...
In this paper, we also start from the idea of constructing a smooth DCG objective function. Similar to SoftRank, it is a smooth approximation of DCG. ...
doi:10.1145/1645953.1646266
dblp:conf/cikm/WuCZZ09
fatcat:5lmgm2oiorgo3jcu43fsx3v7sa
NeuralNDCG: Direct Optimisation of a Ranking Metric via Differentiable Relaxation of Sorting
2021
arXiv pre-print
As these metrics rely on sorting predicted items' scores (and thus, on items' ranks), their derivatives are either undefined or zero everywhere. ...
Learning to Rank (LTR) algorithms are usually evaluated using Information Retrieval metrics like Normalised Discounted Cumulative Gain (NDCG) or Mean Average Precision. ...
We did not compare with SoftRank, as its O(n³) complexity proved prohibitive. We tuned ApproxNDCG and NeuralNDCG smoothness hyperparameters for optimal performance on the test set. ...
arXiv:2102.07831v2
fatcat:hyc62bdcafa65hjvhehoh3ibke
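The differentiable relaxation of sorting referred to in the title can be illustrated with a small NumPy version of the NeuralSort construction that NeuralNDCG builds on (a re-derivation for illustration, not the authors' code): the hard permutation is replaced by a row-stochastic matrix that approaches the true sort as the temperature tau goes to zero.

```python
import numpy as np

def neuralsort_P(scores, tau=1.0):
    """NeuralSort-style relaxed permutation matrix (illustrative re-implementation).

    Row i is a softmax that concentrates on the document holding rank i as
    tau -> 0; for tau > 0 the matrix is smooth in the scores."""
    s = scores.reshape(-1, 1)
    n = len(scores)
    A = np.abs(s - s.T)                                   # pairwise |s_j - s_k|
    B = (A @ np.ones((n, 1))).T                           # B[0, j] = sum_k |s_j - s_k|
    k = (n + 1 - 2 * np.arange(1, n + 1)).reshape(-1, 1)  # (n + 1 - 2i) for rank i
    logits = (k @ s.T - B) / tau
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def relaxed_ndcg(scores, gains, tau=1.0):
    """Smoothed NDCG: soft-permute the gains, then apply position discounts."""
    n = len(scores)
    discounts = 1.0 / np.log2(np.arange(n) + 2.0)
    ideal = np.sum(np.sort(gains)[::-1] * discounts)
    return float(discounts @ (neuralsort_P(scores, tau) @ gains)) / ideal

print(relaxed_ndcg(np.array([0.1, 2.0, 1.0]), np.array([0.0, 3.0, 1.0]), tau=0.1))
```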
Optimize What You Evaluate With: A Simple Yet Effective Framework For Direct Optimization Of IR Metrics
2020
arXiv pre-print
Thanks to this, the rank positions are differentiable, enabling us to reformulate the widely used IR metrics as differentiable ones and directly optimize them based on neural networks. ...
In this paper, we introduce a simple yet effective framework for directly optimizing information retrieval (IR) metrics. ...
In other words, the IR metrics are non-smooth with respect to the model parameters, being everywhere either flat (with zero gradient) or discontinuous. ...
arXiv:2008.13373v1
fatcat:vwf3cka3dvaadapmfkkkmybkny
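A generic way to obtain differentiable rank positions in the spirit of the excerpt above (a common construction in this line of work, not necessarily the paper's exact one) is to replace each "document j is ranked above document i" indicator with a sigmoid of the score difference:

```python
import numpy as np

def soft_ranks(scores, temperature=0.1):
    """Differentiable approximation of rank positions (generic sketch).

    The approximate rank of document i is 1 plus a sum of sigmoids of score
    differences, i.e. a smooth count of how many documents outscore it."""
    diff = (scores[None, :] - scores[:, None]) / temperature  # s_j - s_i
    wins = 1.0 / (1.0 + np.exp(-diff))                        # soft indicator "j beats i"
    np.fill_diagonal(wins, 0.0)
    return 1.0 + wins.sum(axis=1)

def differentiable_dcg(scores, gains, temperature=0.1):
    """Plug the soft ranks into the usual DCG discount, giving a smooth metric."""
    r = soft_ranks(scores, temperature)
    return float(np.sum(gains / np.log2(r + 1.0)))

print(differentiable_dcg(np.array([1.5, 0.2, 0.7]), np.array([3.0, 0.0, 1.0])))
```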
Ranking via Sinkhorn Propagation
2011
arXiv pre-print
In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. ...
In this paper, we examine the class of rank-linear objective functions, which includes popular metrics such as precision and discounted cumulative gain. ...
These experiments used the smoothed indicator function approach described in Section 4.2. Optimization was performed using L-BFGS, annealing the smoothing constant σ as in [6]. ...
arXiv:1106.1925v2
fatcat:usmwvbtmrbderlkow5tysjtppu
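The Sinkhorn step at the heart of this approach is easy to sketch (illustrative code with assumed names, not the paper's implementation): exponentiate a score-to-position affinity, then alternately normalize rows and columns; the result approaches a doubly-stochastic relaxation of a permutation matrix, against which a rank-linear objective such as DCG can be evaluated.

```python
import numpy as np

def sinkhorn(affinity, n_iters=50):
    """Sinkhorn normalization (illustrative): alternating row/column
    normalization of exp(affinity) converges toward a doubly-stochastic
    matrix, a relaxation of a permutation over rank positions."""
    M = np.exp(affinity - affinity.max())
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)   # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)   # columns sum to 1
    return M

def rank_linear_objective(doubly_stochastic, gains):
    """Rank-linear metrics such as DCG are linear in the (soft) assignment
    of documents to positions, so they apply directly to the relaxation."""
    n = len(gains)
    discounts = 1.0 / np.log2(np.arange(n) + 2.0)
    return float(discounts @ (doubly_stochastic @ gains))

scores = np.array([1.0, 0.2, 0.6])
gains = np.array([3.0, 0.0, 1.0])
# A toy score-to-position affinity (rows = positions, columns = documents);
# the paper derives this matrix from the model, this choice is only illustrative.
affinity = np.outer(np.arange(len(scores), 0, -1), scores)
print(rank_linear_objective(sinkhorn(affinity), gains))
```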
A General Framework for Counterfactual Learning-to-Rank
2019
arXiv pre-print
Specifically, we derive a relaxation for propensity-weighted rank-based metrics which is subdifferentiable and thus suitable for gradient-based optimization. ...
Moreover, the ability to train non-linear ranking functions via Deep PropDCG further improves performance. ...
A common strategy is to use some smoothed version of the ranking metric for optimization, as seen in SoftRank [24] and others [4, 10, 28, 29]. ...
arXiv:1805.00065v3
fatcat:cjvtma4rrjglrmlgckcn7uh5gu
Global Optimization for Advertisement Selection in Sponsored Search
2015
Journal of Computer Science and Technology
To tackle the challenge, we propose a probabilistic approximation of the marketplace objective, which is smooth and can be effectively optimized by conventional optimization techniques. ...
ranking and second-price rules in the auction mechanism. ...
Smoothed ζ function: the permutation function ζ is relatively more difficult to approximate, because it contains the ranking function. We employ a method similar to SoftRank [15] to smooth it. ...
doi:10.1007/s11390-015-1523-4
fatcat:u7vv4h7v6jc6va5grppbsgwyya
An Alternative Cross Entropy Loss for Learning-to-Rank
2020
arXiv pre-print
These algorithms learn to rank a set of items by optimizing a loss that is a function of the entire set, as a surrogate to a typically non-differentiable ranking metric. ...
In particular, none of the empirically-successful loss functions are related to ranking metrics. ...
Listwise learning-to-rank methods either derive a smooth approximation to ranking metrics or use heuristics to construct smooth surrogate loss functions. ...
arXiv:1911.09798v4
fatcat:tmkyuzaq6bgmdjbiikatlko34m
PiRank: Scalable Learning To Rank via Differentiable Sorting
2021
arXiv pre-print
Prior works have proposed surrogates that are loosely related to ranking metrics or simple smoothed versions thereof, and often fail to scale to real-world applications. ...
A key challenge with machine learning approaches for ranking is the gap between the performance metrics of interest and the surrogate loss functions that can be optimized with gradient-based methods. ...
A support vector method for optimizing average precision. In SIGIR, 2007. [19] Michael Taylor, John Guiver, Stephen Robertson, and Tom Minka. SoftRank: optimizing non-smooth rank metrics. ...
arXiv:2012.06731v2
fatcat:amtphimdijcodefysa67wxv2mu
Showing results 1 — 15 out of 45 results