Neural Networks for Information Retrieval
2018
Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining - WSDM '18
The aim of this full-day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR research. ...
The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. ...
Recently, it was shown that recurrent neural networks can learn to account for biases in user clicks directly from the click-through data, i.e., without the need for a predefined set of rules as is customary ...
doi:10.1145/3159652.3162009
dblp:conf/wsdm/KenterBGDRM18
fatcat:ybdeuuxcbnh2np34k3y4ve5ovu
Neural Networks for Information Retrieval
2017
Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '17
The aim of this full-day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR research. ...
The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. ...
Recently, it was shown that recurrent neural networks can learn to account for biases in user clicks directly from the click-through data, i.e., without the need for a predefined set of rules as is customary ...
doi:10.1145/3077136.3082062
dblp:conf/sigir/KenterBGDRM17
fatcat:yxuiajzjlfaixlnhc6rrsud6ry
Neural Networks for Information Retrieval
[article]
2017
arXiv
pre-print
The aim of this full-day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR research. ...
The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. ...
Recently, it was shown that recurrent neural networks can learn to account for biases in user clicks directly from the click-through data, i.e., without the need for a predefined set of rules as is customary ...
arXiv:1707.04242v1
fatcat:4idscmq26fa5bjupldwuyghq4m
Unbiased Top-k Learning to Rank with Causal Likelihood Decomposition
[article]
2022
arXiv
pre-print
Unbiased learning to rank has been proposed to alleviate the biases in the search ranking, making it possible to train ranking models with user interaction data. ...
Advantages of CLD include theoretical soundness and a unified framework for pointwise and pairwise unbiased top-k learning to rank. ...
Implementation details: Similar to existing studies [1, 4, 40], we used a three-layer neural network with an elu activation function as the ranking model for Naive, IPS, Oracle and CLD_pair, with ...
arXiv:2204.00815v1
fatcat:vfdx4xdvtfet7cg2ujfrihtg7a
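As a rough illustration of the implementation detail quoted above, the sketch below builds a three-layer feed-forward ranker with ELU activations in PyTorch; the feature and hidden dimensions are assumptions, not values taken from the paper.

```python
# Minimal sketch (not the paper's code): a three-layer feed-forward ranker
# with ELU activations, as described in the snippet above.
import torch
import torch.nn as nn

class MLPRanker(nn.Module):
    def __init__(self, num_features: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, 1),  # scalar relevance score per document
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) -> (batch,) ranking scores
        return self.net(x).squeeze(-1)
```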
Investigating Weak Supervision in Deep Ranking
2019
Data and Information Management
A number of deep neural networks have been proposed to improve the performance of document ranking in information retrieval studies. ...
In this work, we adopt two kinds of weakly supervised relevance, BM25-based relevance and click model-based relevance, and make a deep investigation into their differences in the training of neural ranking ...
In this paper, we used the implementations of these models from RankLib. RankNet is a well-known ranking model using a neural network trained with pairwise losses. ...
doi:10.2478/dim-2019-0010
fatcat:gjutpp777vdvljqxtf2r6nvuwy
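The RankNet description in this snippet can be made concrete with a short sketch of its pairwise loss: the scores of a document pair are compared through a logistic function and trained against the observed preference. This is a generic illustration, not RankLib's implementation.

```python
# Hedged sketch of a RankNet-style pairwise loss.
import torch
import torch.nn.functional as F

def ranknet_pairwise_loss(score_i: torch.Tensor,
                          score_j: torch.Tensor,
                          label: torch.Tensor) -> torch.Tensor:
    """label = 1 if document i is preferred over document j, else 0."""
    # P(i > j) = sigmoid(s_i - s_j); cross-entropy against the preference label
    return F.binary_cross_entropy_with_logits(score_i - score_j, label.float())
```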
Neural Networks for Information Retrieval
[article]
2018
arXiv
pre-print
The aim of this full-day tutorial is to give a clear overview of current tried-and-trusted neural methods in IR and how they benefit IR. ...
The amount of information available can be overwhelming both for junior students and for experienced researchers looking for new research topics and directions. ...
Recently, it was shown that recurrent neural networks can learn to account for biases in user clicks directly from click-through, i.e., without the need for a predefined set of rules as is customary for ...
arXiv:1801.02178v1
fatcat:c3kevelcrffodift2vvwnoscjq
A General Framework for Counterfactual Learning-to-Rank
[article]
2019
arXiv
pre-print
metrics (e.g., Discounted Cumulative Gain (DCG)) as well as a broad class of models (e.g., deep networks). ...
Going beyond this special case, this paper provides a general and theoretically rigorous framework for counterfactual learning-to-rank that enables unbiased training for a broad class of additive ranking ...
The first is SVM PropDCG, which generalizes a Ranking SVM to directly optimize a bound on DCG from biased click data. ...
arXiv:1805.00065v3
fatcat:cjvtma4rrjglrmlgckcn7uh5gu
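To illustrate the counterfactual learning-to-rank idea referenced in this entry, the sketch below shows a generic inverse-propensity-scored (IPS) pointwise loss. It is not the paper's SVM PropDCG; the propensities are assumed to be pre-estimated examination probabilities.

```python
# Illustrative only: IPS-weighted pointwise loss over clicked documents, so
# that biased click data yields an unbiased estimate of the relevance loss
# in expectation (under the assumed propensity model).
import torch
import torch.nn.functional as F

def ips_weighted_loss(scores: torch.Tensor,
                      clicks: torch.Tensor,
                      propensities: torch.Tensor) -> torch.Tensor:
    per_item = F.binary_cross_entropy_with_logits(
        scores, clicks.float(), reduction="none")
    # Only clicked items contribute, each reweighted by 1 / propensity.
    weights = clicks.float() / propensities.clamp(min=1e-6)
    return (weights * per_item).sum() / clicks.float().sum().clamp(min=1.0)
```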
Modeling Relevance Ranking under the Pre-training and Fine-tuning Paradigm
[article]
2021
arXiv
pre-print
Recently, pre-trained language models such as BERT have been applied to document ranking for information retrieval, which first pre-train a general language model on a large unlabeled corpus and then ...
More importantly, the pre-trained representations are fine-tuned together with handcrafted learning-to-rank features under a wide and deep network architecture. ...
Machine learning models, especially deep neural networks [16], have been applied to relevance ranking and many ranking techniques have been developed [18, 43]. ...
arXiv:2108.05652v1
fatcat:hiafpiym2jeqtdsanl52zfnrq4
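The snippet mentions fine-tuning pre-trained representations together with handcrafted learning-to-rank features under a wide and deep architecture; below is a minimal sketch of one way such a scoring head could look, with all dimensions assumed rather than taken from the paper.

```python
# Hedged sketch: combine a text encoder's [CLS] vector (deep part) with
# handcrafted LTR features (wide part) in a single scoring layer.
import torch
import torch.nn as nn

class WideAndDeepRanker(nn.Module):
    def __init__(self, text_dim: int = 768, num_ltr_features: int = 30):
        super().__init__()
        self.deep = nn.Sequential(nn.Linear(text_dim, 256), nn.ReLU())
        # Wide part: handcrafted features feed the output layer directly.
        self.out = nn.Linear(256 + num_ltr_features, 1)

    def forward(self, cls_embedding: torch.Tensor,
                ltr_features: torch.Tensor) -> torch.Tensor:
        deep = self.deep(cls_embedding)
        return self.out(torch.cat([deep, ltr_features], dim=-1)).squeeze(-1)
```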
Personalized News Recommendation: Methods and Challenges
[article]
2022
arXiv
pre-print
We first review the techniques for tackling each core problem in a personalized news recommender system and the challenges they face. ...
Next, we introduce the public datasets and evaluation methods for personalized news recommendation. ...
There are several methods that use neural networks to learn user representations from users' click behaviors. For example, Okura et al. ...
arXiv:2106.08934v3
fatcat:iagqsw73hrehxaxpvpydvtr26m
Modeling and Simultaneously Removing Bias via Adversarial Neural Networks
[article]
2018
arXiv
pre-print
In this work, we develop a novel Adversarial Neural Network (ANN) model, an alternative approach which creates a representation of the data that is invariant to the bias. ...
In real world systems, the predictions of deployed Machine Learned models affect the training data available to build subsequent models. ...
For comparison, we perform the same evaluations for an ANN with λ = 0. This model can be seen as a completely independent vanilla neural network optimizing over , while a separate Bias network is able to observe ...
arXiv:1804.06909v1
fatcat:k3pqn7btvnfkddengxxxym4khu
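The snippet describes an adversarial setup in which a separate bias network observes the learned representation and λ controls its influence on training. One common way to realize such a setup is a gradient-reversal layer, sketched below; the paper's exact ANN formulation may differ, and all layer sizes are assumptions.

```python
# Hedged sketch of adversarial debiasing via gradient reversal: the encoder is
# pushed to make the bias head fail, while the task head stays accurate.
# With lam = 0 the bias branch does not affect the encoder (the "vanilla"
# baseline mentioned in the snippet).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialDebiasModel(nn.Module):
    def __init__(self, in_dim: int, rep_dim: int = 64, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
        self.task_head = nn.Linear(rep_dim, 1)   # main prediction
        self.bias_head = nn.Linear(rep_dim, 1)   # predicts the bias variable

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        task_logit = self.task_head(z).squeeze(-1)
        bias_logit = self.bias_head(GradReverse.apply(z, self.lam)).squeeze(-1)
        return task_logit, bias_logit
```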
Incorporating Vision Bias into Click Models for Image-oriented Search Engine
[article]
2021
arXiv
pre-print
Most typical click models, such as PBM and UBM, assume that the probability that a document is examined by users depends only on its position. This works well in various kinds of search engines. ...
Specifically, we apply this assumption to classical click models and propose an extended model to better capture the examination probabilities of documents. ...
These works use complex deep neural networks to build up click models and encode the context attributes by vectors. ...
arXiv:2101.02459v1
fatcat:hjqgwgzxvfcjdhdzwyoe6bdvte
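The position-based model (PBM) assumption mentioned in this snippet factorizes a click into examination (depending only on position) and document attractiveness; a tiny sketch with made-up example numbers:

```python
# Minimal PBM sketch: P(click) = P(examined | position) * P(attractive | doc, query).
def pbm_click_probability(examination_by_position: list[float],
                          attractiveness: float,
                          position: int) -> float:
    return examination_by_position[position] * attractiveness

# Example: a document with attractiveness 0.8 shown at rank 2 (0-indexed).
probs = [1.0, 0.7, 0.5, 0.35, 0.25]
print(pbm_click_probability(probs, attractiveness=0.8, position=2))  # 0.4
```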
Debiasing Neural Retrieval via In-batch Balancing Regularization
[article]
2022
arXiv
pre-print
People frequently interact with information retrieval (IR) systems; however, IR models exhibit biases and discrimination towards various demographics. ...
In-processing fair ranking methods provide a trade-off between accuracy and fairness by adding a fairness-related regularization term to the loss function. ...
arXiv:2205.09240v1
fatcat:75hpf4cuzfaapiggsbbw2jtub4
FairRank: Fairness-aware Single-tower Ranking Framework for News Recommendation
[article]
2022
arXiv
pre-print
In this paper, we propose FairRank, which is a fairness-aware single-tower ranking framework for news recommendation. ...
However, these models can easily inherit the biases related to users' sensitive attributes (e.g., demographics) encoded in training click data, and may generate recommendation results that are unfair to ...
which can be formulated as ŷ = f(u_c, h_c), where f(·) is a relevance function that is often implemented as an inner product or a feedforward neural network. ...
arXiv:2204.00541v1
fatcat:ludwigtdeffdbfjzxfjl55rbwa
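The relevance function ŷ = f(u_c, h_c) quoted above is described as either an inner product or a feedforward network over the user and candidate representations; both variants are sketched below with assumed dimensions.

```python
# Hedged sketch of the two scoring choices named in the snippet.
import torch
import torch.nn as nn

def inner_product_score(u_c: torch.Tensor, h_c: torch.Tensor) -> torch.Tensor:
    # Dot product between user and candidate-news representations.
    return (u_c * h_c).sum(dim=-1)

class FeedForwardScore(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, u_c: torch.Tensor, h_c: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([u_c, h_c], dim=-1)).squeeze(-1)
```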
Can Clicks Be Both Labels and Features?
2022
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
Using implicit feedback collected from user clicks as training labels for learning-to-rank algorithms is a well-developed paradigm that has been extensively studied and used in modern IR systems. ...
In this paper, we explore the possibility of incorporating user clicks as both training labels and ranking features for learning to rank. ...
For LTR models, we implemented the ranking function f using feed-forward neural networks with 2 hidden layers (32 neurons per layer). ...
doi:10.1145/3477495.3531948
fatcat:6hmvv2qjtvhdda4ydo4j3ai2p4
Scalable Exploration for Neural Online Learning to Rank with Perturbed Feedback
2022
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
Our solution is based on an ensemble of ranking models trained with perturbed user click feedback. ...
Driven by recent developments in the optimization and generalization of DNNs, learning a neural ranking model online from its interactions with users has become possible. ...
ACKNOWLEDGEMENTS We want to thank the reviewers for their insightful comments. ...
doi:10.1145/3477495.3532057
fatcat:vdh4qxuxwzbpncjqquqechsq6y