46,599 Hits in 6.2 sec

Understanding the Behaviors of BERT in Ranking [article]

Yifan Qiao, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu
2019 arXiv   pre-print
Experimental results on TREC show the gaps between the BERT pre-trained on surrounding contexts and the needs of ad hoc document ranking.  ...  Analyses illustrate how BERT allocates its attention between query-document tokens in its Transformer layers, how it prefers semantic matches between paraphrase tokens, and how that differs with the soft  ...  The advantage of BERT in MS MARCO lies in the cross query-document attentions from the Transformers: BERT (Rep) applies BERT to the query and document individually and discards these cross-sequence interactions  ... 
arXiv:1904.07531v4 fatcat:sj2wbol6gjcodafs2eemte7o6q
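The distinction the snippet draws between a cross-encoder and BERT (Rep) comes down to whether attention may cross the query/document boundary. A minimal sketch of the two attention-mask layouts, with a hypothetical helper name (not from the paper):

```python
import numpy as np

def attention_mask(q_len, d_len, cross_attention):
    """Binary attention mask over the concatenated
    [query tokens; document tokens] sequence.

    cross_attention=True  -> cross-encoder style: every token may
                             attend to every other token.
    cross_attention=False -> representation-based (BERT (Rep)) style:
                             query and document are encoded independently,
                             so attention never crosses the boundary.
    """
    n = q_len + d_len
    if cross_attention:
        return np.ones((n, n), dtype=int)
    mask = np.zeros((n, n), dtype=int)
    mask[:q_len, :q_len] = 1   # query tokens attend within the query
    mask[q_len:, q_len:] = 1   # document tokens attend within the document
    return mask

# Example: 2 query tokens, 3 document tokens.
print(attention_mask(2, 3, cross_attention=False))
```

The blocked-off zeros in the representation-based mask are exactly the cross query-document interactions that the snippet says BERT (Rep) discards.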

Multi-Task Learning for Document Ranking and Query Suggestion

Wasi Uddin Ahmad, Kai-Wei Chang, Hongning Wang
2018 International Conference on Learning Representations  
We propose a multi-task learning framework to jointly learn document ranking and query suggestion for web search. It consists of two major components, a document ranker and a query recommender.  ...  The document ranker combines the current query and session information and compares the combined representation with the document representation to rank the documents.  ...  This work was supported in part by National Science Foundation Grants IIS-1760523, IIS-1553568, and IIS-1618948, and the NVIDIA Hardware Grant.  ... 
dblp:conf/iclr/AhmadCW18 fatcat:iqcucxmg65gt3a4xgmhf5majny

Integrating Representation and Interaction for Context-aware Document Ranking

Haonan Chen, Zhicheng Dou, Qiannan Zhu, Xiaochen Zuo, Ji-Rong Wen
2022 ACM Transactions on Information Systems  
Existing neural context-aware ranking models usually rank documents based on either latent representations of user search behaviors, or the word-level interactions between the candidate document and each  ...  the current query and the candidate document.  ...  Then it computes the ranking score based on these representations. • CARS [2] solves the query suggestion and document ranking tasks simultaneously.  ... 
doi:10.1145/3529955 fatcat:sk4fqhqdqzdxhauzkbiyeixuki

Neural Ranking Models for Document Retrieval [article]

Mohamed Trabelsi, Zhiyu Chen, Brian D. Davison, Jeff Heflin
2021 arXiv   pre-print
We also show the analogy between document retrieval and other retrieval tasks where the items to be ranked are structured documents, answers, images and videos.  ...  A variety of deep learning models have been proposed, and each model presents a set of neural network components to extract features that are used for ranking.  ...  Query-level learning to rank using isotonic regression. 46th Annual Allerton Conference on Communication, Control, and Computing (pp. 1108-1115).  ... 
arXiv:2102.11903v1 fatcat:zc2otf456rc2hj6b6wpcaaslsa

Getting Started with Neural Models for Semantic Matching in Web Search [article]

Kezban Dilek Onal, Ismail Sengor Altingovde, Pinar Karagoz, Maarten de Rijke
2016 arXiv   pre-print
tasks: query suggestion, ad retrieval, and document retrieval.  ...  We detail the required background and terminology, a taxonomy grouping the rapidly growing body of work in the area, and then survey work on neural models for semantic matching in the context of three  ...  Mitra [63] uses CLSM vectors to define features to represent the session context and rank suggestion candidates against a prefix in personalized query autocompletion.  ... 
arXiv:1611.03305v1 fatcat:agdgj7allbczxcyteuomswn574

Groupwise Query Performance Prediction with BERT [article]

Xiaoyang Chen, Ben He, Le Sun
2022 arXiv   pre-print
Meanwhile, recent studies suggest that the cross-attention modeling of a group of documents can effectively boost performance for both learning-to-rank algorithms and BERT-based re-ranking.  ...  To this end, a BERT-based groupwise QPP model is proposed, in which the ranking contexts of a list of queries are jointly modeled to predict the relative performance of individual queries.  ...  To incorporate cross-document and cross-query context, we regard each batch as a single group of query-document pairs.  ... 
arXiv:2204.11489v1 fatcat:s4q6cuwnfbabppuklvyh6kghhm

Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context [article]

Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye
2021 arXiv   pre-print
Meanwhile, the importance and usefulness of considering cross-document interactions and query-specific characteristics in a ranking model have been repeatedly confirmed, mostly in the context  ...  The BERT-based ranking model, however, has not been able to fully incorporate these two types of ranking context, thereby ignoring the inter-document relationships from the ranking and the differences  ...  the two kinds of context into two-tower ranking models like DPR [12], ColBERT [13], and CoRT [38] to further boost their effectiveness; In addition, for the content-context-aware lexicon matching  ... 
arXiv:2104.08523v1 fatcat:w24yxin3vjfqdl4oi74s7lsuba

Suggesting Topic-Based Query Terms as You Type

Ju Fan, Hao Wu, Guoliang Li, Lizhu Zhou
2010 2010 12th International Asia-Pacific Web Conference  
Query term suggestion that interactively expands queries is an indispensable technique to help users formulate high-quality queries and has attracted much attention in the web search community.  ...  Existing methods usually suggest terms based on statistics in documents as well as query logs and external dictionaries, and they neglect the fact that the topic information is very crucial because it  ...  It takes the number of documents shared by the suggestions and the context as the ranking mechanism. Bast et al. [9] extend this feature by incorporating term clusters into the documents.  ... 
doi:10.1109/apweb.2010.13 dblp:conf/apweb/FanWLZ10 fatcat:ohnqpyuuo5gfjeio5bvs3dezku

Ranking Models and Learning to Rank: A Survey

2015 International Journal of Science and Research (IJSR)  
This paper mainly focuses on a survey of ranking models and learning-to-rank techniques for effective and efficient information retrieval.  ...  Learning to rank for information retrieval has gained a lot of interest in recent years, as ranking is the central problem in many information retrieval applications, like document retrieval, multimedia  ...  Thus, ranking has widespread applications, such as commercial search engines and recommendation systems, that find the relevant documents in the context of a given user's query and place  ... 
doi:10.21275/v4i12.nov151889 fatcat:svwiztlpr5e2zb2atqaraj5pxy

Automatic Identification of High Impact Relevant Articles to Support Clinical Decision Making Using Attention-Based Deep Learning

Beomjoo Park, Muhammad Afzal, Jamil Hussain, Asim Abbas, Sungyoung Lee
2020 Electronics  
We used contextualized word embeddings to create vectors of the documents, and user queries combined with genetic information, to find the contextual similarity that determines the relevancy score for ranking the articles  ...  To address the issue of accurate identification of high impact relevant articles, we propose a novel approach of attention-based deep learning for finding and ranking relevant studies against a topic of  ...  rank relevant documents that matched the query.  ... 
doi:10.3390/electronics9091364 fatcat:snijlm73rngjpo2iuwuwhvt25m

Modularized Transfomer-based Ranking Framework [article]

Luyu Gao, Zhuyun Dai, Jamie Callan
2020 arXiv   pre-print
The modular design is also easier to interpret and sheds light on the ranking process in Transformer rankers.  ...  However, these Transformers are computationally expensive, and their opaque hidden states make it hard to understand the ranking process.  ...  We evaluate MS MARCO Dev query sets with its provided evaluation script and the rest with trec_eval (https://github.com/usnistgov/trec_eval).  ... 
arXiv:2004.13313v3 fatcat:5fhavvvaerhoppu73ebm52cjli

CEDR: Contextualized Embeddings for Document Ranking [article]

Sean MacAvaney, Andrew Yates, Arman Cohan, Nazli Goharian
2019 arXiv   pre-print
In this work, we investigate how two pretrained contextualized language models (ELMo and BERT) can be utilized for ad-hoc document ranking.  ...  We call this joint approach CEDR (Contextualized Embeddings for Document Ranking).  ...  Given these definitions, let the contextualized representation be: S_{Q,D}[l, q, d] = cos(context_{Q,D}(q, l), context_{Q,D}(d, l))   (1) for each query term q ∈ Q, document term d ∈ D, and layer l ∈ [1..L]  ... 
arXiv:1904.07094v2 fatcat:svpfxddayzeyfpbrnkt2ijairm
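Equation (1) in the snippet above builds a per-layer cosine-similarity cube between contextualized query-term and document-term embeddings. A minimal sketch of that computation, assuming the per-layer embeddings have already been extracted (the function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def similarity_cube(query_ctx, doc_ctx):
    """Compute S[l, q, d] = cos(context(q, l), context(d, l)).

    query_ctx: array of shape (L, Q, H) -- per-layer contextualized
               embeddings of the Q query terms, hidden size H.
    doc_ctx:   array of shape (L, D, H) -- per-layer contextualized
               embeddings of the D document terms.
    Returns an (L, Q, D) cube of cosine similarities.
    """
    q = query_ctx / np.linalg.norm(query_ctx, axis=-1, keepdims=True)
    d = doc_ctx / np.linalg.norm(doc_ctx, axis=-1, keepdims=True)
    # Cosine of unit vectors is a plain dot product per layer.
    return np.einsum('lqh,ldh->lqd', q, d)

rng = np.random.default_rng(0)
S = similarity_cube(rng.normal(size=(3, 4, 8)),
                    rng.normal(size=(3, 6, 8)))
print(S.shape)  # (3, 4, 6)
```

The resulting cube is the kind of similarity tensor that existing interaction-based rankers can consume in place of a single context-insensitive similarity matrix.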

Deep Relevance Ranking Using Enhanced Document-Query Interactions [article]

Ryan McDonald, Georgios-Ioannis Brokos, Ion Androutsopoulos
2018 arXiv   pre-print
Unlike DRMM, which uses context-insensitive encodings of terms and query-document term interactions, we inject rich context-sensitive encodings throughout our models, inspired by PACRR's (Hui et al., 2017) convolutional n-gram matching features, but extended in several ways including multiple views of query and document inputs.  ...  Finally, AUEB's NLP group provided many suggestions over the course of the work.  ... 
arXiv:1809.01682v2 fatcat:w6rk3zvjhvfadkswrrz56llp6a

Deep Relevance Ranking Using Enhanced Document-Query Interactions

Ryan McDonald, George Brokos, Ion Androutsopoulos
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
Unlike DRMM, which uses context-insensitive encodings of terms and query-document term interactions, we inject rich context-sensitive encodings throughout our models, inspired by PACRR's (Hui et al., 2017) convolutional n-gram matching features, but extended in several ways including multiple views of query and document inputs.  ...  Finally, AUEB's NLP group provided many suggestions over the course of the work.  ... 
doi:10.18653/v1/d18-1211 dblp:conf/emnlp/McDonaldBA18 fatcat:ruugk4sqefgzvcsul4luhrsofy

Query expansion with terms selected using lexical cohesion analysis of documents

Olga Vechtomova, Murat Karamuftuoglu
2007 Information Processing & Management  
We explore the effectiveness of snippets in providing context in interactive query expansion, compare query expansion from snippets vs. whole documents, and query expansion following snippet selection  ...  We present new methods of query expansion using terms that form lexical cohesive links between the contexts of distinct query terms in documents (i.e., words surrounding the query terms in text).  ...  Acknowledgements We would like to thank Susan Jones (City University, London) and anonymous referees for their valuable comments and suggestions.  ... 
doi:10.1016/j.ipm.2006.09.004 fatcat:cmhbfip4i5b7vm7ukprxyp5f7y
Showing results 1 — 15 out of 46,599 results