5,087 Hits in 6.0 sec

Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context [article]

Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye
2021 arXiv   pre-print
The BERT-based ranking model, however, has not been able to fully incorporate these two types of ranking context, thereby ignoring the inter-document relationships from the ranking and the differences  ...  To mitigate this gap, in this work, an end-to-end transformer-based ranking model, named Co-BERT, has been proposed to exploit several BERT architectures to calibrate the query-document representations  ...  the two kinds of context into the two-tower ranking models like DPR [12], ColBERT [13], and CoRT [38] to further boost their effectiveness; in addition, for the content-context-aware lexicon matching  ... 
arXiv:2104.08523v1 fatcat:w24yxin3vjfqdl4oi74s7lsuba
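
To make the cross-document calibration idea above concrete, here is a minimal PyTorch sketch of a groupwise re-ranker: per-candidate BERT [CLS] vectors are re-encoded jointly so each relevance score can depend on the other candidates in the ranking. The module names, sizes, and the use of a vanilla transformer encoder are assumptions for illustration, not Co-BERT's actual architecture.

```python
import torch
import torch.nn as nn

class GroupwiseReRanker(nn.Module):
    """Calibrates per-document scores using cross-document ranking context."""

    def __init__(self, hidden=768, heads=8, layers=2):
        super().__init__()
        # A transformer encoder attends across the candidate list, letting
        # each query-document representation see its competitors.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           batch_first=True)
        self.context_encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.score = nn.Linear(hidden, 1)

    def forward(self, pair_reprs):
        # pair_reprs: (batch, k, hidden), e.g., BERT [CLS] vectors for the
        # query paired with each of k candidate documents.
        calibrated = self.context_encoder(pair_reprs)
        return self.score(calibrated).squeeze(-1)  # (batch, k) scores

# Usage: jointly re-score a list of 10 candidates.
scores = GroupwiseReRanker()(torch.randn(1, 10, 768))
```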

Groupwise Query Performance Prediction with BERT [article]

Xiaoyang Chen, Ben He, Le Sun
2022 arXiv   pre-print
Meanwhile, recent studies suggest that the cross-attention modeling of a group of documents can effectively boost performance for both learning-to-rank algorithms and BERT-based re-ranking.  ...  To this end, a BERT-based groupwise QPP model is proposed, in which the ranking contexts of a list of queries are jointly modeled to predict the relative performance of individual queries.  ...  Furthermore, Co-BERT [8] incorporates cross-document ranking context into BERT-based re-ranking models, demonstrating the effectiveness of using groupwise methods in boosting the ranking performance  ... 
arXiv:2204.11489v1 fatcat:s4q6cuwnfbabppuklvyh6kghhm
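
Since the snippet emphasizes predicting the relative performance of queries within a group, one plausible training signal is a pairwise hinge loss over predicted versus actual per-query effectiveness. This is a sketch of that idea, not the paper's exact objective; the margin value is an assumption.

```python
import torch

def pairwise_qpp_loss(pred, actual, margin=0.1):
    """Hinge loss that rewards preserving the true ordering of per-query
    effectiveness (e.g., average precision) among the n queries in a group.

    pred, actual: (n,) tensors of predicted and actual performance.
    """
    diff_true = actual.unsqueeze(1) - actual.unsqueeze(0)  # (n, n) true gaps
    diff_pred = pred.unsqueeze(1) - pred.unsqueeze(0)      # (n, n) predicted gaps
    mask = (diff_true > 0).float()  # pairs where query i truly beats query j
    loss = mask * torch.clamp(margin - diff_pred, min=0)
    return loss.sum() / mask.sum().clamp(min=1)

loss = pairwise_qpp_loss(torch.tensor([0.4, 0.9, 0.1]),
                         torch.tensor([0.35, 0.80, 0.20]))
```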

HSRM-LAVIS at TREC 2020 Deep Learning Track: Neural First-stage Ranking Complementing Term-based Retrieval

Marco Wrzalik, Dirk Krechel
2020 Text Retrieval Conference  
Third, rankings from an ELECTRA-based re-ranker using the candidates from the second run as an example of end-to-end results.  ...  Our approach aims to complement term-based retrieval methods with rankings from a representation-focused neural ranking model for first-stage ranking.  ...  ELECTRA Re-ranking As an example of an end-to-end ranking pipeline based on our candidates, we trained a neural re-ranker using a point-wise learning objective.  ... 
dblp:conf/trec/WrzalikK20 fatcat:qbvvwzx57bapvolu4qfr3q44ue
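
The "point-wise learning objective" mentioned in the third fragment usually amounts to binary cross-entropy over independent (query, passage, label) triples. A generic sketch follows; the encoder is left abstract because the entry does not show the exact ELECTRA setup.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def pointwise_step(encoder, batch):
    """One point-wise training step for a neural re-ranker.

    encoder: any module mapping a (query, passage) pair to a scalar logit.
    batch: iterable of (query, passage, label) triples with label in {0, 1}.
    """
    logits = torch.stack([encoder(q, p) for q, p, _ in batch])
    labels = torch.tensor([float(y) for _, _, y in batch])
    return bce(logits, labels)  # minimize with any standard optimizer
```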

CoRT: Complementary Rankings from Transformers [article]

Marco Wrzalik, Dirk Krechel
2021 arXiv   pre-print
In this context we propose CoRT, a simple neural first-stage ranking model that leverages contextual representations from pretrained language models such as BERT to complement term-based ranking functions  ...  Although BM25 has proven decent performance as a first-stage ranker, it tends to miss relevant passages.  ...  Acknowledgments We would like to thank Felix Hamann and Prof. Dr. Adrian Ulges for helpful discussions and comments on the manuscript, as well as the anonymous reviewers for their valuable feedback.  ... 
arXiv:2010.10252v2 fatcat:smxecdij4ngofdjhbf3yjp4lhm
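
A simple way to let a neural first-stage ranker complement BM25, in the spirit of the abstract above, is round-robin interleaving of the two candidate lists with de-duplication. The sketch below is a generic merge; whether it matches CoRT's exact fusion is an assumption.

```python
def interleave(bm25_ids, neural_ids, k=1000):
    """Alternate between two ranked lists of document ids, skipping duplicates."""
    merged, seen = [], set()
    for pair in zip(bm25_ids, neural_ids):
        for doc_id in pair:
            if doc_id not in seen:
                seen.add(doc_id)
                merged.append(doc_id)
            if len(merged) == k:
                return merged
    return merged

# The neural list contributes "d9" and "d4", which BM25 missed.
candidates = interleave(["d1", "d2", "d3"], ["d9", "d2", "d4"])
# -> ["d1", "d9", "d2", "d3", "d4"]
```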

Topic Propagation in Conversational Search [article]

I. Mele, C. I. Muntean, F. M. Nardini, R. Perego, N. Tonellotto, O. Frieder
2020 arXiv   pre-print
the rewritten utterances, and (iii) neural-based re-ranking of candidate passages.  ...  Experimental results show the effectiveness of our techniques, which achieve an improvement of up to 0.28 (+93%) for P@1 and 0.19 (+89.9%) for nDCG@3 w.r.t. the CAsT baseline.  ...  CONCLUSION We proposed a three-step architecture for conversational search and, in this regard, we developed several utterance rewriting techniques.  ... 
arXiv:2004.14054v1 fatcat:tai3ol7455d4zjfj3mvvow5eou

DUTh at TREC 2020 Conversational Assistance Track

Michalis Fotiadis, Georgios Peikos, Symeon Symeonidis, Avi Arampatzis
2020 Text Retrieval Conference  
We argue that the conversational context of previous turns has less impact than the keywords from the current turn, while still adding some informational value.  ...  Our approach incorporates linguistic analysis of the available queries along with query reformulation.  ...  BERT Re-ranking After the initial retrieval, we tried to utilize BERT as a passage re-ranker to improve our results.  ... 
dblp:conf/trec/FotiadisPSA20 fatcat:p4tgp2finbfrvpvlbt5daafplm
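
To illustrate the claim that current-turn keywords should outweigh earlier conversational context, here is a toy weighted reformulation: current-turn terms get full weight and inherited context terms a discounted one. The 0.3 discount and whitespace tokenization are assumptions, not DUTh's method.

```python
def reformulate(current_turn, previous_turns, context_weight=0.3):
    """Build a weighted bag-of-words query: full weight for current-turn
    terms, discounted weight for terms inherited from earlier turns."""
    weights = {term: 1.0 for term in current_turn.lower().split()}
    for turn in previous_turns:
        for term in turn.lower().split():
            weights.setdefault(term, context_weight)
    return weights

q = reformulate("its economic impact", ["tell me about the suez canal"])
# {'its': 1.0, 'economic': 1.0, 'impact': 1.0, 'tell': 0.3, 'me': 0.3, ...}
```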

Exploring Classic and Neural Lexical Translation Models for Information Retrieval: Interpretability, Effectiveness, and Efficiency Benefits [article]

Leonid Boytsov, Zico Kolter
2021 arXiv   pre-print
We study the utility of the lexical translation model (IBM Model 1) for English text retrieval, in particular, its neural variants that are trained end-to-end.  ...  This new approach to designing a neural ranking system has benefits for effectiveness, efficiency, and interpretability.  ...  The resulting composite network (including token embeddings) is learned end-to-end using a ranking objective.  ... 
arXiv:2102.06815v2 fatcat:bg74b25ks5e4lk7za25j6s6ace
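
For readers unfamiliar with IBM Model 1 as a retrieval model: a query term is scored by summing translation probabilities from every document term, smoothed with a collection language model. A small sketch of the classic (non-neural) variant; the translation table T and the smoothing weight lam are inputs the caller must estimate.

```python
import math

def model1_score(query_terms, doc_terms, T, p_collection, lam=0.5):
    """log P(Q|D) under IBM Model 1 with linear (Jelinek-Mercer) smoothing.

    T[q][w]: probability of "translating" document term w into query term q.
    p_collection[q]: background probability of q in the whole collection.
    """
    score = 0.0
    n = max(len(doc_terms), 1)
    for q in query_terms:
        # P(q|D) = sum over document tokens w of T(q|w) * P(w|D),
        # with P(w|D) taken as each token's relative frequency 1/n.
        p_translate = sum(T.get(q, {}).get(w, 0.0) for w in doc_terms) / n
        score += math.log(lam * p_translate + (1 - lam) * p_collection.get(q, 1e-9))
    return score
```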

BERT-QE: Contextualized Query Expansion for Document Re-ranking [article]

Zhi Zheng, Kai Hui, Ben He, Xianpei Han, Le Sun, Andrew Yates
2020 arXiv   pre-print
of the BERT model to select relevant document chunks for expansion.  ...  To bridge this gap, inspired by recent advances in applying contextualized models like BERT to the document retrieval task, this paper proposes a novel query expansion model that leverages the strength  ...  Wu et al. (2020) propose the context-aware Passage-level Cumulative Gain to aggregate passage relevance scores, which is incorporated into a BERT-based model for document ranking.  ... 
arXiv:2009.07258v2 fatcat:z4ytilc6hvhfzc4644wnskbstu
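
The chunk-selection step the snippet alludes to can be sketched as: split feedback documents into fixed-size chunks, score each chunk against the query, keep the top-k as expansion evidence, and fold that evidence into the final document score. The interpolation weight alpha and the max-aggregation are simplifications of BERT-QE's weighted aggregation, and score_fn stands for any BERT-style relevance scorer.

```python
def select_chunks(query, feedback_docs, score_fn, chunk_len=100, k=10):
    """Pick the k chunks most relevant to the query for expansion."""
    chunks = []
    for doc in feedback_docs:
        tokens = doc.split()
        for i in range(0, len(tokens), chunk_len):
            chunks.append(" ".join(tokens[i:i + chunk_len]))
    return sorted(chunks, key=lambda c: score_fn(query, c), reverse=True)[:k]

def expanded_score(query, doc, chunks, score_fn, alpha=0.5):
    """Interpolate direct query-document relevance with chunk evidence."""
    chunk_evidence = max(score_fn(c, doc) for c in chunks)
    return alpha * score_fn(query, doc) + (1 - alpha) * chunk_evidence
```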

MPII at TREC CAsT 2019: Incorporating Query Context into a BERT Re-ranker

Samarth Mehrotra, Andrew Yates
2019 Text Retrieval Conference  
Our approach consists of an initial stage ranker followed by a BERT-based [3] neural document re-ranking model.  ...  ., Wikipedia and ConceptNet) serves as the first stage ranking method, while the neural model uses BERT embeddings and a kernel-based ranking module (KNRM) to predict a document-query relevance score.  ...  Document Re-Ranking with BERT and KNRM Given a potentially relevant document D, our goal is to predict a relevance score between query q_i and D that takes the conversation context (i.e., queries q_1,  ... 
dblp:conf/trec/MehrotraY19 fatcat:zs4zoynahzgundeu73ix2opktq
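
For context on the "kernel-based ranking module (KNRM)": Gaussian kernels soft-count the entries of the query-document cosine-similarity matrix at different similarity levels, yielding a small feature vector for a final scorer. The kernel centers and width below are typical defaults, not necessarily this run's settings.

```python
import torch

def knrm_features(sim, mus=None, sigma=0.1):
    """Kernel pooling over a (q_len, d_len) cosine-similarity matrix.

    Returns one soft-match feature per kernel, to be fed to a linear scorer.
    """
    if mus is None:
        mus = torch.linspace(-0.9, 0.9, 10)  # 10 kernel centers
    # Gaussian response of every similarity entry to every kernel.
    k = torch.exp(-((sim.unsqueeze(-1) - mus) ** 2) / (2 * sigma ** 2))  # (q, d, 10)
    soft_tf = k.sum(dim=1)                   # soft match count per query term
    return torch.log1p(soft_tf).sum(dim=0)   # (10,) features; log1p avoids log(0)

feats = knrm_features(torch.rand(5, 80) * 2 - 1)  # 10 features from a 5x80 matrix
```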

ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT [article]

Omar Khattab, Matei Zaharia
2020 arXiv   pre-print
Beyond reducing the cost of re-ranking the documents retrieved by a traditional model, ColBERT's pruning-friendly interaction mechanism enables leveraging vector-similarity indexes for end-to-end retrieval  ...  To tackle this, we present ColBERT, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval.  ...
Setting      Similarity   Dimension (m)   Bytes/Dim   Space (GiBs)   MRR@10
Re-rank      Cosine       128             4           286            34.9
End-to-end   L2           128             2           154            36.0
Re-rank      L2           128             2           143            34.8
Re-rank      Cosine       48              4           54             34.4
Re-rank      Cosine       ...  ... 
arXiv:2004.12832v2 fatcat:zxu3gagcbvguvjqwufgnnquvyu
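
ColBERT's late interaction itself is compact enough to state directly: every query token embedding is matched against its most similar document token embedding, and the maxima are summed. The sketch assumes embeddings are already L2-normalized so the matrix product gives cosine similarities.

```python
import torch
import torch.nn.functional as F

def maxsim(query_emb, doc_emb):
    """ColBERT-style late interaction score.

    query_emb: (q_len, dim), doc_emb: (d_len, dim), both L2-normalized.
    """
    sim = query_emb @ doc_emb.T          # (q_len, d_len) cosine similarities
    return sim.max(dim=1).values.sum()   # max over doc tokens, sum over query tokens

q = F.normalize(torch.randn(32, 128), dim=-1)
d = F.normalize(torch.randn(180, 128), dim=-1)
score = maxsim(q, d)
```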

Neural Ranking Models for Document Retrieval [article]

Mohamed Trabelsi, Zhiyu Chen, Brian D. Davison, Jeff Heflin
2021 arXiv   pre-print
These models are trained end-to-end to extract features from the raw data for ranking tasks, so that they overcome the limitations of hand-crafted features.  ...  A variety of deep learning models have been proposed, and each model presents a set of neural network components to extract features that are used for ranking.  ...  Query-level learning to rank using isotonic regression. 46th Annual Allerton Conference on Communication, Control, and Computing (pp. 1108-1115).  ... 
arXiv:2102.11903v1 fatcat:zc2otf456rc2hj6b6wpcaaslsa

Language Modelling via Learning to Rank [article]

Arvid Frydenlund, Gagandeep Singh, Frank Rudzicz
2021 arXiv   pre-print
To avoid annotating top-k ranks, we generate them using pre-trained LMs: GPT-2, BERT, and Born-Again models. This leads to a rank-based form of knowledge distillation (KD).  ...  given context.  ...  Acknowledgements We thank our reviewers for their insightful feedback.  ... 
arXiv:2110.06961v2 fatcat:bjqxpfncdbbsvjnsy2injcmedi
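
A hedged sketch of the rank-based distillation idea: the student is trained toward the teacher's top-k next tokens with weights that decay by rank, rather than toward the full soft distribution. The 1/rank weighting is an illustrative choice, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def rank_kd_loss(student_logits, teacher_logits, k=5):
    """Distill the teacher's top-k token ranking into the student.

    Both logits: (batch, vocab). Higher-ranked teacher tokens receive
    larger target weight (1/rank), forming a rank-based soft target.
    """
    topk = teacher_logits.topk(k, dim=-1).indices              # (batch, k)
    weights = 1.0 / torch.arange(1, k + 1, dtype=torch.float)  # 1, 1/2, ..., 1/k
    weights = weights / weights.sum()
    log_probs = F.log_softmax(student_logits, dim=-1)
    picked = log_probs.gather(-1, topk)                        # (batch, k)
    return -(picked * weights).sum(dim=-1).mean()
```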

Argument Retrieval Using Deep Neural Ranking Models

Saeed Entezari, Michael Völske
2020 Conference and Labs of the Evaluation Forum  
In order to incorporate the insights from multiple models into an argument ranking, we further investigate a simple linear aggregation strategy.  ...  In this notebook paper for Touché, taking a distant supervision approach to constructing the query relevance information, we investigate seven different deep neural ranking models proposed in the literature  ...  In our experiments, a basic retrieval model such as BM25 produces an initial ranking which is then re-ranked by the deep neural model (except in the case of end-to-end models, which operate without an  ... 
dblp:conf/clef/EntezariV20 fatcat:oayhlcgrfncybctnbhxmfhll5u
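
The "simple linear aggregation strategy" can be read as: min-max normalize each model's scores per query, then take a weighted sum per document. Uniform weights are assumed below; the originals may have been tuned.

```python
def aggregate(score_lists, weights=None):
    """Linearly combine per-model document scores for one query.

    score_lists: one dict of doc_id -> raw score per ranking model.
    Returns doc ids sorted by the combined score, best first.
    """
    weights = weights or [1.0 / len(score_lists)] * len(score_lists)
    combined = {}
    for w, scores in zip(weights, score_lists):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # guard against constant scores
        for doc_id, s in scores.items():
            combined[doc_id] = combined.get(doc_id, 0.0) + w * (s - lo) / span
    return sorted(combined, key=combined.get, reverse=True)

ranking = aggregate([{"a": 9.1, "b": 3.2}, {"a": 0.7, "b": 0.4}])  # ["a", "b"]
```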

Question Rewriting for Conversational Question Answering

Svitlana Vakulenko, Shayne Longpre, Zhucheng Tu, Raviteja Anantha
2021 Proceedings of the 14th ACM International Conference on Web Search and Data Mining  
Our evaluation results indicate that the QR model we proposed achieves near human-level performance on both datasets and the gap in performance on the end-to-end conversational QA task is attributed mostly  ...  Conversational question answering (QA) requires the ability to correctly interpret a question in the context of previous conversation turns.  ...  ACKNOWLEDGEMENTS We would like to thank our colleagues Srinivas Chappidi, Bjorn Hoffmeister, Stephan Peitz, Russ Webb, Drew Frank, and Chris DuBois for their insightful comments.  ... 
doi:10.1145/3437963.3441748 fatcat:dm4nvglfjfd2vmgx7qeoc5ak6e

Open-Domain Question-Answering for COVID-19 and Other Emergent Domains [article]

Sharon Levy, Kevin Mo, Wenhan Xiong, William Yang Wang
2021 arXiv   pre-print
Furthermore, we incorporate effective re-ranking and question-answering techniques, such as document diversity and multiple answer spans.  ...  This has created the need for a public space for users to ask questions and receive credible, scientific answers.  ...  Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.  ... 
arXiv:2110.06962v1 fatcat:soqvpnhmibeznfqufott73upza
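
"Document diversity" in re-ranking is commonly realized as Maximal Marginal Relevance (MMR), where each greedy pick trades relevance against similarity to documents already selected. Whether this exact formulation matches the paper is an assumption; the loop below is the standard MMR algorithm.

```python
def mmr(candidates, relevance, similarity, lam=0.7, k=10):
    """Greedy MMR re-ranking for diversity.

    relevance: dict doc_id -> relevance to the query.
    similarity: function (doc_id, doc_id) -> pairwise document similarity.
    lam: trade-off; 1.0 is pure relevance, 0.0 is pure diversity.
    """
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr_score(d):
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected
```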
Showing results 1 — 15 out of 5,087 results