
Improving Transformer-Kernel Ranking Model Using Conformer and Query Term Independence [article]

Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
2021 arXiv   pre-print
Furthermore, we incorporate query term independence and explicit term matching to extend the model to the full retrieval setting.  ...  The Transformer-Kernel (TK) model has demonstrated strong reranking performance on the TREC Deep Learning benchmark -- and can be considered an efficient (but slightly less effective) alternative  ...  We incorporate Conformers into TK as a direct replacement for the Transformer layers and name the new architecture the Conformer-Kernel (CK) model.  ... 
arXiv:2104.09393v1 fatcat:styghwxkwfhnzgi5ukr3lmgotq
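
Note: the query term independence (QTI) assumption mentioned in this abstract means the query-document score decomposes into a sum of per-query-term contributions, which is what makes precomputing term scores into an inverted index possible. A minimal, hypothetical sketch; the per-term scorer `term_score` is a placeholder, not the TK/CK network:

```python
# Sketch of the query term independence (QTI) assumption: the query-document
# score decomposes into a sum of per-term contributions, so each term's score
# can be precomputed offline and served from an inverted index.
# `term_score` is a hypothetical stand-in for the model's per-term scorer.

from typing import Callable, Dict, List


def qti_score(query_terms: List[str],
              doc_id: str,
              term_score: Callable[[str, str], float]) -> float:
    """Score a document as the sum of independent per-query-term scores."""
    return sum(term_score(term, doc_id) for term in query_terms)


def build_term_index(vocabulary: List[str],
                     doc_ids: List[str],
                     term_score: Callable[[str, str], float]) -> Dict[str, Dict[str, float]]:
    """Precompute per-term scores offline; query-time scoring then only
    needs lookups and additions, as with a classic inverted index."""
    return {t: {d: term_score(t, d) for d in doc_ids} for t in vocabulary}
```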

Conformer-Kernel with Query Term Independence for Document Retrieval [article]

Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
2020 arXiv   pre-print
The Transformer-Kernel (TK) model has demonstrated strong reranking performance on the TREC Deep Learning benchmark -- and can be considered an efficient (but slightly less effective) alternative  ...  In this work, we extend the TK architecture to the full retrieval setting by incorporating the query term independence assumption.  ...  An alternative approach assumes query term independence (QTI) in the design of the neural ranking model.  ... 
arXiv:2007.10434v1 fatcat:vxpfn3fnwbge3fx5kiejezvhj4

Conformer-Kernel with Query Term Independence at TREC 2020 Deep Learning Track [article]

Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, Nick Craswell
2021 arXiv   pre-print
We benchmark Conformer-Kernel models under the strict blind evaluation setting of the TREC 2020 Deep Learning track.  ...  In particular, we study the impact of incorporating: (i) explicit term matching to complement matching based on learned representations (i.e., the "Duet principle"), (ii) query term independence (i.e.,  ...  Conformer-Kernel with Query Term Independence: The CK models combine novel Conformer layers with several other existing ideas from the neural information retrieval literature Craswell, 2018, Guo et al.  ... 
arXiv:2011.07368v2 fatcat:vzcm4qubzvflbluntvlhvn6fsy
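
Note: the "Duet principle" named in this abstract refers to complementing learned-representation matching with explicit term matching. The sketch below is a generic illustration of that combination, with toy lexical and semantic scorers standing in for the CK model's actual components:

```python
# Generic sketch of the "Duet principle": combine an explicit term-matching
# signal with a learned-representation signal. Both component scorers and the
# mixing weight are illustrative placeholders.

import numpy as np


def lexical_match(query_terms, doc_terms):
    """Explicit term matching: raw count of exact query-term occurrences
    (a BM25-style score would be used in practice)."""
    return float(sum(doc_terms.count(t) for t in query_terms))


def semantic_match(query_vec, doc_vec):
    """Learned-representation matching: similarity of pooled embeddings."""
    return float(np.dot(query_vec, doc_vec))


def duet_score(query_terms, doc_terms, query_vec, doc_vec, alpha=0.5):
    """Linear interpolation of the explicit and the learned signal."""
    return alpha * lexical_match(query_terms, doc_terms) + \
        (1.0 - alpha) * semantic_match(query_vec, doc_vec)
```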

Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation [article]

Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, Allan Hanbury
2021 arXiv   pre-print
The latency of neural ranking models at query time largely depends on the architecture and on deliberate choices by their designers to trade off effectiveness for higher efficiency.  ...  Retrieval and ranking models are the backbone of many applications such as web search, open domain QA, or text-based recommender systems.  ...  global attention [2] or by combining an efficient transformer-kernel model with a conformer layer [33].  ... 
arXiv:2010.02666v2 fatcat:5xwgvypsrrfqxbpvs26e66gjbm
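
Note: the abstract excerpt does not spell out the distillation objective; a common formulation for distilling a slow cross-encoder teacher into a faster ranking student is a margin-MSE style loss over (query, positive, negative) triples, sketched below as an assumption rather than the paper's exact recipe:

```python
# Hedged sketch of cross-architecture distillation for ranking: a slower
# teacher (e.g. a BERT cross-encoder) supervises a faster student by matching
# score *margins* between a relevant and a non-relevant passage.

import torch


def margin_mse_loss(student_pos: torch.Tensor,
                    student_neg: torch.Tensor,
                    teacher_pos: torch.Tensor,
                    teacher_neg: torch.Tensor) -> torch.Tensor:
    """MSE between the student's and the teacher's score margins for a
    (query, positive passage, negative passage) triple."""
    student_margin = student_pos - student_neg
    teacher_margin = teacher_pos - teacher_neg
    return torch.nn.functional.mse_loss(student_margin, teacher_margin)
```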

Probabilistic Trace Alignment [article]

Giacomo Bergami, Fabrizio Maria Maggi, Marco Montali, Rafael Peñaloza
2021 arXiv   pre-print
However, approaches based on trace alignments use crisp process models as a reference, while recent probabilistic conformance checking approaches check the degree of conformance of an event log with respect  ...  In this paper, for the first time, we provide a conformance checking approach based on trace alignments using stochastic Workflow nets.  ...  Also, we will try to improve the performance (in terms of efficiency and accuracy) of the proposed approach by intervening on both the embedding and the algorithmic strategies.  ... 
arXiv:2107.03997v1 fatcat:qdqwkip6uzeijn5rjz7f5bkwf4

Effective and practical neural ranking

Sean MacAvaney
2021 SIGIR Forum  
I also demonstrate that contextualized representations, particularly those from transformer-based language models, considerably improve neural ad-hoc ranking performance.  ...  training ranking models.  ...  KNRM and C-KNRM [37, 214] are kernel-based ranking models that calculate the cosine similarity between the word embeddings of query and document terms, aggregating the scores using Gaussian kernels.  ... 
doi:10.1145/3476415.3476432 fatcat:fdjy53sggvhgxo5fa5hzpede2i
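
Note: the KNRM description in this excerpt (cosine similarities between query and document term embeddings, aggregated with Gaussian kernels) corresponds to kernel pooling; a minimal numpy sketch, with illustrative kernel means and widths:

```python
# Sketch of KNRM-style Gaussian kernel pooling: the query-document cosine
# similarity matrix is soft-binned by Gaussian kernels and log-summed into a
# small feature vector that a linear layer would turn into a ranking score.

import numpy as np


def kernel_pool(query_emb: np.ndarray,   # shape (n_q, dim), L2-normalised rows
                doc_emb: np.ndarray,     # shape (n_d, dim), L2-normalised rows
                mus=np.linspace(-0.9, 1.0, 11),
                sigma: float = 0.1) -> np.ndarray:
    sims = query_emb @ doc_emb.T                            # cosine similarities
    feats = []
    for mu in mus:
        k = np.exp(-(sims - mu) ** 2 / (2 * sigma ** 2))    # Gaussian kernel
        per_query_term = k.sum(axis=1)                      # pool over doc terms
        feats.append(np.log(per_query_term + 1e-10).sum())  # pool over query terms
    return np.array(feats)                                  # input to a linear scorer
```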

Text Mining for Protein Docking

Varsha D. Badal, Petras J. Kundrotas, Ilya A. Vakser, Nir Ben-Tal
2015 PLoS Computational Biology  
The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes.  ...  A major paradigm shift in modeling of protein complexes is emerging due to the rapidly expanding amount of such information, which can be used as modeling constraints.  ...  queryD OR queryE)").  ... 
doi:10.1371/journal.pcbi.1004630 pmid:26650466 pmcid:PMC4674139 fatcat:5usgubv7uzho7iapj4ha26ge7q

Exploring Classic and Neural Lexical Translation Models for Information Retrieval: Interpretability, Effectiveness, and Efficiency Benefits [article]

Leonid Boytsov, Zico Kolter
2021 arXiv   pre-print
Using Model 1 we produced the best neural and non-neural runs on the MS MARCO document ranking leaderboard in late 2020.  ...  We use the neural Model 1 as an aggregator layer applied to context-free or contextualized query/document embeddings.  ...  Berger and Lafferty proposed to estimate this probability with a term-independent and context-free model known as Model 1 [4].  ... 
arXiv:2102.06815v2 fatcat:bg74b25ks5e4lk7za25j6s6ace
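
Note: "Model 1" here is Berger and Lafferty's translation language model; under the term-independence assumption, P(query | document) is a product over query terms of translation-probability mixtures over document terms. A toy sketch, with a placeholder translation table:

```python
# Toy sketch of Model 1 scoring: each query term's probability is a mixture,
# over document terms, of term-to-term translation probabilities T(q | w)
# weighted by the document language model P(w | d). `T` is a placeholder dict.

import math
from collections import Counter


def model1_log_prob(query_terms, doc_terms, T, smoothing=1e-6):
    """log P(query | document) under a context-free, term-independent model."""
    counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    log_p = 0.0
    for q in query_terms:
        # sum over document terms: T(q | w) * P(w | d)
        p = sum(T.get((q, w), 0.0) * (c / doc_len) for w, c in counts.items())
        log_p += math.log(p + smoothing)
    return log_p
```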

Stretching Bayesian Learning in the Relevance Feedback of Image Retrieval [chapter]

Ruofei Zhang, Zhongfei (Mark) Zhang
2004 Lecture Notes in Computer Science  
Different learning strategies are used for positive and negative sample collections in BALAS, respectively, based on the two unique characteristics.  ...  By defining the relevancy confidence as the relevant posterior probability, we have developed an integrated ranking scheme in BALAS which complementarily combines the subjective relevancy confidence and  ...  It is reasonable to assume that all dimensions of one feature are independent (raw features per se are independent, e.g., color and texture features, or we can always apply the K-L transform [5] to generate  ... 
doi:10.1007/978-3-540-24672-5_28 fatcat:xiq64s66gzhozcgw4n7oldzp7e
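
Note: the "relevancy confidence as the relevant posterior probability" phrase amounts to a Bayes-rule combination of class-conditional densities estimated from positive and negative feedback samples. A minimal sketch with placeholder densities and prior:

```python
# Sketch of relevancy confidence as a posterior probability: two
# class-conditional densities (estimated from positive and negative feedback)
# are combined with a prior via Bayes' rule. Densities and prior are placeholders.

def relevancy_confidence(x, p_x_given_rel, p_x_given_irr, prior_rel=0.5):
    """P(relevant | x) computed from the two likelihood functions."""
    num = p_x_given_rel(x) * prior_rel
    den = num + p_x_given_irr(x) * (1.0 - prior_rel)
    return num / den if den > 0 else 0.0
```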

An Axiomatic Approach to Regularizing Neural Ranking Models

Corby Rosset, Bhaskar Mitra, Chenyan Xiong, Nick Craswell, Xia Song, Saurabh Tiwary
2019 Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR'19  
This work explores the use of IR axioms to augment the direct supervision from labeled data for training neural ranking models.  ...  Our experiments show that the neural ranking model achieves faster convergence and better generalization with axiomatic regularization.  ...  AXIOMATIC REGULARIZATION FOR NEURAL RANKING MODELS: In ad-hoc retrieval, an important IR task, the ranking model receives as input a pair of query q and document d, and estimates a score proportional to their  ... 
doi:10.1145/3331184.3331296 dblp:conf/sigir/RossetMXCST19 fatcat:g5robc5tfnfwxfwmq3c4nyho4a
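
Note: axiomatic regularization, as described in this excerpt, augments the supervised ranking loss with a penalty whenever the model's scores contradict an axiom-prescribed preference on a perturbed document pair. A hedged sketch; the exact axioms, perturbations, and loss shape in the paper may differ:

```python
# Hedged sketch of axiomatic regularization: for a document pair on which an
# IR axiom prescribes a preference (d_plus should outscore d_minus), add a
# hinge-style penalty when the neural ranker disagrees.

import torch


def axiom_regularizer(score_plus: torch.Tensor,
                      score_minus: torch.Tensor,
                      margin: float = 0.1) -> torch.Tensor:
    """Penalty is zero once the axiom-preferred document wins by `margin`."""
    return torch.clamp(margin - (score_plus - score_minus), min=0.0).mean()


# total_loss = ranking_loss + reg_weight * axiom_regularizer(s_plus, s_minus)
```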

An Axiomatic Approach to Regularizing Neural Ranking Models [article]

Corby Rosset, Bhaskar Mitra, Chenyan Xiong, Nick Craswell, Xia Song, Saurabh Tiwary
2019 arXiv   pre-print
This work explores the use of IR axioms to augment the direct supervision from labeled data for training neural ranking models.  ...  Our experiments show that the neural ranking model achieves faster convergence and better generalization with axiomatic regularization.  ...  AXIOMATIC REGULARIZATION FOR NEURAL RANKING MODELS: In ad-hoc retrieval, an important IR task, the ranking model receives as input a pair of query q and document d, and estimates a score proportional to their  ... 
arXiv:1904.06808v1 fatcat:g5sfgji6pnaibpbxegktlk2b7i

Pretrained Transformers for Text Ranking: BERT and Beyond

Andrew Yates, Rodrigo Nogueira, Jimmy Lin
2021 Proceedings of the 14th ACM International Conference on Web Search and Data Mining  
In the context of text ranking, these models produce high quality results across many domains, tasks, and settings.  ...  The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query for a particular task.  ...  Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques  ... 
doi:10.1145/3437963.3441667 fatcat:6teqmlndtrgfvk5mneq5l7ecvq
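
Note: a representative pattern covered by this survey is cross-encoder ("monoBERT"-style) reranking, where each query-passage pair is scored jointly by a pretrained transformer and the candidates are re-sorted by that score. A minimal sketch assuming a generic Hugging Face relevance checkpoint; the checkpoint name below is a placeholder:

```python
# Minimal sketch of cross-encoder reranking: score each (query, passage) pair
# jointly with a pretrained transformer, then sort by score.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()


def rerank(query: str, passages: list) -> list:
    inputs = tokenizer([query] * len(passages), passages,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)   # one relevance score per pair
    return sorted(zip(scores.tolist(), passages), reverse=True)
```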

Should one Use Term Proximity or Multi-Word Terms for Arabic Information Retrieval?

Abdelkader El Mahdaouy, Eric Gaussier, Saïd Ouatik El Alaoui
2019 Computer Speech and Language  
In this paper, we propose to explore whether term dependencies can help improve Arabic IR systems, and which methods are best to use.  ...  Recently, several information retrieval (IR) models have been proposed in order to boost retrieval performance using term dependencies.  ...  The Farasa segmenter is based on SVM-rank with linear kernels (Abdelali et al., 2016; Darwish and Mubarak, 2016).  ... 
doi:10.1016/j.csl.2019.04.002 fatcat:k6wup5ke3be7ljia5jvpi5s77m
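
Note: the term-dependency models this paper evaluates generally extend independent-term scoring with proximity signals; the sketch below is a generic sequential-dependence-style interpolation, not the specific methods compared in the paper:

```python
# Generic illustration of a term-dependency retrieval score: interpolate an
# independent-term (unigram) score with an ordered-bigram proximity signal.
# The weights and component scorers are placeholders.

def ordered_bigram_matches(query_terms, doc_terms):
    """Count adjacent query-term pairs that also occur adjacently in the document."""
    doc_bigrams = set(zip(doc_terms, doc_terms[1:]))
    return sum(1 for pair in zip(query_terms, query_terms[1:]) if pair in doc_bigrams)


def term_dependency_score(unigram_score, query_terms, doc_terms, w=(0.85, 0.15)):
    """Weighted combination of independent-term and term-proximity evidence."""
    return w[0] * unigram_score + w[1] * ordered_bigram_matches(query_terms, doc_terms)
```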

A kernel-based active learning strategy for content-based image retrieval

I. Daoudi, K. Idrissi
2010 2010 International Workshop on Content Based Multimedia Indexing (CBMI)  
In this paper, we propose an efficient kernel-based active learning strategy to improve the retrieval performance of CBIR systems using class probability distributions.  ...  The distances between the user's request and database images are then learned and computed in the kernel space.  ...  Based on quasiconformal transformed kernels, the proposed active learning process generates for each class a suitable similarity model by accumulating classification knowledge collected over multiple query  ... 
doi:10.1109/cbmi.2010.5529915 dblp:conf/cbmi/DaoudiI10 fatcat:bxj66epktzanzeof2olfbfssia
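
Note: kernel-based active learning for relevance feedback typically fits a kernel classifier on the user's labelled images and requests labels for the most uncertain candidates; the sketch below shows that generic uncertainty-sampling pattern, not necessarily the paper's class-probability-based criterion:

```python
# Generic sketch of kernel-based active learning for relevance feedback:
# fit an RBF-kernel classifier on labelled feedback, then pick the unlabelled
# images whose predicted relevance probability is closest to 0.5.

import numpy as np
from sklearn.svm import SVC


def select_for_feedback(X_labelled, y_labelled, X_unlabelled, n_queries=5):
    clf = SVC(kernel="rbf", probability=True).fit(X_labelled, y_labelled)
    proba = clf.predict_proba(X_unlabelled)[:, 1]     # P(relevant | image)
    uncertainty = -np.abs(proba - 0.5)                # closest to 0.5 ranks first
    return np.argsort(uncertainty)[::-1][:n_queries]  # indices to show the user
```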

BALAS: Empirical Bayesian learning in the relevance feedback for image retrieval

Ruofei Zhang, Zhongfei (Mark) Zhang
2006 Image and Vision Computing  
In BALAS, different learning strategies are used for positive and negative sample collections, respectively, based on the two unique characteristics.  ...  By defining the relevancy confidence as the relevant posterior probability, we have developed an integrated ranking scheme in BALAS which complementarily combines the subjective relevancy confidence and  ...  other dimensions' distributions in the feature space are jumbled, and thus do not conform to a Gaussian model well.  ... 
doi:10.1016/j.imavis.2005.11.004 fatcat:ojjb4moxjjaw7odxv6wohcymra
Showing results 1 — 15 out of 3,162 results