9,976 Hits in 5.6 sec

Neural Embedding-Based Metrics for Pre-retrieval Query Performance Prediction [chapter]

Negar Arabzadeh, Fattane Zarrinkalam, Jelena Jovanovic, Ebrahim Bagheri
2020 Lecture Notes in Computer Science  
Since neural embedding-based models are showing wider adoption in the Information Retrieval (IR) community, we propose a set of pre-retrieval QPP metrics based on the properties of pre-trained neural embeddings  ...  Pre-retrieval QPP methods are oblivious to the performance of the retrieval model as they predict query difficulty prior to observing the set of documents retrieved for the query.  ...  based on the neural embedding-based representation of terms to perform pre-retrieval QPP.  ... 
doi:10.1007/978-3-030-45442-5_10 fatcat:ifb36ewytfchzikbmoytzijdie
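The entry above describes pre-retrieval QPP computed from pre-trained embeddings alone, before any documents are retrieved. A minimal sketch of one such signal (hypothetical toy embeddings and metric, not the paper's exact formulation): score a query by the mean pairwise cosine similarity of its term vectors, on the assumption that semantically coherent queries are easier.

```python
import numpy as np

# Toy pre-trained term embeddings (hypothetical values for illustration).
EMB = {
    "neural":    np.array([0.9, 0.1, 0.0]),
    "embedding": np.array([0.8, 0.2, 0.1]),
    "banana":    np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def query_coherence(terms):
    """Mean pairwise cosine similarity of query-term embeddings.

    A pre-retrieval signal: it is computed before observing any
    retrieved documents, using only the pre-trained embeddings.
    """
    vecs = [EMB[t] for t in terms if t in EMB]
    if len(vecs) < 2:
        return 1.0  # a single-term query is trivially coherent
    sims = [cosine(vecs[i], vecs[j])
            for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return sum(sims) / len(sims)

coherent = query_coherence(["neural", "embedding"])
mixed = query_coherence(["neural", "banana"])
```

The coherent query scores near 1.0 while the mixed-topic query scores near 0, illustrating how such a metric separates queries without touching the index.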

Neural Search: Learning Query and Product Representations in Fashion E-commerce [article]

Lakshya Kumar, Sagnik Sarkar
2021 arXiv   pre-print
corpus for the query token prediction task.  ...  We perform experiments related to pre-training of the Transformer-based RoBERTa model using a fashion corpus and fine-tuning it over the triplet loss.  ...  For retrieving the products based on query embedding, we construct six different Annoy [34] indexes over the product embeddings obtained from different models.  ... 
arXiv:2107.08291v1 fatcat:z3ycxfyw4be47ks5zvtbjlepk4
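The entry above retrieves products by nearest-neighbour search over product embeddings (the paper uses Annoy for approximate search). A self-contained sketch with toy vectors, substituting exact brute-force cosine retrieval for Annoy's approximate index:

```python
import numpy as np

# Hypothetical product embeddings (one row per product) and a query
# embedding; in the paper these come from a fine-tuned RoBERTa model.
products = ["red dress", "blue jeans", "running shoes"]
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.9, 0.1],
              [0.0, 0.2, 0.9]])
q = np.array([0.85, 0.15, 0.05])  # toy embedding of a query like "crimson gown"

def top_k(query, matrix, k=2):
    """Exact cosine-similarity retrieval; an ANN library such as Annoy
    approximates this with random-projection trees for large catalogues."""
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)[:k]
    return [products[i] for i in order]

results = top_k(q, P)
```

The trade-off is the usual one: exact search is O(n) per query, while an approximate index answers in sub-linear time at some recall cost.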

Self-Supervised Learning for Code Retrieval and Summarization through Semantic-Preserving Program Transformations [article]

Nghi D. Q. Bui, Yijun Yu, Lingxiao Jiang
2021 arXiv   pre-print
The pre-trained model from Corder can be used in two ways: (1) it can produce vector representations of code and can be applied to code retrieval tasks that do not have labelled data; (2) it can be used  ...  Through extensive experiments, we have shown that our Corder pretext task substantially outperforms the other baselines for code-to-code retrieval, text-to-code retrieval, and code-to-text summarization  ...  [28] shows that the pre-trained Code2vec [4] model does not perform well for other code modeling tasks when it was trained specifically for the method-name prediction task. Jiang et al.  ... 
arXiv:2009.02731v5 fatcat:sdyhezkr4rhmbpmgoyssrtiu5i

A Hybrid Embedding Approach to Noisy Answer Passage Retrieval [chapter]

Daniel Cohen, W. Bruce Croft
2018 Lecture Notes in Computer Science  
In this paper, we demonstrate the flexibility of a character based approach on the task of answer passage retrieval, agnostic to the source of embeddings and with improved performance in P@1 and MRR metrics  ...  Answer passage retrieval is an increasingly important information retrieval task as queries become more precise and mobile and audio interfaces more prevalent.  ...  Acknowledgments This work was supported in part by the Center for Intelligent Information Retrieval, in part by NSF IIS-1160894 and in part by NSF grant #IIS-1419693.  ... 
doi:10.1007/978-3-319-76941-7_10 fatcat:6glb3eh65rg3ng5rfatuz5edim
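The entry above reports improvements in P@1 and MRR. These two standard ranking metrics are easy to state precisely; a minimal reference implementation:

```python
def precision_at_1(ranked, relevant):
    """P@1: 1.0 if the top-ranked item is relevant, else 0.0."""
    return 1.0 if ranked and ranked[0] in relevant else 0.0

def mrr(rankings, relevant_sets):
    """Mean Reciprocal Rank: for each query, take 1/rank of the first
    relevant item (contributing 0 if none is found), then average."""
    total = 0.0
    for ranked, relevant in zip(rankings, relevant_sets):
        for i, doc in enumerate(ranked, start=1):
            if doc in relevant:
                total += 1.0 / i
                break
    return total / len(rankings)

# Two toy queries: first relevant hit at rank 1 and at rank 2.
score = mrr([["d1", "d2"], ["d3", "d4"]], [{"d1"}, {"d4"}])  # (1 + 1/2) / 2 = 0.75
```

P@1 only credits a relevant top hit, while MRR gives partial credit the deeper the first relevant answer sits, which is why answer-passage work typically reports both.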

An Analysis of a BERT Deep Learning Strategy on a Technology Assisted Review Task

Alexandros Ioannidis
2021 Zenodo  
clinical queries.  ...  I find that the proposed DL strategy works; I compare it to the recently successful BM25+RM3 (IR) model and conclude that the suggested method achieves strong retrieval performance in the initial  ...  [12] to compare the performance of alternative language models based on two neural word embeddings (Word2vec and GloVe) for document representation in an AL environment.  ... 
doi:10.5281/zenodo.4697891 fatcat:w4ks77xkdjfupi47vhsskoh6n4

Deep Neural Networks for Query Expansion using Word Embeddings [article]

Ayyoob Imani, Amir Vakili, Ali Montazer, Azadeh Shakery
2018 arXiv   pre-print
In this paper, we show that this is also true for more recently proposed embedding-based query expansion methods.  ...  We then introduce an artificial neural network classifier to predict the usefulness of query expansion terms. This classifier uses term word embeddings as inputs.  ...  The neural network uses only pre-trained word embeddings and no manual feature selection or initial retrieval using the query is necessary.  ... 
arXiv:1811.03514v1 fatcat:nu2iw4u6frcepnvmsum4oahiwy
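The entry above feeds term word embeddings to a neural classifier that predicts whether an expansion term is useful. A simpler stand-in sketch (toy embeddings, heuristic scoring rather than the paper's learned classifier): rank candidate expansion terms by cosine similarity to the centroid of the query-term embeddings.

```python
import numpy as np

# Hypothetical pre-trained embeddings; the paper trains a classifier on
# such vectors, here we rank candidates by a simple similarity heuristic.
EMB = {
    "car":     np.array([0.9, 0.1]),
    "vehicle": np.array([0.85, 0.2]),
    "auto":    np.array([0.8, 0.15]),
    "kitchen": np.array([0.1, 0.9]),
}

def rank_expansion_terms(query_terms, candidates):
    """Rank candidates by cosine similarity to the query centroid.
    No initial retrieval is needed: only pre-trained embeddings are used,
    mirroring the pre-retrieval setting described in the abstract."""
    centroid = np.mean([EMB[t] for t in query_terms], axis=0)
    def score(t):
        v = EMB[t]
        return float(v @ centroid / (np.linalg.norm(v) * np.linalg.norm(centroid)))
    return sorted(candidates, key=score, reverse=True)

ranked = rank_expansion_terms(["car"], ["kitchen", "vehicle", "auto"])
```

Off-topic candidates like "kitchen" fall to the bottom; a trained classifier would additionally learn which near-neighbours actually help retrieval rather than merely being similar.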

Supporting Complex Information-Seeking Tasks with Implicit Constraints [article]

Ali Ahmadvand, Negar Arabzadeh, Julia Kiseleva, Patricio Figueroa Sanz, Xin Deng, Sujay Jauhar, Michael Gamon, Eugene Agichtein, Ned Friend, Aniruddha
2022 arXiv   pre-print
To demonstrate the performance of the proposed modeling paradigm, we have adopted various pre-retrieval metrics that capture the extent to which guided interactions with our system yield better retrieval  ...  In such scenarios, the user requests can be issued at once in the form of a long, complex query, unlike conversational and exploratory search models that require short utterances or queries where they  ...  Neural embedding-based QPPs (Neural-CC): Neural embedding-based QPP metrics have shown excellent performance on several Information Retrieval benchmarks.  ... 
arXiv:2205.00584v1 fatcat:waapsu6kjfgolbvhny36kqffsa

LIDER: An Efficient High-dimensional Learned Index for Large-scale Dense Passage Retrieval [article]

Yifan Wang, Haodi Ma, Daisy Zhe Wang
2022 arXiv   pre-print
Text retrieval using dense embeddings generated from deep neural models is called "dense passage retrieval".  ...  But most of the existing learned indexes are designed for low-dimensional data. Thus they are not suitable for dense passage retrieval tasks with high-dimensional dense embeddings.  ...  For an in-cluster retriever, that range is pre-computed purely based on 𝑘 before the search, while for the centroids retriever, the range is also pre-determined only based on 𝑐 0 .  ... 
arXiv:2205.00970v1 fatcat:jp3ckx7u4rd4rm4zqunkxlw4a4
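The entry above distinguishes a centroids retriever (pick candidate clusters) from in-cluster retrievers (search inside them). A toy sketch of that two-stage flow with hand-made clusters; LIDER replaces the exact searches below with learned index models, which this sketch does not attempt:

```python
import numpy as np

# Toy setup: passages pre-assigned to clusters (e.g. by offline k-means).
centroids = np.array([[1.0, 0.0], [0.0, 1.0]])       # one centroid per cluster
clusters = {
    0: {"ids": ["p0", "p1"], "vecs": np.array([[0.95, 0.05], [0.9, 0.2]])},
    1: {"ids": ["p2", "p3"], "vecs": np.array([[0.1, 0.9], [0.2, 0.8]])},
}

def two_stage_search(q, c0=1, k=1):
    """Stage 1 (centroids retriever): pick the c0 nearest cluster centroids.
    Stage 2 (in-cluster retrievers): search only those clusters, return top-k ids."""
    order = np.argsort(np.linalg.norm(centroids - q, axis=1))[:c0]
    cand_ids, cand_dists = [], []
    for c in order:
        d = np.linalg.norm(clusters[c]["vecs"] - q, axis=1)
        cand_ids.extend(clusters[c]["ids"])
        cand_dists.extend(d.tolist())
    best = np.argsort(cand_dists)[:k]
    return [cand_ids[i] for i in best]

hits = two_stage_search(np.array([0.9, 0.1]))
```

Pruning to `c0` clusters is what makes the search sub-linear in collection size; `c0` and `k` trade recall against latency, matching the pre-computed ranges mentioned in the snippet.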

Semantic Models for the First-stage Retrieval: A Comprehensive Review [article]

Yinqiong Cai, Yixing Fan, Jiafeng Guo, Fei Sun, Ruqing Zhang, Xueqi Cheng
2021 arXiv   pre-print
methods and neural semantic retrieval methods.  ...  In this paper, we describe the current landscape of the first-stage retrieval models under a unified framework to clarify the connection between classical term-based retrieval methods, early semantic retrieval  ...  Existing works aim to train specific neural networks to get the query and document representation vectors for different retrieval tasks.  ... 
arXiv:2103.04831v3 fatcat:6qa7hvc3jve3pcmo2mo4qsiefq

GRAPHENE: A Precise Biomedical Literature Retrieval Engine with Graph Augmented Deep Learning and External Knowledge Empowerment

Sendong Zhao, Chang Su, Andrea Sboner, Fei Wang
2019 Proceedings of the 28th ACM International Conference on Information and Knowledge Management - CIKM '19  
for each query.  ...  Effective biomedical literature retrieval (BLR) plays a central role in precision medicine informatics. In this paper, we propose GRAPHENE, which is a deep learning based framework for precise BLR.  ...  There are studies trying to perform query expansion or leverage pre-trained embeddings of named entities using external knowledge bases.  ... 
doi:10.1145/3357384.3358038 dblp:conf/cikm/ZhaoSSW19 fatcat:hpll7xork5em7jkoz4kz3m3gry

Neural Models for Information Retrieval [article]

Bhaskar Mitra, Nick Craswell
2017 arXiv   pre-print
Neural ranking models for information retrieval (IR) use shallow or deep neural networks to rank search results in response to a query.  ...  We then review shallow neural IR methods that employ pre-trained neural term embeddings without learning the IR task end-to-end.  ...  Embedding-based models often perform poorly when retrieval is performed over the full document collection [143] .  ... 
arXiv:1705.01509v1 fatcat:i3flxbi5kfdafcsaexqokmhouy
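The survey entry above mentions shallow neural IR methods that use pre-trained term embeddings without end-to-end training. The simplest such method is bag-of-embeddings scoring, sketched here with hypothetical toy vectors: represent query and document as the mean of their term embeddings, then score by cosine.

```python
import numpy as np

# Hypothetical pre-trained term embeddings; no model is trained here,
# which is what makes this a "shallow" neural IR method.
EMB = {
    "cat":    np.array([0.9, 0.1]),
    "feline": np.array([0.85, 0.15]),
    "stock":  np.array([0.1, 0.9]),
    "market": np.array([0.15, 0.85]),
}

def text_vec(tokens):
    """Average of term embeddings: a simple bag-of-embeddings text vector."""
    return np.mean([EMB[t] for t in tokens if t in EMB], axis=0)

def score(query, doc):
    qv, dv = text_vec(query), text_vec(doc)
    return float(qv @ dv / (np.linalg.norm(qv) * np.linalg.norm(dv)))

on_topic = score(["cat"], ["feline"])          # no lexical overlap, same topic
off_topic = score(["cat"], ["stock", "market"])
```

The on-topic pair scores higher despite zero term overlap, which is the appeal of embedding matching; the snippet's caveat is that over a full collection such soft matching also surfaces many loosely related documents.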

AutoADR: Automatic Model Design for Ad Relevance [article]

Yiren Chen, Yaming Yang, Hong Sun, Yujing Wang, Yu Xu, Wei Shen, Rong Zhou, Yunhai Tong, Jing Bai, Ruofei Zhang
2020 arXiv   pre-print
Specifically, AutoADR leverages a one-shot neural architecture search algorithm to find a tailored network architecture for Ad Relevance.  ...  The search process is simultaneously guided by knowledge distillation from a large pre-trained teacher model (e.g.  ...  In general, it contains three key components: the Ad Retrieval component performs the initial retrieval step with techniques like Information Retrieval (IR) to generate a large candidate list for a given user  ... 
arXiv:2010.07075v1 fatcat:jhvzasllj5dujggc5to64rir2m

Designing An Information Framework For Semantic Search

İsmail Burak PARLAK
2022 European Journal of Science and Technology  
More accurate results can be retrieved by using neural models to obtain contextual information from different types of content, such as text, images, and video.  ...  As a result, semantic search methods performed better than lexical search.  ...  Cosine similarity performed better for the same base models. Models trained on larger and more diverse datasets performed far better.  ... 
doi:10.31590/ejosat.1043441 fatcat:vq2z7kg4ozd4rjobkvpqlubiby

Deep Neural Network and Boosting Based Hybrid Quality Ranking for e-Commerce Product Search

Mourad Jbene, Smail Tigani, Saadane Rachid, Abdellah Chehri
2021 Big Data and Cognitive Computing  
This work proposes an e-commerce product search engine based on a similarity metric that works on top of query and product embeddings.  ...  Two pre-trained word embedding models were tested, the first representing a category of models that generate fixed embeddings and a second representing a newer category of models that generate context-aware  ...  Acknowledgments: We would like to thank the anonymous referees for their valuable comments and helpful suggestions.  ... 
doi:10.3390/bdcc5030035 fatcat:balznb36jbhcdl2oofcsptzjmq

Cross-modal representation alignment of molecular structure and perturbation-induced transcriptional profiles [article]

Samuel G. Finlayson, Matthew B.A. McDermott, Alex V. Pickering, Scott L. Lipnick, Isaac S. Kohane
2020 arXiv   pre-print
Many benchmark tasks have been proposed for molecular property prediction, but these tasks are generally aimed at specific, isolated biomedical properties.  ...  We benchmark our results against oracle models and principled baselines, and find that cell line variability markedly influences performance in this domain.  ...  It is also clear that CCA is the preferred query metric, over either raw correlation (Corr)-based lookups or D-NN-based lookups.  ... 
arXiv:1911.10241v2 fatcat:vf2yzabsu5fxjollco4i3epsjm
Showing results 1–15 of 9,976