
Domain-matched Pre-training Tasks for Dense Retrieval [article]

Barlas Oğuz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, Yashar Mehdad
2021 arXiv   pre-print
Pre-training on larger datasets with ever-increasing model size is now a proven recipe for increased performance across almost all NLP tasks.  ...  A notable exception is information retrieval, where additional pre-training has so far failed to produce convincing results.  ...  Conclusion We have investigated domain-matched pre-training tasks for bi-encoder dense retrieval models.  ...
arXiv:2107.13602v1 fatcat:yuagm746jjfxllrctlbqey5fmy
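
A minimal sketch of the bi-encoder dense retrieval architecture these pre-training tasks target: query and passage are embedded independently and relevance is their inner product. The hash-seeded random projection below is a hypothetical stand-in for the learned transformer encoders.

```python
import numpy as np

def encode(text: str, dim: int = 8) -> np.ndarray:
    # Stand-in for a learned encoder: a deterministic random projection,
    # purely for illustration.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def score(query: str, passage: str) -> float:
    # Bi-encoder relevance: dot product of the two independent embeddings.
    return float(encode(query) @ encode(passage))

passages = ["dense retrieval models", "sparse lexical matching"]
ranked = sorted(passages, key=lambda p: -score("what is dense retrieval?", p))
```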

Efficient Retrieval Optimized Multi-task Learning [article]

Hengxin Fun, Sunil Gandhi, Sujith Ravi
2021 arXiv   pre-print
These advances are fueled by combining large pre-trained language models with learnable retrieval of documents.  ...  In this paper, we propose a novel Retrieval Optimized Multi-task (ROM) framework for jointly training self-supervised tasks, knowledge retrieval, and extractive question answering.  ...  ., 2020) use self-supervised pre-training for learning dense representations for the query and passages.  ...
arXiv:2104.10129v1 fatcat:br6yaaumfncdrck2wxmodmihoq
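
A hedged sketch of the joint objective such a multi-task framework optimizes: self-supervised, retrieval, and extractive-QA losses combined under one set of weights. The weights and loss terms are illustrative assumptions, not ROM's actual configuration.

```python
def rom_style_loss(self_sup_loss: float, retrieval_loss: float,
                   qa_loss: float, w=(1.0, 1.0, 1.0)) -> float:
    # Jointly training all three tasks lets the retriever share
    # representations with the language-model and QA heads.
    return w[0] * self_sup_loss + w[1] * retrieval_loss + w[2] * qa_loss
```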

Large Dual Encoders Are Generalizable Retrievers [article]

Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang
2021 arXiv   pre-print
It has been shown that dual encoders trained on one domain often fail to generalize to other domains for retrieval tasks.  ...  With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.  ...  Acknowledgments We thank Chris Tar and Don Metzler for feedback and suggestions.  ... 
arXiv:2112.07899v1 fatcat:y6dydk7vnndmpp4cxy3r7fwhhm

ReACC: A Retrieval-Augmented Code Completion Framework [article]

Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, Alexey Svyatkovskiy
2022 arXiv   pre-print
We adopt a stage-wise training approach that combines a source code retriever and an auto-regressive language model for programming languages.  ...  Specifically, we propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.  ...  Acknowledgements This work is supported by Microsoft Research Asia and IITP grants (2021-0-01696, High Potential Individuals Global Training Program)  ...
arXiv:2203.07722v1 fatcat:7otcfglafbaa3bddc7656r5toe
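
A hedged sketch of the stage-wise pipeline: first retrieve code similar to the unfinished fragment, then condition an auto-regressive LM on the retrieved context. The Jaccard retriever and the `lm_complete` callable are illustrative stand-ins, not ReACC's actual components.

```python
def jaccard(a: str, b: str) -> float:
    # Toy lexical similarity between two token sets.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def complete(unfinished: str, codebase: list[str], lm_complete) -> str:
    # Stage 1: lexical copying via retrieval (ReACC also uses a dense,
    # semantics-based retriever alongside this).
    context = max(codebase, key=lambda snip: jaccard(unfinished, snip))
    # Stage 2: the retrieved code is prepended as extra LM context.
    return lm_complete(context + "\n" + unfinished)
```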

Multi-CPR: A Multi Domain Chinese Dataset for Passage Retrieval [article]

Dingkun Long, Qiong Gao, Kuan Zou, Guangwei Xu, Pengjun Xie, Ruijie Guo, Jian Xu, Guanjun Jiang, Luxi Xing, Ping Yang
2022 arXiv   pre-print
We hope the release of the Multi-CPR dataset can benchmark the Chinese passage retrieval task in specific domains and also advance future studies.  ...  We find that the performance of retrieval models trained on general-domain data will inevitably decrease on a specific domain.  ...  ACKNOWLEDGMENTS We thank all anonymous reviewers for their helpful suggestions. We also thank all the annotators for constructing this dataset.  ...
arXiv:2203.03367v2 fatcat:wjvdvmxmvrcuzfwuzh4agrh37u

Open-Domain Question-Answering for COVID-19 and Other Emergent Domains [article]

Sharon Levy, Kevin Mo, Wenhan Xiong, William Yang Wang
2021 arXiv   pre-print
Despite the small data size available, we are able to successfully train the system to retrieve answers from a large-scale corpus of published COVID-19 scientific papers.  ...  In this work, we present such a system for the emergent domain of COVID-19.  ...  Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.  ... 
arXiv:2110.06962v1 fatcat:soqvpnhmibeznfqufott73upza

Towards Unsupervised Dense Information Retrieval with Contrastive Learning [article]

Gautier Izacard and Mathilde Caron and Lucas Hosseini and Sebastian Riedel and Piotr Bojanowski and Armand Joulin and Edouard Grave
2021 arXiv   pre-print
Information retrieval is an important component in natural language processing, for knowledge intensive tasks such as question answering and fact checking.  ...  Thus, a natural question is whether it is possible to train dense retrievers without supervision.  ...  Training dense retrievers without supervision can be achieved by using a pretext task that approximates retrieval.  ... 
arXiv:2112.09118v1 fatcat:xa32uym67vhgbkz6uxu6aukyf4
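
One concrete pretext task of this kind, sketched under the common "independent cropping" formulation: two random spans of the same document form a positive pair, other in-batch documents serve as negatives, and an InfoNCE contrastive loss is applied. The span length and temperature are illustrative.

```python
import random
import torch
import torch.nn.functional as F

def random_crop(tokens: list[str], span: int = 16) -> list[str]:
    # Independent cropping: each call yields a different span, so two
    # crops of one document approximate a (query, positive) pair.
    start = random.randrange(max(len(tokens) - span, 1))
    return tokens[start:start + span]

def info_nce(q: torch.Tensor, p: torch.Tensor, tau: float = 0.05):
    # q, p: (batch, dim) embeddings of the two crops of each document;
    # logits[i, j] compares crop 1 of doc i with crop 2 of doc j.
    logits = (q @ p.T) / tau
    labels = torch.arange(q.size(0))   # diagonal entries are positives
    return F.cross_entropy(logits, labels)
```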

Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling [article]

Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu
2021 arXiv   pre-print
These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks.  ...  Moreover, due to the high computational overload of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning.  ...  Beyond using fixed features and same-domain data (i.e., video-text pre-training only for video-text tasks), our work focuses on end-to-end training and applying image-text pre-training for video-text tasks  ... 
arXiv:2102.06183v1 fatcat:n5yabezujbg27eosmpb23s4hlm
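
A sketch of the sparse-sampling idea the abstract contrasts with dense features: sample a few short clips of raw frames per training step instead of pre-extracting features over the whole video, then aggregate clip-level predictions at inference. Clip counts and lengths here are illustrative.

```python
import random

def sample_clips(num_frames: int, num_clips: int = 2, clip_len: int = 4):
    # Frame-index windows for a handful of randomly placed short clips;
    # raw frames from these windows are fed end-to-end to the model.
    starts = [random.randrange(max(num_frames - clip_len, 1))
              for _ in range(num_clips)]
    return [list(range(s, s + clip_len)) for s in starts]
```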

Learning Dense Representations of Phrases at Scale [article]

Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, Danqi Chen
2021 arXiv   pre-print
Finally, we directly use our pre-indexed dense phrase representations for two slot filling tasks, showing the promise of utilizing DensePhrases as a dense knowledge base for downstream tasks.  ...  On five popular open-domain QA datasets, our model DensePhrases improves over previous phrase retrieval models by 15%-25% absolute accuracy and matches the performance of state-of-the-art retriever-reader  ...  Conclusion In this study, we show that we can learn dense representations of phrases at the Wikipedia scale, which are readily retrievable for open-domain QA and other knowledge-intensive NLP tasks.  ... 
arXiv:2012.12624v3 fatcat:vjszqzioyng5djmut33b4fkm6y
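
A toy sketch of what pre-indexed dense phrase representations buy: answering reduces to maximum inner product search (MIPS) over phrase vectors, with no reader model in the loop. The two-dimensional vectors are purely illustrative.

```python
import numpy as np

phrase_index = {                     # phrase -> pre-computed dense vector
    "dense retrieval": np.array([0.1, 0.9]),
    "sparse retrieval": np.array([0.8, 0.2]),
}

def answer(question_vec: np.ndarray) -> str:
    # MIPS over all pre-indexed phrases; the top phrase is the answer.
    return max(phrase_index,
               key=lambda ph: float(phrase_index[ph] @ question_vec))
```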

Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One? [article]

Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, Wen-tau Yih
2022 arXiv   pre-print
Empirically, SPAR shows superior performance on a range of tasks including five question answering datasets, MS MARCO passage retrieval, as well as the EntityQuestions and BEIR benchmarks for out-of-domain  ...  We show that a dense Lexical Model Λ can be trained to imitate a sparse one, and SPAR is built by augmenting a standard dense retriever with Λ.  ...  ., 2021), which consists of a diverse set of 18 retrieval tasks across 9 domains.  ...
arXiv:2110.06918v2 fatcat:l3q7vcersrfsxjgjoffwjhmnmy
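
A hedged sketch of one way to realize "augmenting a standard dense retriever with Λ": concatenating the two embeddings, so a single MIPS search sums the dense score and the lexical-model score. The encoder callables and the query-side weight are assumptions for illustration.

```python
import numpy as np

def spar_encode(text: str, dense_encode, lexical_encode,
                lam_weight: float = 1.0, is_query: bool = False) -> np.ndarray:
    # Weighting only the query-side Λ vector makes the concatenated dot
    # product equal dense_score + lam_weight * lexical_score.
    w = lam_weight if is_query else 1.0
    return np.concatenate([dense_encode(text), w * lexical_encode(text)])
```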

RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering [article]

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, Haifeng Wang
2021 arXiv   pre-print
In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.  ...  Typically, the dual-encoder architecture is adopted to learn dense representations of questions and passages for semantic matching.  ...  ERNIE 2.0 has the same network architecture as BERT, and it introduces a continual pre-training framework with multiple pre-training tasks.  ...
arXiv:2010.08191v2 fatcat:abwwfka3svcsdfipm5454hfifm
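
One ingredient of this optimized training approach is denoised hard negatives: passages ranked highly by the dual-encoder but not labeled positive are kept as negatives only when a stronger cross-encoder also scores them low, filtering out likely false negatives. The scorer and threshold below are hypothetical stand-ins.

```python
def denoise_hard_negatives(question: str, candidates: list[str],
                           cross_encoder_score, threshold: float = 0.1):
    # candidates: retriever-ranked passages not labeled as positive.
    # Keep only those the cross-encoder agrees are irrelevant.
    return [p for p in candidates
            if cross_encoder_score(question, p) < threshold]
```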

Cross-Lingual Training with Dense Retrieval for Document Retrieval [article]

Peng Shi, Rui Zhang, He Bai, Jimmy Lin
2021 arXiv   pre-print
However, its effectiveness in document retrieval for non-English languages remains unexplored due to the limitation in training resources.  ...  Dense retrieval has shown great success in passage ranking in English.  ...  Introduction Dense retrieval uses dense vector representations for semantic encoding and matching.  ... 
arXiv:2109.01628v1 fatcat:ogrosnckcjcjlco3hdb4cqsrwe

GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval [article]

Kexin Wang, Nandan Thakur, Nils Reimers, Iryna Gurevych
2022 arXiv   pre-print
We further investigate the role of six recent pre-training methods in the scenario of domain adaptation for retrieval tasks, where only three could yield improved results.  ...  ., 2021b), the performance of dense retrievers severely degrades under a domain shift. This limits the usage of dense retrieval approaches to only a few domains with large training datasets.  ...  Whether these pre-training approaches can be used for unsupervised domain adaptation for dense retrieval was so far unclear.  ...
arXiv:2112.07577v3 fatcat:55n5p6tpencplduxb4ik3htssy
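
A sketch of GPL's training signal under the usual description of the method: synthetic queries are generated for target-domain passages, a cross-encoder teacher soft-labels (query, positive, negative) triples, and the dense retriever is trained with a MarginMSE loss to match the teacher's score margin. Shapes and models are illustrative.

```python
import torch
import torch.nn.functional as F

def margin_mse(q: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor,
               teacher_margin: torch.Tensor) -> torch.Tensor:
    # q, pos, neg: (batch, dim) student embeddings; teacher_margin:
    # cross-encoder score(q, pos) - score(q, neg), shape (batch,).
    student_margin = (q * pos).sum(-1) - (q * neg).sum(-1)
    return F.mse_loss(student_margin, teacher_margin)
```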

Encoder Adaptation of Dense Passage Retrieval for Open-Domain Question Answering [article]

Minghan Li, Jimmy Lin
2021 arXiv   pre-print
One key feature of dense passage retrievers (DPR) is the use of separate question and passage encoders in a bi-encoder design.  ...  For example, applying an OOD passage encoder usually hurts the retrieval accuracy while an OOD question encoder sometimes even improves the accuracy.  ...  ., 2021), and QA pre-training (Lu et al., 2021; Gao and Callan, 2021) has further pushed the performance boundary of in-distribution dense retrieval.  ...
arXiv:2110.01599v1 fatcat:4oir3cwcc5cqrlkatf7gf62b6i
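
A minimal sketch of the mix-and-match experiment this design enables: because the two encoders are separate, either one can be swapped for an out-of-distribution (OOD) checkpoint and retrieval accuracy compared per combination. The encoders are assumed to return vectors that support a dot product.

```python
def retrieve(question, passages, q_encoder, p_encoder, top_k=5):
    # Score every passage against the question with whichever pair of
    # encoders (in-distribution or OOD) is under study.
    q_vec = q_encoder(question)
    return sorted(passages, key=lambda p: -(q_vec @ p_encoder(p)))[:top_k]

# e.g. pair an in-distribution question encoder with an OOD passage
# encoder: retrieve(q, corpus, nq_q_encoder, trivia_p_encoder)
```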

RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering [article]

Srinivasan Iyer, Sewon Min, Yashar Mehdad, Wen-tau Yih
2020 arXiv   pre-print
We develop a simple and effective re-ranking approach (RECONSIDER) for span-extraction tasks that improves upon the performance of large pre-trained MRC models.  ...  State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain Question Answering (QA) are typically trained for span selection using distantly supervised positive examples and heuristically  ...  open-domain QA tasks evaluated in our paper.  ...
arXiv:2010.10757v1 fatcat:ti5xmklgxbbg3b7apvciucbsi4
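
A hedged sketch of span-focused re-ranking: candidate answer spans from a base MRC model are re-scored by a cross-attention model whose input marks the span inside its passage, and the top-scoring span is returned. The [S]/[E] markers and the `rerank_score` callable are stand-ins, not the paper's exact interface.

```python
def reconsider(question: str, candidates, rerank_score) -> str:
    # candidates: list of (passage, span) pairs from the base MRC model.
    def mark(passage: str, span: str) -> str:
        # Span-focused input: highlight the candidate answer in context.
        return passage.replace(span, f"[S] {span} [E]", 1)
    best = max(candidates, key=lambda c: rerank_score(question, mark(*c)))
    return best[1]
```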
Showing results 1 — 15 out of 16,986 results