3,337 Hits in 6.7 sec

Large Dual Encoders Are Generalizable Retrievers [article]

Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang
2021 arXiv   pre-print
Experimental results show that our dual encoders, Generalizable T5-based dense Retrievers (GTR), outperform existing sparse and dense retrievers on the BEIR dataset significantly.  ...  It has been shown that dual encoders trained on one domain often fail to generalize to other domains for retrieval tasks.  ...  We are able to train large-scale dual encoder retrieval models. We call the resulting models Generalizable T5-based dense Retrievers (GTR).  ... 
arXiv:2112.07899v1 fatcat:y6dydk7vnndmpp4cxy3r7fwhhm
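
The GTR entry above turns on the dual-encoder recipe: embed queries and documents independently, then rank by inner product. A minimal sketch of that scoring path, in which the hash-based `encode` is only a stand-in for the shared T5 encoder (nothing below comes from the paper's code):

```python
import numpy as np

def encode(texts, dim=768):
    # Stand-in encoder: hash tokens into a bag-of-words vector. In GTR a
    # shared T5 encoder with mean pooling would produce these embeddings.
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            out[i, hash(tok) % dim] += 1.0
    # L2-normalise so the dot product below equals cosine similarity
    return out / np.linalg.norm(out, axis=1, keepdims=True)

docs = ["dual encoders embed queries and documents separately",
        "contrastive loss pulls matching pairs together"]
queries = ["how do dual encoders embed documents"]

D, Q = encode(docs), encode(queries)
scores = Q @ D.T                  # (num_queries, num_docs) similarity matrix
ranking = np.argsort(-scores[0])  # best-matching documents first
print(ranking, scores[0][ranking])
```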

Keep the Caption Information: Preventing Shortcut Learning in Contrastive Image-Caption Retrieval [article]

Maurits Bleeker, Andrew Yates, Maarten de Rijke
2022 arXiv   pre-print
To train image-caption retrieval (ICR) methods, contrastive loss functions are a common choice of optimization objective.  ...  Additionally, we show that the evaluation scores benefit from implementing LTD as an optimization constraint instead of a dual loss.  ...  The baseline image-caption retrieval model consists of an image encoder and a caption encoder. The encoders are trained with the contrastive InfoNCE loss.  ... 
arXiv:2204.13382v1 fatcat:alcrgcsq6je67ixtervmtzdjqi
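
The snippet above names InfoNCE as the contrastive loss behind the baseline ICR model. A minimal numpy sketch of its symmetric in-batch form; the temperature and batch values are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def info_nce(image_emb, caption_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched image-caption pairs.

    Row i of each matrix is one matched pair; every other item in the
    batch serves as an in-batch negative.
    """
    # cosine similarities, scaled by temperature
    i = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    c = caption_emb / np.linalg.norm(caption_emb, axis=1, keepdims=True)
    logits = i @ c.T / temperature

    def xent(l):
        # cross-entropy with the diagonal (the true pair) as the target
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # average over both retrieval directions (image->text and text->image)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
img, cap = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(info_nce(img, cap))
```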

UnifieR: A Unified Retriever for Large-Scale Retrieval [article]

Tao Shen, Xiubo Geng, Chongyang Tao, Can Xu, Kai Zhang, Daxin Jiang
2022 arXiv   pre-print
Large-scale retrieval is to recall relevant documents from a huge collection given a query. It relies on representation learning to embed documents and queries into a common semantic encoding space.  ...  model with a dual-representing capability.  ...  ., dense-vector and lexicon-based retrieval) are widely studied towards large-scale retrieval.  ... 
arXiv:2205.11194v1 fatcat:ttwz7flj75dg7ezchxsos2tfum
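
UnifieR's "dual-representing capability" pairs a dense-vector view with a lexicon-based view of the same text. A toy fusion of the two score types; the overlap-based lexical score and the mixing weight alpha are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def lexical_score(query, doc):
    # lexicon-based match: plain term overlap stands in for the usual
    # BM25-style weighting, keeping the sketch self-contained
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def unified_score(q_vec, d_vec, query, doc, alpha=0.5):
    # one model, two views: a dense-vector score and a lexical score,
    # fused with a mixing weight alpha
    dense = float(q_vec @ d_vec)
    return alpha * dense + (1 - alpha) * lexical_score(query, doc)

rng = np.random.default_rng(0)
qv, dv = rng.normal(size=8), rng.normal(size=8)
qv, dv = qv / np.linalg.norm(qv), dv / np.linalg.norm(dv)
print(unified_score(qv, dv, "large scale retrieval", "retrieval at large scale"))
```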

Interfering with memory for faces: The cost of doing two things at once

Jeffrey D. Wammes, Myra A. Fernandes
2015 Memory  
A dual-task paradigm was used to infer the processes critical for episodic memory retrieval by measuring susceptibility to memory interference from different distracting tasks.  ...  demands of the distracting and retrieval tasks overlap.  ... 
doi:10.1080/09658211.2014.998240 pmid:25621412 fatcat:umnuhp5shfdezgctc5patujehe

Multi-Granularity Representations of Dialog

Shikib Mehri, Maxine Eskenazi
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
Strong performance gains are observed on the next utterance retrieval task using both the MultiWOZ dataset and the Ubuntu dialog corpus.  ...  The multi-granularity training algorithm modifies the mechanism by which negative candidate responses are sampled in order to control the granularity of learned latent representations.  ...  Methods: This section describes three methods used for next utterance retrieval: a strong baseline dual encoder architecture, an ensemble of dual encoders, and an ensemble of dual encoders with multi-granularity  ... 
doi:10.18653/v1/d19-1184 dblp:conf/emnlp/MehriE19 fatcat:zgtwwapngrhpvcsk46b526kycy
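
The snippet above says granularity is controlled through how negative candidate responses are sampled. A sketch of that idea for next-utterance retrieval, where the coarse/fine split (random-corpus vs. same-dialog negatives) is a paraphrase of the mechanism rather than the paper's code:

```python
import random

def sample_negatives(dialogs, dialog_idx, k, granularity):
    """Sample k negative responses for next-utterance retrieval.

    granularity='coarse' -> negatives drawn from other dialogs
                            (easy to reject; encoder learns coarse topics)
    granularity='fine'   -> negatives drawn from earlier turns of the
                            same dialog (hard; encoder must learn detail)
    """
    if granularity == "fine":
        pool = list(dialogs[dialog_idx][:-1])  # exclude the true response
    else:
        pool = [u for j, d in enumerate(dialogs) if j != dialog_idx for u in d]
    return random.sample(pool, min(k, len(pool)))

dialogs = [["hi", "hello", "how are you?", "fine thanks"],
           ["install failed", "which version?", "ubuntu 20.04"]]
print(sample_negatives(dialogs, 0, 2, "coarse"))
print(sample_negatives(dialogs, 0, 2, "fine"))
```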

MILES: Visual BERT Pre-training with Injected Language Semantics for Video-text Retrieval [article]

Yuying Ge, Yixiao Ge, Xihui Liu, Alex Jinpeng Wang, Jianping Wu, Ying Shan, Xiaohu Qie, Ping Luo
2022 arXiv   pre-print
Dominant pre-training work for video-text retrieval mainly adopts the "dual-encoder" architecture to enable efficient retrieval, where two separate encoders are used to contrast global video and text representations  ...  In this work, we for the first time investigate masked visual modeling in video-text pre-training with the "dual-encoder" architecture.  ...  Our contributions are three-fold. (1) We are the first to explore the potential of BERT-style pre-training in video-text retrieval with dual-encoder models.  ... 
arXiv:2204.12408v1 fatcat:hdl63xqcnvb6bj6zjvvv2rqdty
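
MILES's core move, per the snippet, is BERT-style masked visual modeling inside a dual-encoder pipeline. A sketch of the masking-and-reconstruction step; the language-aligned teacher targets and the MSE objective are assumptions paraphrased from the abstract, not the released code:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_visual_modeling_loss(patch_tokens, teacher_feats, mask_ratio=0.5):
    """Sketch of masked visual modeling for a dual encoder.

    patch_tokens:  (num_patches, dim) visual tokens from the video encoder
    teacher_feats: (num_patches, dim) language-aligned targets, assumed to
                   come from a text-aligned teacher model
    """
    n = len(patch_tokens)
    masked = rng.choice(n, size=int(n * mask_ratio), replace=False)
    # predict the language-semantic target only at masked positions
    diff = patch_tokens[masked] - teacher_feats[masked]
    return float(np.mean(diff ** 2)), masked

tokens = rng.normal(size=(16, 8))
targets = rng.normal(size=(16, 8))
loss, masked_idx = masked_visual_modeling_loss(tokens, targets)
print(loss, sorted(masked_idx))
```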

Multi-Granularity Representations of Dialog [article]

Shikib Mehri, Maxine Eskenazi
2019 arXiv   pre-print
Strong performance gains are observed on the next utterance retrieval task using both the MultiWOZ dataset and the Ubuntu dialog corpus.  ...  The multi-granularity training algorithm modifies the mechanism by which negative candidate responses are sampled in order to control the granularity of learned latent representations.  ...  Methods: This section describes three methods used for next utterance retrieval: a strong baseline dual encoder architecture, an ensemble of dual encoders, and an ensemble of dual encoders with multi-granularity  ... 
arXiv:1908.09890v1 fatcat:fm2pakzxkbcirfpeo5lf6iujiq

Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers [article]

Weng Lam Tam, Xiao Liu, Kaixuan Ji, Lilong Xue, Xingjian Zhang, Yuxiao Dong, Jiahua Liu, Maodi Hu, Jie Tang
2022 arXiv   pre-print
Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 18K query-results pairs in 87 topics, making it the largest topic-specific  ...  By updating only 0.1% of the model parameters, the prompt tuning strategy can help retrieval models achieve better generalization performance than traditional methods in which all parameters are updated.  ...  First, training dual-encoders doubles the number of parameters to be tuned.  ... 
arXiv:2207.07087v1 fatcat:vzkzz577yjentn7ivi7q7uvzvq
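
The entry's claim is that updating only ~0.1% of parameters (the prompt) can rival full fine-tuning for generalization. A minimal PyTorch sketch of prompt tuning with a frozen backbone; the toy Transformer and its sizes are illustrative, not the paper's retriever:

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Frozen backbone plus a handful of learnable prompt vectors that
    are prepended to every input sequence; only the prompts train."""
    def __init__(self, dim=64, n_prompts=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():
            p.requires_grad = False               # frozen: never updated
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)

    def forward(self, token_embs):                # (batch, seq, dim)
        b = token_embs.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompts, token_embs], dim=1)
        return self.backbone(x).mean(dim=1)       # pooled embedding

model = PromptTunedEncoder()
print(model(torch.randn(2, 5, 64)).shape)         # torch.Size([2, 64])
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```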

Entity Linking in 100 Languages [article]

Jan A. Botha, Zifei Shan, Daniel Gillick
2020 arXiv   pre-print
We train a dual encoder in this new setting, building on prior work with improved feature representation, negative mining, and an auxiliary entity-pairing task, to obtain a single entity retrieval model  ...  Rare entities and low-resource languages pose challenges at this large scale, so we advocate for an increased focus on zero- and few-shot evaluation.  ...  Experiments: We conduct a series of experiments to gain insight into the behavior of the dual encoder retrieval models under the proposed MEL setting, asking: • What are the relative merits of the two types  ... 
arXiv:2011.02690v1 fatcat:5kvghqpknjgyjohtlohtqgjb2m
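
The snippet credits part of the gain to negative mining for the dual encoder. A sketch of the usual hard-negative step (pick the highest-scoring wrong entities under the current model); this paraphrases the general technique, not the paper's exact procedure:

```python
import numpy as np

def mine_hard_negatives(mention_vecs, entity_vecs, gold_ids, k=2):
    """Pick the k highest-scoring *wrong* entities for each mention.

    Negatives that the current dual encoder already ranks highly are the
    most informative ones to train against in the next round.
    """
    scores = mention_vecs @ entity_vecs.T     # (num_mentions, num_entities)
    hard = []
    for m, gold in enumerate(gold_ids):
        order = np.argsort(-scores[m])        # entities, best first
        hard.append([int(e) for e in order if e != gold][:k])
    return hard

rng = np.random.default_rng(1)
mentions, entities = rng.normal(size=(3, 8)), rng.normal(size=(5, 8))
print(mine_hard_negatives(mentions, entities, gold_ids=[0, 2, 4]))
```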

Efficient Retrieval Optimized Multi-task Learning [article]

Hengxin Fun, Sunil Gandhi, Sujith Ravi
2021 arXiv   pre-print
These advances are fueled by combining large pre-trained language models with learnable retrieval of documents.  ...  Using separate encoders for each stage/task occupies a lot of memory and makes it difficult to scale to a large number of tasks.  ...  Fusion-in-Decoder (Izacard and Grave, 2020), among other systems, is based on DPR and uses a dual encoder for retrieval of the documents.  ... 
arXiv:2104.10129v1 fatcat:br6yaaumfncdrck2wxmodmihoq
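
The entry's memory argument (separate encoders per stage/task don't scale) points toward one shared encoder with cheap task-specific heads. A toy PyTorch sketch of that layout; vocabulary size, dimensions, and task names are placeholder assumptions:

```python
import torch
import torch.nn as nn

class SharedMultiTaskRetriever(nn.Module):
    """One shared encoder reused across retrieval tasks, with a small
    linear head per task instead of a full encoder per task."""
    def __init__(self, dim=256, tasks=("qa", "rerank", "dedup")):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Embedding(30522, dim),  # BERT-sized vocab, an assumption
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
        )
        self.heads = nn.ModuleDict({t: nn.Linear(dim, dim) for t in tasks})

    def forward(self, token_ids, task):
        h = self.encoder(token_ids).mean(dim=1)   # shared representation
        return self.heads[task](h)                # cheap task-specific view

model = SharedMultiTaskRetriever()
print(model(torch.randint(0, 30522, (2, 6)), task="qa").shape)
shared = sum(p.numel() for p in model.encoder.parameters())
heads = sum(p.numel() for p in model.heads.parameters())
print(f"shared encoder params: {shared:,}; all task heads combined: {heads:,}")
```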

Computational discrimination between natural images based on gaze during mental imagery

Xi Wang, Andreas Ley, Sebastian Koch, James Hays, Kenneth Holmqvist, Marc Alexa
2020 Scientific Reports  
In this work we first quantify the similarity of eye movements between recalling an image and encoding the same image, followed by an investigation of whether comparing such pairs of eye movements can be used for computational image retrieval.  ...  encoder and decoder in the dual-task network.  ... 
doi:10.1038/s41598-020-69807-0 pmid:32747683 fatcat:i5yzidmp4vhsrefgjuivbsxhd4
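
As a rough illustration of the gaze-based retrieval described above: score each studied image's encoding fixations against the fixations recorded during recall and return the best match. The nearest-fixation distance below is a placeholder metric, far simpler than the paper's analysis:

```python
import numpy as np

def gaze_similarity(fix_a, fix_b):
    # symmetric nearest-fixation distance between two (n, 2) sequences
    # of fixation positions; higher (less negative) means more similar
    d = np.linalg.norm(fix_a[:, None, :] - fix_b[None, :, :], axis=-1)
    return -0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def retrieve(recall_fix, encoding_fixations):
    # index of the encoded image whose gaze pattern best matches
    # the gaze recorded during recall
    return max(range(len(encoding_fixations)),
               key=lambda i: gaze_similarity(recall_fix, encoding_fixations[i]))

rng = np.random.default_rng(0)
enc = [rng.uniform(0, 1, size=(20, 2)) for _ in range(5)]  # 5 studied images
recall = enc[3] + rng.normal(0, 0.05, size=(20, 2))        # noisy re-enactment
print(retrieve(recall, enc))                               # expected: 3
```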

Picture-word differences in decision latency: An analysis of single and dual memory models

James W. Pellegrino, Richard R. Rosinski, Harry L. Chiesi, Alexander Siegel
1977 Memory & Cognition  
Several variants of a dual memory model are rejected and those which fit the data require assumptions about storage and/or transfer time values which result in a functional regression to the unitary memory  ...  Semantic and perceptual size decision times for pictorial and verbal material were analyzed in the context of a unitary memory model and several dual memory models.  ... 
doi:10.3758/bf03197377 pmid:24203005 fatcat:4maw3sukojfunj5ajlzy7uxqw4

Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting

Bernhard Pastötter, Tobias Tempel, Karl-Heinz T. Bäuml
2017 Frontiers in Psychology  
Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.  ...  In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead.  ...  From single- to dual-mechanism accounts of LMDF: single-mechanism accounts of LMDF assume that L2E and L1F are the two sides of the same coin and are mediated by the same cognitive mechanism.  ... 
doi:10.3389/fpsyg.2017.02076 pmid:29230187 pmcid:PMC5711817 fatcat:ycob44kwgngipm2xginzghgv2e

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models [article]

Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, Yinfei Yang
2021 arXiv   pre-print
Finally, our encoder-decoder method achieves a new state-of-the-art on STS when using sentence embeddings. Our models are released at https://tfhub.dev/google/collections/sentence-t5/1.  ...  Sentence embeddings are broadly useful for language processing tasks.  ...  dual encoders and contrastive learning (Conneau et al., 2017; Gao et al., 2021).  ... 
arXiv:2108.08877v3 fatcat:7lxpkreadvcd7kgweyho5pcopy
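
One of the strategies the Sentence-T5 paper compares is pooling the T5 encoder's outputs into a single sentence vector. A sketch with Hugging Face Transformers using the public t5-small checkpoint (not the released Sentence-T5 models); mean pooling over non-padding tokens is the variant shown:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tok = AutoTokenizer.from_pretrained("t5-small")
enc = T5EncoderModel.from_pretrained("t5-small")

sentences = ["Sentence embeddings are broadly useful.",
             "Dual encoders retrieve with a dot product."]
batch = tok(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = enc(**batch).last_hidden_state      # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)     # ignore padding tokens
emb = (hidden * mask).sum(1) / mask.sum(1)       # mean pooling
emb = torch.nn.functional.normalize(emb, dim=1)  # unit length for cosine
print(emb @ emb.T)                               # pairwise similarities
```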

LILE: Look In-Depth before Looking Elsewhere – A Dual Attention Network using Transformers for Cross-Modal Information Retrieval in Histopathology Archives [article]

Danial Maleki, H.R. Tizhoosh
2022 arXiv   pre-print
Therefore, enabling bidirectional cross-modality data retrieval has become a requirement for many domains and disciplines of research.  ...  For experiments using the MS-COCO dataset, the same architectures are implemented for both the image encoder and the text encoder.  ...  The cross-modal retrieval task can benefit greatly from the use of large amounts of collected data and powerful computational resources.  ... 
arXiv:2203.01445v2 fatcat:onogf45adrgcvjd5psnm25sbam
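
Reading the title literally, "look in-depth before looking elsewhere" suggests self-attention within a modality followed by cross-attention to the other. The PyTorch block below sketches that ordering as an assumption drawn from the title and abstract, not the authors' architecture:

```python
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    """Self-attention within one modality, then cross-attention to the
    other modality; sizes and ordering are illustrative assumptions."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, other):
        # look in-depth: refine tokens using intra-modal context
        x, _ = self.self_attn(x, x, x)
        # look elsewhere: pull in information from the other modality
        x, _ = self.cross_attn(x, other, other)
        return x

img_tokens = torch.randn(2, 49, 64)   # e.g. 7x7 image patches
txt_tokens = torch.randn(2, 12, 64)   # caption tokens
block = DualAttentionBlock()
print(block(img_tokens, txt_tokens).shape)   # torch.Size([2, 49, 64])
```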
Showing results 1 — 15 out of 3,337 results