
Prototypical Priors: From Improving Classification to Zero-Shot Learning [article]

Saumya Jetley, Bernardino Romera-Paredes, Sadeep Jayasumana, Philip Torr
2018 arXiv   pre-print
Using prototypes as prior information, the deepnet pipeline learns to project input images into the prototypical embedding space while minimizing the final classification loss.  ...  In zero-shot learning scenarios, the same system can be directly deployed to draw inference on unseen classes by simply adding the prototypical information for these new classes at test time.  ...  during training, (b) improvement in classification performance over unseen classes, i.e., in a zero-shot learning scenario.  ...
arXiv:1512.01192v2 fatcat:6r7lo54tibhwropgayzm3iqe5e
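The zero-shot mechanism this abstract describes reduces to a small amount of code: an encoder projects images into the prototype embedding space, class logits are similarities to fixed prototype vectors, and unseen classes are handled by appending their prototypes at test time. The sketch below assumes cosine similarity and a generic encoder; the paper's exact architecture and loss are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Classify by similarity to fixed class prototypes (illustrative sketch)."""
    def __init__(self, encoder: nn.Module, prototypes: torch.Tensor):
        super().__init__()
        self.encoder = encoder                            # image -> R^d features
        self.register_buffer("prototypes", prototypes)    # (C, d), not trained

    def forward(self, images):
        z = F.normalize(self.encoder(images), dim=-1)     # (B, d)
        p = F.normalize(self.prototypes, dim=-1)          # (C, d)
        return z @ p.t()                                  # cosine logits, (B, C)

# Zero-shot inference: append prototypes of unseen classes at test time,
# without retraining the encoder:
# model.prototypes = torch.cat([model.prototypes, unseen_prototypes], dim=0)
```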

Prototypical Priors: From Improving Classification to Zero-Shot Learning

Saumya Jetley, Bernardino Romera-Paredes, Sadeep Jayasumana, Philip Torr
2015 Procedings of the British Machine Vision Conference 2015  
doi:10.5244/c.29.120 dblp:conf/bmvc/JetleyRJT15 fatcat:aph7jcjonnhv3br7n7remq6cli

Variational Prototyping-Encoder: One-Shot Learning with Prototypical Images [article]

Junsik Kim, Tae-Hyun Oh, Seokju Lee, Fei Pan, In So Kweon
2019 arXiv   pre-print
We propose a new approach called variational prototyping-encoder (VPE) that learns, as a meta-task, the image translation task from real-world input images to their corresponding prototypical images.  ...  We tackle an open-set graphic symbol recognition problem by one-shot classification with prototypical images as a single training example for each novel class.  ...  One-shot classification (real to prototypes): the one-shot classification performances are reported in Table 2 and Table 3.  ...
arXiv:1904.08482v1 fatcat:hmqy2toanzaj7phhxzpdvyhvty
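A minimal sketch of the VPE idea as stated in the abstract: a variational encoder-decoder is trained to translate a real photo into its clean prototypical image, and one-shot classification is then nearest-neighbor search in the latent space. The encoder/decoder interfaces, the MSE reconstruction term, and the beta weight are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def vpe_loss(encoder, decoder, real_img, proto_img, beta=1.0):
    """VAE-style translation loss: encode a real photo, decode its clean
    prototype image. Assumes encoder(x) returns (mu, logvar)."""
    mu, logvar = encoder(real_img)                 # q(z | real image)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)           # reparameterization trick
    recon = decoder(z)                             # predicted prototype image
    recon_loss = F.mse_loss(recon, proto_img)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kld

def one_shot_classify(encoder, query_img, proto_latents):
    """Nearest prototype in latent space; proto_latents: (C, d) latent means
    of the encoded prototype images, one per novel class."""
    mu, _ = encoder(query_img)                     # (B, d)
    dists = torch.cdist(mu, proto_latents)         # Euclidean distances
    return dists.argmin(dim=1)                     # predicted class indices
```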

Hyperbolic Busemann Learning with Ideal Prototypes [article]

Mina Ghadimi Atigh, Martin Keller-Ressel, Pascal Mettes
2021 arXiv   pre-print
Hyperbolic space has become a popular choice of manifold for representation learning of various datatypes, from tree-like structures and text to graphs.  ...  Building on the success of deep learning with prototypes in Euclidean and hyperspherical spaces, a few recent works have proposed hyperbolic prototypes for classification.  ...  Learning with prototypes from class means has been shown to be effective for few-shot learning [13, 31, 34, 39], zero-shot recognition [39, 45, 47], and domain adaptation [32].  ...
arXiv:2106.14472v2 fatcat:jejk2cjzs5fhlnmriqm4d2vqhq
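The key object here, the Busemann function to an ideal prototype on the boundary of the Poincaré ball, has a compact closed form that is easy to sketch; the simple pull-toward-prototype loss below omits the paper's penalty term and is only illustrative.

```python
import torch

def busemann(x, p, eps=1e-6):
    """Busemann function on the Poincare ball w.r.t. an ideal point p on the
    boundary (||p|| = 1): b_p(x) = log(||p - x||^2 / (1 - ||x||^2))."""
    num = ((p - x) ** 2).sum(dim=-1)
    den = 1.0 - (x ** 2).sum(dim=-1)
    return torch.log(num.clamp_min(eps)) - torch.log(den.clamp_min(eps))

def busemann_loss(z, labels, ideal_protos):
    """Pull each embedding z (a point inside the unit ball) toward its class's
    ideal prototype; a bare-bones sketch without the paper's penalty term."""
    return busemann(z, ideal_protos[labels]).mean()
```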

Stochastic Prototype Embeddings [article]

Tyler R. Scott, Karl Ridgeway, Michael C. Mozer
2019 arXiv   pre-print
Incorporating uncertainty improves performance on few-shot learning and gracefully handles label noise and out-of-distribution inputs.  ...  Compared to the state-of-the-art stochastic method, Hedged Instance Embeddings (Oh et al., 2019), we achieve superior large- and open-set classification accuracy.  ...  The OPBN was not tested on few-shot and open-set recognition because it requires extensions to be applied to classification tasks.  ... 
arXiv:1909.11702v1 fatcat:23r43h5tmzeazoaildgahesszi
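A hedged sketch of stochastic prototype classification: each input is embedded as a diagonal Gaussian, class prototypes fuse support Gaussians by a precision-weighted product, and logits score the query under each prototype with both uncertainties added. This is one plausible reading of the abstract, not the paper's exact inference rule.

```python
import torch

def gaussian_prototype(mus, logvars):
    """Fuse per-example Gaussian embeddings of one class into a prototype
    Gaussian via a precision-weighted product (an assumed simplification)."""
    prec = torch.exp(-logvars)                 # (N, d) per-example precisions
    var = 1.0 / prec.sum(dim=0)                # (d,) prototype variance
    mu = var * (prec * mus).sum(dim=0)         # (d,) prototype mean
    return mu, var

def stochastic_logits(q_mu, q_var, proto_mu, proto_var):
    """Class logits as the Gaussian log-likelihood of the query mean under
    each prototype, with query and prototype variances added."""
    var = q_var.unsqueeze(1) + proto_var       # (B, C, d)
    diff = q_mu.unsqueeze(1) - proto_mu        # (B, C, d)
    return -0.5 * ((diff ** 2) / var + var.log()).sum(dim=-1)
```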

Prototypical Contrastive Language Image Pretraining [article]

Delong Chen, Zhao Wu, Fan Liu, Zaiquan Yang, Yixiang Huang, Yiping Bao, Erjin Zhou
2022 arXiv   pre-print
Combining the above novel designs, we train our ProtoCLIP on Conceptual Captions and achieve a +5.81% ImageNet linear probing improvement and a +2.01% ImageNet zero-shot classification improvement.  ...  We further propose Prototypical Back Translation (PBT) to decouple representation grouping from representation alignment, resulting in effective learning of meaningful representations under large modality  ...  It attracts widespread attention from the deep learning community, since its learned representations transfer well to a variety of downstream tasks, including linear probing and zero-shot classification  ...
arXiv:2206.10996v1 fatcat:jqximhc4zvbslksu2pt7cc3cwq
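The prototype-level term can be sketched alongside the standard CLIP contrastive loss: both modalities are softly assigned to shared cluster centroids (e.g., from k-means over image embeddings) and the assignments are matched. The centroid source, temperature, and KL matching below are assumptions; ProtoCLIP's exact objective and its Prototypical Back Translation are not reproduced.

```python
import torch
import torch.nn.functional as F

def clip_infonce(img_emb, txt_emb, temp=0.07):
    """Standard CLIP-style symmetric contrastive loss over matched pairs."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temp
    target = torch.arange(len(img), device=img.device)
    return 0.5 * (F.cross_entropy(logits, target)
                  + F.cross_entropy(logits.t(), target))

def prototype_alignment(img_emb, txt_emb, centroids, temp=0.07):
    """Prototype-level term (assumed form): soft-assign both modalities to
    shared centroids and match the two assignment distributions."""
    p_img = F.softmax(F.normalize(img_emb, dim=-1) @ centroids.t() / temp, dim=-1)
    p_txt = F.softmax(F.normalize(txt_emb, dim=-1) @ centroids.t() / temp, dim=-1)
    return F.kl_div(p_img.log(), p_txt, reduction="batchmean")
```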

Prototype Completion with Primitive Knowledge for Few-Shot Learning [article]

Baoquan Zhang, Xutao Li, Yunming Ye, Zhichao Huang, Lisai Zhang
2021 arXiv   pre-print
Then, we design a prototype completion network to learn to complete prototypes with these priors.  ...  Few-shot learning is a challenging task that aims to learn a classifier for novel classes with few examples.  ...  Zero-shot learning (ZSL) is also closely related to FSL; it aims to address novel class categorization without any labeled samples.  ...
arXiv:2009.04960v6 fatcat:yhdqce3ibnhuvotqq7mexg5aae
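A minimal sketch of prototype completion as the abstract describes it: the coarse prototype is the mean of the few support embeddings, and a small network predicts a residual correction from class-level prior vectors (e.g., attribute or part embeddings). The MLP shape and the residual connection are assumptions. The same sketch applies to the extended version of this work listed next.

```python
import torch
import torch.nn as nn

class ProtoCompletion(nn.Module):
    """Complete a mean-based few-shot prototype with class-level priors."""
    def __init__(self, d, prior_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d + prior_dim, d),
                                 nn.ReLU(),
                                 nn.Linear(d, d))

    def forward(self, support_emb, prior_vec):
        # support_emb: (K, d) embeddings of the K support shots of one class
        # prior_vec:   (prior_dim,) class-level prior, e.g. attribute embedding
        mean_proto = support_emb.mean(dim=0)                  # coarse prototype
        residual = self.net(torch.cat([mean_proto, prior_vec]))
        return mean_proto + residual                          # completed prototype
```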

Prototype Completion for Few-Shot Learning [article]

Baoquan Zhang, Xutao Li, Yunming Ye, Shanshan Feng
2021 arXiv   pre-print
Finally, a prototype completion network is devised to learn to complete prototypes with these priors.  ...  Few-shot learning aims to recognize novel classes with few examples.  ...  Zero-shot learning (ZSL) is also closely related to FSL; it aims to address novel class categorization without any labeled samples [56].  ...
arXiv:2108.05010v1 fatcat:xza2wcuapbetndhofphzofpmrq

Independent Prototype Propagation for Zero-Shot Compositionality [article]

Frank Ruis, Gertjan Burghouts, Doina Bucur
2021 arXiv   pre-print
To be able to deal with underspecified datasets while still leveraging contextual clues during classification, we propose ProtoProp, a novel prototype propagation graph method.  ...  Next, we propagate the independent prototypes through a compositional graph to learn compositional prototypes of novel attribute-object combinations that reflect the dependencies of the target distribution  ...  To this end, we have proposed a novel prototype propagation method for compositional zero-shot learning.  ...
arXiv:2106.00305v2 fatcat:pde7b2ibtra5lh6jmon55e37oa
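The compositional step can be sketched as combining independently learned attribute and object prototypes into a prototype for a novel pair; the paper propagates through a compositional graph, whereas the sketch below shows only the simplest pairwise combination, with the MLP as an assumed combiner.

```python
import torch
import torch.nn as nn

class CompositionalProto(nn.Module):
    """Build a prototype for an unseen attribute-object combination from the
    independent attribute and object prototypes (simplified sketch)."""
    def __init__(self, d):
        super().__init__()
        self.combine = nn.Sequential(nn.Linear(2 * d, d),
                                     nn.ReLU(),
                                     nn.Linear(d, d))

    def forward(self, attr_proto, obj_proto):
        # e.g., attr_proto = prototype("red"), obj_proto = prototype("car")
        return self.combine(torch.cat([attr_proto, obj_proto], dim=-1))
```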

Hyperspherical Prototype Networks [article]

Pascal Mettes, Elise van der Pol, Cees G. M. Snoek
2019 arXiv   pre-print
We position prototypes through data-independent optimization, with an extension to incorporate priors from class semantics.  ...  For classification, a common approach is to define prototypes as the mean output vector over training examples per class.  ...  Appendix A: Evaluating hyperspherical prototypes on DenseNet-121  ...
arXiv:1901.10514v3 fatcat:4jylobyodvbfrktf6yxevcl4mi
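Data-independent prototype placement, as the abstract describes it, can be sketched directly: initialize random prototypes and minimize the largest pairwise cosine similarity so the classes spread out over the unit hypersphere before any data is seen. The step count and learning rate below are illustrative.

```python
import torch
import torch.nn.functional as F

def position_prototypes(num_classes, dim, steps=1000, lr=0.1):
    """Spread class prototypes over the unit hypersphere by minimizing the
    largest pairwise cosine similarity (no training data involved)."""
    protos = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([protos], lr=lr, momentum=0.9)
    for _ in range(steps):
        p = F.normalize(protos, dim=1)
        sim = p @ p.t() - 2.0 * torch.eye(num_classes)  # mask self-similarity
        loss = sim.max(dim=1).values.mean()             # push nearest pairs apart
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.normalize(protos.detach(), dim=1)
```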

Uniform Priors for Data-Efficient Transfer [article]

Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg
2020 arXiv   pre-print
Meta-Learning, Deep Metric Learning, Zero-Shot Domain Adaptation, as well as Out-of-Distribution classification.  ...  However, the ability to perform few- or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models.  ...  Metric learning and zero-shot classification for domain adaptation.  ...
arXiv:2006.16524v2 fatcat:w6e74imnbbhe7jb3ghuf7fjymm
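One common way to realize a uniform prior over embeddings is a pairwise-potential uniformity regularizer on the normalized features; the formulation below follows that generic recipe and may differ from the paper's exact term.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(features, t=2.0):
    """Push normalized embeddings toward a uniform distribution on the
    hypersphere via average pairwise Gaussian potentials (generic form)."""
    z = F.normalize(features, dim=-1)
    sq_dists = torch.pdist(z, p=2).pow(2)     # pairwise squared distances
    return torch.exp(-t * sq_dists).mean().log()

# Usage (illustrative): total = task_loss + lam * uniformity_loss(embeddings)
```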

Attribute Prototype Network for Any-Shot Learning [article]

Wenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, Zeynep Akata
2022 arXiv   pre-print
Any-shot image classification makes it possible to recognize novel classes with only a few or even zero samples.  ...  To better transfer attribute-based knowledge from seen to unseen classes, we argue that an image representation with integrated attribute localization ability would be beneficial for any-shot, i.e. zero-shot  ...  Existing FSL methods usually rely on prior knowledge from the visual modality only, while in zero-shot learning, multi-modality data such as word embeddings (Xian et al., 2019a) and attributes (Lampert  ...
arXiv:2204.01208v1 fatcat:eljtv3zgt5db3bwcshvfn2wx54
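The attribute-prototype idea sketches naturally: dot each spatial feature with learned attribute prototypes to get similarity maps (which localize attributes), max-pool them into attribute scores, and score seen or unseen classes by compatibility with their attribute signatures. Tensor shapes and function names are assumptions.

```python
import torch

def attribute_scores(feat_map, attr_protos):
    """Similarity maps between spatial features and attribute prototypes;
    max-pooling over locations both scores and localizes each attribute."""
    # feat_map: (B, d, H, W); attr_protos: (A, d)
    sim = torch.einsum("bdhw,ad->bahw", feat_map, attr_protos)
    return sim.flatten(2).max(dim=-1).values        # (B, A) attribute scores

def zero_shot_logits(attr_pred, class_attr):
    """Score each class (seen or unseen) by compatibility between predicted
    attributes and its attribute signature; class_attr: (C, A)."""
    return attr_pred @ class_attr.t()
```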

Informed Pre-Training on Prior Knowledge [article]

Laura von Rueden, Sebastian Houben, Kostadin Cvejoski, Christian Bauckhage, Nico Piatkowski
2022 arXiv   pre-print
In this paper, we propose a novel informed machine learning approach and suggest pre-training on prior knowledge.  ...  Analyzing which parts of the model are affected most by the prototypes reveals that improvements come from deeper layers that typically represent high-level features.  ...  • We compare our approach to ImageNet pre-training and find that the latter can be further improved by subsequently pre-training on knowledge prototypes.  ...
arXiv:2205.11433v1 fatcat:xscx7m67sjeenfusvsb6brtwba
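The proposed recipe is straightforward to sketch: fit the network first on the small, noise-free set of prototype images (one canonical image per class), then fine-tune on real data as usual. Loader names, the loss, and the epoch count are illustrative assumptions.

```python
import torch

def informed_pretrain(model, proto_loader, optimizer, epochs=5):
    """Stage 1: fit the network on the tiny set of prototype images
    (one canonical image per class) before fine-tuning on real data."""
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for proto_imgs, labels in proto_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(proto_imgs), labels)
            loss.backward()
            optimizer.step()
    return model  # Stage 2: fine-tune on the real training set as usual
```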

ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection [article]

Iftitahu Ni'mah, Meng Fang, Vlado Menkovski, Mykola Pechenizkiy
2021 arXiv   pre-print
Experimental results show that our proposed method can substantially improve performance, by up to 20%, for OOD detection in low-resource text classification settings.  ...  The ability to detect Out-of-Domain (OOD) inputs has been a critical requirement in many real-world NLP applications, for example intent classification in dialogue systems.  ...  As a result, current research introduces few-shot and zero-shot learning frameworks for OOD detection problems in a low-resource text classification scenario (Tan et al., 2019).  ...
arXiv:2108.12229v5 fatcat:ykcw6gjlabfvpb5lr5qeu732xa
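At inference time, prototype-based OOD detection can be sketched as thresholding the query's best similarity to in-domain prototypes; the mutual-information training objective the paper proposes is omitted here, and the threshold is an assumed placeholder.

```python
import torch
import torch.nn.functional as F

def ood_score(query_emb, domain_protos):
    """Best cosine similarity of a query to the in-domain prototypes;
    a low maximum similarity flags an out-of-domain input."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(domain_protos, dim=-1)
    return (q @ p.t()).max(dim=-1).values            # (B,)

def is_ood(query_emb, domain_protos, threshold=0.5):
    # The threshold is a placeholder; in practice it is tuned on held-out data.
    return ood_score(query_emb, domain_protos) < threshold
```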

On the cross-lingual transferability of multilingual prototypical models across NLU tasks [article]

Oralie Cattan, Christophe Servan, Sophie Rosset
2022 arXiv   pre-print
In this context, this article proposes to investigate the cross-lingual transferability of synergistically using few-shot learning with prototypical neural networks and multilingual Transformer-based  ...  In practice, these approaches suffer from the drawbacks of domain-driven design and under-resourced languages. Domain and language models are supposed to grow and change as the problem space evolves.  ...  From another perspective, low-shot learning, such as few-shot and zero-shot, aims to transfer knowledge learned from one language to another when the training data is limited or is missing some task labels  ...
arXiv:2207.09157v1 fatcat:n44kmzu7wzhwjonj6eidmiqlbm
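The underlying prototypical-network episode is the same in any language: class prototypes are the means of support embeddings from a multilingual encoder, and queries are scored by negative distance. Cross-lingual transfer then amounts to building prototypes in one language and classifying queries in another with the same encoder. Function names and shapes below are assumptions.

```python
import torch

def episode_logits(support_emb, support_lbl, query_emb, n_classes):
    """One prototypical-network episode over encoder outputs (e.g., sentence
    embeddings from a multilingual Transformer): prototypes are support
    means, and queries are scored by negative Euclidean distance."""
    protos = torch.stack([support_emb[support_lbl == c].mean(dim=0)
                          for c in range(n_classes)])     # (C, d)
    return -torch.cdist(query_emb, protos)                # (B, C) logits

# Cross-lingual transfer: build the prototypes from a high-resource language's
# support set, then classify queries from another language with the same encoder.
```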
Showing results 1–15 of 5,148