A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL. The file type is application/pdf.
Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks
[article]
2021
arXiv
pre-print
Recent work on few-shot learning showed that quality of learned representations plays an important role in few-shot classification performance. ...
In this work, we exploit the complementarity of both paradigms via a multi-task framework where we leverage recent self-supervised methods as auxiliary tasks. ...
Conclusion: Based on the simple baseline of Tian et al. (2020a), we have proposed a multi-task framework with self-supervised auxiliary tasks to improve few-shot image classification. ...
arXiv:2101.09825v1
fatcat:gkmrz3z4dja55j5uycsrqcmgcm
Self-Supervised Meta-Learning for Few-Shot Natural Language Classification Tasks
[article]
2020
arXiv
pre-print
Furthermore, we show how the self-supervised tasks can be combined with supervised tasks for meta-learning, providing substantial accuracy gains over previous supervised meta-learning. ...
This paper proposes a self-supervised approach to generate a large, rich, meta-learning task distribution from unlabeled text. ...
(2019) demonstrated that better feature learning from supervised tasks helps few-shot learning. Thus, we also evaluate multi-task learning and multi-task meta-learning for few-shot generalization. ...
arXiv:2009.08445v2
fatcat:klscagonaveaxo67swdr56pyry
Multi-Pretext Attention Network for Few-shot Learning with Self-supervision
[article]
2021
arXiv
pre-print
Few-shot learning is an interesting and challenging study, which enables machines to learn from few samples like humans. ...
Self-supervised learning has emerged as an efficient method to utilize unlabeled data. ...
Inspired by the similarity of few-shot and self-supervised learning, some works [6, 7] have woven self-supervision into the training process of few-shot learning. ...
arXiv:2103.05985v1
fatcat:wgadzl75gzeobcdyvqypyzxt2e
Pareto Self-Supervised Training for Few-Shot Learning
[article]
2021
arXiv
pre-print
While few-shot learning (FSL) aims for rapid generalization to new concepts with little supervision, self-supervised learning (SSL) constructs supervisory signals directly computed from unlabeled data. ...
Exploiting the complementarity of these two manners, few-shot auxiliary learning has recently drawn much attention to deal with few labeled data. ...
Boosting few-shot visual learning with self-supervision. ...
arXiv:2104.07841v2
fatcat:gywa2f3ikvf6fluxbdhjjjcoea
Visual Representation Learning with Self-Supervised Attention for Low-Label High-data Regime
[article]
2022
arXiv
pre-print
In this paper, we are the first to question if self-supervised vision transformers (SSL-ViTs) can be adapted to two important computer vision tasks in the low-label, high-data regime: few-shot image classification ...
Our self-supervised attention representations outperform the state-of-the-art on several public benchmarks for both tasks, namely miniImageNet and CUB200 for few-shot image classification by up to 6 CUB200 ...
Few-Shot Image Classification with Self-Supervised Feature Embeddings: For our few-shot image-classification framework, we follow the distribution calibration [9] methodology proposed as an alternative ...
arXiv:2201.08951v2
fatcat:6ulgcp5ornh53hyt6ml6uwkada
CSN: Component-Supervised Network for Few-Shot Classification
[article]
2022
arXiv
pre-print
The few-shot classification (FSC) task has been a hot research topic in recent years. It aims to address the classification problem with insufficient labeled data on a cross-category basis. ...
Starting from the root cause of this problem, this paper presents a new scheme, Component-Supervised Network (CSN), to improve the performance of FSC. ...
recently proposed semi-supervised few-shot classification methods. ...
arXiv:2203.07738v1
fatcat:l74sp5dvpfazhl3ztk4jxn7ulu
Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
[article]
2020
arXiv
pre-print
state-of-the-art few-shot learning methods. ...
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms. ...
classification or self-supervised learning, on this combined dataset. ...
arXiv:2003.11539v2
fatcat:y3d2r3kpdjgsjnedlqbfse4774
When Does Self-supervision Improve Few-shot Learning?
[article]
2020
arXiv
pre-print
We investigate the role of self-supervised learning (SSL) in the context of few-shot learning. ...
We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and only utilizing images within the datasets. ...
Few-shot learning as an evaluation for self-supervised tasks: The few-shot classification task provides a way of evaluating the effectiveness of self-supervised tasks. ...
arXiv:1910.03560v2
fatcat:wt4oebe5ejcptdgxfhpghbldty
Representation Based Meta-Learning for Few-Shot Spoken Intent Recognition
2020
Interspeech 2020
This paper presents a few-shot spoken intent classification approach with task-agnostic representations via meta-learning paradigm. ...
The performance is comparable to traditionally supervised classification models with abundant training samples. ...
We also hypothesize a weighted combination of the reconstruction loss from the self-supervision (with a controlling parameter α) together with the cross-entropy loss from the meta-learning few-shot classification ...
doi:10.21437/interspeech.2020-3208
dblp:conf/interspeech/MittalBKCSK20
fatcat:2zerbq2eh5a6rbco7a2jjqrlqu
Label-Efficient Learning on Point Clouds using Approximate Convex Decompositions
[article]
2020
arXiv
pre-print
We show that using ACD to approximate ground truth segmentation provides excellent self-supervision for learning 3D point cloud representations that are highly effective on downstream tasks. ...
We report improvements over the state-of-the-art for unsupervised representation learning on the ModelNet40 shape classification dataset and significant gains in few-shot part segmentation on the ShapeNetPart ...
..., the few-shot classification setting, including self-supervised losses along with the usual supervised training is shown to be beneficial [59]. ...
arXiv:2003.13834v2
fatcat:6wbuhbjsmfcjrghskkspxbf3l4
Self-Supervised Learning For Few-Shot Image Classification
[article]
2021
arXiv
pre-print
In this paper, we proposed to train a more generalized embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself ...
Few-shot image classification aims to classify unseen classes with limited labelled samples. ...
A popular strategy for few-shot learning is meta-learning (also called learning-to-learn) with multi-auxiliary tasks [11, 12, 13, 3]. ...
arXiv:1911.06045v3
fatcat:ac5q4scuofdynk2rkqjce4kxlm
An Efficient Method for the Classification of Croplands in Scarce-Label Regions
[article]
2021
arXiv
pre-print
We introduce three self-supervised tasks for cropland classification. ...
We will show how to leverage their potential for cropland classification using self-supervised tasks. ...
[33] proposed to train the self-supervised tasks jointly with the main one, just as we do for few-shot learning. ...
arXiv:2103.09588v1
fatcat:erzaggmwgrabxfcwrorr4zdtdi
Self Supervision to Distillation for Long-Tailed Visual Recognition
[article]
2021
arXiv
pre-print
Specifically, we propose a conceptually simple yet particularly effective multi-stage training scheme, termed Self Supervision to Distillation (SSD). This scheme is composed of two parts. ...
Second, we present a new distillation label generation module guided by self-supervision. ...
Feature learning enhanced by self-supervision In phase-I of the feature learning stage, we choose to train the backbone network using a standard supervised task and a self-supervised task in a multi-task ...
arXiv:2109.04075v1
fatcat:5gi4s55pcfaixbcrwfvss5erym
SAFFNet: Self-Attention-Based Feature Fusion Network for Remote Sensing Few-Shot Scene Classification
2021
Remote Sensing
Here, the feature weighting value can be fine-tuned by the support set in the few-shot learning task. ...
In this paper, a multi-scale feature fusion network for few-shot remote sensing scene classification is proposed by integrating a novel self-attention feature selection module, denoted as SAFFNet. ...
Comparison of traditional supervised, zero-shot and few-shot learning for classification tasks. ...
doi:10.3390/rs13132532
fatcat:fonmvysiczct5kr7wwl3xe2xdm
Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks
[article]
2020
arXiv
pre-print
parameters for few-shot learning than self-supervised pre-training or multi-task training, outperforming many strong baselines, for example, yielding 14.5% average relative gain in accuracy on unseen tasks ...
We consider this problem of learning to generalize to new tasks with few examples as a meta-learning problem. ...
inference, sentiment classification, and various other text classification tasks; (4) we study how meta-learning, multi-task learning and fine-tuning perform for few-shot learning of completely new tasks ...
arXiv:1911.03863v3
fatcat:7bppnqaqirfy7a3b2lekqxzp7i
Showing results 1 — 15 out of 11,897 results