A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation
[article] · 2021 · arXiv pre-print
In this paper, we propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA). ...
Our framework captures category-wise semantic structures of the data by in-domain prototypical contrastive learning; and performs feature alignment through cross-domain prototypical self-supervision. ...
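As a toy illustration of the in-domain prototypical step this abstract describes (a minimal sketch; the function names and the two-cluster toy data below are hypothetical, not from the paper): per-cluster prototypes are normalized feature means, and each sample is scored by a softmax over cosine similarities to the prototypes.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def prototypes(features, labels):
    # One prototype per cluster/class: the normalized mean of member features.
    return l2_normalize(np.stack([features[labels == c].mean(axis=0)
                                  for c in np.unique(labels)]))

def proto_probs(features, protos, tau=0.1):
    # Softmax over cosine similarities to each prototype (contrastive scoring).
    logits = l2_normalize(features) @ protos.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Toy example: two well-separated clusters in 2-D.
feats = np.array([[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
P = prototypes(feats, labels)
probs = proto_probs(feats, P)
```

With a low temperature (tau=0.1), each toy sample is assigned to its own cluster's prototype with high probability.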
arXiv:2103.16765v1
Self-Supervised Prototypical Transfer Learning for Few-Shot Classification
[article] · 2020 · arXiv pre-print
We demonstrate that our self-supervised prototypical transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks from the mini-ImageNet dataset ...
Recently, unsupervised meta-learning methods have exchanged the annotation requirement for a reduction in few-shot classification performance. ...
Fine-tuning for few-shot classification: adaptation on the target task is shown to be key for good cross-domain few-shot classification performance. ...
arXiv:2006.11325v1
Self-Supervised Class-Cognizant Few-Shot Classification
[article] · 2022 · arXiv pre-print
... as well as the (5-way, 5- and 20-shot) settings of the cross-domain CDFSL benchmark. ...
To build in this direction, this paper focuses on unsupervised learning from an abundance of unlabeled data followed by few-shot fine-tuning on a downstream classification task. ...
We also compare our method on a more challenging cross-domain few-shot learning (CDFSL) benchmark [Guo et al., 2019] . ...
arXiv:2202.08149v1
AutoFi: Towards Automatic WiFi Human Sensing via Geometric Self-Supervised Learning
[article] · 2022 · arXiv pre-print
In this paper, we firstly explore how to learn a robust model from these low-quality CSI samples, and propose AutoFi, an automatic WiFi sensing model based on a novel geometric self-supervised learning ...
Though domain adaptation methods have been proposed to tackle this issue, it is not practical to collect high-quality, well-segmented and balanced CSI samples in a new environment for adaptation algorithms ...
Geometric Self-Supervised Learning Module: The geometric self-supervised (GSS) learning module aims to learn CSI representations in an unsupervised manner. ...
arXiv:2205.01629v1
Unsupervised Transfer Learning with Self-Supervised Remedy
[article] · 2020 · arXiv pre-print
Different methods have been studied to address the underlying problem based on different assumptions, e.g. from domain adaptation to zero-shot and few-shot learning. ...
Our method mitigates non-transferable prior knowledge by self-supervision, benefiting from both transfer and self-supervised learning. ...
(e) A few labelled anchor samples are available as the prototype of novel classes in few-shot learning (FSL). ...
arXiv:2006.04737v1
Multi-level Consistency Learning for Semi-supervised Domain Adaptation
[article] · 2022 · arXiv pre-print
Semi-supervised domain adaptation (SSDA) aims to apply knowledge learned from a fully labeled source domain to a scarcely labeled target domain. ...
In this paper, we propose a Multi-level Consistency Learning (MCL) framework for SSDA. ...
Introduction: Semi-supervised Domain Adaptation (SSDA) has attracted much attention due to its promising performance compared with Unsupervised Domain Adaptation (UDA). ...
arXiv:2205.04066v1
Attract, Perturb, and Explore: Learning a Feature Alignment Network for Semi-supervised Domain Adaptation
[article] · 2020 · arXiv pre-print
The novel setting of the semi-supervised domain adaptation (SSDA) problem shares the challenges with the domain adaptation problem and the semi-supervised learning problem. ...
Although unsupervised domain adaptation methods have been widely adopted across several computer vision tasks, it is more desirable if we can exploit a few labeled data from new domains encountered in ...
We compared our method with the semi-supervised domain adaptation (SSDA), unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and no adaptation methods. ...
arXiv:2007.09375v1
Low-confidence Samples Matter for Domain Adaptation
[article] · 2022 · arXiv pre-print
Recently, a growing body of research has focused on self-training and other semi-supervised algorithms to explore the data structure of the target domain. ...
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain. ...
Fig. 4 shows the average value of the accumulation of top ...
Conclusion In this paper, we propose a novel contrastive learning framework for unsupervised domain adaptation (UDA) and semi-supervised domain ...
arXiv:2202.02802v2
Machine learning with limited data
[article] · 2021 · arXiv pre-print
So we propose a more realistic cross-domain few-shot learning with unlabeled data setting, in which some unlabeled data is available in the target domain. We propose two methods in this setting. ...
We also find that domain shift is a critical issue in few-shot learning when the training domain and testing domain are different. ...
Cross-domain Few-shot Learning with Unlabeled Data: In this chapter we introduce the domain shift that exists in few-shot learning and two settings of cross-domain few-shot learning. ...
arXiv:2101.11461v1
Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer
[article] · 2021 · arXiv pre-print
Specifically, SHOT exploits both information maximization and self-supervised learning for the feature extraction module learning to ensure the target features are implicitly aligned with the features ...
To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns the feature extraction module for the target domain by fitting ...
Index Terms—Unsupervised domain adaptation, transfer learning, self-supervised learning, semi-supervised learning, model reuse. ...
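One common reading of the "information maximization" term mentioned in this abstract can be sketched as entropy minimization on individual predictions plus a diversity term on the marginal prediction distribution (a hypothetical illustration of the general idea, not SHOT's actual code):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    # Shannon entropy; eps guards against log(0).
    return -(p * np.log(p + eps)).sum(axis=axis)

def information_maximization_loss(probs):
    # Encourage confident per-sample predictions: low mean conditional entropy ...
    cond_ent = entropy(probs, axis=1).mean()
    # ... while discouraging collapse onto one class: high marginal entropy.
    marg_ent = entropy(probs.mean(axis=0))
    return cond_ent - marg_ent  # lower is better

# Confident, class-balanced predictions vs. predictions collapsed to one class.
balanced = np.array([[0.99, 0.01], [0.01, 0.99]])
collapsed = np.array([[0.99, 0.01], [0.99, 0.01]])
loss_balanced = information_maximization_loss(balanced)
loss_collapsed = information_maximization_loss(collapsed)
```

Minimizing this objective favors the confident, balanced case over class collapse.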
arXiv:2012.07297v3
ECACL: A Holistic Framework for Semi-Supervised Domain Adaptation
[article] · 2021 · arXiv pre-print
This paper studies Semi-Supervised Domain Adaptation (SSDA), a practical yet under-investigated research topic that aims to learn a model of good performance using unlabeled samples and a few labeled samples ...
... on VisDA2017 and from 45.5 to 53.4 on DomainNet for the 1-shot setting. ...
According to the type of data available in target domain, DA methods can be divided into three categories: Unsupervised Domain Adaptation (UDA), Few-Shot Domain Adaptation (FSDA) and Semi-Supervised Domain ...
arXiv:2104.09136v2
On the Importance of Distractors for Few-Shot Classification
[article] · 2021 · arXiv pre-print
Compared to state-of-the-art approaches, our method shows accuracy gains of up to 12% in cross-domain and up to 5% in unsupervised prior-learning settings. ...
We demonstrate for the first time that inclusion of such distractors can significantly boost few-shot generalization. ...
For experiments in unsupervised prior learning, we use the same train split of miniImageNet to learn a self-supervised representation that is then evaluated for few-shot performance on miniImageNet-test ...
arXiv:2109.09883v1
Deep Domain Adaptive Object Detection: a Survey
[article] · 2020 · arXiv pre-print
Deep domain adaptive object detection (DDAOD) has emerged as a new learning paradigm to address the above-mentioned challenges. ...
This paper aims to review the state-of-the-art progress on deep domain adaptive object detection approaches. Firstly, we introduce briefly the basic concepts of deep domain adaptation. ...
Labeled Data of the Target Domain: Considering the labeled data available in the target domain, DDAOD methods can be categorized into supervised, semi-supervised, weakly-supervised, few-shot and unsupervised. ...
arXiv:2002.06797v3
Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer
[article] · 2021 · arXiv pre-print
Semi-supervised domain adaptation (SSDA) aims to solve tasks in target domain by utilizing transferable information learned from the available source domain and a few labeled target data. ...
... a few supervisions. ...
SAFN [46] proposes a norm-adaptation approach to better discriminate source and target features. SHOT [23] addresses unsupervised model adaptation with self-supervised learning. ...
arXiv:2107.03008v2
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
[article] · 2019 · arXiv pre-print
This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition. ...
The comprehensive problem-oriented review of the advances in transfer learning with respect to the problem has not only revealed the challenges in transfer learning for visual recognition, but also the ...
However, the labelled target data are generally insufficient for learning an effective classifier. This is also called supervised domain adaptation or few-shot domain adaptation in the literature. ...
arXiv:1705.04396v3
Showing results 1–15 out of 1,085 results