40,937 Hits in 6.8 sec

Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [article]

Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan Sclaroff, Kate Saenko
2020 arXiv   pre-print
Our self-supervised learning method captures apparent visual similarity with in-domain self-supervision in a domain adaptive manner and performs cross-domain feature matching with across-domain self-supervision  ...  We propose a novel Cross-Domain Self-supervised (CDS) learning approach for domain adaptation, which learns features that are not only domain-invariant but also class-discriminative.  ...  Adaptation Results with Few Source Labels.  ... 
arXiv:2003.08264v1 fatcat:75nlwwt3hvgtxfkvnwcqifqdea
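The CDS snippet above pairs in-domain instance discrimination with cross-domain feature matching. A minimal numpy sketch of the cross-domain matching idea: each target feature gets a softmax similarity distribution over source features, and a sharper (lower-entropy) distribution corresponds to a more confident one-to-one match. The temperature value and feature shapes here are illustrative, not taken from the paper:

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def cross_domain_matching_entropy(src_feats, tgt_feats, temperature=0.1):
    """Mean entropy of each target feature's matching distribution over
    source features; minimising it sharpens cross-domain matches."""
    sim = cosine_sim(tgt_feats, src_feats) / temperature
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(sim)
    p /= p.sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))
tgt = src + 0.01 * rng.normal(size=(8, 16))   # near-duplicates match sharply
far = rng.normal(size=(8, 16))                # unrelated features match diffusely
assert cross_domain_matching_entropy(src, tgt) < cross_domain_matching_entropy(src, far)
```

In a real model this entropy would be a training loss backpropagated into the feature extractor; here it only demonstrates that well-aligned cross-domain features yield sharper matching distributions.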

Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [article]

Xiangyu Yue, Zangwei Zheng, Shanghang Zhang, Yang Gao, Trevor Darrell, Kurt Keutzer, Alberto Sangiovanni Vincentelli
2021 arXiv   pre-print
In this paper, we propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).  ...  To cope with this problem, recent work performed instance-wise cross-domain self-supervised learning, followed by an additional fine-tuning stage.  ...  In-domain Prototypical Contrastive Learning Self-supervised Learning for Domain Adaptation.  ... 
arXiv:2103.16765v1 fatcat:laitrt6765hkpcrb2qsdfg37me
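The PCS snippet mentions in-domain prototypical contrastive learning. A toy sketch of the prototype idea, assuming prototypes are simply normalised class-mean features computed from the few labeled source samples, with target samples labelled by their nearest prototype under cosine similarity (the paper's actual framework is contrastive and end-to-end; this only illustrates the prototype-matching step):

```python
import numpy as np

def prototypes(feats, labels, n_classes):
    # One prototype per class: the L2-normalised class-mean feature.
    protos = np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def nearest_prototype(protos, query):
    # Cosine similarity of each query to every prototype; pick the closest.
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    return (q @ protos.T).argmax(axis=1)

rng = np.random.default_rng(1)
centers = rng.normal(size=(3, 8)) * 4
labels = np.repeat(np.arange(3), 5)           # 5 labeled samples per class
src = centers[labels] + 0.1 * rng.normal(size=(15, 8))
tgt = centers[labels] + 0.1 * rng.normal(size=(15, 8))  # same classes, new samples
pred = nearest_prototype(prototypes(src, labels, 3), tgt)
```

With well-separated toy clusters the nearest-prototype rule recovers every target label; the appeal in the few-shot setting is that prototypes need only a handful of labeled source samples per class.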

Unsupervised Domain Adaptation for Colorectal Cancer Tissue Classification Using Self-supervised Deep Learning Methods

Christian Abbet, Linda Studer, Heather Dawson, Felix Müller, Andreas Fischer, Inti Zlobec, Behzad Bozorgtabar, Jean-Philippe Thiran
2021 Zenodo  
Poster: Unsupervised Domain Adaptation for Colorectal Cancer Tissue Classification Using Self-supervised Deep Learning Methods  ...  Builds on Kim et al., "Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels," 2020.  ...  Motivation and goal: learn morphological information without labels; save annotation time; take advantage of widely available  ...  Method: Self-Rule to Adapt (SRA); class labels (source data); t-SNE visualisations of the feature embeddings.  ...
doi:10.5281/zenodo.4767474 fatcat:tbti3ytyczbyvmclccsvo6tn34

An Efficient Method for the Classification of Croplands in Scarce-Label Regions [article]

Houtan Ghaffari
2021 arXiv   pre-print
Subsequently, we use the self-supervised tasks to perform unsupervised domain adaptation and benefit from the labeled samples in other regions.  ...  We introduce three self-supervised tasks for cropland classification.  ...  Acknowledgment I would like to thank Marc Rußwurm from the Technical University of Munich for his support and fruitful discussions, which improved this work considerably.  ... 
arXiv:2103.09588v1 fatcat:erzaggmwgrabxfcwrorr4zdtdi

Self Supervised Adversarial Domain Adaptation for Cross-Corpus and Cross-Language Speech Emotion Recognition [article]

Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Björn Schuller
2022 arXiv   pre-print
We also introduce a self-supervised ADDi (sADDi) network that utilises self-supervised pre-training with unlabelled data.  ...  Recent studies focus on utilising adversarial methods to learn domain generalised representation for improving cross-corpus and cross-language SER to address this issue.  ...  One of the novel features of our model is the utilisation of self-supervised learning (SSL) for domain adaptation, which has not been explored for SER domain adaptation.  ... 
arXiv:2204.08625v1 fatcat:7pbqre3orffw5lg726vdytikpe

Self-Supervision Meta-Learning for One-Shot Unsupervised Cross-Domain Detection [article]

F. Cappio Borlino, S. Polizzotto, A. D'Innocente, S. Bucci, B. Caputo, T. Tommasi
2022 arXiv   pre-print
Our multi-task architecture includes a self-supervised branch that we exploit to meta-train the whole model with single-sample cross-domain episodes and prepare it for the test condition.  ...  At deployment time the self-supervised task is iteratively solved on each incoming sample to adapt to it in one shot.  ...  Motiian et al. (2017) considered few-shot supervised domain adaptation where the few available target samples are fully labeled.  ...
arXiv:2106.03496v2 fatcat:5mmtkpkjb5gt5a3fzgqvjoo2au
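The snippet above describes iteratively solving a self-supervised task on each incoming sample at deployment time. A hypothetical numpy sketch of that one-shot adaptation loop, substituting a linear reconstruction loss for the paper's actual self-supervised task; the encoder, learning rate, step count, and shapes are all illustrative:

```python
import numpy as np

def one_shot_adapt(W, x, lr=0.01, steps=20):
    """Adapt a linear encoder W on a single unlabeled sample by gradient
    descent on a self-supervised reconstruction loss ||W.T @ W @ x - x||^2
    (a stand-in for the self-supervised task used at test time)."""
    losses = []
    for _ in range(steps):
        z = W @ x                 # encode
        x_hat = W.T @ z           # decode
        err = x_hat - x
        losses.append(float(err @ err))
        # Gradient of the loss w.r.t. W (chain rule through both uses of W).
        grad = np.outer(z, err) + np.outer(W @ err, x)
        W = W - lr * grad
    return W, losses

rng = np.random.default_rng(2)
W = 0.1 * rng.normal(size=(4, 16))
x = rng.normal(size=16)           # one incoming test sample, no label
_, losses = one_shot_adapt(W, x)
```

The point of the sketch is only that a self-supervised objective is computable, and minimisable, on a single unlabeled sample, which is what makes one-shot test-time adaptation possible at all.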

Representation Learning with Multiple Lipschitz-Constrained Alignments on Partially-Labeled Cross-Domain Data

Songlei Jian, Liang Hu, Longbing Cao, Kai Lu
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
MULAN shows superior performance on partially-labeled semi-supervised domain adaptation and few-shot domain adaptation, and outperforms the state-of-the-art visual domain adaptation models by up to  ...  Cross-domain representation learning plays an important role in tasks including domain adaptation and transfer learning.  ...  In few-shot DA settings, we test two types of classifiers: one trained with the source-domain labeled data and the other trained with the target-domain labeled data.  ...
doi:10.1609/aaai.v34i04.5856 fatcat:an2fveyfe5hl3f5fzvtgskrx7y

Multi-Modal Domain Adaptation for Fine-Grained Action Recognition [article]

Jonathan Munro, Dima Damen
2020 arXiv   pre-print
Unsupervised Domain Adaptation (UDA) approaches have frequently utilised adversarial training between the source and target domains.  ...  We then combine adversarial training with multi-modal self-supervision, showing that our approach outperforms other UDA methods by 3%.  ...  Transferring a model learned on a labelled source domain to an unlabelled target domain is known as Unsupervised Domain Adaptation (UDA).  ... 
arXiv:2001.09691v2 fatcat:jkdwf2uhpfc2tfg32apqnquuem
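Adversarial UDA methods like the one in this snippet typically train a domain discriminator through a gradient reversal layer (GRL): identity in the forward pass, negated and scaled gradient in the backward pass, so the feature extractor learns features that confuse the discriminator. A minimal sketch of the GRL's two passes (lambda and the toy tensors are illustrative):

```python
import numpy as np

def grl_forward(x):
    # Forward pass: the GRL is the identity function.
    return x

def grl_backward(grad_output, lam=1.0):
    # Backward pass: the gradient is reversed and scaled by lambda,
    # so updates below the GRL *increase* the discriminator's loss.
    return -lam * grad_output

x = np.array([1.0, -2.0, 3.0])
g = np.array([0.5, 0.5, -1.0])    # gradient flowing back from the discriminator
assert np.allclose(grl_forward(x), x)
assert np.allclose(grl_backward(g, lam=2.0), np.array([-1.0, -1.0, 2.0]))
```

In an autograd framework the two functions would be the forward and backward of one custom op; the sign flip is the entire trick that turns a standard discriminator into an adversarial alignment signal.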

Self-supervised Autoregressive Domain Adaptation for Time Series Data [article]

Mohamed Ragab, Emadeldeen Eldele, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Xiaoli Li
2021 arXiv   pre-print
In particular, we first design a self-supervised learning module that utilizes forecasting as an auxiliary task to improve the transferability of the source features.  ...  To address these limitations, we propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework.  ...  Hence, in future work, we aim to design self-supervised learning [40] to learn representations with few labeled data and a large amount of unlabelled data in the source domain.  ...
arXiv:2111.14834v1 fatcat:v6jgc4uhdfcudfvxxu4kkpwxm4
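The SLARDA snippet uses forecasting as an auxiliary self-supervised task: one-step-ahead prediction error is computable on a raw series without any labels. A toy numpy illustration with an AR(1) forecaster fit by least squares (the model, coefficient, and simulated data are illustrative stand-ins for the paper's deep autoregressive module):

```python
import numpy as np

def ar1_forecast_loss(series, phi):
    # Mean squared error of the one-step-ahead prediction x_t ~= phi * x_{t-1}.
    pred = phi * series[:-1]
    return float(((series[1:] - pred) ** 2).mean())

def fit_phi(series):
    # Closed-form least-squares AR(1) coefficient; needs no labels at all.
    prev, nxt = series[:-1], series[1:]
    return float((prev @ nxt) / (prev @ prev))

rng = np.random.default_rng(3)
x = [0.0]
for _ in range(200):                       # simulate an AR(1) process, phi = 0.8
    x.append(0.8 * x[-1] + 0.1 * rng.normal())
x = np.array(x)
phi = fit_phi(x)
```

Because the forecasting loss is label-free, it can be optimised on both source and target series, which is what makes it usable as an auxiliary transferability signal.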

Multi-Modal Domain Adaptation for Fine-Grained Action Recognition

Jonathan Munro, Dima Damen
2019 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)  
Transferring a model learned on a labelled source domain to an unlabelled target domain is known as Unsupervised Domain Adaptation (UDA).  ...  Very recently, self-supervised learning has been proposed as a domain adaptation approach [5, 52] .  ... 
doi:10.1109/iccvw.2019.00461 dblp:conf/iccvw/MunroD19 fatcat:bpaieusutneanl3zyrr4id5j4e

Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019 [article]

Yingwei Pan and Yehao Li and Qi Cai and Yang Chen and Ting Yao
2019 arXiv   pre-print
Semi-Supervised Domain Adaptation: For this task, we adopt a standard self-learning framework to construct a classifier based on the labeled source and target data, and generate the pseudo labels for unlabeled  ...  Multi-Source Domain Adaptation: We investigate both pixel-level and feature-level adaptation for the multi-source domain adaptation task, i.e., directly hallucinating labeled target samples via CycleGAN and  ...  Overall, our adopted EEA with Generalized Cross Entropy exhibits better performance than the other runs, which demonstrates the merit of self-learning for multi-source domain adaptation.  ...
arXiv:1910.03548v2 fatcat:vk7dvjdxrfe3fhml6i6fdwjdnq
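The entry above describes a standard self-learning loop: train a classifier on the labeled data, then keep only confident pseudo-labels for the unlabeled target samples. A toy numpy sketch with a nearest-centroid classifier and a distance-margin confidence gate (the classifier, margin, and data are illustrative; the paper uses deep networks and Generalized Cross Entropy):

```python
import numpy as np

def centroid_classifier(feats, labels, n_classes):
    # Per-class mean feature, acting as a minimal trained classifier.
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def pseudo_label(centroids, feats, margin=0.5):
    """Assign nearest-centroid labels, keeping only samples whose distance
    gap between the two closest centroids exceeds `margin`."""
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    order = np.sort(d, axis=1)
    keep = (order[:, 1] - order[:, 0]) > margin   # confidence gate
    return d.argmin(axis=1), keep

rng = np.random.default_rng(4)
centers = np.array([[0.0, 0.0], [4.0, 4.0]])
src_labels = np.repeat([0, 1], 20)
src = centers[src_labels] + 0.3 * rng.normal(size=(40, 2))
# Target data: same classes, shifted distribution, no labels available.
tgt = centers[src_labels] + np.array([0.5, 0.5]) + 0.3 * rng.normal(size=(40, 2))
cents = centroid_classifier(src, src_labels, 2)
labels_hat, keep = pseudo_label(cents, tgt)
```

The accepted pseudo-labels (`labels_hat[keep]`) would then be merged with the labeled data to retrain the classifier; the confidence gate is what keeps label noise from compounding across self-learning rounds.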

Multi-Modal Domain Adaptation for Fine-Grained Action Recognition

Jonathan Munro, Dima Damen
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Unsupervised Domain Adaptation (UDA) approaches have frequently utilised adversarial training between the source and target domains.  ...  We then combine adversarial training with multi-modal self-supervision, showing that our approach outperforms other UDA methods by 3%.  ...  Transferring a model learned on a labelled source domain to an unlabelled target domain is known as Unsupervised Domain Adaptation (UDA).  ... 
doi:10.1109/cvpr42600.2020.00020 dblp:conf/cvpr/MunroD20 fatcat:sevo6h5elbfbzkkla7igpyvm54

Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective [article]

Yilun Jin, Xiguang Wei, Yang Liu, Qiang Yang
2020 arXiv   pre-print
Federated Learning (FL) proposed in recent years has received significant attention from researchers in that it can bring separate data sources together and build machine learning models in a collaborative  ...  However, to the best of our knowledge, few existing works aim to utilize unlabeled data to enhance federated learning, which leaves a potentially promising research topic.  ...  Weakly Supervised Learning Algorithms Transfer Learning Transfer Learning aims to transfer knowledge learned from a source domain to a relevant target domain, probably with fewer labeled samples to train  ... 
arXiv:2002.11545v2 fatcat:tjmj3cowdzes3j5f2uhpokzgqm

An Adversarial Self-Learning Method for Cross-City Adaptation in Semantic Segmentation

Huachen Yu, Department of Mechanical Engineering, Meijo University, Nagoya, Japan, Jianming Yang
2020 International Journal of Machine Learning and Computing  
With the Cityscapes to NTHU cross-city adaptation experiments, we can see that the adversarial self-learning method can achieve state-of-the-art results compared with the domain adaptation methods proposed  ...  To reduce the distance between the source and target domains, domain adaptation methods are proposed for unsupervised training with the unlabeled target domain.  ...  AUTHOR CONTRIBUTIONS Huachen Yu designed the study, performed the experiments, analyzed the data, and wrote the paper; Jianming Yang supervised the research and revised the paper; all authors approved  ...
doi:10.18178/ijmlc.2020.10.5.986 fatcat:xzlnmbzqsjhqzbug4hbu7bhowu

Learning Fashion Compatibility from In-the-wild Images [article]

Additya Popli, Vijay Kumar, Sujit Jos, Saraansh Tandon
2022 arXiv   pre-print
In this work, we propose to learn representations for compatibility prediction from in-the-wild street fashion images through self-supervised learning by leveraging the fact that people often wear compatible  ...  Most existing approaches learn representation for this task using labeled outfit datasets containing manually curated compatible item combinations.  ...  DATASETS For training, we use Fashionpedia [20] dataset as a source of street fashion images and IQON3000 [25] for domain adaptation. We do not use any labels from these datasets.  ... 
arXiv:2206.05982v1 fatcat:j4pzourxmrgzjnwlo4fwfbaeua
Showing results 1 — 15 out of 40,937 results