
Data Transformation Insights in Self-supervision with Clustering Tasks [article]

Abhimanu Kumar, Aniket Anand Deshmukh, Urun Dogan, Denis Charles, Eren Manavoglu
2020 arXiv   pre-print
We provide novel insights into the use of data transformation in self-supervised tasks, especially pertaining to clustering.  ...  We show theoretically and empirically that certain sets of transformations are helpful for the convergence of self-supervised clustering.  ...  self-supervised representation learning cases with clustering tasks under assumptions 1-A and 1-B.  ...
arXiv:2002.07384v1 fatcat:catbmhluxjhlbgybgkk3ponrna

A Multi-view Perspective of Self-supervised Learning [article]

Chuanxing Geng, Zhenghao Tan, Songcan Chen
2020 arXiv   pre-print
As a newly emerging unsupervised learning paradigm, self-supervised learning (SSL) has recently gained widespread attention; it usually introduces a pretext task that requires no manual annotation of data.  ...  In this paper, we adopt a multi-view perspective to decouple a class of popular pretext tasks into a combination of view data augmentation (VDA) and view label classification (VLC), where we attempt to  ...  [17] investigated the SSL task combined with clustering, and provided insight from the data transformation perspective.  ...
arXiv:2003.00877v2 fatcat:6bcsj52rdng3denuwjx33sfmz4

Self-supervised edge features for improved Graph Neural Network training [article]

Arijit Sehanobish, Neal G. Ravindra, David van Dijk
2020 arXiv   pre-print
In this work, we present a framework for creating new edge features, applicable to any domain, via a combination of self-supervised and unsupervised learning.  ...  In recent years, there has been a lot of work incorporating edge features along with node features for prediction tasks.  ...  Hafler for generating the MS patients dataset and sharing the data with us.  ... 
arXiv:2007.04777v1 fatcat:vcgjxc5z6zgdbmscf2odvdpcd4

Gaining Insight into SARS-CoV-2 Infection and COVID-19 Severity Using Self-supervised Edge Features and Graph Neural Networks [article]

Arijit Sehanobish, Neal G. Ravindra, David van Dijk
2020 arXiv   pre-print
We propose a model that builds on Graph Attention Networks (GAT), creates edge features using self-supervised learning, and ingests these edge features via a Set Transformer.  ...  To do this, we developed a new approach to generating self-supervised edge features.  ...  multi-label node classification task from single-cell data.  ... 
arXiv:2006.12971v2 fatcat:yf2dq3ke3vbujci2vcsdk55nhy

Recent Advancements in Self-Supervised Paradigms for Visual Feature Representation [article]

Mrinal Anand, Aditya Garg
2021 arXiv   pre-print
We present some of the key insights concerning two different approaches to self-supervision: generative and contrastive methods.  ...  This study conducts a comprehensive and insightful survey and analysis of recent developments in the self-supervised paradigm for feature representation.  ...  [31] showed that neither are the architectures coherent with the pretext task, nor is the self-supervised pretext task consistent with the architectures in terms of performance.  ...
arXiv:2111.02042v1 fatcat:e6ec3auu7vaodluwraxrzzdtxa

Self-Supervised Representation Learning: Introduction, Advances and Challenges [article]

Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales
2021 arXiv   pre-print
These methods have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pre-training alternatives across a variety of data modalities including image  ...  data.  ...  Signal Processing; EPSRC Centre for Doctoral Training in Data Science, funded by EPSRC (grant EP/L016427/1) and the University of Edinburgh; and EPSRC grant EP/R026173/1.  ... 
arXiv:2110.09327v1 fatcat:qoprtdh4rzg6lcylgn5rafubpe

Self-supervised learning methods and applications in medical imaging analysis: A survey [article]

Saeed Shurrab, Rehab Duwairi
2021 arXiv   pre-print
This article reviews the state-of-the-art research directions in self-supervised learning approaches for image data, with a concentration on their applications in the field of medical imaging analysis.  ...  in annotated medical data.  ...  On the other hand, the supervised aspect of self-supervised learning is reflected in model training with labels generated from the data itself.  ...
arXiv:2109.08685v2 fatcat:iu2zanqqrnaflawcxndb6xszgu

Revisiting Pretraining for Semi-Supervised Learning in the Low-Label Regime [article]

Xun Xu, Jingyi Liao, Lile Cai, Manh Cuong Nguyen, Kangkang Lu, Wanyue Zhang, Yasin Yazici, Chuan Sheng Foo
2022 arXiv   pre-print
Semi-supervised learning (SSL) addresses the lack of labeled data by exploiting large unlabeled datasets through pseudo-labeling.  ...  We carried out extensive experiments on both classification and segmentation tasks, performing target pretraining followed by semi-supervised finetuning.  ...  We further apply the same augmentations as in BYOL [10] for target pretraining on classification tasks, with additional affine transformations for segmentation tasks.  ...
arXiv:2205.03001v1 fatcat:2gw4pt6sirg3rngki7rcuas5ka

Multimodal and self-supervised representation learning for automatic gesture recognition in surgical robotics [article]

Aniruddha Tamhane, Jie Ying Wu, Mathias Unberath
2020 arXiv   pre-print
Further, we qualitatively demonstrate that our self-supervised representations cluster by semantically meaningful properties (surgeon skill and gestures).  ...  Self-supervised, multi-modal learning has been successful in holistic representation of complex scenarios.  ...  Multimodal self-supervised learning: Multimodal, self-supervised learning has shown great promise.  ...
arXiv:2011.00168v1 fatcat:ualqf3z7eraxpj53qwvefugncu

Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting [article]

Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Yongxin Yang, Timothy M. Hospedales, Tao Xiang, Yi-Zhe Song
2021 arXiv   pre-print
In this paper, we are interested in defining a self-supervised pre-text task for sketches and handwriting data.  ...  Self-supervised learning has gained prominence due to its efficacy at learning powerful representations from unlabelled data that achieve excellent performance on many challenging downstream tasks.  ...  We compare our self-supervised method with CPC which can handle sequential data.  ... 
arXiv:2103.13716v1 fatcat:c4gurfucxjaj3oyhtbcbyqvqcq

SiT: Self-supervised vIsion Transformer [article]

Sara Atito and Muhammad Awais and Josef Kittler
2021 arXiv   pre-print
These supervised pretrained vision transformers achieve very good results in downstream tasks with minimal changes.  ...  So far, vision transformers have been shown to work well when pretrained either using large-scale supervised data or with some kind of co-supervision, e.g., in the form of a teacher network.  ...  ACKNOWLEDGMENTS This work was supported in part by the EPSRC Programme Grant (FACER2VM) EP/N007743/1 and the EPSRC/dstl/MURI project EP/R018456/1.  ...
arXiv:2104.03602v2 fatcat:leyl2xvhsnbflnwohrgigxsogy

Unsupervised Pre-Training of Image Features on Non-Curated Data [article]

Mathilde Caron, Piotr Bojanowski, Julien Mairal, Armand Joulin
2019 arXiv   pre-print
To that effect, we propose a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data.  ...  Pre-training general-purpose visual features with convolutional neural networks without relying on annotations is a challenging and important task.  ...  Self-supervision In self-supervised learning, a pretext task is used to extract target labels directly from data [12] . These targets can take a variety of forms.  ... 
arXiv:1905.01278v3 fatcat:kwbjkt4ntba4td3mxg2xtjdsou

Unsupervised Pre-Training of Image Features on Non-Curated Data

Mathilde Caron, Piotr Bojanowski, Julien Mairal, Armand Joulin
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
To that effect, we propose a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data.  ...  Pre-training general-purpose visual features with convolutional neural networks without relying on annotations is a challenging and important task.  ...  Self-supervision In self-supervised learning, a pretext task is used to extract target labels directly from data [12] . These targets can take a variety of forms.  ... 
doi:10.1109/iccv.2019.00305 dblp:conf/iccv/CaronBMJ19 fatcat:3pwrtnqndzd2rm5afyqdluisbe

Towards a Hypothesis on Visual Transformation based Self-Supervision [article]

Dipan K. Pal, Sreena Nallamothu, Marios Savvides
2020 arXiv   pre-print
The VTSS hypothesis helps us identify transformations that have the potential to be effective as a self-supervision task.  ...  The hypothesis was derived by observing a key constraint in the application of self-supervision using a particular transformation.  ...  Note that for a self-supervision task under this framework, all image data is modelled as seed vectors in the distribution P_x, with g_k being a particular instantiation of a transformation, including the  ...
arXiv:1911.10594v2 fatcat:jybwu2xfb5aztncrw6s7wiqrdy

SSAST: Self-Supervised Audio Spectrogram Transformer [article]

Yuan Gong, Cheng-I Jeff Lai, Yu-An Chung, James Glass
2022 arXiv   pre-print
The proposed self-supervised framework significantly boosts AST performance on all tasks, with an average improvement of 60.9%, leading to similar or even better results than a supervised pretrained AST  ...  To the best of our knowledge, it is the first patch-based self-supervised learning framework in the audio and speech domain, and also the first self-supervised learning framework for AST.  ...  Acknowledgments We thank the anonymous reviewers for their insightful comments and suggestions. This work is partly supported by Signify.  ... 
arXiv:2110.09784v2 fatcat:z3rz7pigjrbkvejzs577imc7ky