15,571 Hits in 9.0 sec

Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data [article]

Shohreh Deldari, Hao Xue, Aaqib Saeed, Jiayuan He, Daniel V. Smith, Flora D. Salim
2022 arXiv   pre-print
The popularity of self-supervised learning is driven by the fact that traditional models typically require a huge amount of well-annotated data for training.  ...  Recently, Self-Supervised Representation Learning (SSRL) has attracted much attention in the fields of computer vision, speech, natural language processing (NLP), and, more recently, other modalities  ...  Acknowledgments The authors would like to acknowledge the support from the CSIRO Data61 Scholarship program (Grant number 500588), the RMIT Research International Tuition Fee Scholarship, and the Australian Research Council  ... 
arXiv:2206.02353v2 fatcat:ljkxvfxsand43otrpq4effs7cq

Rumor Detection with Self-supervised Learning on Texts and Social Graph [article]

Yuan Gao, Xiang Wang, Xiangnan He, Huamin Feng, Yongdong Zhang
2022 arXiv   pre-print
In this work, we explore contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.  ...  We term this framework Self-supervised Rumor Detection (SRD). Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.  ...  To the best of our knowledge, we are the first to leverage self-supervised learning for rumor detection on social media. • We propose cluster-wise and instance-wise discrimination as the self-supervised  ... 
arXiv:2204.08838v1 fatcat:ybmyd4ipxfh3zamwxcd53ha7k4

Recent Advancements in Self-Supervised Paradigms for Visual Feature Representation [article]

Mrinal Anand, Aditya Garg
2021 arXiv   pre-print
This study conducts a comprehensive and insightful survey and analysis of recent developments in the self-supervised paradigm for feature representation.  ...  We witnessed massive growth in the supervised learning paradigm in the past decade. Supervised learning requires a large amount of labeled data to reach state-of-the-art performance.  ...  Clustering-based methods such as DeepCluster [8] use a simple algorithm such as k-means together with an encoder to learn representations for visual features.  ... 
arXiv:2111.02042v1 fatcat:e6ec3auu7vaodluwraxrzzdtxa
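The DeepCluster recipe mentioned in the snippet above (running k-means over encoder features and reusing the cluster assignments as pseudo-labels) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the toy blob features below stand in for real encoder outputs:

```python
import numpy as np

def kmeans(features, k, iters=10, seed=0):
    """Plain k-means; the cluster assignments serve as pseudo-labels."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        dists = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Re-estimate each center from its current members.
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy "encoder output": two well-separated blobs standing in for image features.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(5.0, 0.1, (20, 8))])
pseudo_labels = kmeans(feats, k=2)
# In DeepCluster, these pseudo-labels would supervise the next training epoch.
```

In the actual method the encoder and the clustering alternate every epoch; only the pseudo-labeling step is shown here.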

A Survey on Contrastive Self-supervised Learning [article]

Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon
2021 arXiv   pre-print
Specifically, contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.  ...  It is capable of adopting self-defined pseudo-labels as supervision and using the learned representations for several downstream tasks.  ...  This helps to estimate the effectiveness of the self-supervised approach [56].  ... 
arXiv:2011.00362v3 fatcat:7md5sjlws5fnfivbq7pjhzauna
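Contrastive methods of the kind this survey covers typically score each anchor against its own augmented view versus all other samples in the batch. A numpy sketch of the widely used InfoNCE objective (the toy batch size and temperature value are illustrative assumptions, not values from the survey):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: each anchor should match its own positive against the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The "pseudo-label" for row i is simply i: its own augmented view.
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()

# Two augmented views of the same 4 samples: positives are noisy copies.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
loss_aligned = info_nce(x, x + 0.01 * rng.normal(size=x.shape))
loss_random = info_nce(x, rng.normal(size=(4, 16)))
# Matched views should incur a much smaller loss than random pairings.
```

The self-defined pseudo-label here is just the sample's own index, which is what lets the loss be computed without any annotation.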

A Survey on Contrastive Self-Supervised Learning

Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon
2020 Technologies  
Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains.  ...  It is capable of adopting self-defined pseudo-labels as supervision and using the learned representations for several downstream tasks.  ...  Conflicts of Interest: The authors declare no conflicts of interest.  ... 
doi:10.3390/technologies9010002 fatcat:j7lkmrb2prd5vbjdof5p3mf2ke

HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units [article]

Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed
2021 arXiv   pre-print
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound  ...  Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech  ...  The need for such high-fidelity representations drove research in self-supervised learning for speech and audio where the targets driving the learning process of a designed pretext task are drawn from  ... 
arXiv:2106.07447v1 fatcat:y2x227ubtzbmzduuphvlptoghy
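The k-means "teacher" described in the HuBERT abstract can be illustrated by the target-construction step alone: each frame is assigned to its nearest cluster centroid, and the student model is asked to predict those discrete unit IDs at masked positions. The frame and codebook sizes below are hypothetical, and the real model masks contiguous spans rather than independent frames:

```python
import numpy as np

def hubert_style_targets(frame_feats, codebook, mask_prob=0.3, seed=0):
    """Assign each frame to its nearest codebook entry (the k-means teacher),
    then pick a random set of frames whose unit IDs the student must predict."""
    dists = np.linalg.norm(frame_feats[:, None] - codebook[None], axis=-1)
    units = dists.argmin(axis=1)       # discrete pseudo-label per frame
    rng = np.random.default_rng(seed)
    mask = rng.random(len(frame_feats)) < mask_prob
    return units, mask

# Toy setup: 10 "MFCC frames" of dim 4 and a 3-entry codebook (invented sizes).
rng = np.random.default_rng(1)
frames = rng.normal(size=(10, 4))
codebook = rng.normal(size=(3, 4))
units, mask = hubert_style_targets(frames, codebook)
# A student network would be trained to predict units[mask] from unmasked audio.
```

Iterating this procedure (re-clustering the student's own features to get a better teacher) is what the abstract's "two iterations of clustering" refers to.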

Unsupervised Speech Recognition [article]

Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, Michael Auli
2022 arXiv   pre-print
We leverage self-supervised speech representations to segment unlabeled audio and learn a mapping from these representations to phonemes via adversarial training.  ...  This paper describes wav2vec-U, short for wav2vec Unsupervised, a method to train speech recognition models without any labeled data.  ...  the setup of Chen et al. (2019), Marc'Aurelio Ranzato for general helpful discussions, and Ruth Kipng'eno, Ruth Ndila Ndeto as well as Mark Mutitu for error analysis of our Swahili model.  ... 
arXiv:2105.11084v3 fatcat:tx63si7jpfdpxowaw7mkyg3vhi

Semi-supervised Learning with Weakly-Related Unlabeled Data: Towards Better Text Categorization

Liu Yang, Rong Jin, Rahul Sukthankar
2008 Neural Information Processing Systems  
For empirical evaluation, we present a direct comparison with a number of state-of-the-art methods for inductive semi-supervised learning and text categorization.  ...  The cluster assumption is exploited by most semi-supervised learning (SSL) methods.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF and NIH.  ... 
dblp:conf/nips/YangJS08 fatcat:z5crr4ryovatbiivzvbhrmvu24

A Survey on Semi-, Self- and Unsupervised Learning for Image Classification

Lars Schmarje, Monty Santarossa, Simon-Martin Schröder, Reinhard Koch
2021 IEEE Access  
The degree of supervision needed to achieve results comparable to using all labels is decreasing, and therefore methods need to be extended to settings with a variable number of classes.  ...  While deep learning strategies achieve outstanding results in computer vision tasks, one issue remains: the current strategies rely heavily on a huge amount of labeled data.  ...  However, for a large or increasing number of classes, the ideas of unsupervised learning are still of high importance, and ideas from semi-supervised and self-supervised learning need to be transferred  ... 
doi:10.1109/access.2021.3084358 fatcat:aiznkxq47rdspha7xnpj5iwfxe

Self-supervised contrastive learning on agricultural images

Ronja Güldenring, Lazaros Nalpantidis
2021 Computers and Electronics in Agriculture  
This observation has motivated us to explore the applicability of self-supervised contrastive learning on agricultural images.  ...  We then require only a limited number of annotated images to fine-tune those networks in a supervised training manner for relevant downstream tasks, such as plant classification or segmentation.  ...  and sustainability of dairy farming (GALIRUMI)", H2020-SPACE-EGNSS-2019-870258.  ... 
doi:10.1016/j.compag.2021.106510 fatcat:6d7if5lu25bcbl5ice6nyemvra

Survey on Self-Supervised Learning: Auxiliary Pretext Tasks and Contrastive Learning Methods in Imaging

Saleh Albelwi
2022 Entropy  
It details the motivation for this research, a general pipeline of SSL, the terminologies of the field, and provides an examination of pretext tasks and self-supervised methods.  ...  This paper provides a comprehensive literature review of the top-performing SSL methods using auxiliary pretext and contrastive learning techniques.  ...  Conflicts of Interest: The author declares no conflict of interest.  ... 
doi:10.3390/e24040551 pmid:35455214 pmcid:PMC9029566 fatcat:dcs5shccu5frjmejfpvt6skwem

"Is depression related to cannabis?": A knowledge-infused model for Entity and Relation Extraction with Limited Supervision [article]

Kaushik Roy, Usha Lokala, Vedant Khandelwal, Amit Sheth
2021 arXiv   pre-print
Because of the lack of annotations due to the limited availability of the domain experts' time, we use supervised contrastive learning in conjunction with GPT-3 trained on a vast corpus to achieve improved  ...  With strong marketing advocacy of the benefits of cannabis use for improved mental health, cannabis legalization is a priority among legislators.  ...  (2) A Supervised Contrastive Learning Module uses a triplet loss to learn a representation space for the cannabis and depression phrases through supervised contrastive learning.  ... 
arXiv:2102.01222v1 fatcat:ej5mcjrduvbmlnoxxv3gaimkju
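The triplet loss named in the snippet pulls an anchor toward a same-class positive while pushing it away from a different-class negative by at least a margin. The 2-D "phrase embeddings" below are invented for illustration; the paper's embeddings come from its knowledge-infused model:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on the gap between anchor-positive and anchor-negative distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical embeddings: two phrases of one class and one of the other.
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # same class: already close to the anchor
negative = np.array([-1.0, 0.0])  # other class: already far from the anchor
loss = triplet_loss(anchor, positive, negative)
```

A well-separated triplet yields zero loss; swapping the positive and negative violates the margin and produces a positive loss that training would minimize.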

Dynamic Contrastive Distillation for Image-Text Retrieval [article]

Jun Rao, Liang Ding, Shuhan Qi, Meng Fang, Yang Liu, Li Shen, Dacheng Tao
2022 arXiv   pre-print
First, to achieve multi-modal contrastive learning and balance training costs and effects, we propose to use a teacher network to estimate the difficult samples for students, making the students absorb  ...  and students' self-learning ability.  ...  Unlike typical self-supervised contrastive learning [7], [47], image-text retrieval uses a supervised learning paradigm.  ... 
arXiv:2207.01426v1 fatcat:2zg2kgfj6nfefan36bpvsqs65i

Unsupervised Semantic Segmentation with Self-supervised Object-centric Representations [article]

Andrii Zadaianchuk, Matthaeus Kleindessner, Yi Zhu, Francesco Locatello, Thomas Brox
2022 arXiv   pre-print
In this paper, we show that recent advances in self-supervised feature learning enable unsupervised object discovery and semantic segmentation with a performance that matches the state of the field on  ...  We propose a methodology based on unsupervised saliency masks and self-supervised feature clustering to kickstart object discovery, followed by training a semantic segmentation network on pseudo-labels  ...  Acknowledgments We would like to thank Yash Sharma and Maximilian Seitzer for their insightful discussions and practical advice.  ... 
arXiv:2207.05027v1 fatcat:o6occo2k4zdyrmnlnghknx52wm

InferCode: Self-Supervised Learning of Code Representations by Predicting Subtrees [article]

Nghi D. Q. Bui, Yijun Yu, Lingxiao Jiang
2020 arXiv   pre-print
cross-language code search or reused under a transfer learning scheme to continue training the model weights for supervised tasks such as code classification and method name prediction.  ...  We trained an InferCode model instance using the Tree-based CNN as the encoder of a large set of Java code and applied it to downstream unsupervised tasks such as code clustering, code clone detection,  ...  This notion of self-supervised learning is very suitable for our aim.  ... 
arXiv:2012.07023v2 fatcat:jxhfs2a6qfeabehgjvt4bavkfe
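The self-supervision signal InferCode describes (subtrees of the code itself as prediction targets) can be approximated with Python's stdlib ast module: every node type occurring in a snippet is a free label that requires no human annotation. This is only a sketch of the pseudo-label extraction, not the paper's tree-based CNN pipeline:

```python
import ast

def subtree_labels(source):
    """InferCode-style pseudo-labels: the node types of every subtree in a
    code snippet serve as free training targets, with no human annotation."""
    tree = ast.parse(source)
    return sorted({type(node).__name__ for node in ast.walk(tree)})

labels = subtree_labels("def add(a, b):\n    return a + b")
# Each label is a subtree kind the encoder learns to predict from the snippet.
```

An encoder trained to predict which subtrees a snippet contains learns a representation reusable for clustering, clone detection, or fine-tuned classification, as the abstract describes.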
Showing results 1 — 15 out of 15,571 results