13,846 Hits in 4.6 sec

ContraGAN: Contrastive Learning for Conditional Image Generation [article]

Minguk Kang, Jaesik Park
2021 arXiv   pre-print
Conditional image generation is the task of generating diverse images using class label information.  ...  In this paper, we propose ContraGAN, which considers relations between multiple image embeddings in the same batch (data-to-data relations) as well as data-to-class relations by using a conditional contrastive loss.  ...  Conditional Contrastive Loss: To exploit data-to-data relations, we can adopt loss functions used in self-supervised learning [34] or metric learning [32, 35, 36, 37, 38, 39]. (A toy sketch of such a loss follows this entry.)  ...
arXiv:2006.12681v3 fatcat:tlnifrdon5carcvihhtbytedpa
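To make the conditional contrastive idea concrete, here is a minimal PyTorch sketch of a loss that pulls each image embedding toward its class embedding (data-to-class) and toward other same-label images in the batch (data-to-data). The function name, masking scheme, and temperature are illustrative assumptions, not the authors' exact 2C loss.

```python
# Hypothetical sketch of a conditional contrastive loss in the spirit of the
# entry above; names, masking, and temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def conditional_contrastive_loss(feats, class_embs, labels, temperature=0.1):
    """feats: (B, D) image embeddings; class_embs: (B, D) embeddings of each
    sample's label; labels: (B,) integer class labels."""
    feats = F.normalize(feats, dim=1)
    class_embs = F.normalize(class_embs, dim=1)

    sim_dd = feats @ feats.t() / temperature                 # data-to-data, (B, B)
    sim_dc = (feats * class_embs).sum(dim=1) / temperature   # data-to-class, (B,)

    B = feats.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=feats.device)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    exp_dd = torch.exp(sim_dd).masked_fill(eye, 0.0)         # drop self-pairs
    numerator = torch.exp(sim_dc) + (exp_dd * positives).sum(dim=1)
    denominator = torch.exp(sim_dc) + exp_dd.sum(dim=1)
    return -(numerator / denominator).log().mean()
```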

Context Matters: Graph-based Self-supervised Representation Learning for Medical Images [article]

Li Sun, Ke Yu, Kayhan Batmanghelich
2020 arXiv   pre-print
We introduce a novel approach with two levels of self-supervised representation learning objectives: one on the regional anatomical level and another on the patient level.  ...  Although self-supervised learning enables us to bootstrap the training by exploiting unlabeled data, generic self-supervised methods for natural images do not sufficiently incorporate the context.  ...  Thus, self-supervised pre-training presents an appealing solution in this domain. Some existing works focus on self-supervised methods for learning image-level representations.  ...
arXiv:2012.06457v1 fatcat:4d5byeo2k5cm3m7w5z2knqxwzu

Bootstrap your own latent: A new approach to self-supervised Learning [article]

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu (+2 others)
2020 arXiv   pre-print
We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning.  ...  BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. (A toy sketch of this interplay follows this entry.)  ...  Acknowledgements: The authors would like to thank the following people for their help throughout the process of writing this paper, in alphabetical order: Aaron van den Oord, Andrew Brock, Jason Ramapuram  ...
arXiv:2006.07733v3 fatcat:rjjef33krnbbxgdhtxudgchhg4
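A toy sketch of the online/target interplay described above: the target network is an exponential moving average of the online network, and an online-side predictor regresses the target's embedding of another view, with no negative pairs. The encoder interface, linear predictor, and tau value are placeholders, not the paper's architecture.

```python
# Toy BYOL sketch, assuming `encoder` maps images to (B, dim) embeddings.
import copy
import torch
import torch.nn.functional as F

class BYOLSketch(torch.nn.Module):
    def __init__(self, encoder, dim=256, tau=0.996):
        super().__init__()
        self.online = encoder
        self.target = copy.deepcopy(encoder)        # frozen copy, updated by EMA
        for p in self.target.parameters():
            p.requires_grad = False
        self.predictor = torch.nn.Linear(dim, dim)  # online-only prediction head
        self.tau = tau

    @torch.no_grad()
    def update_target(self):
        # Target weights trail the online weights as an exponential moving average.
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.tau).add_(po, alpha=1.0 - self.tau)

    def loss(self, view1, view2):
        # Online predicts the target's embedding of the other augmented view.
        p = F.normalize(self.predictor(self.online(view1)), dim=1)
        with torch.no_grad():
            z = F.normalize(self.target(view2), dim=1)
        # The paper symmetrizes this loss by also swapping the two views.
        return 2.0 - 2.0 * (p * z).sum(dim=1).mean()
```

In training, `update_target()` would be called after each optimizer step on the online network.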

VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning [article]

Adrien Bardes and Jean Ponce and Yann LeCun
2021 arXiv   pre-print
Recent self-supervised methods for image representation learning are based on maximizing the agreement between embedding vectors from different views of the same image.  ...  This collapse problem is often avoided through implicit biases in the learning architecture, which often lack a clear justification or interpretation. (A sketch of VICReg's three regularizers follows this entry.)  ...  INTRODUCTION: Self-supervised representation learning has made significant progress in recent years, almost reaching the performance of supervised baselines on many downstream tasks Bachman et al.  ...
arXiv:2105.04906v2 fatcat:be5v3kjiyvh2tpuoehiyl7o2zu
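The three regularizers named in the title can be sketched directly on two embedding batches: invariance (agreement across views), variance (a hinge keeping each dimension's standard deviation above a threshold), and covariance (pushing off-diagonal covariance to zero). The coefficients below follow commonly cited defaults but should be treated as assumptions in this sketch.

```python
# Hedged sketch of variance / invariance / covariance terms for two
# embedding batches z1, z2 of shape (B, D).
import torch
import torch.nn.functional as F

def vicreg_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0, eps=1e-4):
    B, D = z1.shape
    invariance = F.mse_loss(z1, z2)                 # agree across views

    def variance(z):
        std = torch.sqrt(z.var(dim=0) + eps)
        return F.relu(1.0 - std).mean()             # hinge: keep each dim's std >= 1

    def covariance(z):
        z = z - z.mean(dim=0)
        cov = (z.t() @ z) / (B - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / D            # decorrelate dimensions

    return (lam * invariance
            + mu * (variance(z1) + variance(z2))
            + nu * (covariance(z1) + covariance(z2)))
```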

Semi-supervised Learning [chapter]

Xiaojin Zhu
2017 Encyclopedia of Machine Learning and Data Mining  
It contrasts with supervised learning (all data labeled) and unsupervised learning (all data unlabeled).  ...  This new representation contains the information of unlabeled data and auxiliary problems. One then performs standard supervised learning with labeled data using the new representation. (A two-stage sketch follows this entry.)  ...
doi:10.1007/978-1-4899-7687-1_749 fatcat:a3pujecbsff5nlahnn36cmqdgi
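The two-stage recipe in the excerpt (learn a representation from unlabeled data, then run standard supervised learning on it) can be illustrated with scikit-learn. PCA here is only a stand-in for whatever unsupervised or auxiliary-task learner is actually used; the data is synthetic.

```python
# Illustration of the excerpt's two-stage semi-supervised recipe.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(1000, 50))      # plentiful unlabeled pool
X_labeled = rng.normal(size=(20, 50))          # scarce labeled examples
y_labeled = rng.integers(0, 2, size=20)

representation = PCA(n_components=10).fit(X_unlabeled)   # stage 1: representation
classifier = LogisticRegression().fit(
    representation.transform(X_labeled), y_labeled)      # stage 2: supervised
```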

Impact of Deep Learning on Transfer Learning : A Review

M. J. Barwary, A. M. Abdulazeez
2021 Zenodo  
...  learning, provide information on present solutions, and appraise applications employed in diverse facets of transfer learning and deep learning.  ...  The objective of this review is to determine more abstract qualities at the greater levels of the representation, by utilising deep learning to detach the variables in the outcomes, formally outline transfer  ...  Of late, a new method known as self-supervised learning has surfaced.  ...
doi:10.5281/zenodo.4559668 fatcat:2sqju4jirbd5jae3gccd3kkm6y

Enhance Images as You Like with Unpaired Learning [article]

Xiaopeng Sun, Muxingzi Li, Tianyu He, Lubin Fan
2021 arXiv   pre-print
In contrast, we propose a lightweight one-path conditional generative adversarial network (cGAN) to learn a one-to-many relation from low-light to normal-light image space, given only sets of low- and normal-light images.  ...  By formulating this ill-posed problem as a modulation code learning task, our network learns to generate a collection of enhanced images from a given input conditioned on various reference images.  ...  Removing this loss can lead to results with an undesirable color distribution (see the fourth column in Fig. 7 for an example).  ...
arXiv:2110.01161v1 fatcat:nju7e7emnvhk5cdny5fjvnuofq

Self-Supervised Learning based Monaural Speech Enhancement with Multi-Task Pre-Training [article]

Yi Li, Yang Sun, Syed Mohsen Naqvi
2021 arXiv   pre-print
In this paper, we propose a multi-task pre-training method to improve speech enhancement performance with self-supervised learning.  ...  In self-supervised learning, it is challenging to reduce the gap between the enhancement performance on the estimated and target speech signals with existing pre-tasks.  ...  conditions [3, 4].  ...
arXiv:2112.11459v1 fatcat:zkn6r7mx3zg3jpforsle5mpuuq

Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning [article]

Zixin Wen, Yuanzhi Li
2021 arXiv   pre-print
Why does contrastive learning usually need much stronger data augmentations than supervised learning to ensure good representations?  ...  In this work, we formally study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.  ...  Self-/unsupervised representation learning has a long history in the literature. In natural language processing (NLP), self-supervised learning has been the major approach [37, 20].  ...
arXiv:2105.15134v3 fatcat:vn4x3swabzfvpngnn2yibjpeje

Learning of Inter-Label Geometric Relationships Using Self-Supervised Learning: Application To Gleason Grade Segmentation [article]

Dwarikanath Mahapatra
2021 arXiv   pre-print
missing mask segments in a self-supervised manner.  ...  We propose a method to synthesize PCa histopathology images by learning the geometric relationship between different disease labels using self-supervised learning.  ...  We use the pre-text approach to leverage the learned representation for generating synthetic images. In one of the first works on self-supervised learning in medical imaging, Jamaludin et al.  ...
arXiv:2110.00404v1 fatcat:276y4oi2nzeqtmslyvs3kmvdrm

Semi-supervised Deep Learning for Stress Prediction: A Review and Novel Solutions

Mazin Alshamrani
2021 International Journal of Advanced Computer Science and Applications  
This research introduces a novel self-supervised deep learning model for stress detection: an intelligent solution that detects the stress state from physiological parameters.  ...  In the second part of the paper, a novel semi-supervised deep learning model for predicting the stress state is proposed.  ...  Finally, to deal with the noise components, a low-pass filter was utilized to remove undesirable frequencies. (A filtering sketch follows this entry.)  ...
doi:10.14569/ijacsa.2021.0120949 fatcat:epl27mblzzfgtpvq3na53ph3y4
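The low-pass filtering step mentioned in the excerpt might look like the following zero-phase Butterworth filter in SciPy. The sampling rate, cutoff, and synthetic signal are invented for illustration; they are not values from the paper.

```python
# Possible shape of the low-pass filtering step for a physiological signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 64.0        # assumed sampling rate (Hz) of a physiological sensor
cutoff = 4.0     # keep slow components, drop high-frequency noise
b, a = butter(N=4, Wn=cutoff / (fs / 2.0), btype="low")

t = np.arange(0.0, 10.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)
smoothed = filtfilt(b, a, raw)   # filtfilt runs forward+backward: zero phase lag
```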

Self-Supervised Learning of Pretext-Invariant Representations [article]

Ishan Misra, Laurens van der Maaten
2019 arXiv   pre-print
Our approach sets a new state-of-the-art in self-supervised learning from images on several popular benchmarks for self-supervised learning.  ...  Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection.  ...  Experiments: Following common practice in self-supervised learning [19, 78], we evaluate the performance of PIRL in transfer-learning experiments.  ...
arXiv:1912.01991v1 fatcat:sxwrs62ptjfbjitgjp2xpu2ave

Mitigating Sampling Bias and Improving Robustness in Active Learning [article]

Ranganath Krishnan, Alok Sinha, Nilesh Ahuja, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
2021 arXiv   pre-print
We propose an unbiased query strategy that selects informative data samples of diverse feature representations with our methods: supervised contrastive active learning (SCAL) and deep feature modeling  ...  We introduce supervised contrastive active learning by leveraging the contrastive loss for active learning under a supervised setting.  ...  Supervised contrastive active learning outperforms other high-performing active learning methods by a large margin in robustness to out-of-distribution data and dataset shift. (A generic selection sketch follows this entry.)  ...
arXiv:2109.06321v1 fatcat:3e4stqr6breqhpucen56kl2pb4
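A generic sketch of the "informative yet diverse" query idea from the excerpt: rank unlabeled samples by model uncertainty, then greedily pick a batch that is spread out in feature space (farthest-point selection). This illustrates the general strategy only; it is not the authors' SCAL or DFM algorithm.

```python
# Generic diverse-and-uncertain batch selection for active learning.
import numpy as np

def select_batch(features, uncertainty, k):
    """features: (N, D) embeddings of unlabeled samples; uncertainty: (N,)."""
    pool = np.argsort(-uncertainty)[: 10 * k]     # most-uncertain candidate pool
    chosen = [pool[0]]
    for _ in range(k - 1):
        # Distance from each candidate to its nearest already-chosen sample.
        dists = np.linalg.norm(
            features[pool, None, :] - features[None, chosen, :], axis=-1
        ).min(axis=1)
        chosen.append(pool[int(np.argmax(dists))])
    return chosen
```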

Deep Reinforcement Learning [article]

Yuxi Li
2018 arXiv   pre-print
We discuss deep reinforcement learning in an overview style. We draw a big picture, filled with details.  ...  Next we discuss RL core elements, including value function, policy, reward, model, exploration vs. exploitation, and representation.  ...  Lanctot et al. (2017) observe that independent RL, in which each agent learns by interacting with the environment, oblivious to other agents, can overfit the learned policies to other agents' policies  ... 
arXiv:1810.06339v1 fatcat:kp7atz5pdbeqta352e6b3nmuhy

Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles [article]

Mehdi Noroozi, Paolo Favaro
2017 arXiv   pre-print
In this paper we study the problem of image representation learning without human annotation.  ...  Our proposed method for learning visual representations outperforms state-of-the-art methods on several transfer learning benchmarks.  ...  Preventing Shortcuts: In a self-supervised learning method, shortcuts exploit information useful for solving the pretext task, but not for a target task such as detection. (A toy jigsaw sketch follows this entry.)  ...
arXiv:1603.09246v3 fatcat:sv46jxbzvfawxhjaoprqcdnu5y
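A toy version of the jigsaw pretext task: cut a 3x3 grid of tiles from an image, return them in a shuffled order, and let the training label be the index of the permutation applied. The random per-tile offset mimics the gap the paper uses against low-level shortcuts; tile size and jitter here are arbitrary choices for this sketch.

```python
# Toy jigsaw pretext task: tiles shuffled by a known permutation.
import numpy as np

def make_jigsaw(image, permutation, tile=64, jitter=8):
    """image: (H, W, C) with H, W >= 3 * (tile + jitter); permutation: perm of 0..8."""
    tiles = []
    for idx in permutation:
        row, col = divmod(int(idx), 3)
        # Random offset inside each grid cell leaves a small gap between crops.
        y = row * (tile + jitter) + np.random.randint(jitter + 1)
        x = col * (tile + jitter) + np.random.randint(jitter + 1)
        tiles.append(image[y : y + tile, x : x + tile])
    return np.stack(tiles)          # (9, tile, tile, C), fed tile-by-tile to a CNN

perm = np.random.permutation(9)     # in practice, drawn from a fixed permutation set
tiles = make_jigsaw(np.zeros((224, 224, 3)), perm)
```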
Showing results 1 — 15 out of 13,846 results