
Automatic Shortcut Removal for Self-Supervised Representation Learning [article]

Matthias Minderer, Olivier Bachem, Neil Houlsby, Michael Tschannen
2020 arXiv   pre-print
In self-supervised visual representation learning, a feature extractor is trained on a "pretext task" for which labels can be generated cheaply, without human annotation.  ...  Additionally, the modifications made by the lens reveal how the choice of pretext task and dataset affects the features learned by self-supervision.  ...  We also thank Sylvain Gelly and the Google Brain team in Zurich for helpful discussions.  ...
arXiv:2002.08822v3 fatcat:xxzqyaocjrbmnlp5vydpjigbmm
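
The entry above describes a "lens" network that edits inputs so the feature extractor cannot rely on shortcut cues. Below is a minimal, hedged sketch of such an adversarial setup; the toy conv nets, the generic 4-way pretext head, and the optimizer settings are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Lens(nn.Module):
    """Small image-to-image network that makes a residual edit of the input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual edit keeps the modification small

# Toy feature extractor with a 4-way pretext head (e.g. rotation prediction).
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4),
)
lens = Lens()
opt_lens = torch.optim.Adam(lens.parameters(), lr=1e-4)
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)

x = torch.randn(8, 3, 32, 32)   # toy image batch
y = torch.randint(0, 4, (8,))   # toy pretext labels

# Lens step: maximize the pretext loss, i.e. remove the easy shortcut cues.
opt_lens.zero_grad()
(-nn.functional.cross_entropy(encoder(lens(x)), y)).backward()
opt_lens.step()

# Encoder step: minimize the pretext loss on lens-processed images
# (lens output detached so only the encoder is updated here).
opt_enc.zero_grad()
nn.functional.cross_entropy(encoder(lens(x).detach()), y).backward()
opt_enc.step()
```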

Self-supervised visual feature learning with curriculum [article]

Vishal Keshav, Fabien Delattre
2020 arXiv   pre-print
Self-supervised learning techniques have shown their ability to learn meaningful feature representations.  ...  Moreover, removing those shortcuts often leads to the loss of some semantically valuable information. We show that this directly impacts the learning speed of the downstream task.  ...  [16], the network learns shortcuts in order to perform better on the self-supervised task.  ...
arXiv:2001.05634v1 fatcat:g6rmvexwzbdb3nlturml37eh3u

Addressing Feature Suppression in Unsupervised Visual Representations [article]

Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogerio Feris, Piotr Indyk, Dina Katabi
2021 arXiv   pre-print
We then present predictive contrastive learning (PCL), a framework for learning unsupervised representations that are robust to feature suppression.  ...  Contrastive learning is one of the fastest growing research areas in machine learning due to its ability to learn useful representations without labeled data.  ...  Reconstructive contrastive learning (RCL) is a framework for self-supervised representation learning.  ...
arXiv:2012.09962v5 fatcat:ft73bfytjfdnrcawdvuvgcb36q
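
The snippet above pairs a contrastive objective with a predictive/reconstructive one to counter feature suppression. The sketch below illustrates that general idea only (an InfoNCE term plus a reconstruction term); the linear encoder/decoder and the weight `lam` are placeholders, not the paper's architecture or exact loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Standard InfoNCE: positives are matching rows, negatives are the rest."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

encoder = torch.nn.Linear(128, 32)      # stand-in feature extractor
decoder = torch.nn.Linear(32, 128)      # reconstructs the input from the feature

x1, x2 = torch.randn(16, 128), torch.randn(16, 128)   # two "views" of a batch
z1, z2 = encoder(x1), encoder(x2)

lam = 0.5   # assumed trade-off weight
loss = info_nce(z1, z2) + lam * F.mse_loss(decoder(z1), x1)
loss.backward()
```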

Learning Visual Representations for Transfer Learning by Suppressing Texture [article]

Shlok Mishra, Anshul Shah, Ankan Bansal, Jonghyun Choi, Abhinav Shrivastava, Abhishek Sharma, David Jacobs
2020 arXiv   pre-print
In self-supervised learning in particular, texture as a low-level cue may provide shortcuts that prevent the network from learning higher level representations.  ...  We empirically show that our method achieves state-of-the-art results on object detection and image classification with eight diverse datasets in either supervised or self-supervised learning tasks such  ...  Our approach yields consistent improvements in both supervised and self-supervised learning settings for learning representations that generalize well across different datasets.  ... 
arXiv:2011.01901v2 fatcat:ncpzsmjgifhbhgdqpjrcbdb4ai
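
The entry above argues that texture acts as a low-level shortcut. As a rough stand-in for the texture-suppression preprocessing described in the abstract (not the authors' exact method), one can wash out high-frequency texture in a training view with heavy blurring:

```python
import torch
from torchvision import transforms

# Heavy blur removes fine texture while keeping shape and layout,
# pushing the encoder toward higher-level cues (illustrative only).
texture_suppress = transforms.Compose([
    transforms.GaussianBlur(kernel_size=9, sigma=(2.0, 4.0)),
    transforms.RandomResizedCrop(224),
])

img = torch.rand(3, 256, 256)    # toy image tensor in [0, 1]
view = texture_suppress(img)     # texture-suppressed training view
```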

Keep the Caption Information: Preventing Shortcut Learning in Contrastive Image-Caption Retrieval [article]

Maurits Bleeker, Andrew Yates, Maarten de Rijke
2022 arXiv   pre-print
We introduce an approach to reduce shortcut feature representations for the ICR task: latent target decoding (LTD).  ...  We add an additional decoder to the learning framework to reconstruct the input caption, which prevents the image and caption encoder from learning shortcut features.  ...  ACKNOWLEDGMENTS We thank Maartje ter Hoeve, Sarah Ibrahimi, Ana Lucic and Julien Rossi for their valuable feedback and discussions.  ... 
arXiv:2204.13382v1 fatcat:alcrgcsq6je67ixtervmtzdjqi
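
The entry above adds a decoder that must reconstruct the caption, so the encoders cannot discard caption information while matching images. A rough sketch of that idea follows; the dimensions, the frozen target embedding, and the MSE reconstruction term are assumptions for illustration, not the exact LTD formulation.

```python
import torch
import torch.nn.functional as F

B, D, T = 32, 256, 384
img_emb = torch.randn(B, D)         # output of the image encoder
cap_emb = torch.randn(B, D)         # output of the caption encoder
frozen_target = torch.randn(B, T)   # caption embedding from a frozen text model (assumed)

decoder = torch.nn.Linear(D, T)     # reconstructs the caption target from cap_emb

def contrastive(a, b, tau=0.07):
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / tau
    labels = torch.arange(a.size(0))
    return F.cross_entropy(logits, labels)

# The reconstruction term keeps the caption representation informative instead
# of collapsing onto the few "shortcut" features needed to match the image.
loss = contrastive(img_emb, cap_emb) + F.mse_loss(decoder(cap_emb), frozen_target)
```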

Can contrastive learning avoid shortcut solutions? [article]

Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, Suvrit Sra
2021 arXiv   pre-print
The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted.  ...  However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact performance on downstream tasks via "shortcuts",  ...  The initial learning rate is set to 0.1 and we fine-tune for 100 epochs on CIFAR10 and 25 epochs on CIFAR100, respectively. An SGD optimizer is used to fine-tune the model.  ...
arXiv:2106.11230v3 fatcat:3zsvbtgq4veozcwzvxz2lqom6i
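
The last fragment quotes a concrete fine-tuning recipe (SGD, initial learning rate 0.1, 100 epochs for CIFAR10 and 25 for CIFAR100). A toy version of that loop, with a stand-in linear head and random tensors in place of real CIFAR features, might look like this:

```python
import torch

model = torch.nn.Linear(512, 10)                       # stand-in head on learned features
opt = torch.optim.SGD(model.parameters(), lr=0.1)      # SGD, initial lr 0.1 (as quoted)
feats = torch.randn(256, 512)                          # toy features
labels = torch.randint(0, 10, (256,))                  # toy labels

for epoch in range(100):                               # 25 epochs for CIFAR100
    loss = torch.nn.functional.cross_entropy(model(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```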

Discovery of Visual Semantics by Unsupervised and Self-Supervised Representation Learning [article]

Gustav Larsson
2017 arXiv   pre-print
Additionally, it gives us a method for self-supervised representation learning. In order for the model to appropriately re-color a grayscale object, it must first be able to identify it.  ...  In particular, we propose to use self-supervised automatic image colorization.  ...  Returning to representation learning, we demonstrate in this chapter how colorization can be used for self-supervised representation learning and how it  ...
arXiv:1708.05812v1 fatcat:w77w3q3ms5c5fnyzl65mkj4ozy

Self-Supervised Multi-View Synchronization Learning for 3D Pose Estimation [article]

Simon Jenni, Paolo Favaro
2020 arXiv   pre-print
In contrast, we propose an approach that can exploit small annotated data sets by fine-tuning networks pre-trained via self-supervised learning on (large) unlabeled data sets.  ...  To drive such networks towards supporting 3D pose estimation during the pre-training step, we introduce a novel self-supervised feature learning task designed to focus on the 3D structure in an image.  ...  Self-supervised learning. Self-supervised learning is a type of unsupervised representation learning that has demonstrated impressive performance on image and video benchmarks.  ... 
arXiv:2010.06218v1 fatcat:hfxjwjacp5gdrhhifuhf2ebhde

Semi-parametric Topological Memory for Navigation [article]

Nikolay Savinov, Alexey Dosovitskiy, Vladlen Koltun
2018 arXiv   pre-print
We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals.  ...  Removing vision-based shortcuts from the graph leads to a dramatic decline in performance.  ...  The R and L networks are trained in a self-supervised fashion, without any manual labeling or reward signal.  ...
arXiv:1803.00653v1 fatcat:q6mk5gdtojcsvm4atqg7huheai

Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction

Yuyun Huang, Jinhua Du
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
Specifically, we first propose an internal self-attention mechanism between the convolution operations in convolutional neural networks (CNNs) to learn a better sentence representation from the noisy  ...  Although Distantly Supervised Relation Extraction (DSRE) benefits from automatic labelling, it suffers from serious mislabelling issues, i.e., some or all of the instances for an entity pair (head and tail  ...  We would like to thank Emer Gilmartin (gilmare@tcd.ie) for helpful comments and presentation improvements.  ...
doi:10.18653/v1/d19-1037 dblp:conf/emnlp/HuangD19 fatcat:txbpo52ycjdrridb77sgjcx3xi

Greedy Gradient Ensemble for Robust Visual Question Answering [article]

Xinzhe Han, Shuhui Wang, Chi Su, Qingming Huang, Qi Tian
2021 arXiv   pre-print
We further propose a new de-bias framework, Greedy Gradient Ensemble (GGE), which combines multiple biased models for unbiased base model learning.  ...  Based on experimental analysis of existing robust VQA methods, we stress that the language bias in VQA comes from two aspects, i.e., distribution bias and shortcut bias.  ...  In the future, we will extend GGE to solve bias problems for other tasks, provide a more rigorous analysis to guarantee model convergence, and learn to automatically detect different kinds of bias features  ...
arXiv:2107.12651v4 fatcat:fj3yfxqtkjgpfg5gpem5b5jdg4

Motion Segmentation using Frequency Domain Transformer Networks [article]

Hafez Farazi, Sven Behnke
2020 arXiv   pre-print
Self-supervised prediction is a powerful mechanism to learn representations that capture the underlying structure of the data.  ...  Despite recent progress, the self-supervised video prediction task is still challenging.  ...  The Tagger network learns to group the representations of different objects and backgrounds iteratively in a self-supervised way. Hsieh et al.  ... 
arXiv:2004.08638v1 fatcat:rkhyiqwl5jfrdiazmn4rsrpise

Video Jigsaw: Unsupervised Learning of Spatiotemporal Context for Video Action Recognition [article]

Unaiza Ahsan, Rishi Madhok, Irfan Essa
2018 arXiv   pre-print
We propose a self-supervised learning method to jointly reason about spatial and temporal context for video recognition.  ...  We propose to combine spatial and temporal context in one self-supervised framework without any heavy preprocessing.  ...  Avoiding Network Shortcuts: As mentioned in recent self-supervised approaches [9, 34, 35], it is imperative to deal with the self-supervised network's tendency to learn the patch locations via low level  ...
arXiv:1808.07507v1 fatcat:pddkk2qnpvc4xm5jf3z5ga2hvi
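
The last excerpt notes that patch-based pretext tasks must keep the network from exploiting low-level cues such as exact patch boundaries or chromatic aberration. The helper below sketches two typical shortcut-avoidance tricks (random gaps between patches and per-channel color scaling); it is an assumption about common practice, not the paper's exact preprocessing.

```python
import torch

def extract_patch(img, top, left, size=64, max_jitter=8):
    """Crop a patch at a randomly jittered location so patch borders do not
    line up exactly, removing edge-continuity shortcuts."""
    dy = int(torch.randint(-max_jitter, max_jitter + 1, (1,)))
    dx = int(torch.randint(-max_jitter, max_jitter + 1, (1,)))
    y = max(0, min(top + dy, img.shape[1] - size))
    x = max(0, min(left + dx, img.shape[2] - size))
    patch = img[:, y:y + size, x:x + size]
    # Random per-channel scaling suppresses chromatic-aberration cues.
    return patch * (0.8 + 0.4 * torch.rand(3, 1, 1))

img = torch.rand(3, 256, 256)
tile = extract_patch(img, top=32, left=96)   # one jigsaw tile
```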

Sense and Learn: Self-Supervision for Omnipresent Sensors [article]

Aaqib Saeed, Victor Ungureanu, Beat Gfeller
2021 arXiv   pre-print
We present a generalized framework named Sense and Learn for representation or feature learning from raw sensory data.  ...  The self-learning nature of our methodology opens up exciting possibilities for on-device continual learning.  ...  Lyon for their valuable feedback and help with this work.  ... 
arXiv:2009.13233v2 fatcat:ver2i7o5zvgv3boterps4tqxcu

A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks [article]

Huanru Henry Mao
2020 arXiv   pre-print
We provide an overview of the taxonomy for self-supervised learning and transfer learning, and highlight some prominent methods for designing pre-training tasks across different domains.  ...  Self-supervised pre-training for transfer learning is becoming an increasingly popular technique to improve state-of-the-art results using unlabeled data.  ...  Cottrell for providing comments, advice and editorial assistance. Thanks to Bodhisattwa Prasad Majumder for providing proofreading assistance.  ... 
arXiv:2007.00800v1 fatcat:jgjl2l7wqfaq5do4vre5fryuoe
Showing results 1 — 15 out of 2,606 results