
Self-Supervised GAN to Counter Forgetting [article]

Ting Chen and Xiaohua Zhai and Neil Houlsby
2018 arXiv   pre-print
To counter forgetting, we encourage the discriminator to maintain useful representations by adding a self-supervision task. Conditional GANs have a similar effect using labels.  ...  When trained on sequential tasks, neural networks exhibit forgetting. For GANs, discriminator forgetting leads to training instability.  ...  Acknowledgments We would like to thank Mario Lucic, Marvin Ritter, Sylvain Gelly, Ilya Tolstikhin, Alexander Kolesnikov and Lucas Beyer for their help with and discussions on this project.  ...
arXiv:1810.11598v2 fatcat:4ulxhadrnbdgrp7adx3urrwski
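
A minimal PyTorch sketch of the core idea in this entry: a discriminator with a shared trunk and two heads, the usual real/fake head plus a self-supervision head. Layer sizes and names are illustrative assumptions, not the authors' code.

    import torch.nn as nn

    class SSDiscriminator(nn.Module):
        """Shared trunk with an adversarial head and a self-supervision head."""
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.gan_head = nn.Linear(128, 1)  # real vs. fake
            self.rot_head = nn.Linear(128, 4)  # which of 4 rotations (the SS task)

        def forward(self, x):
            h = self.trunk(x)
            return self.gan_head(h), self.rot_head(h)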

Self-Supervised GANs via Auxiliary Rotation Loss

Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Under the same conditions, the self-supervised GAN attains performance similar to that of state-of-the-art conditional counterparts.  ...  The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training.  ...  Acknowledgements We would also like to thank Marcin Michalski, Karol Kurach and Anton Raichuk for their help with infrastructure, and major contributions to the Compare GAN library.  ...
doi:10.1109/cvpr.2019.01243 dblp:conf/cvpr/ChenZRLH19 fatcat:kajs42ghnfg4xkfjhz7hlchiem
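
Continuing the sketch from the previous entry, a hedged version of the auxiliary rotation loss itself: every image is rotated by 0/90/180/270 degrees and the rotation head must predict which rotation was applied. The weight alpha is a hypothetical hyperparameter, not a value from the paper.

    import torch
    import torch.nn.functional as F

    def rotation_loss(disc, images):
        # Build a 4-way rotation-classification batch (assumes square images).
        rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
        labels = torch.arange(4, device=images.device).repeat_interleave(images.size(0))
        _, rot_logits = disc(rotated)
        return F.cross_entropy(rot_logits, labels)

    # Discriminator objective: adversarial loss plus a weighted SS term, e.g.
    # d_loss = gan_loss + alpha * rotation_loss(disc, real_images)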

Self-Supervised GANs via Auxiliary Rotation Loss [article]

Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, Neil Houlsby
2019 arXiv   pre-print
Under the same conditions, the self-supervised GAN attains performance similar to that of state-of-the-art conditional counterparts.  ...  The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training.  ...  Acknowledgements We would also like to thank Marcin Michalski, Karol Kurach and Anton Raichuk for their help with infrastructure, and major contributions to the Compare GAN library.  ...
arXiv:1811.11212v2 fatcat:an6skwpx3rfgjlnkt5qyg54kde

Towards Lifelong Self-Supervision For Unpaired Image-to-Image Translation [article]

Victor Schmidt, Makesh Narsimhan Sreedhar, Mostafa ElAraby, Irina Rish
2020 arXiv   pre-print
To alleviate this, we introduce Lifelong Self-Supervision (LiSS) as a way to pre-train an I2IT model (e.g., CycleGAN) on a set of self-supervised auxiliary tasks.  ...  Unpaired Image-to-Image Translation (I2IT) tasks often suffer from a lack of data, a problem that self-supervised learning (SSL) has recently been popular and successful at tackling.  ...  Continual Learning Performance: Our main finding is that Lifelong Self-Supervision partially prevents forgetting.  ...
arXiv:2004.00161v1 fatcat:xwivijmhrrd3ng2cgxc2jhadza
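
A short sketch of what pre-training on a set of self-supervised auxiliary tasks could look like: one shared encoder, one head per task, losses summed. LiSS additionally schedules tasks and distills past heads, which this sketch omits; all names here are assumptions.

    import torch.nn as nn

    class MultiTaskSSL(nn.Module):
        def __init__(self, encoder, heads):
            super().__init__()
            self.encoder = encoder             # shared representation
            self.heads = nn.ModuleDict(heads)  # e.g. {"rotation": ..., "jigsaw": ...}

        def forward(self, x):
            h = self.encoder(x)
            return {name: head(h) for name, head in self.heads.items()}

    def total_ssl_loss(outputs, targets, criteria):
        # Sum per-task losses over all auxiliary tasks.
        return sum(criteria[n](outputs[n], targets[n]) for n in outputs)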

Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game [article]

Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Linxiao Yang, Ngai-Man Cheung
2020 arXiv   pre-print
Self-supervised (SS) learning is a powerful approach for representation learning using unlabeled data. Recently, it has been applied to Generative Adversarial Network (GAN) training.  ...  Specifically, SS tasks were proposed to address the catastrophic forgetting issue in the GAN discriminator.  ...  [4] apply a self-supervised task to help the discriminator counter catastrophic forgetting.  ...
arXiv:1911.06997v2 fatcat:vpvp45dn2fborlmn4n2pr3dc2a
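
One plausible reading of the multi-class minimax game, sketched with hypothetical names: the classifier predicts one of K rotation classes for real images plus an extra (K+1)-th "fake" class for generated samples, so the generator is trained against the SS classifier too.

    import torch
    import torch.nn.functional as F

    K = 4  # rotation classes: 0, 90, 180, 270 degrees

    def d_ss_loss(classifier, real_rotated, rot_labels, fake):
        # Real rotations get their rotation label; fakes get the extra class K.
        fake_labels = torch.full((fake.size(0),), K, dtype=torch.long,
                                 device=fake.device)
        return (F.cross_entropy(classifier(real_rotated), rot_labels)
                + F.cross_entropy(classifier(fake), fake_labels))

    def g_ss_loss(classifier, fake_rotated, rot_labels):
        # The generator wants its rotated samples classified as real rotations.
        return F.cross_entropy(classifier(fake_rotated), rot_labels)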

PREGAN: Pose Randomization and Estimation for Weakly Paired Image Style Translation [article]

Zexi Chen, Jiaxin Guo, Xuecheng Xu, Yunkai Wang, Yue Wang, Rong Xiong
2021 arXiv   pre-print
Towards this goal, one class of methods translates the image style from another environment to the one on which models are trained.  ...  To translate across such images, we propose PREGAN to train a style translator by intentionally transforming the two images with a random pose, and to estimate the given random pose by differentiable non-trainable  ...  Compared to existing self-supervision methods introduced to GANs, which mostly counter the problem of discriminator forgetting with a hand-crafted classifier, our method utilizes a continuous pose  ...
arXiv:2011.00301v2 fatcat:c2bjcnxerbalvhx2fvqxzrpyqq
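
An illustrative sketch of the "random pose" idea: transform both images of a pair with the same random rotation, then recover that rotation. Note the paper uses a differentiable non-trainable estimator; a trainable regressor (pose_net, hypothetical) stands in for it here.

    import torch
    import torch.nn.functional as F
    import torchvision.transforms.functional as TF

    def random_pose_step(pose_net, img_a, img_b):
        # Apply one shared random rotation to both images (batched tensors).
        angle = float(torch.empty(1).uniform_(-45.0, 45.0))
        a_rot, b_rot = TF.rotate(img_a, angle), TF.rotate(img_b, angle)
        # Estimate the rotation from the transformed pair.
        pred = pose_net(torch.cat([a_rot, b_rot], dim=1))
        return F.mse_loss(pred, torch.full_like(pred, angle))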

Memory Protection Generative Adversarial Network (MPGAN): A Framework to Overcome the Forgetting of GANs Using Parameter Regularization Methods

Yifan Chang, Wenbo Li, Jian Peng, Haifeng Li, Yu Kang, Yingliang Huang
2020 IEEE Access  
Similarly, the self-supervised GAN [22] focuses on the discriminator, where class information plays an important role in identifying real images.  ...  Two issues must be addressed to solve the catastrophic forgetting of GANs.  ...
doi:10.1109/access.2020.3028067 fatcat:vrnqhvl3u5ctpj2iaqurjnh2ou
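
A generic elastic-weight-consolidation-style penalty, from the family of parameter regularization methods the title refers to; importance (e.g. Fisher estimates) and old_params are snapshots from previous training, and the exact form MPGAN uses may differ.

    import torch

    def parameter_regularization(model, old_params, importance, lam=1.0):
        # Penalize movement of important parameters away from their old values.
        penalty = 0.0
        for name, p in model.named_parameters():
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
        return lam * penalty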

A survey on data‐efficient algorithms in big data era

Amina Adadi
2021 Journal of Big Data  
Specifically, the survey covers solution strategies that handle data-efficiency by (i) using non-supervised algorithms that are, by nature, more data-efficient, and by (ii) artificially creating more data,  ...  industrial and academic communities calling for more data-efficient models that harness the power of artificial learners while achieving good results with less training data and, in particular, less human supervision  ...  (ii) Self-supervised methods: Self-supervision is a form of unsupervised learning where the data itself provides the supervision.  ...
doi:10.1186/s40537-021-00419-9 fatcat:v4uahsvhlzdldlxqf24bshmja4

CNLL: A Semi-supervised Approach For Continual Noisy Label Learning [article]

Nazmul Karim, Umar Khalid, Ashkan Esmaeili, Nazanin Rahnavard
2022 arXiv   pre-print
The task of continual learning requires careful design of algorithms that can tackle catastrophic forgetting.  ...  After purification, we perform fine-tuning in a semi-supervised fashion that ensures the participation of all available samples.  ...  While SPR [27] employs self-supervised learning in order to create a purified buffer, this type of learning demands long training times and heavy computation, which limits its application in practical  ...
arXiv:2204.09881v1 fatcat:yqdjuupakrhzdmerygsiqw73ty
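
A hedged sketch of buffer purification via the common small-loss trick: noisily labeled samples tend to incur higher loss, so keep the low-loss fraction as "clean" and treat the rest as unlabeled for semi-supervised fine-tuning. The quantile threshold is an assumption, not CNLL's exact rule.

    import torch

    def purify(per_sample_losses, keep_frac=0.5):
        # Samples below the loss quantile are kept as clean; the rest become
        # unlabeled data for the semi-supervised fine-tuning stage.
        threshold = torch.quantile(per_sample_losses, keep_frac)
        clean_mask = per_sample_losses <= threshold
        return clean_mask, ~clean_mask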

On the Performance of Generative Adversarial Network by Limiting Mode Collapse for Malware Detection Systems

Acklyn Murray, Danda B. Rawat
2021 Sensors  
Typically, mode collapse occurs when a GAN fails to fit the set of optimization objectives, leading to instabilities in the generative model and diminishing its capability to generate new content regardless  ...  A generative adversarial network (GAN) has been regarded as a promising solution to many machine learning problems; it comprises a generator and a discriminator, determining patterns and anomalies in  ...  GANs with a supervision signal: SSGAN [29] uses a supervision signal to inform the generator of the approximate output that corresponds to the input noise and ensures that the generated distribution is similar  ...
doi:10.3390/s22010264 pmid:35009810 pmcid:PMC8749644 fatcat:kl3uiq7ykbbcvcc6iciunfiqxa

Towards Continual, Online, Unsupervised Depth [article]

Muhammad Umar Karim Khan
2021 arXiv   pre-print
Regularization and replay-based methods without task boundaries are proposed to avoid catastrophic forgetting while adapting to online data.  ...  Results on both forgetting and adaptation are provided, and they are superior to those of recent methods.  ...  Their work has been advanced in [52], where the authors develop an approach to perform online adaptation without catastrophic forgetting. [30] uses replay at test time.  ...
arXiv:2103.00369v2 fatcat:ecqxi5h6t5hmbha7jojzfqrkva
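
A standard way to do replay without task boundaries is reservoir sampling, sketched below; this is the generic mechanism behind boundary-free replay methods, not necessarily this paper's exact buffer.

    import random

    class ReservoirBuffer:
        def __init__(self, capacity):
            self.capacity, self.data, self.seen = capacity, [], 0

        def add(self, sample):
            # Every sample ever seen is stored with probability capacity/seen.
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(sample)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = sample

        def replay(self, k):
            return random.sample(self.data, min(k, len(self.data)))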

One-shot learning for the long term: consolidation with an artificial hippocampal algorithm [article]

Gideon Kowadlo, Abdelrahman Ahmed, David Rawlinson
2021 arXiv   pre-print
We claim that few-shot learning should be long term, assimilating knowledge for the future, without forgetting previous concepts.  ...  The results demonstrated that with the addition of AHA, the system could learn in one shot and consolidate the knowledge for the long term without catastrophic forgetting.  ...  PM and PR are trained with self-supervised learning, i.e., labels are internally generated. Each of the submodules corresponds to hippocampal subfields or pathways.  ...
arXiv:2102.07503v2 fatcat:ut4lxfe7xbfr7oxjwwv47zuihm

Test-Time Training with Self-Supervision for Generalization under Distribution Shifts [article]

Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, Moritz Hardt
2020 arXiv   pre-print
We turn a single unlabeled test sample into a self-supervised learning problem, on which we update the model parameters before making a prediction.  ...  This also extends naturally to data in an online stream. Our simple approach leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.  ...  This paper took a long time to develop, and benefited from conversations with many of our colleagues, including Ben Recht and his students Ludwig Schmidt, Vaishaal Shanker and Becca Roelofs; Ravi Teja  ... 
arXiv:1909.13231v3 fatcat:edaruivkibf4dl4vfdymj65634
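
A minimal sketch of test-time training as described: before predicting on one unlabeled test batch, take a few gradient steps on a rotation-prediction task that shares the feature extractor, then classify. Module names and the single-optimizer setup are illustrative; the standard variant in the paper resets parameters per sample.

    import torch
    import torch.nn.functional as F

    def test_time_train(features, rot_head, cls_head, x, steps=1, lr=1e-3):
        opt = torch.optim.SGD(features.parameters(), lr=lr)
        for _ in range(steps):
            # Self-supervised task on the test input itself: predict rotations.
            rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
            labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
            loss = F.cross_entropy(rot_head(features(rotated)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            return cls_head(features(x))  # prediction after adaptation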

How Generative Adversarial Networks and Their Variants Work

Yongjun Hong, Uiwon Hwang, Jaeyoon Yoo, Sungroh Yoon
2019 ACM Computing Surveys  
In this paper, we aim to discuss the details of GANs for those readers who are familiar with GANs but do not comprehend them deeply, or who wish to view GANs from various perspectives.  ...  Finally, we enumerate the GAN variants that are applied to various tasks and other fields for those who are interested in exploiting GANs for their research.  ...  Continual learning in deep neural networks suffers from catastrophic forgetting, which refers to forgetting a previously learned task while learning a new task.  ...
doi:10.1145/3301282 fatcat:z2xe6jdh5nd2dmovkes3rav3ke

Generative Models for Novelty Detection: Applications in abnormal event and situational change detection from data series [article]

Mahdyar Ravanbakhsh
2019 arXiv   pre-print
In this thesis, we propose several methods to model the novelty detection problem in unsupervised and semi-supervised fashion.  ...  Therefore, detecting the novel classes in unsupervised and semi-supervised settings is a crucial step in such tasks.  ...  Acknowledgements First of all, I would like to thank my family (Farida, Hamid, Morteza); it is because of their never-ending support that I have had the chance to progress in life.  ...
arXiv:1904.04741v1 fatcat:fdwhsuaoi5hcdbjzcbjh2z6ydu
Showing results 1 — 15 out of 1,121 results