22,640 hits in 5.5 seconds

Learning robust visual representations using data augmentation invariance

Alex Hernandez-Garcia, Peter König, Tim Kietzmann
<span title="">2019</span> <i title="Cognitive Computational Neuroscience"> 2019 Conference on Cognitive Computational Neuroscience </i> &nbsp; <span class="release-stage">unpublished</span>
As a solution, we propose data augmentation invariance, an unsupervised learning objective which improves the robustness of the learned representations by promoting the similarity between the activations ... categorization are not sufficiently robust to identity-preserving image transformations commonly used in data augmentation. ... Taking inspiration from this property of the visual cortex, we have proposed an unsupervised learning objective to encourage learning more robust features, using data augmentation as the framework to transform ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32470/ccn.2019.1242-0">doi:10.32470/ccn.2019.1242-0</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qxl6da5gyrdrvkwmebnyjgz6pm">fatcat:qxl6da5gyrdrvkwmebnyjgz6pm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210124130333/https://baicsworkshop.github.io/pdf/BAICS_3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7e/85/7e85f534268d914fcef0f43d6f99ce7c5f5878ec.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32470/ccn.2019.1242-0"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Supervised Contrastive Learning for Accented Speech Recognition [article]

Tao Han, Hantao Huang, Ziang Yang, Wei Han
<span title="2021-07-02">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
From the experiments on the Common Voice dataset, we have shown that contrastive learning helps to build data-augmentation-invariant and pronunciation-invariant representations, which significantly outperform ... To build different views (similar "positive" data samples) for contrastive learning, three data augmentation techniques including noise injection, spectrogram augmentation and TTS-same-sentence generation ... This definition helps us learn a robust and accent-invariant feature without changing the original ASR architecture. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.00921v1">arXiv:2107.00921v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ofxdq4x6yfbpxemod3tzfrshpm">fatcat:ofxdq4x6yfbpxemod3tzfrshpm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210710013444/https://arxiv.org/pdf/2107.00921v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/70/9a/709a06b86bc92a615516643e908db6847e76ca82.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.00921v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Representation Learning via Invariant Causal Mechanisms [article]

Jovana Mitrovic, Brian McWilliams, Jacob Walker, Lars Buesing, Charles Blundell
<span title="2020-10-15">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signal by pretraining representations only using unlabeled data.  ...  In this paper we analyze self-supervised representation learning using a causal framework.  ...  able to generate all possible styles using a fixed set of data augmentations, we will use augmentations that generate large sets of diverse styles as this allows us to learn better representations.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.07922v1">arXiv:2010.07922v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/r5dijhfoxrblndt2jq3ysi5seu">fatcat:r5dijhfoxrblndt2jq3ysi5seu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201123091519/https://arxiv.org/pdf/2010.07922v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/57/83/57835c5ad5424f94ee75901c3113730f3900e656.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.07922v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

What Should Not Be Contrastive in Contrastive Learning [article]

Tete Xiao, Xiaolong Wang, Alexei A. Efros, Trevor Darrell
<span title="2021-03-18">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Recent self-supervised contrastive methods have been able to produce impressive transferable visual representations by learning to be invariant to different data augmentations.  ...  Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces, each of which is invariant to all but one augmentation.  ...  augmentation is applied.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.05659v2">arXiv:2008.05659v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/giritvqqrjc7to7vcetsac5mr4">fatcat:giritvqqrjc7to7vcetsac5mr4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200818201201/https://arxiv.org/pdf/2008.05659v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.05659v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Study on Representation Invariances of CNNs and Human Visual Information Processing Based on Data Augmentation

Yibo Cui, Chi Zhang, Kai Qiao, Linyuan Wang, Bin Yan, Li Tong
<span title="2020-09-02">2020</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/5hwrtdnkjvclroyxzt4ty5ijb4" style="color: black;">Brain Sciences</a> </i> &nbsp;
To investigate their relationship under common conditions, we proposed a representation invariance analysis approach based on data augmentation technology. ... Firstly, the original image library was expanded by data augmentation. ... However, data augmentation was used here from another perspective, namely to study the mechanisms of invariant representation in CNNs and in fMRI visual information processing. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/brainsci10090602">doi:10.3390/brainsci10090602</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/32887405">pmid:32887405</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nhqigotkavamnh2dfscf6zblma">fatcat:nhqigotkavamnh2dfscf6zblma</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201107115649/https://res.mdpi.com/d_attachment/brainsci/brainsci-10-00602/article_deploy/brainsci-10-00602.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7c/94/7c94481a61c73f89837dc82a304554d03fd7d79d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/brainsci10090602"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>

Robust Compare Network for Few-Shot Learning

Yixin Yang, Yang Li, Rui Zhang, Jiabao Wang, Zhuang Miao
<span title="">2020</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
Intuitively, data augmentation methods can teach a deep model robustness, which is used here to overcome weak rotational invariance. ... [Figure: (a) data augmentation and (b) inner augmentation branches, each with an encoder, a shift-invariant block and a self-attention block; Table 1: recognition results] ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.3012720">doi:10.1109/access.2020.3012720</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kwgz35ngsrdbbgfg2y46gqfjpm">fatcat:kwgz35ngsrdbbgfg2y46gqfjpm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201108082838/https://ieeexplore.ieee.org/ielx7/6287639/6514899/09151953.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7b/27/7b2756a387b487f917f11d7190d780584cb61093.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.3012720"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Exploring Representational Alignment with Human Perception Using Identically Represented Inputs [article]

Vedant Nanda and Ayan Majumdar and Camila Kolling and John P. Dickerson and Krishna P. Gummadi and Bradley C. Love and Adrian Weller
<span title="2022-05-30">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We find that architectures with residual connections trained using a self-supervised contrastive loss with ℓ_p ball adversarial data augmentation tend to learn the most human-like invariances.  ...  We contribute to the study of the quality of learned representations.  ...  Data Augmentation Hand-crafted data augmentations are commonly used in deep learning pipelines.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.14726v2">arXiv:2111.14726v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pwerezkgdrhh5abvxl2hko6m2q">fatcat:pwerezkgdrhh5abvxl2hko6m2q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220605160840/https://arxiv.org/pdf/2111.14726v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f1/31/f13171d468cf6eb1484af0b0677254f6f75769b6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.14726v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Unsupervised Adversarial Invariance [article]

Ayush Jaiswal, Yue Wu, Wael AbdAlmageed, Premkumar Natarajan
<span title="2018-09-26">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Our unsupervised model outperforms state-of-the-art methods, which are supervised, at inducing invariance to inherent nuisance factors, effectively using synthetic data augmentation to learn invariance  ...  Data representations that contain all the information about target variables but are invariant to nuisance factors benefit supervised learning algorithms by preventing them from learning associations between  ...  the learned latent representation of data that is used for prediction.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.10083v1">arXiv:1809.10083v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cf2kwbhirndhpkex2ginbatt6e">fatcat:cf2kwbhirndhpkex2ginbatt6e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929164548/https://arxiv.org/pdf/1809.10083v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f8/4c/f84c506ac78cb6a3418f4b4c53c11b58dbc5d192.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.10083v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

On the use of Cortical Magnification and Saccades as Biological Proxies for Data Augmentation [article]

Binxu Wang, David Mayo, Arturo Deza, Andrei Barbu, Colin Conwell
<span title="2021-12-14">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Self-supervised learning is a powerful way to learn useful representations from natural data.  ...  In this paper, we attempt to reverse-engineer these augmentations to be more biologically or perceptually plausible while still conferring the same benefits for encouraging robust representation.  ...  In other words, to generate invariant representations against augmentations may be one goal our visual system uses to train itself in order to learn object recognition.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.07173v1">arXiv:2112.07173v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ziaual5ndnasvfrlaqmz5nywfi">fatcat:ziaual5ndnasvfrlaqmz5nywfi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211216225701/https://arxiv.org/pdf/2112.07173v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e4/45/e445fc45793e11390220bffc92dc52bc8d22abbd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.07173v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

What Can Style Transfer and Paintings Do For Model Robustness? [article]

Hubert Lin, Mitchell van Zuijlen, Sylvia C. Pont, Maarten W.A. Wijntjes, Kavita Bala
<span title="2021-05-27">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Data augmentations encourage models to learn desired invariances, such as invariance to horizontal flipping or small changes in color.  ...  Second, we show that learning from paintings as a form of perceptual data augmentation can improve model robustness.  ...  Data augmentations are transformations applied to images to enforce useful model invariances.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.14477v2">arXiv:2011.14477v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2z6g6d5ni5fbjjjinytfjvuioq">fatcat:2z6g6d5ni5fbjjjinytfjvuioq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210603143534/https://arxiv.org/pdf/2011.14477v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2f/76/2f760d26b1172f9983292ae5c9151e164f45ea4b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.14477v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An Exploration of Consistency Learning with Data Augmentation

Connor Shorten, Taghi M. Khoshgoftaar
<span title="2022-05-04">2022</span> <i title="University of Florida George A Smathers Libraries"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/qsmy2pq4ofbv7pwhg3dhn3kmmy" style="color: black;">Proceedings of the ... International Florida Artificial Intelligence Research Society Conference</a> </i> &nbsp;
However, Supervised Learning still fails to be Robust, making different predictions for original and augmented data points.  ...  Data augmentation has enabled Supervised Learning with less labeled data, while avoiding the pitfalls of overfitting.  ...  Data Augmentation to achieve more robust Deep Learning models.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32473/flairs.v35i.130669">doi:10.32473/flairs.v35i.130669</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ymy2wxjknjbrhljxghmk7apscq">fatcat:ymy2wxjknjbrhljxghmk7apscq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220506073851/https://journals.flvc.org/FLAIRS/article/download/130669/133865" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1f/2d/1f2d2796b767256268c22bfa1874fc07976678f0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32473/flairs.v35i.130669"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Data augmentation and image understanding [article]

Alex Hernandez-Garcia
<span title="2020-12-28">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This work focuses on learning representations that are more aligned with visual perception and biological vision. ... A central subject of this dissertation is data augmentation, a commonly used technique for training artificial neural networks that augments the size of data sets through transformations of the images. ... This led us to propose data augmentation invariance, an objective function that encourages robust representations, inspired by the invariance observed in the visual cortex. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.14185v1">arXiv:2012.14185v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qcip4vstzvbxzo4qevek5marrm">fatcat:qcip4vstzvbxzo4qevek5marrm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210103060923/https://arxiv.org/pdf/2012.14185v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5e/21/5e21d7ce531a81817d5f4de8ff5de3e444cf6629.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.14185v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Unified Adversarial Invariance [article]

Ayush Jaiswal, Yue Wu, Wael AbdAlmageed, Premkumar Natarajan
<span title="2019-09-04">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Invariance to nuisance is achieved by learning a split representation of data through competitive training between the prediction task and a reconstruction task coupled with disentanglement, whereas that ... We present a unified invariance framework for supervised neural networks that can induce independence from nuisance factors of data without using any nuisance annotations, but can additionally use labeled ... Effective use of synthetic data augmentation for learning invariance: data is often not available for all possible variations of nuisance factors. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.03629v2">arXiv:1905.03629v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/l375bsh3t5ghrgp5evhnoaxyfm">fatcat:l375bsh3t5ghrgp5evhnoaxyfm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929235727/https://arxiv.org/pdf/1905.03629v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/49/f3/49f363c27e21276c1a41b91196195303e584402e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.03629v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

MixSiam: A Mixture-based Approach to Self-supervised Representation Learning [article]

Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du
<span title="2021-11-04">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Recently contrastive learning has shown significant progress in learning visual representations from unlabeled data.  ...  Thus the learned model is more robust compared to previous contrastive learning methods.  ...  By considering both of the relationships, the robustness and the discrimination ability of learned representations are improved.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.02679v1">arXiv:2111.02679v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jmju6d44knht7g3xqxnbmvvucq">fatcat:jmju6d44knht7g3xqxnbmvvucq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211106221720/https://arxiv.org/pdf/2111.02679v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1d/75/1d7517e48be66c93a1287db19ea982f56dfb16a7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.02679v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Polar Transformation on Image Features for Orientation-Invariant Representations

Jinhui Chen, Zhaojie Luo, Zhihong Zhang, Faliang Huang, Zhiling Ye, Tetsuya Takiguchi, Edwin R. Hancock
<span title="">2018</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/sbzicoknnzc3tjljn7ifvwpooi" style="color: black;">IEEE transactions on multimedia</a> </i> &nbsp;
Although vast numbers of alternative robust feature representation models have been proposed to improve the performance of different visual tasks, most existing feature representations (e.g. handcrafted ... Experimental results show that the proposed orientation-invariant image representation, based on polar models for both handcrafted features and deep learning features, is both competitive with state-of-the-art ... It has been shown that data augmentation by learning all the possible transformations enforces robustness of a learning model to variations of the input [12], [15], [31], [32]. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tmm.2018.2856121">doi:10.1109/tmm.2018.2856121</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/l5hveduulrgcneu6tmw756j6e4">fatcat:l5hveduulrgcneu6tmw756j6e4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190501043728/http://eprints.whiterose.ac.uk/132227/1/TMM_camera_ready.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/04/73/0473eb60399a070a9179b7f4ce961b8564e3b15b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tmm.2018.2856121"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>
Showing results 1–15 of 22,640