4,344 Hits in 4.6 sec

Data Augmentation with Manifold Exploring Geometric Transformations for Increased Performance and Robustness [article]

Magdalini Paschali, Walter Simson, Abhijit Guha Roy, Muhammad Ferjad Naeem, Rüdiger Göbl, Christian Wachinger, Nassir Navab
2019 arXiv   pre-print
Inspired by ManiFool, the augmentation is performed by a line-search manifold-exploration method that learns affine geometric transformations that lead to the misclassification of an image, while ensuring ... In this paper we propose a novel augmentation technique that not only improves the performance of deep neural networks on clean test data, but also significantly increases their robustness to random transformations ... ManiFool Augmentation is performed by populating the training dataset for a given task with samples transformed by optimized affine geometric transformations. ...
arXiv:1901.04420v1 fatcat:lwdh4mnhzvbtfhr4nf5phg5huq
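For orientation only, a minimal sketch of affine-transformation augmentation in PyTorch follows. It is a simplification: the paper optimizes the transformations with a ManiFool-style line search, whereas this sketch merely samples random affine parameters; the torchvision ranges and the CIFAR10 dataset are illustrative assumptions.

    # Generic random-affine augmentation (illustrative parameter ranges);
    # the paper instead optimizes each affine transform via a ManiFool-style
    # line search rather than sampling it at random.
    import torch
    from torchvision import transforms
    from torchvision.datasets import CIFAR10

    affine_augment = transforms.Compose([
        transforms.RandomAffine(degrees=15, translate=(0.1, 0.1),
                                scale=(0.9, 1.1), shear=5),
        transforms.ToTensor(),
    ])

    # Each sample is transformed on the fly when it is loaded.
    train_set = CIFAR10(root="./data", train=True, download=True,
                        transform=affine_augment)
    loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)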

PointManifoldCut: Point-wise Augmentation in the Manifold for Point Clouds [article]

Tianfang Zhu, Yue Guan, Anan Li
2021 arXiv   pre-print
The experiments show that our proposed approach can enhance the performance of point cloud classification as well as segmentation networks, and bring them additional robustness to attacks and geometric transformations. ... We also show that this data augmentation method is insensitive to point dropping, coordinate noise, and other geometric transformations, and that it is attractive for application to a broader range of point cloud ...
arXiv:2109.07324v2 fatcat:r2gua5ttrzbxxalzo3klmpkh3i

Signed Laplacian Deep Learning with Adversarial Augmentation for Improved Mammography Diagnosis [article]

Heyi Li, Dongdong Chen, William H. Nailon, Mike E. Davies, David I. Laurenson
2019 arXiv   pre-print
To address this, we propose a signed graph regularized deep neural network with adversarial augmentation, named DiagNet. ... First, we use adversarial learning to generate positive and negative mass-containing mammograms for each mass class. ... To alleviate the impact of inadequate data, [7, 11, 12, 20] applied classical geometric transformations for data augmentation (e.g., flips, rotations, random crops, etc.), and more recently, [16, 17] ...
arXiv:1907.00300v2 fatcat:rprlqaps75dc7akq235vbicwja

Transformation Consistency Regularization- A Semi-Supervised Paradigm for Image-to-Image Translation [article]

Aamir Mustafa, Rafal K. Mantiuk
2020 arXiv   pre-print
The method introduces a diverse set of geometric transformations and enforces the model's predictions for unlabeled data to be invariant to those transformations. ... Scarcity of labeled data has motivated the development of semi-supervised learning methods, which learn from large portions of unlabeled data alongside a few labeled samples. ... Acknowledgements: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement N° 725253, EyeCode). ...
arXiv:2007.07867v1 fatcat:3c6kfekcc5db3dkrbgoyo3hfxm
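The consistency idea can be sketched roughly as follows (an assumed generic PyTorch form, not the authors' exact implementation; the choice of rotation as the transformation and the MSE penalty are placeholders): the model's output on a transformed unlabeled input is pushed toward the transformed output on the original input.

    # Transformation-consistency sketch: f(T(x)) should match T(f(x)).
    import torch
    import torch.nn.functional as F
    from torchvision.transforms.functional import rotate

    def transformation_consistency_loss(model, x_unlabeled, angle=90.0):
        with torch.no_grad():
            base_out = model(x_unlabeled)                     # f(x), used as target
        out_transformed = model(rotate(x_unlabeled, angle))   # f(T(x))
        target = rotate(base_out, angle)                      # T(f(x))
        return F.mse_loss(out_transformed, target)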

A Color/Illuminance Aware Data Augmentation and Style Adaptation Approach to Person Re-identification

Zhouchi Lin, Chenyang Liu, Wenbo Qi, S. C. Chan
2021 IEEE Access  
In fact, training a single network on the original and augmented data together yields worse performance than training without data augmentation. ... The principal angles between the subspaces are related to the manifold geodesic distance, with a good geometrical interpretation. ...
doi:10.1109/access.2021.3100571 fatcat:vetccagaunhlflc6nmakob4y3a
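As general background (a standard relation, not necessarily the paper's exact formulation): for two k-dimensional subspaces \mathcal{X} and \mathcal{Y} with principal angles \theta_1, \ldots, \theta_k, the geodesic distance on the Grassmann manifold is

    d_g(\mathcal{X}, \mathcal{Y}) = \Big( \sum_{i=1}^{k} \theta_i^{2} \Big)^{1/2},

so small principal angles correspond to subspaces that are close on the manifold.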

Landmarks Augmentation with Manifold-Barycentric Oversampling [article]

Iaroslav Bespalov, Nazar Buzun, Oleg Kachan, Dmitry V. Dylov
2021 arXiv   pre-print
Our approach reduces overfitting and improves the quality metrics beyond the original-data outcome and beyond the results obtained with popular modern augmentation methods. ... In this paper, we propose a new augmentation method that is guaranteed to keep the new data within the original data manifold thanks to optimal transport theory. ... transform (geometric [8]) augments the original data. ...
arXiv:2104.00925v2 fatcat:ucwztbpbnjhglpdszccw6gnxvm

Kernelized Subspace Pooling for Deep Local Descriptors

Xing Wei, Yue Zhang, Yihong Gong, Nanning Zheng
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Such methods, however, offer little analysis of how to increase the geometric invariance of their generated descriptors. ... The proposed method is simple, easy to understand, and achieves good performance. ... The data are augmented online by random flipping and 90° rotations, the same setup as L2-NET and HARDNET. ...
doi:10.1109/cvpr.2018.00200 dblp:conf/cvpr/WeiZGZ18 fatcat:rlfbxaee3nhjzfmzyujgwmh2s4

O-ViT: Orthogonal Vision Transformer [article]

Yanhong Fei, Yingjie Liu, Xian Wei, Mingsong Chen
2022 arXiv   pre-print
Inspired by the tremendous success of the self-attention mechanism in natural language processing, the Vision Transformer (ViT) creatively applies it to image patch sequences and achieves incredible performance ... To address this problem, we propose a novel method named Orthogonal Vision Transformer (O-ViT) to optimize ViT from a geometric perspective. ... For example, for the SVHN dataset, we can see a sharp increase in the robustness performance of Deep-ViT after imposing orthogonal constraints. ...
arXiv:2201.12133v2 fatcat:cqsw5hwqkzdbnl2io5267rw5me

Optimization on Submanifolds of Convolution Kernels in CNNs [article]

Mete Ozay, Takayuki Okatani
2016 arXiv   pre-print
Experimental results show that the proposed method achieves state-of-the-art performance on major image classification benchmarks with CNNs. ... Kernel normalization methods have been employed to improve the robustness of optimization methods to reparametrization of convolution kernels and to covariate shift, and to accelerate the training of Convolutional Neural ... In addition, we can further boost the performance even for augmented datasets, since data augmentation is conducted using large-scale transformations, while the kernels computed at different layers can ...
arXiv:1610.07008v1 fatcat:xztkcam2zzggbb5u3rgxnsypx4

Adversarial Examples on Object Recognition: A Comprehensive Survey [article]

Alex Serban, Erik Poll, Joost Visser
2020 arXiv   pre-print
In this article we discuss the impact of adversarial examples on security, safety, and robustness of neural networks.  ...  However, despite achieving impressive performance on complex tasks, they can be very sensitive: Small perturbations of inputs can be sufficient to induce incorrect behavior.  ...  Nonetheless, sensitivity to geometric transformations for models trained with data augmentation techniques shows that algorithms do not learn to abstract general transformations, but only to fit the training  ... 
arXiv:2008.04094v2 fatcat:7xycyybhpvhshawt7fy3fzeana

Disentangled Deep Autoencoding Regularization for Robust Image Classification [article]

Zhenyu Duan, Martin Renqiang Min, Li Erran Li, Mingbo Cai, Yi Xu, Bingbing Ni
2019 arXiv   pre-print
test images with reasonably large geometric transformations. ... image classification on robustness against adversarial attacks and generalization to novel test data. ... This leads to much improved generalization to novel test data with large geometric transformations. ...
arXiv:1902.11134v1 fatcat:n5rfp3rbqzg2pnjpgjp7zy5fni

ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing

Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
ST-GANs seek image realism by operating in the geometric warp parameter space. ... To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ... During training, we add geometric data augmentation by randomly perturbing the faces with random similarity transformations and the glasses with random homographies. ...
doi:10.1109/cvpr.2018.00985 dblp:conf/cvpr/LinYWSL18 fatcat:jvbq57cktjhgpngurrchjitb4e

Neural Population Geometry Reveals the Role of Stochasticity in Robust Perception [article]

Joel Dapello, Jenelle Feather, Hang Le, Tiago Marques, David D. Cox, Josh H. McDermott, James J. DiCarlo, SueYeon Chung
2021 arXiv   pre-print
We find distinct geometric signatures for each type of network, revealing different mechanisms for achieving robust representations.  ...  mediate a tradeoff between adversarial and clean performance.  ...  All experiments were performed on the MIT BCS OpenMind Computing Cluster.  ... 
arXiv:2111.06979v1 fatcat:reldoes3vjdetcttxgwrpmmhau

Dimensionality transcending: a method for merging BCI datasets with different dimensionalities

Pedro Rodrigues, Marco Congedo, Christian Jutten
2020 IEEE Transactions on Biomedical Engineering  
Our proposal uses a two-step procedure that transforms the data points so that they become matched in terms of dimensionality and statistical distribution. ... In the dimensionality matching step, we use isometric transformations to map each dataset into a common space without changing their geometric structures. ... a single robust classifier, as explored in [45]. ...
doi:10.1109/tbme.2020.3010854 pmid:32746067 fatcat:yp5stzotnncn7kh7uzjrxfzytm

Contextual Similarity Aggregation with Self-attention for Visual Re-ranking [article]

Jianbo Ouyang, Hui Wu, Min Wang, Wengang Zhou, Houqiang Li
2021 arXiv   pre-print
To further improve the robustness of our re-ranking model and enhance the performance of our method, a new data augmentation scheme is designed. ... Then, the affinity features of the top-K images are refined by aggregating the contextual information with a transformer encoder. ... Notably, when using data augmentation, we achieve higher performance than the baseline model trained without data augmentation. ...
arXiv:2110.13430v1 fatcat:v2wc3mdbgjhuti32nqc25ruk5i
Showing results 1 — 15 out of 4,344 results