1,971 Hits in 3.7 sec

Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks [article]

Anne Harrington, Arturo Deza
2022 arXiv   pre-print
Recent work suggests that representations learned by adversarially robust networks are more human perceptually-aligned than non-robust networks via image manipulations.  ...  These results together suggest that (1) adversarially robust representations capture peripheral computation better than non-robust representations and (2) robust representations capture peripheral computation  ...  If the inverted features (stimuli) of two models are perceptually similar, then it is likely that the learned representations are also aligned.  ... 
arXiv:2202.00838v2 fatcat:k7p5d5koergsbbu2i5f2jxsjdu

Inverting Adversarially Robust Networks for Image Synthesis [article]

Renan A. Rojas-Gomez, Raymond A. Yeh, Minh N. Do, Anh Nguyen
2022 arXiv   pre-print
We train an adversarially robust encoder to extract disentangled and perceptually-aligned image representations, making them easily invertible.  ...  To address these limitations, we propose the use of adversarially robust representations as a perceptual primitive for feature inversion.  ...  This fundamental difference equips our adversarially robust (AR) autoencoder with disentangled representations that are perceptually-aligned with human vision [16, 52] , resulting in favorable inversion  ... 
arXiv:2106.06927v3 fatcat:iwuegtgh6jcu5f4hcgwvov25kq

Adversarial Robustness as a Prior for Learned Representations [article]

Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry
2019 arXiv   pre-print
More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations.  ...  It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs.  ...  In what follows, we will explore the effect of the prior induced by adversarial robustness on models' learned representations, and demonstrate that representations learned by adversarially robust models  ... 
arXiv:1906.00945v2 fatcat:fggswfyzzverxgnaymlgj3tn6m

Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4 [article]

William Berrios, Arturo Deza
2022 arXiv   pre-print
However, in this short paper, we provide evidence against the unexpected trend of Vision Transformers (ViT) being not perceptually aligned with human visual representations by showing how a dual-stream  ...  Transformer, a CrossViT à la Chen et al. (2021), under a joint rotationally-invariant and adversarial optimization procedure yields 2nd place in the aggregate Brain-Score 2022 competition averaged across  ...  B Targeted Attacks in Figure 1: A qualitative demonstration of the human-machine perceptual alignment of the CrossViT-18 † via the effects of adversarial perturbations.  ... 
arXiv:2203.06649v1 fatcat:n65zgm4t7rfutbeoav5ocn66n4

2021 Index IEEE Transactions on Multimedia Vol. 23

2021 IEEE Transactions on Multimedia
Lai, Q., +, TMM 2021 2086-2099 VehicleNet: Learning Robust Visual Representation for Vehicle Re-Identification.  ...  Ouyang, Learning Localized Representations of Point Clouds With Graph-Convolutional Generative Adversarial Networks.  ...  ., Low-Rank Pairwise Alignment Bilinear Network For Few-Shot Fine-Grained Image Classification; TMM 2021 1666-1680 Huang, H., see 1855-1867 Huang, H., see Jiang, X., TMM 2021 2602-2613 Huang, J.,  ... 
doi:10.1109/tmm.2022.3141947 fatcat:lil2nf3vd5ehbfgtslulu7y3lq

BIGRoC: Boosting Image Generation via a Robust Classifier [article]

Roy Ganz, Michael Elad
2022 arXiv   pre-print
Our method, termed BIGRoC (Boosting Image Generation via a Robust Classifier), is based on a post-processing procedure via the guidance of a given robust classifier and without a need for additional training  ...  The interest of the machine learning community in image synthesis has grown significantly in recent years, with the introduction of a wide range of deep generative models and means for training them.  ...  The perceptually aligned gradients property indicates that the features learned by robust models are more aligned with human perception.  ... 
arXiv:2108.03702v3 fatcat:4fndttdsl5gv5d32ol3fqyxhay

Fast Training of Deep Neural Networks Robust to Adversarial Perturbations [article]

Justin Goodwin, Olivia Brown, Victoria Helus
2020 arXiv   pre-print
In this work, we demonstrate that this approach extends to the Euclidean norm and preserves the human-aligned feature representations that are common for robust models.  ...  ., adversarial examples) and their learned feature representations are often difficult to interpret, raising concerns about their true capability and trustworthiness.  ...  Feature Representations As suggested in [23] , adversarial robustness acts as a prior for learning human-aligned features, and we are interested in qualitatively assessing the feature representations  ... 
arXiv:2007.03832v1 fatcat:4wsg2x547fbmbaqusywyg4hcc4

LOTS about Attacking Deep Features [article]

Andras Rozsa, Manuel Günther, Terrance E. Boult
2018 arXiv   pre-print
We analyze and compare the adversarial robustness of the end-to-end VGG Face network with systems that use Euclidean or cosine distance between gallery templates and extracted deep features.  ...  Various approaches have been developed for generating these so-called adversarial examples, but they aim at attacking end-to-end networks.  ...  This research is based upon work funded in part by NSF IIS-1320956 and in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via  ... 
arXiv:1611.06179v5 fatcat:dg2exflnpveq5psnl552bjyvtu

Do Adversarially Robust ImageNet Models Transfer Better? [article]

Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry
2020 arXiv   pre-print
In this work, we identify another such aspect: we find that adversarially robust models, while less accurate, often perform better than their standard-trained counterparts when used for transfer learning.  ...  Our results are consistent with (and in fact, add to) recent hypotheses stating that robustness leads to improved feature representations.  ...  Figure 1: Adversarially robust (top) and standard (bottom) representations; robust representations allow (a) perceptually aligned gradients and (b) representation invertibility, e.g., feature visualization  ... 
arXiv:2007.08489v2 fatcat:mxupmyuksvhwzjymoo262i4hza

Adversarial Robustness: Softmax versus Openmax [article]

Andras Rozsa, Manuel Günther, Terrance E. Boult
2017 arXiv   pre-print
deep representations, and is claimed to be more robust to adversarial perturbations.  ...  that directly work on deep representations.  ...  This research is based upon work funded in part by NSF IIS-1320956 and in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via  ... 
arXiv:1708.01697v1 fatcat:hfcopitdlrebvltmzwfe3xp2dy

SSIMLayer: Towards Robust Deep Representation Learning via Nonlinear Structural Similarity [article]

Ahmed Abobakr, Mohammed Hossny, Saeid Nahavandi
2018 arXiv   pre-print
This layer performs a set of comprehensive convolution operations that mimics the overall function of the human visual system (HVS) via focusing on learning structural information in its input.  ...  against noise perturbations and adversarial attacks.  ...  Introduction Deep representation learning architectures have achieved superior perceptual capabilities in several domains.  ... 
arXiv:1806.09152v2 fatcat:lkg3o2uitjfrzb4uc7bh5chuxe

Toward Characteristic-Preserving Image-based Virtual Try-On Network [article]

Bochao Wang, Huabin Zheng, Xiaodan Liang, Yimin Chen, Liang Lin, Meng Yang
2018 arXiv   pre-print
First, CP-VTON learns a thin-plate spline transformation for transforming the in-shop clothes into fitting the body shape of the target person via a new Geometric Matching Module (GMM) rather than computing  ...  Second, to alleviate boundary artifacts of warped clothes and make the results more realistic, we employ a Try-On Module that learns a composition mask to integrate the warped clothes and the rendered  ...  (a) Geometric Matching Module: the in-shop clothes c and input image representation p are aligned via a learnable matching module.  ... 
arXiv:1807.07688v3 fatcat:2g37teb7tjfinpxzmxweg3l24m

2021 Index IEEE Transactions on Image Processing Vol. 30

2021 IEEE Transactions on Image Processing
Feng, Z., Robust Face Alignment by Multi-Order High-Precision Hourglass Network. Wan, J., +, TIP 2021 121-133 Robust Tensor Decomposition for Image Representation Based on Generalized Correntropy.  ...  ., +, TIP 2021 907-920 Robust Text Image Recognition via Adversarial Sequence-to-Sequence Domain Adaptation.  ... 
doi:10.1109/tip.2022.3142569 fatcat:z26yhwuecbgrnb2czhwjlf73qu

Cross-Resolution Adversarial Dual Network for Person Re-Identification and Beyond [article]

Yu-Jhe Li, Yun-Chun Chen, Yen-Yu Lin, Yu-Chiang Frank Wang
2020 arXiv   pre-print
By advancing adversarial learning techniques, our proposed model learns resolution-invariant image representations while being able to recover the missing details in low-resolution input images.  ...  To overcome this problem, we propose a novel generative adversarial network to address cross-resolution person re-ID, allowing query images with varying resolutions.  ...  His current research interests include computer vision and machine learning. Yun-Chun  ... 
arXiv:2002.09274v2 fatcat:5ffgelahtjfenb4aztd4eytibm

Robustness May Be at Odds with Accuracy [article]

Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
2019 arXiv   pre-print
These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.  ...  Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers.  ...  Robust models learn meaningful feature representations that align well with salient data characteristics.  ... 
arXiv:1805.12152v5 fatcat:oy4xwgaclng7th3w2worocg6za