
Deep Learning Generalization, Extrapolation, and Over-parameterization [article]

Roozbeh Yousefzadeh
2022 arXiv   pre-print
We study the generalization of over-parameterized deep networks (for image classification) in relation to the convex hull of their training sets.  ...  We show that interpolation is not adequate to understand the generalization of deep networks and that we should broaden our perspective.  ...
arXiv:2203.10366v1 fatcat:54kv5e36r5byvpfzy5ogit5vfi
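
The snippet above frames generalization in terms of where test points sit relative to the convex hull of the training set. For reference, membership in the convex hull of a finite point set can be decided with a small linear program; the sketch below is an illustrative assumption about how such a test might look (toy data, scipy's linprog), not the paper's code.

```python
# Hypothetical sketch: decide whether a point x lies in the convex hull of
# rows of X by checking feasibility of the linear program
#   find lam >= 0  s.t.  X^T lam = x  and  sum(lam) = 1.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(X, x):
    n = X.shape[0]
    # Equality constraints: convex combination of rows of X reproduces x.
    A_eq = np.vstack([X.T, np.ones((1, n))])
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0  # feasible => x is inside (or on) the hull

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))               # toy "training set"
print(in_convex_hull(X, X.mean(axis=0)))   # True: the mean is always inside
print(in_convex_hull(X, X[0] + 100.0))     # False: far outside the hull
```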

Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings [article]

Pantelis Elinas, Edwin V. Bonilla, Louis Tiao
2020 arXiv   pre-print
...in the absence of graph data and when the network structure is subjected to adversarial perturbations.  ...  We show that, on real datasets, our approach can outperform state-of-the-art Bayesian and non-Bayesian graph neural network algorithms on the task of semi-supervised classification in the absence of graph data.  ...  This work was conducted in partnership with the Defence Science and Technology Group, through the Next Generation Technologies Program.  ...
arXiv:1906.01852v5 fatcat:uxkkrn2klfbllap7srrsl46jbq

Understanding the Role of Adversarial Regularization in Supervised Learning [article]

Litu Rout
2020 arXiv   pre-print
In addition, motivated by a recently introduced unit-wise capacity based generalization bound, we analyze the generalization error in the adversarial framework.  ...  We therefore leave it as an open question to explore new measures that can explain generalization behavior in adversarial learning.  ...  NTA on Over-Parameterization: It is well known that highly over-parameterized deep neural networks populate the corresponding parameter space with many good solutions.  ...
arXiv:2010.00522v1 fatcat:lppgu45fzraxxlc7vnupxfhove

Overfitting or Underfitting? Understand Robustness Drop in Adversarial Training [article]

Zichao Li and Liyuan Liu and Chengyu Dong and Jingbo Shang
2020 arXiv   pre-print
In the light of our analyses, we propose APART, an adaptive adversarial training framework that parameterizes perturbation generation and progressively strengthens the perturbations.  ...  Our goal is to understand why robustness drops after conducting adversarial training for too long.  ...  ACKNOWLEDGMENT: The research was sponsored in part by DARPA No. W911NF-17-C-0099 and No.  ...
arXiv:2010.08034v1 fatcat:wnprbpaxb5adtgppylrvtjgegy

Decorrelated jet substructure tagging using adversarial neural networks

Chase Shimmin, Peter Sadowski, Pierre Baldi, Edison Weik, Daniel Whiteson, Edward Goul, Andreas Søgaard
2017 Physical Review D  
The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation.  ...  We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging  ...  FIG. 11. Architecture of the neural networks in the parameterized adversarial training strategy.  ...
doi:10.1103/physrevd.96.074034 fatcat:odefx5vyvbfete3llbwwh2m2a4
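
The first snippet describes a tagger trained against an adversary that tries to recover a protected variable (effectively the jet mass) from the tagger's output. A minimal PyTorch sketch of that adversarial-decorrelation objective follows; the architectures, the coefficient lam, and the synthetic data are illustrative assumptions, not the paper's setup.

```python
# Sketch of adversarial decorrelation (not the paper's exact configuration):
# the tagger minimizes classification loss minus lam times the adversary's
# loss, so its output carries little information about the mass variable m.
import torch
import torch.nn as nn

tagger = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
adversary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_t = torch.optim.Adam(tagger.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 10.0

for step in range(200):
    x = torch.randn(128, 10)                    # toy jet features
    y = torch.randint(0, 2, (128, 1)).float()   # signal/background label
    m = torch.randn(128, 1)                     # protected variable (mass)

    # 1) adversary learns to predict m from the (detached) tagger output
    score = tagger(x)
    loss_a = mse(adversary(score.detach()), m)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # 2) tagger balances accuracy against decorrelation
    loss_t = bce(score, y) - lam * mse(adversary(score), m)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
```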

Scene Understanding Based on High-Order Potentials and Generative Adversarial Networks

Xiaoli Zhao, Guozhong Wang, Jiaqi Zhang, Xiang Zhang
2018 Advances in Multimedia  
In this study, we propose a semantic segmentation framework based on classic generative adversarial nets (GAN) to train a fully convolutional semantic segmentation model along with an adversarial network  ...  Scene understanding aims to predict a class label at each pixel of an image.  ...  To achieve this goal, we design a generator network G and an adversarial network D. The generator is trained as a parameterized network.  ...
doi:10.1155/2018/8207201 fatcat:tjaj4et2orejjfxa3vcml6dtqu

Multi-Domain Adversarial Learning for Slot Filling in Spoken Language Understanding [article]

Bing Liu, Ian Lane
2017 arXiv   pre-print
In our experiments using data sets from multiple domains, we show that adversarial training helps in learning better domain-general SLU models, leading to improved slot filling F1 scores.  ...  The goal of this paper is to learn cross-domain representations for the slot filling task in spoken language understanding (SLU).  ...  This is closer to the training of generative adversarial networks [14].  ...
arXiv:1711.11310v1 fatcat:6pz54p2lpvdcdkq75hsgubbriq
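
The snippet above attributes the domain-general representations to GAN-like adversarial training. One common way to realize this is a DANN-style gradient-reversal layer; the sketch below shows that general idea under assumed toy dimensions and heads, and is not claimed to be this paper's architecture.

```python
# DANN-style gradient reversal (a standard technique, used here as a stand-in
# for the paper's adversarial domain training): a domain classifier is trained
# on features, while reversed gradients push the encoder to be domain-invariant.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g                      # flip the gradient sign

encoder = nn.Sequential(nn.Linear(30, 64), nn.ReLU())
slot_head = nn.Linear(64, 10)          # slot-label classifier
domain_head = nn.Linear(64, 3)         # domain discriminator
params = (list(encoder.parameters()) + list(slot_head.parameters())
          + list(domain_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(200):
    x = torch.randn(64, 30)                 # toy utterance features
    slots = torch.randint(0, 10, (64,))
    domains = torch.randint(0, 3, (64,))
    h = encoder(x)
    loss = (ce(slot_head(h), slots)
            + ce(domain_head(GradReverse.apply(h)), domains))
    opt.zero_grad(); loss.backward(); opt.step()
```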

Prior Networks for Detection of Adversarial Attacks [article]

Andrey Malinin, Mark Gales
2018 arXiv   pre-print
Prior Networks are shown to significantly outperform these baseline approaches over a range of adversarial attacks in both whitebox and blackbox detection configurations.  ...  In this work, Prior Networks are applied to adversarial attack detection using measures of uncertainty, in a similar fashion to Monte-Carlo Dropout.  ...  Prior Networks parameterize a distribution over output distributions, which allows them to separately model data uncertainty and distributional uncertainty.  ...
arXiv:1812.02575v1 fatcat:hubudxmpbzd4ho5zkie6swnsia
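
For classification, a Prior Network's "distribution over output distributions" is a Dirichlet whose concentration parameters the network predicts; distributional uncertainty can then be read off as the mutual information between the label and the categorical distribution. The sketch below computes these standard Dirichlet quantities from assumed toy concentration values.

```python
# Standard Dirichlet uncertainty decomposition in the spirit of Prior
# Networks: total uncertainty (entropy of the mean), expected data
# uncertainty, and their gap (mutual information = distributional part).
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainties(alpha):
    a0 = alpha.sum()
    p = alpha / a0                                   # expected categorical
    total = -(p * np.log(p)).sum()                   # entropy of the mean
    # E[H(p)] under Dirichlet(alpha), in closed form via digamma
    expected = -(p * (digamma(alpha + 1) - digamma(a0 + 1))).sum()
    return total, expected, total - expected         # last term: mutual info

print(dirichlet_uncertainties(np.array([50., 1., 1.])))  # confident: low MI
print(dirichlet_uncertainties(np.array([1., 1., 1.])))   # flat prior: high MI
```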

Entangling Quantum Generative Adversarial Networks [article]

Murphy Yuezhen Niu, Alexander Zlokapa, Michael Broughton, Sergio Boixo, Masoud Mohseni, Vadim Smelyanskyi, Hartmut Neven
2021 arXiv   pre-print
In this work, we propose a new type of architecture for quantum generative adversarial networks (entangling quantum GAN, EQ-GAN) that overcomes some limitations of previously proposed quantum GANs.  ...  Generative adversarial networks (GANs) are one of the most widely adopted semi-supervised and unsupervised machine learning methods for high-definition image, video, and audio generation.  ...  PRIOR ART: A GAN comprises a parameterized generative network G(θ_g, z) and a discriminator network D(θ_d, z).  ...
arXiv:2105.00080v2 fatcat:juvarqxunrgrrjbizqfhxeulae

A Systematic Construction Of Instability Bounds In Lis Networks

Dimitrios Koukopoulos
2007 Zenodo  
In this framework, we present an innovative systematic construction for the estimation of adversarial injection rate lower bounds, which, if exceeded, cause instability in networks that use the LIS (Longest-In-System) protocol.  ...  In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework.  ...  We say that the adversary generates a set of packets when it generates a set of requested paths.  ...
doi:10.5281/zenodo.1330505 fatcat:ob4vko3lz5fghn7wc3ton5l5je

An Adversarial Construction Of Instability Bounds In Lis Networks

Dimitrios Koukopoulos
2008 Zenodo  
In this framework, we present an innovative systematic construction for the estimation of adversarial injection rate lower bounds, which, if exceeded, cause instability in networks that use the LIS (Longest-In-System) protocol.  ...  In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework.  ...  We say that the adversary generates a set of packets when it generates a set of requested paths.  ...
doi:10.5281/zenodo.1072693 fatcat:cdhjrgidhrclhluvzilaziwpwa

Adversarial Mobility Learning for Human Trajectory Classification

Qiang Gao, Fengli Zhang, Fuming Yao, Ailing Li, Lin Mei, Fan Zhou
2020 IEEE Access  
Understanding human mobility is an important but challenging task in Location-based Social Networks (LBSN).  ...  In addition, AdattTUL leverages an adversarial network to help in regularizing the data distribution of human trajectories.  ...  To overcome the unstable training issue in generative adversarial networks, we adopt the Earth-Mover (Wasserstein-1) distance for the min-max optimization over the parameters of the generator and discriminator.  ...
doi:10.1109/access.2020.2968935 fatcat:qtoljzt3ircstncdwsictuofae
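
The last snippet refers to the standard WGAN recipe: a critic estimates the Wasserstein-1 distance while the generator minimizes it. Below is a minimal sketch of that min-max loop with weight clipping as the (crude) Lipschitz constraint; networks, data, and constants are illustrative assumptions, not the AdattTUL model.

```python
# Minimal WGAN-style sketch of the Earth-Mover (Wasserstein-1) min-max
# objective; toy 2-D data stands in for trajectory features.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # critic
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

for step in range(200):
    real = torch.randn(64, 2) + 3.0       # toy "real" samples
    z = torch.randn(64, 8)
    # critic maximizes E[D(real)] - E[D(fake)]  (minimize the negative)
    loss_d = D(G(z).detach()).mean() - D(real).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in D.parameters():              # crude Lipschitz constraint
        p.data.clamp_(-0.01, 0.01)
    # generator minimizes -E[D(fake)]
    loss_g = -D(G(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```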

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality [article]

Yi Zhang, Orestis Plevrakis, Simon S. Du, Xingguo Li, Zhao Song, Sanjeev Arora
2020 arXiv   pre-print
Adversarial training is a popular method to give neural nets robustness against adversarial perturbations. In practice, adversarial training leads to low robust training loss.  ...  A key element of our proof is showing that ReLU networks near initialization can approximate the step function, which may be of independent interest.  ...  In each round, the adversary generates new adversarial examples against the current network, on which the learner takes a gradient step to decrease its prediction loss in response (see Algorithm 1).  ...
arXiv:2002.06668v2 fatcat:hvaezu4jdbaovizm3zdc5zftwi
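
The last snippet describes the standard round structure of adversarial training: an inner maximization that crafts perturbations against the current network, then an outer gradient step on the perturbed batch. A minimal PGD-style PyTorch sketch of that loop follows; the architecture, eps, step size, and toy data are assumptions, not the paper's Algorithm 1.

```python
# Sketch of the adversary/learner rounds from the snippet: a few steps of
# projected gradient ascent produce perturbed examples, then the learner
# takes one gradient step on them.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()
eps, alpha = 0.1, 0.02

for rnd in range(200):
    x = torch.randn(128, 20)
    y = torch.randint(0, 2, (128,))
    # adversary: projected gradient ascent on the loss around x
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(5):
        ce(net(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)      # project into the l_inf ball
        delta.grad.zero_()
    # learner: one gradient step on the adversarial examples
    opt.zero_grad()
    ce(net(x + delta.detach()), y).backward()
    opt.step()
```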

Resource-Competitive Algorithms

Michael A. Bender, Jeremy T. Fineman, Mahnush Movahedi, Jared Saia, Varsha Dani, Seth Gilbert, Seth Pettie, Maxwell Young
2015 ACM SIGACT News  
In such cases, several techniques have been developed to provide a more refined understanding of how an algorithm performs, e.g., competitive analysis, parameterized analysis, and the theory of approximation  ...  In parameterizing by this cost, we can design an algorithm with the following guarantee: if the adversary pays T, then the additional cost of the algorithm is some function of T.  ...  We are grateful to Valerie King for her valuable suggestions in writing this article.  ...
doi:10.1145/2818936.2818949 fatcat:k3cv6dqzd5a7nhlv3mrbnctzxm

Why Adversarial Interaction Creates Non-Homogeneous Patterns: A Pseudo-Reaction-Diffusion Model for Turing Instability [article]

Litu Rout
2020 arXiv   pre-print
Further, we prove that randomly initialized gradient descent with over-parameterization can converge exponentially fast to an ϵ-stationary point even under adversarial interaction.  ...  Over the years, concerted efforts have been made to align theoretical models with the patterns observed in real systems.  ...  This is easily satisfied in over-parameterized networks, as given by equation (38).  ...
arXiv:2010.00521v2 fatcat:4e6iaco3mnfvzivbau3zdhrtcu
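
For readers skimming the snippet, the ϵ-stationary point it mentions is standardly defined by a gradient-norm bound; a one-line statement of that assumed convention (loss L, parameters θ) follows.

```latex
% Standard convention assumed here, not a formula quoted from the paper:
% \theta is an \epsilon-stationary point of L iff
\[
  \|\nabla_{\theta} L(\theta)\|_2 \le \epsilon .
\]
```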