A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch
[article]
2019
arXiv
pre-print
(page 1) Ding, G. W., Sharma, Y., Lui, K. Y. C., and Huang, R. (2018). ...
(page 1) Ding, G. W., Lui, K. Y.-C., Jin, X., Wang, L., and Huang, R. (2019). On the sensitivity of adversarial robustness to input data distributions. ...
arXiv:1902.07623v1
fatcat:soous55lxjey5dvzyzn25ydkou
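For context on how the toolbox above is typically used, here is a minimal sketch that runs an ℓ∞ PGD attack with AdverTorch's LinfPGDAttack against a placeholder PyTorch classifier; the model and hyperparameter values are illustrative assumptions, not settings from the paper.

```python
# Minimal sketch (assumed setup): attack a toy classifier with AdverTorch's
# LinfPGDAttack, following the toolbox's documented interface.
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder model standing in for a real classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Example hyperparameters; real experiments use task-specific values.
adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3,          # l_inf perturbation budget
    nb_iter=40,       # number of PGD steps
    eps_iter=0.01,    # step size per iteration
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)

x = torch.rand(8, 1, 28, 28)          # dummy batch of "images"
y = torch.randint(0, 10, (8,))        # dummy labels
x_adv = adversary.perturb(x, y)       # adversarial examples, same shape as x
```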
CDT: Cascading Decision Trees for Explainable Reinforcement Learning
[article]
2021
arXiv
pre-print
Correspondence to: Zihan Ding <zhding@mail.ustc.edu.cn>. better performances), RL agents built on top of them generally lack interpretability (Lipton, 2018). ...
arXiv:2011.07553v2
fatcat:2kezrg5c4jbkzaxxotnngfqedu
Dimensionality Reduction has Quantifiable Imperfections: Two Geometric Bounds
[article]
2018
arXiv
pre-print
Weiguang Ding
Borealis AI ...
Kry Yik Chau Lui Gavin ...
arXiv:1811.00115v1
fatcat:zqk2pvhz5rhjbjh4t46pnhq2dy
Improving GAN Training via Binarized Representation Entropy (BRE) Regularization
[article]
2018
arXiv
pre-print
We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.
arXiv:1805.03644v1
fatcat:pq3levwpcbhcjaornzetrnqhu4
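To make the regularization idea above concrete, the sketch below shows a simplified activation-diversity penalty on a hidden layer of the discriminator; it is only an illustration in the spirit of the abstract, not the paper's exact BRE formulation, and the function name is hypothetical.

```python
# Simplified sketch of an activation-diversity penalty in the spirit of the
# abstract above: encourage near-binary activation patterns on a hidden layer
# of D to differ across the batch. NOT the paper's exact BRE term.
import torch

def activation_diversity_penalty(pre_acts: torch.Tensor) -> torch.Tensor:
    """pre_acts: (batch, units) pre-activations from a chosen layer of D."""
    s = torch.tanh(pre_acts)                      # soft surrogate for sign(.)
    s = s / (s.norm(dim=1, keepdim=True) + 1e-8)  # normalize each pattern
    sim = s @ s.t()                               # pairwise pattern similarity
    off_diag = sim - torch.diag(torch.diag(sim))
    # Penalize similar activation patterns => pushes patterns to spread out,
    # i.e. toward higher joint entropy over the batch.
    return (off_diag ** 2).sum() / (s.shape[0] * (s.shape[0] - 1))

# Hypothetical usage inside a GAN training step:
# d_loss = gan_loss + lambda_bre * activation_diversity_penalty(hidden_pre_acts)
```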
On the Effectiveness of Low Frequency Perturbations
[article]
2019
arXiv
pre-print
Ding, Y. Sharma, K. Lui, and R. Huang. Max-margin adversarial (mma) training: Direct input space margin maximization through adversarial training. arXiv preprint arXiv:1812.02637, 2018.
Y. ...
., 2017; Ding et al., 2018), has demonstrated improvement, but is limited to the properties of the perturbations used, e.g. training exclusively on ℓ∞ does not provide robustness to perturbations generated ...
arXiv:1903.00073v2
fatcat:subl5fqqp5f5xevp24zcxfqxsq
On the Sensitivity of Adversarial Robustness to Input Data Distributions
[article]
2019
arXiv
pre-print
We use the PGD attack implementation from the AdverTorch toolbox (Ding et al., 2019). 3.3 SENSITIVITY OF ROBUST ACCURACY TO DATA TRANSFORMATIONS Results on MNIST variants are presented in Figure 2a. ...
arXiv:1902.08336v1
fatcat:hp7ptbyo7re6jhx467iejs4kdi
Cascaded Deep Neural Networks for Retinal Layer Segmentation of Optical Coherence Tomography with Fluid Presence
[article]
2019
arXiv
pre-print
Optical coherence tomography (OCT) is a non-invasive imaging technology which can provide micrometer-resolution cross-sectional images of the inner structures of the eye. It is widely used for the diagnosis of ophthalmic diseases with retinal alteration, such as layer deformation and fluid accumulation. In this paper, a novel framework was proposed to segment retinal layers with fluid presence. The main contribution of this study is two-fold: 1) we developed a cascaded network framework to incorporate the prior structural knowledge; 2) we proposed a novel deep neural network based on U-Net and fully convolutional network, termed LF-UNet. Cross-validation experiments proved that the proposed LF-UNet has superior performance compared with the state-of-the-art methods, and that incorporating the relative distance map structural prior information could further improve the performance regardless of the network.
arXiv:1912.03418v1
fatcat:onnfsookjzaobjmm6eo4sa3dgm
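As a rough illustration of the cascading idea in the abstract above, the following sketch feeds a first network's coarse segmentation, converted into a stand-in relative-distance prior, as an extra input channel to a second network; the modules and the compute_relative_distance_map helper are hypothetical placeholders, not the paper's LF-UNet.

```python
# Hedged sketch of a cascaded segmentation pipeline with a structural prior.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Placeholder for a real U-Net / LF-UNet implementation."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def compute_relative_distance_map(coarse_logits: torch.Tensor) -> torch.Tensor:
    # Stand-in prior: normalized row index weighted by predicted foreground
    # probability; a real implementation would use distances to detected
    # layer boundaries.
    prob = coarse_logits.softmax(dim=1)[:, 1:].sum(dim=1, keepdim=True)
    h = coarse_logits.shape[-2]
    rows = torch.linspace(0, 1, h, device=coarse_logits.device)
    return prob * rows.view(1, 1, h, 1)

stage1 = TinyUNet(in_ch=1, out_ch=9)       # coarse layer segmentation
stage2 = TinyUNet(in_ch=2, out_ch=9)       # refined segmentation with prior

bscan = torch.rand(2, 1, 64, 64)           # dummy OCT B-scans
coarse = stage1(bscan)
prior = compute_relative_distance_map(coarse)
refined = stage2(torch.cat([bscan, prior], dim=1))
```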
Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer's Disease using structural MR and FDG-PET images
2018
Scientific Reports
Alzheimer's Disease (AD) is a progressive neurodegenerative disease where biomarkers for disease based on pathophysiology may be able to provide objective measures for disease diagnosis and staging. Neuroimaging scans acquired from MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple different image modalities providing complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD utilizing a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying the individuals with mild cognitive impairment (MCI) who will convert to AD at 3 years prior to conversion (86.4% combined accuracy for conversion within 1-3 years), a 94.23% sensitivity in classifying individuals with clinical diagnosis of probable AD, and a 86.3% specificity in classifying non-demented controls, improving upon results in published literature.
Alzheimer's disease (AD) is the most common dementia, affecting 1 out of 9 people over the age of 65 years [1]. Alzheimer's disease involves progressive cognitive impairment, commonly associated with early memory loss, requiring assistance for activities of self-care during advanced stages. Alzheimer's is posited to evolve through a prodromal stage which is commonly referred to as the mild cognitive impairment (MCI) stage, and 10-15% of individuals with MCI progress to AD [2] each year. With improved life expectancy, it is estimated that about 1.2% of the global population will develop Alzheimer's disease by 2046 [3], thereby affecting millions of individuals directly, as well as many more indirectly through the effects on their families and caregivers. There is an urgent need to develop biomarkers that can identify the changes in a living brain due to the pathophysiology of AD, providing numerical staging scores as well as identifying syndromal stages. Neuroimaging modalities such as magnetic resonance imaging (MRI) [4] and fluorodeoxyglucose positron emission tomography (FDG-PET) [5] have been previously used to develop such pathophysiology-based biomarkers for diagnosis of AD, especially targeting the prodromal stage of AD, where the pathology has begun but the clinical symptoms have not yet manifested. Structural MRI provides measures of brain gray matter, white matter and CSF compartments, enabling the quantification of volumes, cortical thickness and shape of various brain regions, which have been utilized in developing classifiers for AD [6-13]. FDG-PET provides measures of resting-state glucose metabolism [14], reflecting the functional activity of the underlying tissue [5], and has also been utilized for AD biomarker development [15-17]. Other published approaches have utilized a combination of modalities for developing neuroimaging AD biomarkers [4,18-24]. Recent advances in deep neural network approaches for developing classifiers have delivered astounding performance for many recognition tasks [25]. The application of deep neural networks to the recognition of AD has also attracted attention [26-28]. By applying deep neural network to extract features, such as stacked ...
doi:10.1038/s41598-018-22871-z
pmid:29632364
pmcid:PMC5890270
fatcat:xajfovmxorehtlupcmsvsib35q
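As a hedged illustration of the multimodal fusion described above, the sketch below uses two small 3D encoders for MRI and FDG-PET patches and concatenates their features before a shared classifier; the architecture and dimensions are placeholder assumptions, not the paper's multiscale network.

```python
# Hedged sketch of two-modality feature fusion for classification.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Placeholder 3D encoder for one imaging modality."""
    def __init__(self, in_ch: int = 1, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(8, feat_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class MultimodalClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.mri_enc = ModalityEncoder()
        self.pet_enc = ModalityEncoder()
        self.head = nn.Linear(128, n_classes)   # 64 + 64 fused features
    def forward(self, mri, pet):
        fused = torch.cat([self.mri_enc(mri), self.pet_enc(pet)], dim=1)
        return self.head(fused)

model = MultimodalClassifier()
mri = torch.rand(2, 1, 32, 32, 32)   # dummy MRI patches
pet = torch.rand(2, 1, 32, 32, 32)   # dummy FDG-PET patches
logits = model(mri, pet)             # (2, n_classes)
```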
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
[article]
2020
arXiv
pre-print
arXiv preprint arXiv:1904.12843, 2019. Yash Sharma, Gavin Weiguang Ding, and Marcus A Brubaker. On the effectiveness of low frequency perturbations. ...
D DETAILED SETTINGS OF ATTACKS For both ℓ∞ and ℓ2 PGD attacks, we use the implementation from the AdverTorch toolbox (Ding et al., 2019b). ...
arXiv:1812.02637v4
fatcat:7uakh4n4q5djlna6pkp72hc3ly
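For context, the sketch below shows a plain PGD adversarial training loop that uses AdverTorch's ℓ∞ attack for the inner maximization; this is generic adversarial training with placeholder components, not MMA's margin-maximization procedure.

```python
# Hedged sketch: standard PGD adversarial training (inner maximization via
# AdverTorch, outer minimization via SGD). Model, data, and hyperparameters
# are placeholders.
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
adversary = LinfPGDAttack(model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
                          eps=0.3, nb_iter=10, eps_iter=0.05)

# `loader` is assumed to yield (images, labels) batches.
loader = [(torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,)))]
for x, y in loader:
    model.eval()
    x_adv = adversary.perturb(x, y)    # inner maximization: craft perturbations
    model.train()
    loss = criterion(model(x_adv), y)  # outer minimization on adversarial batch
    opt.zero_grad()
    loss.backward()
    opt.step()
```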
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks
[article]
2020
arXiv
pre-print
[Ding et al., 2019] Gavin Weiguang Ding, Kry Yik-Chau Lui, Xiaomeng Jin, Luyu Wang, and Ruitong Huang. On the sensitivity of adversarial robustness to input data distributions. ...
[Ding et al., 2019] discovered the relationship between adversarial robustness and the intrinsic distribution of training data. ...
arXiv:2011.01539v1
fatcat:e3o47epftbc2rebpdx5yotzriy
Improve robustness of DNN for ECG signal classification:a noise-to-signal ratio perspective
[article]
2021
arXiv
pre-print
Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Max-Margin Adversarial (MMA) Training: Direct Input Space Margin Maximization through Adversarial Training. 01(1):1-34, 2018. ...
Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! ...
arXiv:2005.09134v3
fatcat:zz2k22cze5c7xdorecprane3ua
Calibrated Adversarial Training
[article]
2021
arXiv
pre-print
Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L. ... Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. MMA training: ... Semantic Segmentation ppt. ...
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages ... on Machine Learning, volume of Proceedings of Machine Learning Research, pages ... Yash Sharma, Gavin Weiguang Ding ...
arXiv:2110.00623v2
fatcat:tvqzw5tgzffrle4l4etdjezxle
BulletTrain: Accelerating Robust Neural Network Training via Boundary Example Mining
[article]
2021
arXiv
pre-print
Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. MMA training: Direct input space margin maximization through adversarial training. ...
., 2019; Ding et al., 2020; Hendrycks et al., 2020; Madry et al., 2018; Wang et al., 2020; Zhang et al., 2019b). ...
arXiv:2109.14707v2
fatcat:zlcqe2n3jzbzxjic7ycp3eh4vu
An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder
[article]
2021
arXiv
pre-print
Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, and Ruitong Huang. Max-margin adversarial (mma) training: Direct input space margin maximization through adversarial training. ...
Therefore, we used pre-trained models from a recent work on adversarial robustness (Ding et al., 2020), which presented a state-of-the-art adversarial training method. ...
arXiv:2009.08016v4
fatcat:wpi3af3mmzgppbjgyqms5pltia
Adversarial Examples Are Not Bugs, They Are Features
[article]
2019
arXiv
pre-print
Gavin Weiguang Ding et al. "On the Sensitivity of Adversarial Robustness to Input Data Distributions". In: International Conference on Learning Representations. 2019. [Eng+19a] Logan Engstrom et al. ...
Manipulating dataset features Ding et al. ...
arXiv:1905.02175v4
fatcat:ik4hlsfmdnewzowuf6bu6cd7ru
Showing results 1 — 15 out of 17 results