1,768 Hits in 5.6 sec

Adversarial Examples as Benchmark for Medical Imaging Neural Networks [chapter]

Magdalini Paschali, Sailesh Conjeti, Fernando Navarro, Nassir Navab
2019 Handbook of Experimental Pharmacology  
To this end, we recently proposed [1] to utilize adversarial examples [2], images that look imperceptibly different from the originals but are consistently misclassified by deep neural networks, as surrogates  ...  Deep learning has been widely adopted as the solution of choice for a plethora of medical imaging applications, due to its state-of-the-art performance and fast deployment.  ... 
doi:10.1007/978-3-658-25326-4_4 fatcat:bch6xie7tfebdbereerp4dwwmu

Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness [article]

Greg Anderson, Shankara Pailoor, Isil Dillig, Swarat Chaudhuri
2019 arXiv   pre-print
In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks.  ...  In this paper, we present a novel algorithm for verifying robustness properties of neural networks.  ...  Acknowledgments We thank our shepherd Michael Pradel as well as our anonymous reviewers and members of the UToPiA group for their helpful feedback.  ... 
arXiv:1904.09959v1 fatcat:7khytmrwprfvlppxugkrf3drae

An abstract domain for certifying neural networks

Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
2019 Proceedings of the ACM on Programming Languages (PACMPL)  
This enables us to prove, for the first time, the robustness of the network when the input image is subjected to complex perturbations such as rotations that employ linear interpolation.  ...  We present a novel method for scalable and precise certification of deep neural networks.  ...  ACKNOWLEDGMENTS We would like to thank the anonymous reviewers for their constructive feedback. This research was supported by the Swiss National Science Foundation (SNF) grant number 163117.  ... 
doi:10.1145/3290354 fatcat:zrdae2n36feydbkdxainecvmlm

Evo* 2021 – Late-Breaking Abstracts Volume [article]

A.M. Mora, A.I. Esparcia-Alcázar
2021 arXiv   pre-print
Volume with the Late-Breaking Abstracts submitted to the Evo* 2021 Conference, held online from 7 to 9 of April 2021.  ...  Amy Smith is supported by the UKRI IGGI centre for doctoral training [EP/S022325].  ...  Acknowledgements Many thanks to the Colab notebook authors: Ryan Murdock and Vadim Epstein, for making this technology successful, accessible and exciting.  ... 
arXiv:2106.11804v1 fatcat:6otgwnlqsfev5jt4fnux32ffly

Controllable Abstractive Summarization [article]

Angela Fan, David Grangier, Michael Auli
2018 arXiv   pre-print
Current models for document summarization disregard user preferences such as the desired length, style, the entities that the user might be interested in, or how much of the document the user has already  ...  On the full text CNN-Dailymail dataset, we outperform state of the art abstractive systems (both in terms of F1-ROUGE1 40.38 vs. 39.53 and human evaluation).  ...  Acknowledgments We thank Yann Dauphin for helpful discussions. We thank Abigail See for sharing her summaries and Romain Paulus for clarifying their setup.  ... 
arXiv:1711.05217v2 fatcat:3ucaupnv5ndrjmpzgvhkmknu3i

Cross-modal Adversarial Reprogramming [article]

Paarth Neekhara, Shehzeen Hussain, Jinglong Du, Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley
2021 arXiv   pre-print
We analyze the feasibility of adversarially repurposing image classification neural networks for Natural Language Processing (NLP) and other sequence classification tasks.  ...  Recent works on adversarial reprogramming have shown that it is possible to repurpose neural networks for alternate tasks without modifying the network architecture or parameters.  ...  We consider both neural network based classification models and frequency-based statistical models (such as TF-IDF) as our benchmarks.  ... 
arXiv:2102.07325v3 fatcat:bddexr7hjnefhbsdapkq3r7ome
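The entry above mentions TF-IDF as a frequency-based statistical baseline for sequence classification. As a minimal sketch of the standard weighting (pure stdlib; the toy documents are illustrative, and real pipelines typically add smoothing and normalization):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF vectors for a list of tokenized documents.
    tf = raw count / document length; idf = log(N / df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each term once per doc
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        length = len(doc)
        vectors.append({t: (c / length) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

docs = [["spam", "offer", "offer"], ["meeting", "notes"], ["offer", "meeting"]]
vecs = tfidf(docs)
# "offer" appears in 2 of 3 docs, so its idf is log(3/2); a term present in
# every document would receive weight 0.
```

Library implementations (e.g. scikit-learn's `TfidfVectorizer`) use smoothed idf variants, but the core weighting is the same.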

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [article]

Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
2021 arXiv   pre-print
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks, and eventually explore how the geometric study of adversarial examples can serve as a powerful  ...  Towards solving the vulnerability of neural networks, however, the field of adversarial robustness has recently become one of the main sources of explanations of our deep models.  ...  medical imaging [58] .  ... 
arXiv:2010.09624v2 fatcat:mvhosdtxgzcytel75h4foaxqqu

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks [article]

Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Yi-Chang James Tsai
2019 arXiv   pre-print
Moreover, CE holds promises for detecting adversarial examples as it possesses distinct characteristics in the presence of adversarial perturbations.  ...  Discovering and exploiting the causality in deep neural networks (DNNs) are crucial challenges for understanding and reasoning causal effects (CE) on an explainable visual model.  ...  Framework and deep network topology. We use pixel-wise masking as F_i and adversarial examples as F_j intervention variables in the Z_0 (zero-out) counterfactual setting.  ... 
arXiv:1902.03380v3 fatcat:duh3oiyxazbmrjwmlkfoi2xll4

Incremental Verification of Fixed-Point Implementations of Neural Networks [article]

Luiz Sena, Erickson Alves, Iury Bessa, Eddie Filho, Lucas Cordeiro
2020 arXiv   pre-print
Our approach was able to verify and produce adversarial examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods.  ...  Implementations of artificial neural networks (ANNs) might lead to failures, which are hardly predicted in the design phase since ANNs are highly parallel and their parameters are barely interpretable.  ...  The main obstacle is that such tools do not work with shallow neural networks and activation functions other than ReLU, as in our benchmarks. Covering Methods.  ... 
arXiv:2012.11220v1 fatcat:jlxukohqwjdqra7tblsfhj3s74

Relevant Applications of Generative Adversarial Networks in Drug Design and Discovery: Molecular De Novo Design, Dimensionality Reduction, and De Novo Peptide and Protein Design

Eugene Lin, Chieh-Hsin Lin, Hsien-Yuan Lane
2020 Molecules  
In this review, we focus on the latest developments for three particular arenas in drug design and discovery research using deep learning approaches, such as generative adversarial network (GAN) frameworks  ...  A growing body of evidence now suggests that artificial intelligence and machine learning techniques can serve as an indispensable foundation for the process of drug design and discovery.  ...  For example, artificial neural networks can be utilized to establish the hierarchical representation [21, 22] .  ... 
doi:10.3390/molecules25143250 pmid:32708785 fatcat:rrik322g6vbetaubwjb3rtvajm

Deep Learning based Anomaly Detection in Images: Insights, Challenges and Recommendations

Ahad Alloqmani, Yoosef B., Asif Irshad, Fawaz Alsolami
2021 International Journal of Advanced Computer Science and Applications  
This paper offers a comprehensive analysis of previous works that have been proposed in the area of anomaly detection in images through deep learning generally and in the medical field specifically.  ...  Twenty studies were reviewed, and the literature selection methodology was defined based on four phases: keyword filter, publish filter, year filter, and abstract filter.  ...  of a neural network.  ... 
doi:10.14569/ijacsa.2021.0120428 fatcat:dnatjz567rglbhzwcjprxjazb4

Deep Abstraction and Weighted Feature Selection for Wi-Fi Impersonation Detection

Muhamad Erza Aminanto, Rakyong Choi, Harry Chandra Tanuwidjaja, Paul D. Yoo, Kwangjo Kim
2018 IEEE Transactions on Information Forensics and Security  
An impersonation attack is an attack in which an adversary is disguised as a legitimate party in a system or communications protocol.  ...  The security challenges that need to be overcome mainly stem from the open nature of a wireless medium such as a Wi-Fi network.  ...  [34] leveraged SAE for unsupervised feature learning in the field of medical imaging.  ... 
doi:10.1109/tifs.2017.2762828 fatcat:ogwzywruhnbszdz2gveah73xre

Deep Learning with Neuroimaging and Genomics in Alzheimer's Disease

Eugene Lin, Chieh-Hsin Lin, Hsien-Yuan Lane
2021 International Journal of Molecular Sciences  
A growing body of evidence currently proposes that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer's disease (AD).  ...  Finally, we depict a discussion of challenges and directions for future research.  ...  CNNs = Convolutional Neural Networks; DBNs = Deep belief networks; FNN = Fully Connected Neural Networks; GANs = Generative Adversarial Networks; RNNs = Recurrent Neural Networks.  ... 
doi:10.3390/ijms22157911 fatcat:x5rhz7wlx5gmbjgeezjy3mbl34

A survey on Image Data Augmentation for Deep Learning

Connor Shorten, Taghi M. Khoshgoftaar
2019 Journal of Big Data  
Deep neural networks have been successfully applied to Computer Vision tasks such as image classification, object detection, and image segmentation thanks to the development of convolutional neural networks  ...  training, generative adversarial networks, neural style transfer, and meta-learning.  ...  Using a technique for generating adversarial examples known as the "fast gradient sign method", a maxout network [82] (Fig. 15).  ... 
doi:10.1186/s40537-019-0197-0 fatcat:yrzshu3sgje27p2j7acxwfmvva
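The snippet above names the fast gradient sign method (FGSM) used for adversarial augmentation. As a minimal NumPy sketch of the perturbation step (the toy logistic model, its weights, and the ε value are illustrative; in practice the gradient comes from backpropagation through the trained network):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: take a step of size eps along the sign of the loss gradient w.r.t. the input."""
    return x + eps * np.sign(grad)

# Toy logistic classifier: loss = -log sigmoid(w.x) for the true class,
# whose gradient w.r.t. the input is -(1 - sigmoid(w.x)) * w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
p = 1.0 / (1.0 + np.exp(-w @ x))
grad = -(1.0 - p) * w              # d(loss)/dx, computed analytically here
x_adv = fgsm_perturb(x, grad, eps=0.05)
# Every coordinate moves by exactly eps, in the direction that increases the loss.
```

The sign operation makes the perturbation an L∞-ball step, which is why a single gradient evaluation suffices.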

Deep Proximal Learning for High-Resolution Plane Wave Compounding [article]

Nishith Chennakeshava, Ben Luijten, Massimo Mischi, Yonina C. Eldar, Ruud J. G. van Sloun
2021 arXiv   pre-print
Our solution unfolds the iterations of a proximal gradient descent algorithm as a deep network, thereby directly exploiting the physics-based generative acquisition model in the neural network design  ...  We train our network in a greedy manner, i.e. layer-by-layer, using a combination of pixel, temporal, and distribution (adversarial) losses to achieve both perceptual fidelity and data consistency.  ... 
arXiv:2112.12410v1 fatcat:b4qbrpalqvbk3pdpfux47vlbiy
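The entry above describes unfolding proximal gradient iterations into network layers. As a minimal NumPy sketch of the underlying idea using ISTA for an l1-regularized least-squares problem (the operator A, layer count, and λ are illustrative; in deep unfolding, the step size and threshold become learnable per-layer parameters):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def unrolled_ista(A, y, n_layers=10, lam=0.1):
    """Unroll n_layers ISTA iterations for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Each loop iteration corresponds to one 'layer' of the unfolded network."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

With A equal to the identity, the iteration converges in one step to the soft-thresholded measurements, which makes the sketch easy to sanity-check.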
Showing results 1–15 of 1,768.