
Black-Box Testing of Deep Neural Networks through Test Case Diversity [article]

Zohreh Aghababaeyan, Manel Abdellatif, Lionel Briand, Ramesh S, Mojtaba Bagherzadeh
2022 arXiv   pre-print
Deep Neural Networks (DNNs) have been extensively used in many areas including image processing, medical diagnostics, and autonomous driving.  ...  In this paper, we investigate black-box input diversity metrics as an alternative to white-box coverage criteria.  ...  This work was supported by a research grant from General Motors as well as the Canada Research Chair and Discovery Grant programs of the Natural Sciences and Engineering Research Council of Canada (NSERC  ... 
arXiv:2112.12591v3

Feature-Guided Black-Box Safety Testing of Deep Neural Networks [article]

Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
2018 arXiv   pre-print
In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.  ...  Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns.  ...  Xiaowei gratefully acknowledges NVIDIA Corporation for its support with the donation of the Titan Xp GPU, and is partially supported by NSFC (no. 61772232)  ... 
arXiv:1710.07859v2

Feature-Guided Black-Box Safety Testing of Deep Neural Networks [chapter]

Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
2018 Lecture Notes in Computer Science  
In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.  ...  Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns.  ...  Xiaowei gratefully acknowledges NVIDIA Corporation for its support with the donation of the Titan Xp GPU, and is partially supported by NSFC (no. 61772232).  ... 
doi:10.1007/978-3-319-89960-2_22

Deep Learning for Medicinal Plant Identification Using the Siamese Neural Network Concept

Kartarina Kartarina, Lalu Zazuli Azhar Mardedi, Miftahul Madani, Miftahul Jihad, Regina Aprilia Riberu
2021 JTIM Jurnal Teknologi Informasi dan Multimedia  
The deep learning approach with a Siamese Neural Network compares two patterns and produces an output based on the similarity of the two patterns.  ...  Deploying the Siamese Neural Network on an Android smartphone is a practical choice because it makes the system easy for the community to use.  ...  Black Box Test Results: Black Box Testing is a testing method that aims to test software without knowing the internal structure of the code or program.  ... 
doi:10.35746/jtim.v2i4.114
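The black-box testing method described in the abstract above can be illustrated with a minimal sketch. The `classify` stub and the test cases here are hypothetical stand-ins, not the paper's system: the point is that the test exercises the system only through its inputs and outputs, never inspecting its internals.

```python
# Minimal sketch of black-box testing: drive the system purely through
# inputs and outputs, with no knowledge of its internal structure.

def classify(pixels):
    """Hypothetical stand-in for a deployed model; treated as an opaque box."""
    return "medicinal" if sum(pixels) > 10 else "non-medicinal"

def black_box_test(system, cases):
    """Compare actual vs. expected outputs; never inspect the code under test."""
    return [(x, want, system(x)) for x, want in cases if system(x) != want]

cases = [([5, 4, 3], "medicinal"), ([1, 1, 1], "non-medicinal")]
failures = black_box_test(classify, cases)
print(failures)  # [] -> every case passed
```

Because `black_box_test` takes the system as a parameter, the same harness works unchanged whether the box is a rule, a neural network, or a remote API.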

Stealing Black-Box Functionality Using The Deep Neural Tree Architecture [article]

Daniel Teitelman, Itay Naeh, Shie Mannor
2020 arXiv   pre-print
This paper makes a substantial step towards cloning the functionality of black-box models by introducing a machine learning (ML) architecture named Deep Neural Trees (DNTs).  ...  This new architecture can learn to separate different tasks of the black-box model and clone its task-specific behavior.  ...  Deep Neural Tree: The Model. Here we propose a deep learning architecture with the ability to clone black boxes without memory (the output of the black box does not depend on previous inputs).  ... 
arXiv:2002.09864v1
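The general query-and-clone idea behind functionality stealing can be sketched in a few lines. This is not the paper's DNT architecture; the `victim` function, the perceptron surrogate, and all names here are illustrative assumptions showing the workflow: query the opaque model for labels, then fit a surrogate on the (input, output) pairs.

```python
# Hedged sketch of black-box functionality stealing: collect labeled queries
# from the victim, then train a surrogate that mimics its input-output behavior.
import random

random.seed(0)

def victim(x):
    """The black box: we observe only its outputs, never its internals."""
    return 1 if x[0] + 2.0 * x[1] > 0 else 0

def query(black_box, n):
    xs = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n)]
    return [(x, black_box(x)) for x in xs]  # labels come from the black box

def train_surrogate(data, epochs=25, lr=0.1):
    w, b = [0.0, 0.0], 0.0  # a simple perceptron serves as the clone
    for _ in range(epochs):
        for x, y in data:
            err = y - (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
            w[0] += lr * err * x[0]; w[1] += lr * err * x[1]; b += lr * err
    return lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

clone = train_surrogate(query(victim, 1000))
fresh = [x for x, _ in query(victim, 500)]
agreement = sum(clone(x) == victim(x) for x in fresh) / len(fresh)
print(agreement)  # fraction of fresh queries where the clone matches the victim
```

The same loop underlies surrogate-based transfer attacks as well: once the clone agrees with the victim on most inputs, anything crafted against the clone tends to carry over.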

Horizontal and Vertical Ensemble with Deep Representation for Classification [article]

Jingjing Xie, Bing Xu, Zhang Chuang
2013 arXiv   pre-print
In this paper, we propose Horizontal Voting, Vertical Voting, and Horizontal Stacked Ensemble methods to improve the classification performance of deep neural networks.  ...  However, how to use a limited amount of labeled data to achieve good classification performance with a deep neural network, and how the learned features can further improve classification, remain open questions.  ...  Using a very limited number of labeled examples and massive unlabeled data, we achieved good performance in the ICML 2013 Black Box Learning Challenge by exploiting the power of deep neural networks.  ... 
arXiv:1306.2759v1

Black-box Adversarial ML Attack on Modulation Classification [article]

Muhammad Usama, Junaid Qadir, Ala Al-Fuqaha
2019 arXiv   pre-print
...  in black-box settings.  ...  Recently, many deep neural network (DNN) based modulation classification schemes have been proposed in the literature.  ...  We used a surrogate deep neural network for crafting adversarial examples and then showed that adversarial examples crafted for modulation classification are transferable to other deep learning based  ... 
arXiv:1908.00635v1

Stealing Neural Networks via Timing Side Channels [article]

Vasisht Duddu, Debasis Samanta, D Vijay Rao, Valentina E. Balas
2019 arXiv   pre-print
In this paper, a black-box neural network extraction attack is proposed that exploits timing side channels to infer the depth of the network.  ...  Deep learning is gaining importance in many applications. However, neural networks face several security and privacy threats.  ...  comments which greatly improved the quality of the paper.  ... 
arXiv:1812.11720v4
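The timing side channel the abstract above describes can be illustrated with a toy model. The "network", its layer workload, and the repetition counts below are illustrative assumptions, not the paper's setup; the sketch only shows why inference latency correlates with depth, so that timing black-box queries leaks architectural information.

```python
# Hedged toy illustration of a timing side channel: inference time grows with
# network depth, so an attacker timing queries can estimate the depth without
# any internal access.
import time

def toy_network(depth, x):
    for _ in range(depth):                  # each "layer" does fixed work
        x = sum(0.5 * v for v in (x, x, x))
    return x

def time_inference(depth, reps=3000):
    t0 = time.perf_counter()
    for _ in range(reps):
        toy_network(depth, 1.0)
    return time.perf_counter() - t0

shallow, deep = time_inference(2), time_inference(20)
print(deep > shallow)  # True: the deeper model is measurably slower
```

Real attacks must average over many queries to filter out scheduling noise, which is why `time_inference` repeats the measurement rather than timing a single call.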

Security issues and defensive approaches in deep learning frameworks

Hongsong Chen, Yongpeng Zhang, Yongrui Cao, Jing Xie
2021 Tsinghua Science and Technology  
We start with a description of the framework of deep learning algorithms and a detailed analysis of attacks and vulnerabilities in them.  ...  However, the security issues of deep learning frameworks are among the main risks preventing its wide application.  ...  In terms of adversarial knowledge, attacks can be classified into white-box attacks, black-box attacks, and semi-white-box attacks.  ... 
doi:10.26599/tst.2020.9010050

Improving Transparency of Deep Neural Inference Process [article]

Hiroshi Kuwajima, Masayuki Tanaka, Masatoshi Okutomi
2019 arXiv   pre-print
However, the inference process of deep learning is a black box and is not well suited to safety-critical systems, which must exhibit high transparency.  ...  to improve a neural network.  ...  neural inference process, to address the black-box property of deep neural networks for safety-critical applications.  ... 
arXiv:1903.05501v1

DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models [article]

Bita Darvish Rouhani and Huili Chen and Farinaz Koushanfar
2018 arXiv   pre-print
Proof-of-concept evaluations on the MNIST and CIFAR10 datasets, as well as a wide variety of neural network architectures including Wide Residual Networks, Convolutional Neural Networks, and Multi-Layer Perceptrons  ...  DeepSigns, for the first time, introduces a generic watermarking methodology that can be used for protecting the DL owner's IP rights in both white-box and black-box settings, where the adversary may or may  ...  function (pdf) of the activation sets in various layers of a deep neural network.  ... 
arXiv:1804.00750v2

Adversarial Examples: Attacks and Defenses for Deep Learning [article]

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2018 arXiv   pre-print
Adversarial examples are imperceptible to humans but can easily fool deep neural networks in the testing/deployment stage.  ...  The vulnerability to adversarial examples has become one of the major risks of applying deep neural networks in safety-critical environments.  ...  The authors generated adversarial examples on multiple deep neural networks with full knowledge and tested them on a black-box model.  ... 
arXiv:1712.07107v3

Monocular Depth Estimators: Vulnerabilities and Attacks [article]

Alwyn Mathew, Aditya Prakash Patra, Jimson Mathew
2020 arXiv   pre-print
The white-box and black-box tests confirm the effectiveness of the proposed attack. We also perform adversarial example transferability tests, mainly cross-data transferability.  ...  Recent advancements in neural networks have led to reliable monocular depth estimation.  ...  prediction of the deep neural network.  ... 
arXiv:2005.14302v1

Predicting parametric spatiotemporal dynamics by multi-resolution PDE structure-preserved deep learning [article]

Xin-Yang Liu and Hao Sun and Jian-Xun Wang
2022 arXiv   pre-print
This physics-inspired learning architecture design endows PPNN with excellent generalizability and long-term prediction accuracy compared to the state-of-the-art black-box ConvResNet baseline.  ...  physics-informed neural networks, the physics prior is mainly utilized to regularize neural network training by incorporating governing equations into the loss function in a soft manner.  ...  Wolf, Construction of reduced-order models for fluid flows using deep feedforward neural networks, Journal of Fluid Mechanics 872 (2019) 963-994.  ... 
arXiv:2205.03990v1

Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors [article]

Zijian Jiang, Jianwen Zhou, Haiping Huang
2020 arXiv   pre-print
For this purpose, we choose a deep neural network trained by local errors, and then analyze emergent properties of trained networks through the manifold dimensionality, manifold smoothness, and the generalization  ...  Here, we establish a fundamental relationship between the geometry of hidden representations (manifold perspective) and the generalization capability of deep networks.  ...  -18831109 of the 100-talent program of Sun Yat-sen University.  ... 
arXiv:2007.02047v2
Showing results 1–15 of 58,904