A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Formal Security Analysis of Neural Networks using Symbolic Intervals
[article]
2018
arXiv
pre-print
Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties ...
Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs. ...
Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors, and do not necessarily reflect those of the US Government, ONR, or NSF. ...
arXiv:1804.10829v3
fatcat:3t427gxoovdarbgbyf7u4kjjmy
Efficient Formal Safety Analysis of Neural Networks
[article]
2018
arXiv
pre-print
Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within ...
neural networks. ...
Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors, and do not necessarily reflect those of the US Government, ONR, or NSF. ...
arXiv:1809.08098v3
fatcat:j6kjfrohqnecrkmqyzg2lqrdja
ReluDiff: Differential Verification of Deep Neural Networks
[article]
2020
arXiv
pre-print
Our method consists of a fast but approximate forward interval analysis pass followed by a backward pass that iteratively refines the approximation until the desired property is verified. ...
As deep neural networks are increasingly being deployed in practice, their efficiency has become an important issue. ...
PRELIMINARIES First, we review the basics of interval analysis for neural networks. ...
arXiv:2001.03662v1
fatcat:xad4hdubxbgrvopozpdcf3jtt4
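The "fast but approximate forward interval analysis pass" this abstract mentions can be illustrated with naive interval bound propagation through a ReLU network. This is a minimal sketch under assumed naming (`interval_forward` is mine, not ReluDiff's API); the actual tool additionally tracks differences between two networks and refines bounds in a backward pass.

```python
import numpy as np

def interval_forward(weights, biases, lo, hi):
    """Propagate an input box [lo, hi] through a ReLU network layer by layer.

    For each affine layer, worst-case bounds come from splitting the weight
    matrix into its positive and negative parts; ReLU then clamps both
    bounds at zero. Every layer here is assumed to end in a ReLU.
    """
    for W, b in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = Wp @ lo + Wn @ hi + b  # smallest achievable pre-activation
        new_hi = Wp @ hi + Wn @ lo + b  # largest achievable pre-activation
        lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
    return lo, hi
```

For an input box x ∈ [0, 1]² and a single layer computing x₀ − x₁, this yields the loose post-ReLU bound [0, 1] even though tighter reasoning is possible; that looseness on dependent terms is what the symbolic-interval refinements in the other entries aim to reduce.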
The illusion of structure or insufficiency of approach? the un(3) of unruly problems
2020
Figshare
We call this class of problems hard-to-represent; such problems are characterized by the difficulties of quantification and symbolization, as well as by the inherent un-physicality of a system. ...
To counter these difficulties, we propose a new analytical paradigm called perceptual analysis, which brings an umbrella of diverse approaches to bear. ...
Acknowledgments We would like to thank the Saturday Morning NeuroSim group (YouTube: https://tinyurl.com/Sat-Morning-NeuroSim ) for their comments and time to discuss the development of this essay. ...
doi:10.6084/m9.figshare.11998731
fatcat:aujgoms3xnd2nn7f4ebvqfmese
A Review of Formal Methods applied to Machine Learning
[article]
2021
arXiv
pre-print
The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. ...
We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems. ...
The neural network is encoded as a Boolean combination of linear constraints using a Cartesian product of intervals. ...
arXiv:2104.02466v2
fatcat:6ghs5huoynbc5h7lndajmsoxyu
PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier
[article]
2021
arXiv
pre-print
In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs -- i.e. polytopic specifications on the input and output of the network. ...
As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. ...
.: Formal security analysis of neural networks using symbolic intervals. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1599–1614 (2018). ...
arXiv:2006.10864v2
fatcat:fpxkh5p62rg55jaym62haehgm4
Beyond Robustness: Resilience Verification of Tree-Based Classifiers
[article]
2021
arXiv
pre-print
Our results show that resilience verification is useful and feasible in practice, yielding a more reliable security assessment of both standard and robust decision tree models. ...
We then introduce a formally sound data-independent stability analysis for decision trees and decision tree ensembles, which we experimentally assess on public datasets and we leverage for resilience verification ...
Preliminaries
... neural networks, which provide robustness guarantees for all ... Our analysis leverages intervals of real numbers. ...
arXiv:2112.02705v1
fatcat:ahw6lbkf7fbnnlubo7z5zuq4wy
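The interval-based stability check this abstract describes for decision trees can be sketched as follows: collect every leaf label the tree can output for inputs inside a hyper-rectangle, and call the prediction stable when only one label is reachable. This is a toy illustration with assumed names and a dict-based tree encoding; the paper's analysis is data-independent and extends to tree ensembles.

```python
def tree_labels_in_box(node, lo, hi):
    """Collect every leaf label a decision tree can output for inputs in
    the box [lo, hi]. Internal nodes are dicts with 'feat', 'thresh',
    'left', 'right'; leaves are dicts with 'label'. A split sends
    x[feat] <= thresh to the left child."""
    if 'label' in node:
        return {node['label']}
    f, t = node['feat'], node['thresh']
    labels = set()
    if lo[f] <= t:   # some point in the box satisfies the split
        labels |= tree_labels_in_box(node['left'], lo, hi)
    if hi[f] > t:    # some point in the box violates the split
        labels |= tree_labels_in_box(node['right'], lo, hi)
    return labels

def is_stable(tree, lo, hi):
    # The prediction is stable on the box iff exactly one label is reachable.
    return len(tree_labels_in_box(tree, lo, hi)) == 1
```

A box lying entirely on one side of every relevant split reaches a single leaf and is stable; a box straddling a split near the decision boundary reaches leaves with different labels and is flagged as unstable.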
MixTrain: Scalable Training of Verifiably Robust Neural Networks
[article]
2018
arXiv
pre-print
The adversarially robust training only makes the networks robust against a subclass of attackers and we reveal such weaknesses by developing a new attack based on interval gradients. ...
Making neural networks robust against adversarial inputs has resulted in an arms race between new defenses and attacks. ...
Sound over-approximation techniques like symbolic interval analysis provide the lower and upper bounds (in terms of two parallel linear equations over symbolic inputs) of a neural network's output for ...
arXiv:1811.02625v2
fatcat:yznavdhrvrbyzjfjd2xeh5aj6a
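The "two parallel linear equations over symbolic inputs" mentioned above can be sketched by carrying bounds as affine expressions A·x + c rather than as concrete numbers, which eliminates the dependency problem of plain intervals. This is a sketch of the affine part only, with my own function names (`affine_symbolic`, `concretize`); real symbolic-interval tools must also handle ReLUs, typically by concretizing a neuron's bounds over the input box and splitting when the sign is ambiguous.

```python
import numpy as np

def affine_symbolic(W, b, A, c):
    # Apply the layer y = W x' + b exactly to symbolic expressions
    # x' = A x + c over the network input x; no precision is lost here.
    return W @ A, W @ c + b

def concretize(A, c, x_lo, x_hi):
    # Tightest interval of A @ x + c over the input box [x_lo, x_hi]:
    # pick the box corner that minimizes/maximizes each coefficient.
    lo = np.maximum(A, 0.0) @ x_lo + np.minimum(A, 0.0) @ x_hi + c
    hi = np.maximum(A, 0.0) @ x_hi + np.minimum(A, 0.0) @ x_lo + c
    return lo, hi
```

For a network that duplicates its input and then computes x − x, naive intervals over x ∈ [0, 1] report [−1, 1], while the symbolic form cancels the coefficients and concretizes to exactly [0, 0].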
Neural and Bayesian Networks to Fight Crime: the NBNC Meta-Model of Risk Analysis
[chapter]
2011
Artificial Neural Networks - Application
Currently, security officers of the Italian banking system are using the three risk indexes in their analysis and robbery risk management. ...
of the NBNC meta-model; • a brief discussion of the method used to derive a Bayesian network from a database through an ANN; ...
Neural and Bayesian Networks to Fight Crime: the NBNC Meta-Model of Risk Analysis, Artificial Neural Networks - Application, Dr. ...
doi:10.5772/15444
fatcat:dycuich2wvaafpcasyu6bsnbn4
Combining Graph Neural Networks with Expert Knowledge for Smart Contract Vulnerability Detection
2021
IEEE Transactions on Knowledge and Data Engineering
Recent research focuses on the symbolic execution and formal analysis of smart contracts for vulnerability detection, but has yet to achieve a precise and scalable solution. ...
In this paper, we explore using graph neural networks and expert knowledge for smart contract vulnerability detection. ...
Graph Neural Network With the remarkable success of neural networks, graph neural networks have been investigated extensively in various fields such as graph classification [32, 33], program analysis ...
doi:10.1109/tkde.2021.3095196
fatcat:ta2jlqgyk5gyzaov5orldr3dnm
Scalable Verification of Quantized Neural Networks (Technical Report)
[article]
2022
arXiv
pre-print
Formal verification of neural networks is an active topic of research, and recent advances have significantly increased the size of the networks that verification tools can handle. ...
In particular, we propose three techniques for making SMT-based verification of quantized neural networks more scalable. ...
Efficient formal safety analysis of neural networks. Wang, S.; Pei, K.; Whitehouse, J.; Yang, J.; and Jana, S. 2018b. Formal security analysis of neural networks using symbolic intervals. ...
arXiv:2012.08185v2
fatcat:wk6y2yzk3ncttaaeovqbu4fj74
DiffRNN: Differential Verification of Recurrent Neural Networks
[article]
2020
arXiv
pre-print
As the memory footprint and energy consumption of such components become a bottleneck, there is interest in compressing and optimizing such networks using a range of heuristic techniques. ...
Recurrent neural networks (RNNs) such as Long Short Term Memory (LSTM) networks have become popular in a variety of applications such as image processing, data classification, speech recognition, and as ...
DIFFRNN leverages interval analysis to directly and more accurately compute the difference in the values of neurons of the two networks from the input layer to the output layer. ...
arXiv:2007.10135v1
fatcat:f5se22tlvje6ppgo2chdxjmmgq
Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications
[article]
2020
arXiv
pre-print
Such methods have the potential to build the missing trust of power system operators on neural networks, and unlock a series of new applications in power systems. ...
Developing a rigorous framework based on mixed integer linear programming, our methods can determine the range of inputs that neural networks classify as safe or unsafe, and are able to systematically ...
As an example, neural networks can be used to predict the security margin of the power systems (e.g. the damping ratio of the least damped mode in small signal stability analysis). ...
arXiv:1910.01624v3
fatcat:p4knzoxu3ffb7a6jlv55eafp2i
On Training Robust PDF Malware Classifiers
[article]
2019
arXiv
pre-print
We demonstrate how the worst-case behavior of a malware classifier with respect to specific robustness properties can be formally verified. ...
Furthermore, we find that training classifiers that satisfy formally verified robustness properties can increase the evasion cost of unbounded (i.e., not bounded by the robustness properties) attackers ...
Different methods have been used to formally verify the robustness of neural networks over input regions [20, 23, 24, 31, 34, 37, 44] , such as abstract transformations [25] , symbolic interval analysis ...
arXiv:1904.03542v2
fatcat:p36c6kaxwzavbedcbbkaqybqyy
Feature Extraction Functions for Neural Logic Rule Learning
[article]
2021
arXiv
pre-print
Combining symbolic human knowledge with neural networks provides a rule-based ante-hoc explanation of the output. ...
In this paper, we propose feature extracting functions for integrating human knowledge abstracted as logic rules into the predictive behavior of a neural network. ...
A recent paper [9] gives a detailed analysis on the methodology used in [7] , comparing its performance to other neural symbolic methods and arguing that it is not very effective in transferring knowledge ...
arXiv:2008.06326v4
fatcat:5xuiulpwofeh5ig5c3co6ppwbm
Showing results 1 — 15 out of 5,506 results