1,269 Hits in 6.0 sec

Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [article]

Jiankai Jin, Olga Ohrimenko, Benjamin I. P. Rubinstein
2022 arXiv   pre-print
We show that the attack can be carried out against several linear classifiers that have exact certifiable guarantees and against neural network verifiers that return a certified lower bound on a robust  ...  In this work, we show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.  ...  Since floating-point numbers can represent only a subset of real values, rounding is likely to occur when computing robust guarantees and can cause overestimation of the certified radius R.  ... 
arXiv:2205.10159v1 fatcat:t34aqd4h4nh7pljx4orinnmzou
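To illustrate the rounding issue this snippet describes, here is a minimal sketch (not the paper's attack): the margin of a hypothetical random linear classifier is computed in float32 and compared against exact rational arithmetic. Whenever the float margin rounds up in magnitude, a certified radius derived from it overestimates the true one.

```python
# Minimal sketch (not the paper's attack): compare a linear classifier's
# margin w.x + b in float32 against exact rational arithmetic. A certified
# radius |margin| / ||w|| derived from the rounded margin can exceed the
# true one. The classifier here is a random, hypothetical example.
from fractions import Fraction
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
x = rng.standard_normal(64).astype(np.float32)
b = np.float32(0.1)

margin_f32 = np.float32(np.dot(w, x) + b)          # what a float verifier sees
margin_exact = sum(Fraction(float(wi)) * Fraction(float(xi))
                   for wi, xi in zip(w, x)) + Fraction(float(b))

print(float(margin_f32), float(margin_exact))
# If |margin_f32| > |margin_exact|, the certified radius is overestimated,
# and inputs just beyond the true margin may be misclassified yet certified.
```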

Integer-arithmetic-only Certified Robustness for Quantized Neural Networks [article]

Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
2021 arXiv   pre-print
A line of work on tackling adversarial examples is certified robustness via randomized smoothing, which can provide a theoretical robustness guarantee.  ...  We show our approach can obtain comparable accuracy and a 4x-5x speedup over floating-point-arithmetic certified robust methods on general-purpose CPUs and mobile devices on two distinct datasets (CIFAR  ...  We take a different, direct approach, which provides a tighter certified robustness guarantee.  ... 
arXiv:2108.09413v1 fatcat:wldpffzsxzfnzf7v6o6iesn3hy
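For context, the certificate being accelerated is the standard randomized-smoothing bound. Below is a hedged sketch assuming a hypothetical `classify` function on int8 inputs; it shows the Monte-Carlo vote, a Clopper-Pearson lower bound, and the radius sigma * Phi^-1(p_A), not the paper's integer-only pipeline.

```python
# Hedged sketch of the randomized-smoothing certificate (Cohen et al. style)
# that integer-only approaches like this one speed up. `classify` is a
# hypothetical stand-in; int8 quantization of the noisy input stands in for
# the paper's integer-only arithmetic.
import numpy as np
from scipy.stats import beta, norm

def certify(classify, x, sigma=0.25, n=1000, alpha=0.001):
    labels = []
    for _ in range(n):
        noisy = x + np.random.randn(*x.shape) * sigma
        labels.append(classify(np.clip(np.rint(noisy * 127), -128, 127)
                               .astype(np.int8)))
    votes = np.bincount(labels)
    top = int(votes.argmax())
    # One-sided Clopper-Pearson lower confidence bound on P(top class)
    p_low = beta.ppf(alpha, votes[top], n - votes[top] + 1)
    if p_low <= 0.5:
        return None                       # abstain
    return top, sigma * norm.ppf(p_low)   # certified l2 radius

classify = lambda z: int(z.mean() > 0)    # toy stand-in classifier
print(certify(classify, x=np.full((8, 8), 0.3)))
```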

Certified Federated Adversarial Training [article]

Giulio Zizzo, Ambrish Rawat, Mathieu Sinn, Sergio Maffeis, Chris Hankin
2021 arXiv   pre-print
Many robust aggregation schemes rely on certain numbers of benign clients being present in a quorum of workers.  ...  This can be hard to guarantee when clients can join at will, or join based on factors such as idle system status and being connected to power and WiFi.  ...  any one FL round.  ... 
arXiv:2112.10525v1 fatcat:u3bf7xcqqbeaxpy647dupkkgii
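For context on the quorum assumption the snippet mentions, here is a minimal sketch of one common robust aggregation rule (coordinate-wise median), which tolerates outliers only while a majority of clients in the round are benign.

```python
# Hedged sketch of one common robust aggregation rule (coordinate-wise
# median). Such rules assume enough benign clients per round, which client
# churn can violate.
import numpy as np

def median_aggregate(client_updates):
    """client_updates: list of 1-D weight-update arrays, one per client."""
    stacked = np.stack(client_updates)   # shape (num_clients, dim)
    return np.median(stacked, axis=0)    # robust while < 50% are outliers

updates = [np.random.randn(10) for _ in range(8)]
updates.append(np.full(10, 1e6))         # one poisoned update
print(median_aggregate(updates)[:3])     # the poison has little effect
```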

Sound Randomized Smoothing in Floating-Point Arithmetics [article]

Václav Voráček, Matthias Hein
2022 arXiv   pre-print
We present a simple example where randomized smoothing certifies a radius of 1.26 around a point, even though there is an adversarial example at distance 0.8, and we extend this example further to provide  ...  However, we show that randomized smoothing is no longer sound for limited floating-point precision.  ...  The example is based on the observation that we are able to determine whether a floating-point number x could be the result of a floating-point addition a ⊕ n, where a is known and n is arbitrary.  ... 
arXiv:2207.07209v1 fatcat:ja4dxcsqvzbb7kr76vur6hch2m
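A minimal numeric illustration (not the paper's construction) of the observation quoted above: rounding makes the outputs of floating-point addition a ⊕ n land on a coarse, predictable grid, so some values of x are impossible and small n are absorbed outright.

```python
# Why floating-point addition a (+) n leaks information: at large magnitudes
# the rounded sum lands only on a coarse grid, so some outputs are
# unreachable and some inputs are absorbed entirely.
a = 2.0 ** 53          # ulp(a) = 2 for IEEE-754 doubles
print(a + 1.0 == a)    # True: n = 1.0 is absorbed by rounding
print(a + 2.0)         # 9007199254740994.0, the next representable double
print(a + 3.0)         # 9007199254740996.0: ties round to even, skipping a grid point
# Because the reachable outputs of a (+) n form a known pattern, one can test
# whether a given x could have arisen by adding noise n to a known a.
```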

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification [article]

Ashwinee Panda, Saeed Mahloujifar, Arjun N. Bhagoji, Supriyo Chakraborty, Prateek Mittal
2021 arXiv   pre-print
We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analysis of our algorithm.  ...  In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks (e.g. classifying planes as birds) by uploading "poisoned" updates.  ...  In the first step of the protocol, the prover generates a commitment to their update vector over the floating point domain.  ... 
arXiv:2112.06274v1 fatcat:btoycf5nojgzferdjgwumtpvdi
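A hedged sketch of the sparsification idea described above: the server applies only the k largest-magnitude coordinates of the aggregated update, capping the influence any poisoned update can exert. Function names are illustrative, not SparseFed's API.

```python
# Hedged sketch of server-side top-k sparsification in the spirit of
# SparseFed: only the k largest-magnitude coordinates of the aggregated
# update are applied to the model.
import numpy as np

def topk_aggregate(client_updates, k):
    agg = np.mean(np.stack(client_updates), axis=0)
    sparse = np.zeros_like(agg)
    idx = np.argpartition(np.abs(agg), -k)[-k:]  # k largest |coordinates|
    sparse[idx] = agg[idx]
    return sparse

updates = [np.random.randn(100) for _ in range(10)]
print(np.count_nonzero(topk_aggregate(updates, k=5)))  # 5
```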

Fooling a Complete Neural Network Verifier

Dániel Zombori, Balázs Bánhelyi, Tibor Csendes, István Megyeri, Márk Jelasity
2021 International Conference on Learning Representations  
We offer a simple defense against our particular attack based on adding a very small perturbation to the network weights.  ...  In practice, however, both the networks and the verifiers apply limited-precision floating point arithmetic.  ...  Note that, in practice, we set ω > 2^54 for attacking double precision floating point arithmetic, because smaller values do not guarantee adversariality.  ... 
dblp:conf/iclr/ZomboriBCMJ21 fatcat:qhcu45zkhnbzzm4gdg27y2uttm
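A minimal numeric illustration of why such a large ω matters: with weights above roughly 2^54, a term that is nonzero over the reals is absorbed entirely in double precision, so the verifier's idealized real-valued semantics and the executed floats diverge.

```python
# With a sufficiently large weight omega, a quantity that is nonzero over
# the reals vanishes in IEEE-754 double precision.
omega = 2.0 ** 54
x = 1.0
print((x + omega) - omega)          # 0.0 in doubles; x over the reals
print((x + omega / 4) - omega / 4)  # 1.0: a smaller omega no longer absorbs x
```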

Scaling Polyhedral Neural Network Verification on GPUs

Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev
2021 arXiv   pre-print
Certifying the robustness of neural networks against adversarial attacks is essential to their reliable adoption in safety-critical systems such as autonomous driving and medical diagnosis.  ...  The key technical insight behind GPUPoly is the design of custom, sound polyhedra algorithms for neural network verification on a GPU.  ...  of a CPU), and (iii) is sound for floating point arithmetic, capturing all results possible under different rounding modes and orders of execution of floating point operations, thus handling associativity  ... 
arXiv:2007.10868v2 fatcat:y3xy5acgkngqhmn4bbey7h2fzq
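To convey the soundness idea in the quoted claim, here is a hedged sketch of outward-rounded interval addition using math.nextafter. GPUPoly's polyhedra transformers are far more precise; this is only the one-ulp-widening trick that keeps bounds sound regardless of rounding mode or evaluation order.

```python
# Hedged sketch: make interval addition sound under floating point without
# controlling hardware rounding modes, by widening each result outward by
# one ulp.
import math

def interval_add(lo1, hi1, lo2, hi2):
    lo = math.nextafter(lo1 + lo2, -math.inf)  # push lower bound down
    hi = math.nextafter(hi1 + hi2, math.inf)   # push upper bound up
    return lo, hi

# The result encloses the exact real sum of the endpoints.
print(interval_add(0.1, 0.2, 0.3, 0.4))
```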

Pay attention to your loss: understanding misconceptions about 1-Lipschitz neural networks [article]

Louis Béthune, Thibaut Boissin, Mathieu Serrurier, Franck Mamalet, Corentin Friedrich, Alberto González-Sanz
2022 arXiv   pre-print
Then, relying on a robustness metric which reflects operational needs, we characterize the most robust classifier: the WGAN discriminator.  ...  Lipschitz-constrained networks have gathered considerable attention in the deep learning community, with usages ranging from Wasserstein distance estimation to the training of certifiably robust classifiers  ...  A special thanks to Agustin Picard for useful advice and a thorough reading of the paper.  ... 
arXiv:2104.05097v5 fatcat:nigc7slturhejbgeiugrrahbie
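For reference, a hedged sketch of the margin-based certificate this line of work builds on: if the logit map is L-Lipschitz in l2, the prediction cannot change within radius (top1 - top2) / (sqrt(2) * L).

```python
# Hedged sketch of the standard margin-based certificate for an L-Lipschitz
# classifier (the bound 1-Lipschitz network papers commonly use).
import math

def certified_radius(logits, lipschitz_const=1.0):
    top2 = sorted(logits, reverse=True)[:2]
    return (top2[0] - top2[1]) / (math.sqrt(2) * lipschitz_const)

print(certified_radius([3.2, 1.1, 0.4]))  # ~1.48 for a 1-Lipschitz net
```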

How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review [article]

Florian Tambon, Gabriel Laberge, Le An, Amin Nikanjam, Paulina Stevia Nouwou Mindom, Yann Pequignot, Foutse Khomh, Giulio Antoniol, Ettore Merlo, François Laviolette
2021 arXiv   pre-print
Method: We conduct a Systematic Literature Review (SLR) of research papers published between 2015 to 2020, covering topics related to the certification of ML systems.  ...  In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct  ...  Many thanks also goes to Freddy Lécué from Thalès, who provided us feedback on an early version of this manuscript. They all contributed to improving this SLR.  ... 
arXiv:2107.12045v3 fatcat:43vqxywawbeflhs6ehzovvsevm

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks [article]

Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li
2022 arXiv   pre-print
In this work, we focus on certifying the robustness of offline RL in the presence of poisoning attacks, where a subset of training trajectories could be arbitrarily manipulated.  ...  While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defenses, the robustness of RL against training-time (poisoning) attacks remains largely unexplored.  ...  However, the major distinction is that their algorithm tries to derive a probabilistic guarantee of RL robustness against state perturbations, while COPA-SEARCH derives a deterministic guarantee of RL robustness  ... 
arXiv:2203.08398v1 fatcat:gbpq5wlskbcqja5m6gvpaosidy
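A hedged, simplified sketch of partition-and-vote certification in the spirit of COPA's deterministic guarantee (details differ from the paper): trajectories are hashed into disjoint partitions, one policy is trained per partition, and the per-state vote margin bounds the number of poisoned trajectories the chosen action tolerates.

```python
# Hedged, simplified sketch (details differ from COPA): partition-based
# policy aggregation with a deterministic per-state poisoning certificate.
from collections import Counter

def certified_action(policies, state):
    votes = Counter(p(state) for p in policies)
    ranked = votes.most_common()
    top_action, top_count = ranked[0]
    runner_count = ranked[1][1] if len(ranked) > 1 else 0
    # Each poisoned trajectory corrupts at most one partition's policy and
    # moves at most one vote; the conservative bound ignores tie-breaking.
    certified_size = (top_count - runner_count - 1) // 2
    return top_action, certified_size

# Five hypothetical per-partition policies voting on a state:
policies = [lambda s: 0] * 3 + [lambda s: 1] * 2
print(certified_action(policies, state=None))  # (0, 0): margin 1 tolerates 0
```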

ANCER: Anisotropic Certification via Sample-wise Volume Maximization [article]

Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi
2021 arXiv   pre-print
Our empirical results demonstrate that ANCER achieves state-of-the-art ℓ_1 and ℓ_2 certified accuracy on both CIFAR-10 and ImageNet at multiple radii, while certifying substantially larger regions in terms  ...  Moreover, (ii) we propose evaluation metrics allowing for the comparison of general certificates - a certificate is superior to another if it certifies a superset region - with the quantification of each  ...  Curse of dimensionality on randomized smoothing for certifiable robustness.  ... 
arXiv:2107.04570v3 fatcat:mcuylig6qzbn5m7bd3pfwkz5su
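A hedged sketch of a volume-style comparison between an isotropic ball certificate and an anisotropic (axis-aligned ellipsoid) one, in the spirit of the metric the snippet describes. The geometric mean of the semi-axes serves as a dimension-normalized volume proxy; it is illustrative, not ANCER's exact metric.

```python
# Hedged sketch: compare certificates by a dimension-normalized volume
# proxy, exp(mean(log a_i)) = (prod a_i)^(1/d), which stays finite in
# high dimension.
import numpy as np

def volume_proxy(semi_axes):
    return float(np.exp(np.mean(np.log(semi_axes))))

d = 3072                     # e.g. CIFAR-10 input dimension
iso = np.full(d, 0.5)        # isotropic l2 ball of radius 0.5
aniso = np.full(d, 0.4)
aniso[:d // 2] = 0.8         # larger semi-axes along "easy" directions
print(volume_proxy(iso), volume_proxy(aniso))  # 0.5 vs ~0.566
```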

Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective

Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong
2022 IEEE Transactions on Dependable and Secure Computing  
Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in security and machine learning.  ...  Furthermore, neural networks themselves are often vulnerable to adversarial attacks.  ...  ACKNOWLEDGMENTS This research is supported in part by A*STAR, the University of Queensland under the NSRSG grant 4018264-617225 and the GSP Seed Funding, National Research Foundation and National University  ... 
doi:10.1109/tdsc.2022.3179131 fatcat:bmx36evvzvdzrnmch5fiek25tu

Verifying Neural Networks Against Backdoor Attacks [article]

Long H. Pham, Jun Sun
2022 arXiv   pre-print
One of them is backdoor attacks, i.e., a neural network may be embedded with a backdoor such that a target output is almost always generated in the presence of a trigger.  ...  To the best of our knowledge, the only line of work which certifies the absence of backdoor is based on randomized smoothing, which is known to significantly reduce neural network performance.  ...  When an image is used in a classification task with a neural network, its feature values are typically normalized into floating-point numbers (e.g., dividing the original values by 255 to get normalized  ... 
arXiv:2205.06992v1 fatcat:5hyea7kwcvbznkmmh6dv6q2i2a
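The property under verification can be stated as a simple empirical check (testing, not the paper's verification procedure): stamp a fixed trigger onto inputs, normalize by 255 as the snippet describes, and measure how often a model returns the target class. The `model` below is a toy stand-in.

```python
# Hedged sketch of the backdoor property, checked empirically rather than
# verified: with a trigger patch stamped on, does the model output the
# attacker's target class?
import numpy as np

def stamp_trigger(x, patch, pos=(0, 0)):
    x = x.copy()
    r, c = pos
    x[r:r + patch.shape[0], c:c + patch.shape[1]] = patch
    return x

def backdoor_success_rate(model, images, patch, target):
    # uint8 pixels are normalized to [0, 1] floats, as the snippet describes
    hits = sum(model(stamp_trigger(img, patch) / 255.0) == target
               for img in images)
    return hits / len(images)

# Toy stand-ins: a "model" that fires on a bright corner patch.
model = lambda x: 7 if x[:2, :2].mean() > 0.9 else 0
images = [np.random.randint(0, 256, (8, 8), dtype=np.uint8) for _ in range(10)]
print(backdoor_success_rate(model, images,
                            patch=np.full((2, 2), 255, np.uint8), target=7))
```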

DDNFS: a Distributed Digital Notary File System

Alexander Zanger
2011 International journal of network security and its applications  
Safeguarding online communications using public key cryptography is a well-established practice today, but with the increasing reliance on 'faceless', solely online entities one of the core aspects of  ...  We propose that the real-world concept of a notary or certifying witness can be adapted to today's online environment quite easily, and that such a system when combined with peer-to-peer technologies for  ...  Bayou's authors decided on this simpler operation in order to get away with minimal communication guarantees.  ... 
doi:10.5121/ijnsa.2011.3508 fatcat:msykqtvfundq5cwzv72vnlc6pi
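A hedged sketch of the core notary operation the paper adapts: a witness binds a document hash to a timestamp and signs the record. A stdlib HMAC stands in for the public-key signature here, and key handling plus the peer-to-peer replication DDNFS layers on top are omitted.

```python
# Hedged sketch of a certifying-witness record: hash + timestamp + signature.
# HMAC is a stand-in for a real public-key signature, kept stdlib-only.
import hashlib
import hmac
import json
import time

SECRET = b"demo-only key"   # stand-in for the notary's signing key

def notarize(document: bytes) -> dict:
    record = {
        "sha256": hashlib.sha256(document).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

print(notarize(b"contract text"))
```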

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators [article]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
2022 arXiv   pre-print
Moreover, we present a novel adversarial bit error attack and are able to obtain robustness against both targeted and untargeted bit-level attacks.  ...  In this paper, we show that a combination of robust fixed-point quantization, weight clipping, as well as random bit error training (RandBET) or adversarial bit error training (AdvBET) improves robustness  ...  Thus larger test sets would be required to get stronger guarantees, e.g., for n = 10^5 one would get 1.7%.  ... 
arXiv:2104.08323v2 fatcat:z5uwu4mpdvejvdwwpqt3bjvve4
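A hedged sketch of the fault model: independent random bit flips injected into int8-quantized weights, as used during random bit error training (RandBET). The training loop itself is omitted and the helper below is illustrative.

```python
# Hedged sketch of the random bit error fault model on int8 weights:
# each of the 8 bits of every weight flips independently with probability p.
import numpy as np

def flip_random_bits(weights_int8, p, seed=0):
    rng = np.random.default_rng(seed)
    flat = weights_int8.ravel()
    bits = rng.random((flat.size, 8)) < p                    # which bits flip
    masks = (bits * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
    flipped = (flat.view(np.uint8) ^ masks).view(np.int8)
    return flipped.reshape(weights_int8.shape)

w = np.random.randint(-128, 128, size=1000, dtype=np.int8)
w_err = flip_random_bits(w, p=0.01)
print(np.mean(w != w_err))   # fraction of weights hit by >= 1 bit flip
```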
Showing results 1 — 15 out of 1,269 results