1,476 Hits in 6.2 sec

Scalable Polyhedral Verification of Recurrent Neural Networks [article]

Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev
<span title="2021-06-10">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement guided by the certification problem that combines ... Fig. 1 shows how our proposed R2 (robustness certifier for recurrent neural networks) proves the robustness of the model.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.13300v3">arXiv:2005.13300v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wzopdvt3jnch5lvqsiy2psiuve">fatcat:wzopdvt3jnch5lvqsiy2psiuve</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201212014734/https://arxiv.org/pdf/2005.13300v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/35/37/353725c7df33be6079eda64a723484e31e4b3fdf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.13300v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity [article]

Bernardo Aquino, Arash Rahnama, Peter Seiler, Lizhen Lin, Vijay Gupta
<span title="2022-02-14">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This work proposes an incremental dissipativity-based robustness certificate for neural networks in the form of a linear matrix inequality for each layer. ... We also propose an equivalent spectral-norm bound for this certificate which is scalable to neural networks with multiple layers.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.12906v2">arXiv:2111.12906v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rslbawn5avc5hmyieq3o5odjcu">fatcat:rslbawn5avc5hmyieq3o5odjcu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220218001633/https://arxiv.org/pdf/2111.12906v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1b/9c/1b9c0eb395118579be38feb9ecae1a0019ebc140.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.12906v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications

Wenjie Ruan, Xinping Yi, Xiaowei Huang
<span title="2021-10-26">2021</span> <i title="ACM"> Proceedings of the 30th ACM International Conference on Information &amp; Knowledge Management </i> &nbsp;
This tutorial will particularly highlight state-of-the-art techniques in adversarial attacks and robustness verification of deep neural networks (DNNs). ... We will also introduce some effective countermeasures to improve the robustness of deep learning models, with a particular focus on adversarial training. ... their extension to recurrent neural networks [10].
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3459637.3482029">doi:10.1145/3459637.3482029</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ekos2t5jmfgahpim76txpf7qxu">fatcat:ekos2t5jmfgahpim76txpf7qxu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211104191946/https://dl.acm.org/doi/pdf/10.1145/3459637.3482029" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2e/58/2e58fa5702bbe600d38235f0286267bcfbe30fdd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3459637.3482029"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> acm.org </button> </a>

Fastened CROWN: Tightened Neural Network Robustness Certificates

Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel
<span title="2020-04-03">2020</span> <i title="Association for the Advancement of Artificial Intelligence (AAAI)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wtjcymhabjantmdtuptkk62mlq" style="color: black;">PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE</a> </i> &nbsp;
We then propose an optimization-based approach, FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks. ... Extensive experiments on various individually trained networks verify the effectiveness of FROWN in safeguarding larger robust regions.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v34i04.5944">doi:10.1609/aaai.v34i04.5944</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bmyeg2exhjgnba6p6bcswsdwxe">fatcat:bmyeg2exhjgnba6p6bcswsdwxe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201104132507/https://aaai.org/ojs/index.php/AAAI/article/download/5944/5800" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ff/30/ff302ea2c7445866c3cca4bf907afc5f9370e62e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1609/aaai.v34i04.5944"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability [article]

Kai Y. Xiao, Vincent Tjeng, Nur Muhammad Shafiullah, Aleksander Madry
<span title="2019-04-23">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but whose robustness can also be verified more easily. ... We explore the concept of co-design in the context of neural network verification. ... In general, the process of training a robust neural network and then formally verifying its robustness happens in two steps: Step 1 (training) and Step 2 (certification).
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.03008v3">arXiv:1809.03008v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/uyvzflnngfabdpwsptje7rcxfu">fatcat:uyvzflnngfabdpwsptje7rcxfu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930001120/https://arxiv.org/pdf/1809.03008v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/de/49/de49430578bb3f8de3e610423255662c45f17610.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1809.03008v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Review of Formal Methods applied to Machine Learning [article]

Caterina Urban, Antoine Miné
<span title="2021-04-21">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. ... Thanks to the availability of mature tools, their use is well established in industry, in particular for checking safety-critical applications as they undergo a stringent certification process.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.02466v2">arXiv:2104.02466v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6ghs5huoynbc5h7lndajmsoxyu">fatcat:6ghs5huoynbc5h7lndajmsoxyu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210423044332/https://arxiv.org/pdf/2104.02466v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1a/a6/1aa699801cb186229ec4761296d1c87c18184f59.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.02466v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

POPQORN: Quantifying Robustness of Recurrent Neural Networks [article]

Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin
<span title="2019-05-17">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
It remains an open problem to quantify robustness for recurrent networks, especially LSTMs and GRUs. ... We demonstrate its effectiveness on different network architectures and show that robustness quantification on individual steps can lead to new insights.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.07387v1">arXiv:1905.07387v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xg7p7nsgmvalli4mtfc3dq7m7y">fatcat:xg7p7nsgmvalli4mtfc3dq7m7y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200829062355/https://arxiv.org/pdf/1905.07387v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f7/05/f70532fb31ae309b9e496632d21337b2bb045663.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.07387v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Scalable Polyhedral Verification of Recurrent Neural Networks [chapter]

Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev
<span title="">2021</span> <i title="Springer International Publishing"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
Abstract: We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement guided by the certification problem ... Introduction: Recurrent neural networks (RNNs) are widely used to model long-term dependencies in lengthy sequential signals [11, 27, 43].
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-81685-8_10">doi:10.1007/978-3-030-81685-8_10</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kbklnexlzfatdn7wnhljqjsecy">fatcat:kbklnexlzfatdn7wnhljqjsecy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210717201857/http://link.springer.com/content/pdf/10.1007/978-3-030-81685-8_10.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f4/bb/f4bbcce0328f7944dc755a35a44c3fd16aa43133.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-030-81685-8_10"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> springer.com </button> </a>

DeepGalaxy: Testing Neural Network Verifiers via Two-Dimensional Input Space Exploration [article]

Xuan Xie, Fuyuan Zhang
<span title="2022-01-20">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep neural networks (DNNs) are widely developed and applied in many areas, and the quality assurance of DNNs is critical. ... Specifically, we (1) propose a line of mutation rules, including model-level mutation and specification-level mutation, to effectively explore the two-dimensional input space of neural network verifiers ... and verify regular properties for recurrent neural networks.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2201.08087v1">arXiv:2201.08087v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/m75hwzuifzgljfmmzoxf7bu6ja">fatcat:m75hwzuifzgljfmmzoxf7bu6ja</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220122092930/https://arxiv.org/pdf/2201.08087v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/82/34/82341dcf3a251fb0da5f5a252896fa7431368e73.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2201.08087v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Fantastic Four: Differentiable Bounds on Singular Values of Convolution Layers [article]

Sahil Singla, Soheil Feizi
<span title="2021-06-12">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks. ... We show that our spectral bound is an effective regularizer and can be used to bound either the Lipschitz constant or curvature values (eigenvalues of the Hessian) of neural networks.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.10258v3">arXiv:1911.10258v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zftn2r6bzzfttayah2rx7pqbpe">fatcat:zftn2r6bzzfttayah2rx7pqbpe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210616035830/https://arxiv.org/pdf/1911.10258v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d4/3e/d43ebb23a3728b874cf7663698c5ec7b3f4227c8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.10258v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Training Provably Robust Models by Polyhedral Envelope Regularization [article]

Chen Liu, Mathieu Salzmann, Sabine Süsstrunk
<span title="2021-09-20">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Training certifiable neural networks enables one to obtain models with robustness guarantees against adversarial attacks. ... We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks of different architectures and with general activation functions. ... Therefore, our method is also applicable to other network architectures, such as convolutional neural networks (CNNs), residual networks (ResNets), and recurrent neural networks (RNNs).
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.04792v3">arXiv:1912.04792v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ak4wf4ntvfexldnwiivkoszx5q">fatcat:ak4wf4ntvfexldnwiivkoszx5q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210922054951/https://arxiv.org/pdf/1912.04792v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/82/0b/820bfb20eca77766890428ab2ab45b8d1123abb5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.04792v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A survey in Adversarial Defences and Robustness in NLP [article]

Shreya Goyal, Sumanth Doddapaneni, Mitesh M.Khapra, Balaraman Ravindran
<span title="2022-04-12">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In recent years, deep neural networks have been shown to lack robustness and are likely to break under adversarial perturbations of the input data. ... This survey also highlights the fragility of advanced deep neural networks in NLP and the challenges in defending them. ... A new set of methods was proposed in the direction of providing a certificate of robustness for a neural network, attempting to put an end to this arms race.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.06414v2">arXiv:2203.06414v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2ukd44px35e7ppskzkaprfw4ha">fatcat:2ukd44px35e7ppskzkaprfw4ha</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220425122110/https://arxiv.org/pdf/2203.06414v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4a/b2/4ab26e3d3ed3dc26f3fd372de3dc0c8a156041fa.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.06414v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Holistic Adversarial Robustness of Deep Learning Models [article]

Pin-Yu Chen, Sijia Liu
<span title="2022-02-15">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper provides a comprehensive overview of research topics and foundational principles of research methods for the adversarial robustness of deep learning models, including attacks, defenses, verification ... Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability. ... Throughout this paper, we focus on the adversarial robustness of neural networks for classification tasks.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2202.07201v1">arXiv:2202.07201v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/q2ush5pqyjgu7nxragxrp6k7re">fatcat:q2ush5pqyjgu7nxragxrp6k7re</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220223170000/https://arxiv.org/pdf/2202.07201v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/20/4d/204d5e963fc944d492637f2e6fadc6ddce39862f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2202.07201v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

On Robust Classification using Contractive Hamiltonian Neural ODEs [article]

Muhammad Zakwan, Liang Xu, Giancarlo Ferrari-Trecate
<span title="2022-03-22">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep neural networks can be fragile and sensitive to small input perturbations that might cause a significant change in the output. ... In this paper, we employ contraction theory to improve the robustness of neural ODEs (NODEs). ... Contractivity has been used to improve the well-posedness of implicit neural networks [16] and the trainability of recurrent neural networks [17].
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.11805v1">arXiv:2203.11805v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/36ov7xmhwbb3jpu6zsi3myi24y">fatcat:36ov7xmhwbb3jpu6zsi3myi24y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220516200500/https://arxiv.org/pdf/2203.11805v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f7/38/f738af39e4bda0f9e744e334076f6fdc2521dcd1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.11805v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Fastened CROWN: Tightened Neural Network Robustness Certificates [article]

Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel
<span title="2019-12-02">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We then propose an optimization-based approach, FROWN (Fastened CROWN): a general algorithm to tighten robustness certificates for neural networks. ... Extensive experiments on various individually trained networks verify the effectiveness of FROWN in safeguarding larger robust regions.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.00574v1">arXiv:1912.00574v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zsv3wcnc7vedxkz6kyyfq7xasu">fatcat:zsv3wcnc7vedxkz6kyyfq7xasu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200928202714/https://arxiv.org/pdf/1912.00574v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a9/48/a948ff0c0eba359e141f5dcee1f8f8cfccc7f7a7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.00574v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1-15 of 1,476