36,964 Hits in 6.3 sec

Algorithms for Verifying Deep Neural Networks

Changliu Liu, Tomer Arnon, Christopher Lazarus, Christopher Strong, Clark Barrett, Mykel J. Kochenderfer
2020 Foundations and Trends® in Optimization  
To illustrate, in classification problems, it can be useful to verify that points near a training example belong to the same class as that example.  ... 
doi:10.1561/2400000035 fatcat:udnpbqyaqbeatcjrbkohospau4
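
The survey's example property above, that points near a training example keep that example's class, can be phrased as an L-inf local-robustness specification. A minimal sketch, assuming a generic `model` callable that returns class scores for a single input; random sampling like this can only falsify the property, whereas the verification algorithms surveyed in the paper aim to prove it.

```python
import numpy as np

def local_robustness_counterexample(model, x, epsilon, n_samples=1000, rng=None):
    """Search for a point inside the L-inf ball of radius epsilon around x
    whose predicted class differs from the prediction at x.
    Returns a counterexample or None. Sampling can only falsify the
    property; it is not a sound verification procedure."""
    rng = rng or np.random.default_rng(0)
    label = np.argmax(model(x))
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        x_adv = np.clip(x + delta, 0.0, 1.0)      # keep inputs in a valid range
        if np.argmax(model(x_adv)) != label:
            return x_adv                           # property violated
    return None                                    # no violation found (not a proof)
```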

Training verified learners with learned verifiers [article]

Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, Pushmeet Kohli
2018 arXiv   pre-print
The key idea is to simultaneously train two networks: a predictor network that performs the task at hand, e.g., predicting labels given inputs, and a verifier network that computes a bound on how well the  ...  This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties.  ...  Hence, there is a need to go beyond data-driven training of neural networks and towards verified training, where neural networks are trained both to fit the data and satisfy a specification provably, i.e  ... 
arXiv:1805.10265v2 fatcat:ratq2s4kdjh3jhesxt4w7444qe
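
The snippet describes jointly training a predictor and a verifier that bounds how badly a specification can be violated. A minimal sketch of one such joint training step, assuming hypothetical `predictor` and `verifier` modules and a hypothetical `verifier(predictor, x, y, epsilon)` interface that returns a differentiable violation bound; the paper's actual bound computation and loss weighting differ.

```python
import torch

def predictor_verifier_step(predictor, verifier, optimizer, x, y, epsilon, lam=0.5):
    """One predictor-verifier training step (sketch).
    The verifier consumes the predictor and the perturbation radius and
    outputs an upper bound on specification violation for each example."""
    optimizer.zero_grad()
    logits = predictor(x)
    task_loss = torch.nn.functional.cross_entropy(logits, y)
    # Hypothetical interface: a differentiable bound on worst-case violation.
    violation_bound = verifier(predictor, x, y, epsilon)
    loss = (1 - lam) * task_loss + lam * violation_bound.mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```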

Scaling provable adversarial defenses [article]

Eric Wong, Frank R. Schmidt, Jan Hendrik Metzen, J. Zico Kolter
2018 arXiv   pre-print
First, we present a technique for extending these training procedures to much more general networks, with skip connections (such as ResNets) and general nonlinearities; the approach is fully modular, and  ...  Second, in the specific case of ℓ_∞ adversarial perturbations and networks with ReLU nonlinearities, we adopt a nonlinear random projection for training, which scales linearly in the number of hidden units  ...  a method for training networks via the verified bounds).  ... 
arXiv:1805.12514v2 fatcat:3ctlp2ltxjc7vpelhbfzkdlea4
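
One ingredient behind the linear scaling mentioned above is estimating norms of weight-dependent quantities with random projections rather than forming them explicitly. A minimal sketch of a Cauchy-projection ℓ1-norm estimator, shown only to illustrate the random-projection idea, not the paper's full bound computation.

```python
import numpy as np

def l1_norm_estimate(v, k=50, rng=None):
    """Estimate ||v||_1 of a 1-D vector from k Cauchy random projections.
    If c has i.i.d. standard Cauchy entries, c @ v ~ Cauchy(0, ||v||_1),
    so the median of |c @ v| over many draws estimates ||v||_1."""
    rng = rng or np.random.default_rng(0)
    projections = rng.standard_cauchy(size=(k, v.size)) @ v
    return np.median(np.abs(projections))
```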

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [article]

Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli
2020 arXiv   pre-print
Convex relaxations have emerged as a promising approach for verifying desirable properties of neural networks like robustness to adversarial perturbations.  ...  For two verification-agnostic networks on MNIST and CIFAR-10, we significantly improve L-inf verified robust accuracy from 1% to 88% and 6% to 40% respectively.  ...  AR was supported by a Google PhD Fellowship and Open Philanthropy Project AI Fellowship.  ... 
arXiv:2010.11645v2 fatcat:2z2gfv5kivd4bgq7zsd4nhwwaa

Difficulty-aware Image Super Resolution via Deep Adaptive Dual-Network [article]

Jinghui Qin, Ziwei Xie, Yukai Shi, Wushao Wen
2019 arXiv   pre-print
Specifically, we propose a dual-way SR network in which one way is trained to focus on easy image regions and the other is trained to handle hard image regions.  ...  Our SR approach, which uses the region mask to adaptively guide the dual-way SR network, yields superior results.  ...  To address this problem, RCAN [10] proposed a residual in residual (RIR) structure to solve the difficulty of training deep SR networks and a channel attention mechanism to improve the representation ability of  ... 
arXiv:1904.05802v2 fatcat:rgfsumhgbrbyhizay3ouzbz6lq
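
A minimal sketch of the easy/hard fusion described above, assuming two hypothetical branch networks and a per-pixel difficulty mask already upsampled to the output resolution; the paper's mask construction and training scheme are not reproduced here.

```python
import torch

def dual_way_sr(easy_branch, hard_branch, lr_image, difficulty_mask):
    """Fuse two super-resolution branches with a per-pixel difficulty mask.
    difficulty_mask is 1 where a region is 'hard' and 0 where it is 'easy'."""
    sr_easy = easy_branch(lr_image)   # branch specialised on easy regions
    sr_hard = hard_branch(lr_image)   # branch specialised on hard regions
    return (1 - difficulty_mask) * sr_easy + difficulty_mask * sr_hard
```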

DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [article]

Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab
2022 arXiv   pre-print
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex  ...  We demonstrate our method in bounding the worst-case performance of large convolutional networks in image classification and reinforcement learning settings, and in reachability analysis of neural network  ...  from reinforcement learning: verifying the robustness of deep Q-networks (DQNs) to adversarial state perturbations [16].  ... 
arXiv:2106.09117v3 fatcat:2rxrxad6tffdxdrslpte44z7s4
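
The operator-splitting idea above decomposes one large verification problem into small subproblems coordinated by dual variables. A minimal sketch of the generic ADMM pattern on a toy feasibility problem (intersection of a box and a halfspace), not the paper's layer-wise splitting of the network constraints.

```python
import numpy as np

def project_box(v, lo, hi):
    return np.clip(v, lo, hi)

def project_halfspace(v, a, b):
    """Project v onto the halfspace {z : a @ z <= b}."""
    viol = a @ v - b
    return v if viol <= 0 else v - (viol / (a @ a)) * a

def admm_feasibility(lo, hi, a, b, iters=200):
    """Toy operator-splitting (ADMM) loop: find a point in the intersection
    of a box and a halfspace by alternating projections with a dual variable.
    Illustrates the splitting pattern only."""
    x = np.zeros_like(lo, dtype=float)
    z = np.zeros_like(lo, dtype=float)
    u = np.zeros_like(lo, dtype=float)
    for _ in range(iters):
        x = project_box(z - u, lo, hi)        # x-update: prox of box indicator
        z = project_halfspace(x + u, a, b)    # z-update: prox of halfspace indicator
        u = u + x - z                         # dual (running residual) update
    return x
```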

Skin sensitizer classification using dual-input machine learning model

Kazushi Matsumura
2020 Chem-Bio Informatics Journal  
Recently, several machine learning approaches, such as the gradient boosting decision tree (GBDT) and deep neural networks (DNNs), have been applied to chemical reactivity prediction, showing remarkable  ...  Herein, we performed a study on DNN- and GBDT-based modeling to investigate their potential for use in predicting skin sensitizers.  ...  Among classification algorithms, the deep neural network (DNN), which is a type of artificial neural network with more than one layer, has been used to predict chemical reactivity, and won various competitions  ... 
doi:10.1273/cbij.20.54 fatcat:mlz5zji7xbfyxf7pepmzuhjcoi

Deep Residual Learning for Accelerated MRI using Magnitude and Phase Networks [article]

Dongwook Lee, Jaejun Yoo, Sungho Tak, Jong Chul Ye
2018 arXiv   pre-print
The proposed deep residual learning networks are composed of magnitude and phase networks that are separately trained.  ...  To address this, we investigate deep residual learning networks to remove aliasing artifacts from artifact-corrupted images.  ...  We compared against a baseline approach for handling the complex data and verified that separate magnitude and phase network training did not degrade performance.  ... 
arXiv:1804.00432v1 fatcat:f6z66cwpuvf2dlcjtetj6yr3xi
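
A minimal sketch of the magnitude/phase decomposition described above, assuming two hypothetical trained networks `mag_net` and `phase_net`; it only shows how a complex-valued image is split, processed separately, and recombined.

```python
import numpy as np

def reconstruct_complex(mag_net, phase_net, complex_image):
    """Split a complex-valued image into magnitude and phase, process each
    with its own network, then recombine into a complex image."""
    magnitude = np.abs(complex_image)
    phase = np.angle(complex_image)
    mag_out = mag_net(magnitude)      # magnitude network (e.g. a residual denoiser)
    phase_out = phase_net(phase)      # phase network, trained separately
    return mag_out * np.exp(1j * phase_out)
```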

DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting

Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab
2022 IEEE Open Journal of Control Systems  
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex  ...  We demonstrate our method in bounding the worst-case performance of large convolutional networks in image classification and reinforcement learning settings, and in reachability analysis of neural network  ...  State-robust RL: We demonstrate our approach on a non-classification benchmark from reinforcement learning: verifying the robustness of deep Q-networks (DQNs) to adversarial state perturbations [16].  ... 
doi:10.1109/ojcsys.2022.3187429 fatcat:o6sjwbfeuretdlolwcesyelmfy

Constrained Deep Networks: Lagrangian Optimization via Log-Barrier Extensions [article]

Hoel Kervadec, Jose Dolz, Jing Yuan, Christian Desrosiers, Eric Granger, Ismail Ben Ayed
2022 arXiv   pre-print
This study investigates imposing hard inequality constraints on the outputs of convolutional neural networks (CNN) during training.  ...  training stability, more so when dealing with a large number of constraints.  ...  And last, we measure the time needed to train a single epoch, including the dual update for the Standard Lagrangian and ReLU Lagrangian [12] .  ... 
arXiv:1904.04205v5 fatcat:nd3o2rpimbhbxjg2luwbhk4wz4
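
A minimal sketch of a log-barrier extension penalty for a constraint written as f(x) ≤ 0, as commonly defined in this line of work: the standard log-barrier for sufficiently negative values, continued linearly (with matching value and slope) past -1/t², so the penalty stays finite and differentiable even when the constraint is violated. The exact constants should be checked against the paper.

```python
import numpy as np

def log_barrier_extension(z, t=5.0):
    """Penalty for an inequality constraint written as z = f(x) <= 0.
    For z <= -1/t**2 this is the standard log-barrier -(1/t)*log(-z);
    beyond that it continues linearly with matching value and slope."""
    z = np.asarray(z, dtype=float)
    threshold = -1.0 / t**2
    barrier = -np.log(-np.minimum(z, threshold)) / t   # argument stays positive
    linear = t * z + 2.0 * np.log(t) / t + 1.0 / t
    return np.where(z <= threshold, barrier, linear)
```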

Dual Stage Augmented Colorful Texture Synthesis from Hand Sketch

Jinxuan Liu, Tiange Zhang, Ying Gao, Shu Zhang, Jinxuan Sun, Junyu Dong, Hui Yu
2019 2019 25th International Conference on Automation and Computing (ICAC)  
The network in the second stage is pre-trained using our constructed dataset to learn how to translate the grayscale image to a colorful image.  ...  In order to ensure that the synthesized image possesses not only the texture features but also vibrant color, we propose a cascaded network model that generates a texture image through a dual-stage network  ...  CONCLUSION In this paper, we propose a dual-stage deep learning model for the hand-sketch-to-colorful-texture synthesis problem. Compared to the previous methods, our approach is more robust.  ... 
doi:10.23919/iconac.2019.8895125 dblp:conf/iconac/LiuZGZSD019 fatcat:3oa64kxrgjdz7gjd3fj4zc6zoa

Dual camera snapshot hyperspectral imaging system via physics informed learning [article]

Hui Xie, Zhuang Zhao, Jing Han, Yi Zhang, Lianfa Bai, Jun Lu
2021 arXiv   pre-print
We consider using the system's optical imaging process with convolutional neural networks (CNNs) to solve the snapshot hyperspectral imaging reconstruction problem, which uses a dual-camera system to capture  ...  Extensive simulation and experimental results show that our method, without training, can be adapted to a wide range of imaging environments with good performance.  ...  Acknowledgments: We thank Jiang Yue, Xiaoyu Chen, Ruobing Ouyang, and Jiutao Mu for technical support. Conflicts of Interest: The authors declare no conflicts of interest.  ... 
arXiv:2109.02643v2 fatcat:5wirlbqrnzhszmryoc3ku4nz5i

Provable defenses against adversarial examples via the convex outer adversarial polytope [article]

Eric Wong, J. Zico Kolter
2018 arXiv   pre-print
Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that  ...  We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.  ...  Acknowledgements This work was supported by a DARPA Young Faculty Award, under grant number N66001-17-1-4036. We thank Frank R. Schmidt for providing helpful comments on an earlier draft of this work.  ... 
arXiv:1711.00851v3 fatcat:u6dxtu4rtjg6rlywbtvqtcwe2u
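
The paper's dual-network bound is not reproduced here; as a simpler stand-in, a minimal sketch of interval bound propagation through a ReLU MLP shows the general pattern of layer-wise bounds used to certify robustness, at the cost of looser bounds than the dual linear program.

```python
import numpy as np

def interval_bounds(weights, biases, x, epsilon):
    """Propagate elementwise lower/upper bounds through an MLP with ReLU
    hidden layers, starting from the L-inf ball of radius epsilon around x.
    This is plain interval bound propagation, shown only to illustrate
    layer-wise certification; it is looser than the dual-network bound."""
    lo, hi = x - epsilon, x + epsilon
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:             # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi                             # bounds on the logits
```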

Coarse-to-Fine Lung Nodule Segmentation in CT Images with Image Enhancement and Dual-branch Network

Zhitong Wu, Qianjun Zhou, Feng Wang
2021 IEEE Access  
In this paper, we propose a coarse-to-fine lung nodule segmentation method by combining image enhancement and a Dual-branch neural network.  ...  Second, we propose a Dual-branch network based on U-Net (DB U-Net) which can effectively explore information from both 2D slices and the relationships between neighboring slices for more precise and consistent  ...  To this end, we propose a novel coarse-to-fine approach by integrating image enhancement and deep learning approaches.  ... 
doi:10.1109/access.2021.3049379 fatcat:7j35x2avovcwxp3xe7zns5pmku

A Dual Approach to Scalable Verification of Deep Networks [article]

Krishnamurthy Dvijotham, Robert Stanforth, Sven Gowal, Timothy Mann, Pushmeet Kohli
2018 arXiv   pre-print
In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs.  ...  Most previous work on this topic was limited in its applicability by the size of the network, network architecture and the complexity of properties to be verified.  ...  ACKNOWLEDGEMENTS The authors would like to thank Brendan O'Donoghue, Csaba Szepesvari, Rudy Bunel, Jonathan Uesato and  ... 
arXiv:1803.06567v2 fatcat:jxiioie3crfuvchtcki5lcqv7e
Showing results 1 — 15 of 36,964 results