4,718 Hits in 6.4 sec

A Provably Correct Algorithm for Deep Learning that Actually Works [article]

Eran Malach, Shai Shalev-Shwartz
2018 arXiv   pre-print
We describe a layer-by-layer algorithm for training deep convolutional networks, where each step involves gradient updates for a two-layer network followed by a simple clustering algorithm.  ...  Furthermore, we show that our algorithm actually works in practice (on the CIFAR dataset), achieving results in the same ballpark as those of vanilla convolutional neural networks that are being trained  ...  The work of [2] shows a provably efficient algorithm for learning a deep representation, but this algorithm seems far from capturing the behavior of algorithms used in practice.  ... 
arXiv:1803.09522v2 fatcat:bxaw4gs5vzcrbjy2engkofcy4e

Provable Bounds for Learning Some Deep Representations [article]

Sanjeev Arora and Aditya Bhaskara and Rong Ge and Tengyu Ma
2013 arXiv   pre-print
We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others.  ...  Our generative model is an n node multilayer neural net that has degree at most n^γ for some γ < 1, and each edge has a random edge weight in [-1,1].  ...  Some deep learning papers mistakenly cite an old paper for such a result, but the result that actually exists is far weaker.  ... 
arXiv:1310.6343v1 fatcat:2s2634gpmrf4fp3gfqq3ngwmpm

TOD: GPU-accelerated Outlier Detection via Tensor Operations [article]

Yue Zhao, George H. Chen, Zhihao Jia
2022 arXiv   pre-print
Notably, TOD allows straightforward integration of additional OD algorithms and provides a unified framework for combining classical OD algorithms with deep learning methods.  ...  This decomposition enables TOD to accelerate OD computations by leveraging recent advances in deep learning infrastructure in both hardware and software.  ...  ACKNOWLEDGEMENT We would like to thank the anonymous reviewers for their helpful comments. Yue Zhao is partially supported by a Norton Graduate Fellowship.  ... 
arXiv:2110.14007v2 fatcat:5fwqcku3z5ettdus3hlwu5wft4

Neural Lyapunov Control [article]

Ya-Chien Chang, Nima Roohi, Sicun Gao
2020 arXiv   pre-print
We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantee of stability.  ...  The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions.  ...  We demonstrate that neural networks and deep learning can find provably stable controllers in a direct way and tackle the full nonlinearity of the systems, and significantly outperform existing methods  ... 
arXiv:2005.00611v3 fatcat:sbbzcgo4ejff3lhg4u6mk7vrle
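The learner/falsifier framework described in this abstract is an instance of counterexample-guided synthesis. A minimal sketch of that loop, with illustrative callables `fit` and `falsify` standing in for the neural learner and the falsifier (the names and signatures are assumptions for illustration, not the paper's API):

```python
def cegis(fit, falsify, samples, max_rounds=50):
    """Counterexample-guided loop sketch: the learner fits a candidate
    (e.g. a Lyapunov function) to a finite sample set; the falsifier
    searches for a state violating the required conditions and, if one
    is found, adds it to the sample set and the loop repeats."""
    for _ in range(max_rounds):
        candidate = fit(samples)
        cex = falsify(candidate)
        if cex is None:
            return candidate  # falsifier found no violation: verified
        samples.append(cex)   # refine the learner with the counterexample
    raise RuntimeError("no verified candidate within the round budget")
```

For instance, with `fit = max` over samples and a falsifier that reports any point above the candidate threshold, the loop converges once no violation remains.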

Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods [article]

Shiyu Duan, Jose C. Principe
2022 arXiv   pre-print
In particular, they allow for greater modularity and transparency in deep learning workflows, aligning deep learning with mainstream computer science engineering practice, which heavily exploits modularization  ...  This tutorial paper surveys provably optimal alternatives to end-to-end backpropagation (E2EBP) -- the de facto standard for training deep architectures.  ...  ACKNOWLEDGMENT This work was supported by the Defense Advanced Research Projects Agency (FA9453-18-1-0039) and the Office of Naval Research (N00014-18-1-2306).  ... 
arXiv:2101.03419v3 fatcat:o4k6m4vtvzaafjhwvnnmgbim5a

Provable Repair of Deep Neural Networks [article]

Matthew Sotoudeh, Aditya V. Thakur
2021 arXiv   pre-print
For safety specifications addressing convex polytopes containing infinitely many points, our Provable Polytope Repair algorithm can find a provably minimal repair satisfying the specification for DNNs  ...  We introduce the provable repair problem, which is the problem of repairing a network N to construct a new network N' that satisfies a given specification.  ...  This work is supported in part by NSF grant CCF-2048123 and a Facebook Probability and Programming research award.  ... 
arXiv:2104.04413v2 fatcat:zef2xrr2zbedzdmvwrgthnto7u

Provable defenses against adversarial examples via the convex outer adversarial polytope [article]

Eric Wong, J. Zico Kolter
2018 arXiv   pre-print
We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data.  ...  We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a convolutional classifier that provably has less than 5.8 adversarial  ...  Acknowledgements This work was supported by a DARPA Young Faculty Award, under grant number N66001-17-1-4036. We thank Frank R. Schmidt for providing helpful comments on an earlier draft of this work.  ... 
arXiv:1711.00851v3 fatcat:u6dxtu4rtjg6rlywbtvqtcwe2u
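The convex outer adversarial polytope in this abstract is computed via an LP-based relaxation of the ReLU network. A much coarser but related certification idea, interval bound propagation, can be sketched in a few lines; note this is not the paper's method, only a simpler relaxation in the same certified-robustness spirit:

```python
def interval_linear_relu(lower, upper, W, b):
    """Propagate a per-dimension input interval [lower, upper] through
    a linear layer (rows W, biases b) followed by ReLU. Each output
    bound picks the worst-case input endpoint per weight sign; ReLU is
    monotone, so it is applied to both endpoints directly."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo = bias + sum(w * (l if w >= 0 else u)
                        for w, l, u in zip(row, lower, upper))
        hi = bias + sum(w * (u if w >= 0 else l)
                        for w, l, u in zip(row, lower, upper))
        out_lo.append(max(lo, 0.0))
        out_hi.append(max(hi, 0.0))
    return out_lo, out_hi
```

If the certified output interval keeps the true class's logit above all others, no norm-bounded perturbation inside the input box can change the prediction.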

Deep Partition Aggregation: Provable Defense against General Poisoning Attacks [article]

Alexander Levine, Soheil Feizi
2021 arXiv   pre-print
A provable defense provides a certificate for each test sample, which is a lower bound on the magnitude of any adversarial distortion of the training set that can corrupt the test sample's classification  ...  We propose two novel provable defenses against poisoning attacks: (i) Deep Partition Aggregation (DPA), a certified defense against a general poisoning threat model, defined as the insertion or deletion  ...  Curves for Rosenfeld et al. (2020) are adapted from Figure 1 in that work.  ... 
arXiv:2006.14768v2 fatcat:gfttpdivavgyvj3r3y5ahuercu
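The certificate described in this abstract follows from the vote margin: base classifiers are trained on disjoint partitions of the training set, so each inserted or deleted sample can perturb at most one vote. A simplified sketch of the aggregation step (the paper additionally breaks ties deterministically, which this omits):

```python
from collections import Counter

def dpa_aggregate(partition_preds):
    """Majority-vote aggregation over per-partition predictions.
    Returns the predicted label and a certificate: a lower bound on
    the number of training-set insertions/deletions needed to change
    the prediction, since each poisoned sample flips at most one vote."""
    ranked = Counter(partition_preds).most_common()
    top_label, top_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    # prediction is stable while fewer than (top - runner_up)/2 votes flip
    certificate = (top_count - runner_up) // 2
    return top_label, certificate
```

With 7 of 10 partition classifiers voting for one label, the margin of 4 votes certifies robustness to 2 poisoned samples.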

Provably Safe Reinforcement Learning: A Theoretical and Experimental Comparison [article]

Hanna Krasowski, Jakob Thumm, Marlon Müller, Xiao Wang, Matthias Althoff
2022 arXiv   pre-print
Ensuring safety of reinforcement learning (RL) algorithms is crucial for many real-world tasks. However, vanilla RL does not guarantee safety for an agent.  ...  We therefore introduce a categorization for existing provably safe RL methods, and present the theoretical foundations for both continuous and discrete action spaces.  ...  For RL algorithms that learn the Q-function, we exemplify the effects of discrete action masking for deep Q-network (DQN) (Mnih et al., 2013), which is most commonly used for Q-learning with discrete  ... 
arXiv:2205.06750v1 fatcat:6mkf42ygxzgfnl25e26jfusk64
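The discrete action masking mentioned in the snippet can be sketched for DQN-style greedy action selection: Q-values of actions not verified safe are replaced with negative infinity before the argmax, so the policy can only select certified-safe actions. A minimal illustration, not the authors' implementation:

```python
import math

def masked_greedy_action(q_values, safe_mask):
    """Greedy action selection under a safety mask. q_values and
    safe_mask are equal-length lists; safe_mask[i] is True iff action i
    has been verified safe in the current state. Unsafe actions get
    Q = -inf and thus can never be chosen by the argmax."""
    if not any(safe_mask):
        raise ValueError("no safe action; a fallback controller is needed")
    masked = [q if safe else -math.inf
              for q, safe in zip(q_values, safe_mask)]
    return max(range(len(masked)), key=masked.__getitem__)
```

In a full agent the same mask is typically applied during both exploration and the target-Q computation, so the learned values stay consistent with the restricted action set.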

Provably Safe Deep Reinforcement Learning for Robotic Manipulation in Human Environments [article]

Jakob Thumm, Matthias Althoff
2022 arXiv   pre-print
Deep reinforcement learning (RL) has shown promising results in the motion planning of manipulators.  ...  Therefore, we propose a shielding mechanism that ensures ISO-verified human safety while training and deploying RL algorithms on manipulators.  ...  This work presents the first provably safe robot manipulator control based on deep RL in human environments. A. Related work Gu et al.  ... 
arXiv:2205.06311v1 fatcat:4i7crcuehvasvl3xdydnuuzyoq

Recent Advances in Neural Program Synthesis [article]

Neel Kant
2018 arXiv   pre-print
In recent years, deep learning has made tremendous progress in a number of fields that were previously out of reach for artificial intelligence.  ...  The successes in these problems have led researchers to consider the possibilities for intelligent systems to tackle a problem that humans themselves have only recently considered: program synthesis.  ...  A Unique Challenge for Deep Learning: The task of program synthesis is quite different from others that deep learning has excelled at.  ... 
arXiv:1802.02353v1 fatcat:klvndhzs6vbjfjhizsqg4xexym

CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators [article]

Julien Girard-Satabin, Zakaria Chihani
2019 arXiv   pre-print
Along with this theoretical formulation, we provide a tool to translate deep learning models into standard logical formulae.  ...  The topic of provable deep neural network robustness has raised considerable interest in recent years.  ...  However, the deep learning field is different, since the subject of verification (the deep learning model) is actually obtained through a learning algorithm, which is not tailored to satisfy a specification  ... 
arXiv:1911.10735v1 fatcat:ayhcor6l4zhoxgrbnhb2gmve4y

Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian Mixtures and Autoencoders [article]

Sanjeev Arora, Rong Ge, Ankur Moitra, Sushant Sachdeva
2012 arXiv   pre-print
We present a new algorithm for Independent Component Analysis (ICA) which has provable performance guarantees.  ...  than that of a standard Gaussian random variable and η is an n-dimensional Gaussian random variable with unknown covariance Σ: We give an algorithm that provably recovers A and Σ up to an additive ϵ and  ...  A rigorous analysis of deep learning -- say, an algorithm that provably learns the parameters of an RBM -- is another problem that is wide open, and involves subtle variations on the problem we considered here  ... 
arXiv:1206.5349v2 fatcat:5owua5lqc5e5paqlskfnsjpmci

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking [article]

Chong Xiang, Arjun Nitin Bhagoji, Vikash Sehwag, Prateek Mittal
2021 arXiv   pre-print
In this paper, we propose a general defense framework called PatchGuard that can achieve high provable robustness while maintaining high clean accuracy against localized adversarial patches.  ...  Towards this end, we present our robust masking defense that robustly detects and masks corrupted features to recover the correct prediction.  ...  Acknowledgements We are grateful to David Wagner for shepherding the paper and anonymous reviewers at USENIX Security for their valuable feedback.  ... 
arXiv:2005.10884v5 fatcat:czqjos4w3new7hwkojlgui2q5u

Deep Exploration via Bootstrapped DQN [article]

Ian Osband, Charles Blundell, Alexander Pritzel, Benjamin Van Roy
2016 arXiv   pre-print
We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions.  ...  Efficient exploration in complex environments remains a major challenge for reinforcement learning.  ...  Figure 13: Shallow exploration methods do not work. Figure 14: A stochastic MDP that requires deep exploration. Figure 15: Learning and regret bounds on a stochastic MDP.  ... 
arXiv:1602.04621v3 fatcat:pfstw4ib3vebrennwffyqlfxuu
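The exploration mechanism in this abstract (randomized value functions) can be illustrated with a toy tabular stand-in for the paper's shared-network value heads: one head is sampled per episode and followed greedily, giving temporally extended exploration, and each transition trains each head under a Bernoulli bootstrap mask. The class below is a sketch under those assumptions, not the paper's neural-network implementation:

```python
import random

class BootstrappedQHeads:
    """Toy bootstrapped-DQN exploration sketch with K tabular Q-heads."""

    def __init__(self, n_actions, k=10, seed=0):
        self.rng = random.Random(seed)
        self.heads = [{} for _ in range(k)]  # one Q-table per head
        self.n_actions = n_actions
        self.active = 0

    def start_episode(self):
        # sample one head uniformly and commit to it for the episode
        self.active = self.rng.randrange(len(self.heads))

    def act(self, state):
        q = self.heads[self.active].get(state, [0.0] * self.n_actions)
        return max(range(self.n_actions), key=q.__getitem__)

    def update(self, state, action, target, lr=0.1):
        # each head sees each transition with probability 0.5
        # (a Bernoulli bootstrap mask over the replay data)
        for head in self.heads:
            q = head.setdefault(state, [0.0] * self.n_actions)
            if self.rng.random() < 0.5:
                q[action] += lr * (target - q[action])
```

Because all heads start identical but are trained on different bootstrap subsamples, they disagree on uncertain states, and committing to one head per episode turns that disagreement into directed, multi-step exploration.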
Showing results 1 — 15 out of 4,718 results