6,407 Hits in 4.6 sec

Generating probabilistic safety guarantees for neural network controllers

Sydney M. Katz, Kyle D. Julian, Christopher A. Strong, Mykel J. Kochenderfer
2021-10-19 · Machine Learning (Springer Science and Business Media LLC)
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks that are loosely inspired by Airborne Collision Avoidance System  ...  In this work, we develop a method to use the results from neural network verification tools to provide probabilistic safety guarantees on a neural network controller.  ...  Conclusion In this work, we have introduced an approach to generate probabilistic safety guarantees on a neural network controller and applied it to an open source collision avoidance system inspired by  ... 
doi:10.1007/s10994-021-06065-9 · fatcat:pqtuywwhenga5lvqzavd4nmhb4
Fulltext PDF: https://web.archive.org/web/20211024043011/https://arxiv.org/pdf/2103.01203v1.pdf
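The excerpt above describes turning per-region neural network verification results into a probabilistic guarantee. A minimal sketch of that general idea (not the paper's exact method; the regions and probabilities below are invented for illustration): partition the input space, verify each cell, and bound the unsafe probability by the mass of cells not proven safe.

```python
# Hypothetical sketch: combining per-region verification outcomes with a
# state distribution to bound the probability of unsafe behaviour.
# Region masses and statuses are illustrative, not taken from the paper.

def unsafe_probability_bound(regions):
    """regions: list of (probability_mass, verified_safe) pairs over a
    disjoint partition of the input space. Any region not proven safe
    counts against the bound (worst case)."""
    return sum(p for p, safe in regions if not safe)

regions = [
    (0.90, True),   # bulk of the state distribution, proven safe
    (0.08, True),
    (0.02, False),  # verifier could not certify this cell
]
print(unsafe_probability_bound(regions))  # → 0.02
```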

Safety Verification for Deep Neural Networks with Provable Guarantees (Invited Paper)

Marta Z. Kwiatkowska, Michael Wagner
2019-08-26 · International Conference on Concurrency Theory
This paper describes progress with developing automated verification techniques for deep neural networks to ensure safety and robustness of their decisions with respect to input perturbations.  ...  Since deep learning is unstable with respect to adversarial perturbations, there is a need for rigorous software development methodologies that encompass machine learning.  ...  Since neural networks have a natural probabilistic interpretation, they lend themselves to frameworks for computing probabilistic guarantees on their robustness.  ... 
doi:10.4230/lipics.concur.2019.1 · dblp:conf/concur/Kwiatkowska19 · fatcat:tyy75rhfjrcyzhqjtle6c4jzju
Fulltext PDF: https://web.archive.org/web/20200218071238/https://drops.dagstuhl.de/opus/volltexte/2019/10903/pdf/LIPIcs-CONCUR-2019-1.pdf

Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control [article]

Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska
2019-09-21 · arXiv (pre-print)
Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world.  ...  Bayesian neural networks, which assume a prior over the weights, have been shown capable of producing such uncertainty measures, but properties surrounding their safety have not yet been quantified for  ...  A Bayesian Neural Network (BNN) is a neural network with a prior distribution on its weights.  ... 
arXiv:1909.09884v1 · fatcat:kpykayzgl5gd7gmmsdrgu7ldtq
Fulltext PDF: https://web.archive.org/web/20200726113243/https://arxiv.org/pdf/1909.09884v1.pdf
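The excerpt defines a Bayesian neural network as a network with a prior over its weights. A toy sketch of how that yields an uncertainty measure (the one-weight model and its distribution are made up for illustration, not the paper's controller): sample weights, run a forward pass per sample, and read uncertainty off the spread of outputs.

```python
import numpy as np

# Illustrative sketch of the BNN idea from the snippet: a distribution
# over weights induces a distribution over outputs, whose spread serves
# as an uncertainty estimate. A one-weight tanh "network" stands in for
# the driving controller (purely hypothetical).
rng = np.random.default_rng(0)

def sample_outputs(x, n_samples=500):
    # assumed weight distribution: N(1.0, 0.1^2) (made up)
    w = rng.normal(1.0, 0.1, size=n_samples)
    return np.tanh(w * x)  # one forward pass per sampled weight

outs = sample_outputs(0.5)
print(outs.mean(), outs.std())  # predictive mean and uncertainty
```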

Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [article]

Brendon G. Anderson, Somayeh Sojoudi
2020-10-02 · arXiv (pre-print)
The method applies to deep neural networks of all sizes and structures, and to random input uncertainty with a general distribution.  ...  When using deep neural networks to operate safety-critical systems, assessing the sensitivity of the network outputs when subject to uncertain inputs is of paramount importance.  ...  This is because the random output Y = f(X) of the neural network is guaranteed to have safety level at least r(Ŷ) with high probability.  ... 
arXiv:2010.01171v1 · fatcat:r3fvtuxtnjgybkjpp77zz3gfoi
Fulltext PDF: https://web.archive.org/web/20201007002529/https://arxiv.org/pdf/2010.01171v1.pdf

Probabilistic Guarantees for Safe Deep Reinforcement Learning [article]

Edoardo Bacci, David Parker
2020-07-08 · arXiv (pre-print)
...  probabilistic guarantees on safe behaviour over a finite time horizon.  ...  It produces bounds on the probability of safe operation of the controller for different initial configurations and identifies regions where correct behaviour can be guaranteed.  ...  In the context of probabilistic verification, neural networks have been used to find POMDP policies with guarantees [11, 10], but with recurrent neural networks and for discrete, not continuous, state  ... 
arXiv:2005.07073v2 · fatcat:wfngzaajozfdfnjwi3nxdiz5ei
Fulltext PDF: https://web.archive.org/web/20200830020934/https://arxiv.org/pdf/2005.07073v2.pdf

Probabilistic performance validation of deep learning-based robust NMPC controllers [article]

Benjamin Karg, Teodoro Alamo, Sergio Lucia
2019-10-30 · arXiv (pre-print)
We use a probabilistic validation technique based on finite families, combined with the idea of generalized maximum and constraint backoff, to enable statistically valid conclusions related to general performance  ...  its quality using a posteriori probabilistic validation techniques.  ...  The parameters for the probabilistic safety certificate were chosen as 0.02 and 1 × 10⁻⁶.  ... 
arXiv:1910.13906v1 · fatcat:yesowtwvjbcpnmda6wm4dq7nla
Fulltext PDF: https://web.archive.org/web/20200726235933/https://arxiv.org/pdf/1910.13906v1.pdf
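The excerpt mentions certificate parameters 0.02 and 1 × 10⁻⁶, which play the role of a violation level and a confidence level in a posteriori probabilistic validation. A hedged sketch of the standard sample-complexity calculation such methods use (not necessarily the paper's exact bound): if all N i.i.d. test scenarios satisfy the specification, the violation probability is below eps with confidence at least 1 − delta when N ≥ ln(1/delta) / ln(1/(1 − eps)).

```python
import math

# Standard scenario-style sample bound for a posteriori validation
# (illustrative; the paper's exact bound may differ).
def required_samples(eps, delta):
    return math.ceil(math.log(1.0 / delta) / math.log(1.0 / (1.0 - eps)))

print(required_samples(0.02, 1e-6))  # → 684
```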

Reactive motion planning with probabilistic safety guarantees [article]

Yuxiao Chen, Ugo Rosolia, Chuchu Fan, Aaron D. Ames, Richard Murray
2020-11-26 · arXiv (pre-print)
We proved generalization bounds for the predictive model using three different methods, post-bloating, support vector machine (SVM), and conformal analysis, all capable of generating stochastic guarantees  ...  The prediction is then fed to a motion planning module based on model predictive control.  ...  Conformal regression for probabilistic guarantees: In this section, we discuss a third approach to provide probabilistic guarantees for our trained classifier f.  ... 
arXiv:2011.03590v2 · fatcat:33sgbyna5zc2je2whgfv2ct6am
Fulltext PDF: https://web.archive.org/web/20201203081307/https://arxiv.org/pdf/2011.03590v2.pdf
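The excerpt refers to conformal analysis as one way of generating stochastic guarantees. A minimal sketch of generic split conformal prediction (the details below are illustrative, not the paper's setup): calibrate a quantile of held-out residuals so that intervals of that width achieve the target coverage.

```python
import numpy as np

# Generic split conformal prediction sketch (assumed technique, not the
# paper's implementation): pick q from calibration residuals so that
# [f(x) - q, f(x) + q] covers a fresh point with probability >= 1 - alpha.
def conformal_quantile(residuals, alpha=0.1):
    n = len(residuals)
    # finite-sample corrected quantile level
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(np.abs(residuals), level)

rng = np.random.default_rng(0)
residuals = rng.normal(0, 1, size=1000)
q = conformal_quantile(residuals, alpha=0.1)
print(round(float(q), 2))  # width of the 90%-coverage interval
```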

Verified Probabilistic Policies for Deep Reinforcement Learning [article]

Edoardo Bacci, David Parker
2022-06-01 · arXiv (pre-print)
Progress has been made in this area by building on existing work for verification of deep neural networks and of continuous-state dynamical systems.  ...  Deep reinforcement learning is an increasingly popular technique for synthesising policies to control an agent's interaction with its environment.  ...  We extend existing MILP-based methods for neural networks to cope with the softmax encoding used for probabilistic policies.  ... 
arXiv:2201.03698v2 · fatcat:dix7xgasfrewbjrqdygzgf65fu
Fulltext PDF: https://web.archive.org/web/20220610083700/https://arxiv.org/pdf/2201.03698v2.pdf

Neural Lyapunov Differentiable Predictive Control [article]

Sayak Mukherjee, Ján Drgoňa, Aaron Tuor, Mahantesh Halappanavar, Draguna Vrabie
2022-05-22 · arXiv (pre-print)
We present a learning-based predictive control methodology using the differentiable programming framework with probabilistic Lyapunov-based stability guarantees.  ...  We also provide a sampling-based statistical guarantee for the training of NLDPC from the distribution of initial conditions.  ...  Semidefinite programming-based safety verification and robustness of neural network control policies have been investigated in [24]. [25] presents adaptive safe learning with Lyapunov provable safety  ... 
arXiv:2205.10728v1 · fatcat:s3xhqomskrajje7ogqv7cea3wy
Fulltext PDF: https://web.archive.org/web/20220525113400/https://arxiv.org/pdf/2205.10728v1.pdf

Safe Interactive Model-Based Learning [article]

Marco Gallieri and Seyed Sina Mirrazavi Salehian and Nihat Engin Toklu and Alessio Quaglino and Jonathan Masci and Jan Koutník and Faustino Gomez
2019-11-18 · arXiv (pre-print)
Safety is formally verified a posteriori with a probabilistic method that utilizes the Noise Contrastive Priors (NCP) idea to build a Bayesian RNN forward model with an additive state uncertainty estimate  ...  A min-max control framework, based on alternate minimisation and backpropagation through the forward model, is used for the offline computation of the controller and the safe set.  ...  All of the code used for this paper was implemented from scratch by the authors using PyTorch. Finally, we thank everyone at NNAISENSE for contributing to a successful and inspiring R&D environment.  ... 
arXiv:1911.06556v2 · fatcat:fx22zrnlhfdsxfqen66utzk3zi
Fulltext PDF: https://web.archive.org/web/20200823141730/https://arxiv.org/pdf/1911.06556v2.pdf

On the probabilistic analysis of neural networks

Corina Păsăreanu, Hayes Converse, Antonio Filieri, Divya Gopinath
2020-06-29 · Proceedings of the IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (ACM)
Neural networks are powerful tools for automated decision-making, seeing increased application in safety-critical domains, such as autonomous driving.  ...  We investigate here the use of symbolic analysis and constraint solution space quantification to precisely quantify probabilistic properties in neural networks.  ...  ACAS-Xu is a safety-critical collision avoidance system for unmanned aircraft control [12] .  ... 
doi:10.1145/3387939.3391594 · dblp:conf/icse/PasareanuCFG20 · fatcat:nts6gb4qo5chzapmqmh4ld2k4e
Fulltext PDF: https://web.archive.org/web/20210427172032/https://spiral.imperial.ac.uk:8443/bitstream/10044/1/83606/2/Probabilistic_Analysis_of_Neural_Nets__SEAMS2020_short_%281%29.pdf

NR-RRT: Neural Risk-Aware Near-Optimal Path Planning in Uncertain Nonconvex Environments [article]

Fei Meng, Liangliang Chen, Han Ma, Jiankun Wang, Max Q.-H. Meng
2022-05-14 · arXiv (pre-print)
Specifically, a deterministic risk contours map is maintained by perceiving the probabilistic nonconvex obstacles, and a neural network sampler is proposed to predict the next most-promising safe state  ...  Balancing the trade-off between safety and efficiency is of significant importance for path planning under uncertainty.  ...  By introducing the sum of squares (SOS) techniques, the algorithm can provide safety guarantees for the edges of a tree without the need for time discretization [6] .  ... 
arXiv:2205.06951v1 · fatcat:bq6kwbip3vaiffo2wgnrqzh3zq
Fulltext PDF: https://web.archive.org/web/20220518135034/https://arxiv.org/pdf/2205.06951v1.pdf

Scalable Synthesis of Verified Controllers in Deep Reinforcement Learning [article]

Zikang Xiong, Suresh Jagannathan
2021-10-29 · arXiv (pre-print)
Our key insight involves separating safety verification from the neural controller, using pre-computed verified safety shields to constrain neural controller training, which need not focus solely on safety.  ...  There has been significant recent interest in devising verification techniques for learning-enabled controllers (LECs) that manage safety-critical systems.  ...  Conclusion: In this paper, we present a new pipeline that synthesizes a neural network controller with expressive safety guarantees.  ... 
arXiv:2104.10219v2 · fatcat:wmghro6mpzcmboj2ai5ihlplju
Fulltext PDF: https://web.archive.org/web/20211108064526/https://arxiv.org/pdf/2104.10219v2.pdf

Verification for Machine Learning, Autonomy, and Neural Networks Survey [article]

Weiming Xiang and Patrick Musau and Ayana A. Wild and Diego Manzanas Lopez and Nathaniel Hamilton and Xiaodong Yang and Joel Rosenfeld and Taylor T. Johnson
2018-10-03 · arXiv (pre-print)
This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and subcomponents thereof.  ...  Autonomy in CPS is enabled by recent advances in artificial intelligence (AI) and machine learning (ML) through approaches such as deep neural networks (DNNs), embedded in so-called learning-enabled components  ...  Therefore, there is an urgent need for methods that can provide formal guarantees about the behavioral properties and specifications of neural networks, especially for the purpose of safety assurance  ... 
arXiv:1810.01989v1 · fatcat:a5ax66lsxbho3fuxuh55ypnm6m
Fulltext PDF: https://web.archive.org/web/20191016020644/https://arxiv.org/pdf/1810.01989v1.pdf

Active Safety Envelopes using Light Curtains with Probabilistic Guarantees [article]

Siddharth Ancha, Gaurav Pathak, Srinivasa G. Narasimhan, David Held
2021-07-08 · arXiv (pre-print)
We show that generating light curtains that sense random locations (from a particular distribution) can quickly discover the safety envelope for scenes with unknown objects.  ...  Our method accurately estimates safety envelopes while providing probabilistic safety guarantees that can be used to certify the efficacy of a robot perception system to detect and avoid dynamic obstacles  ...  Neural network forecasting policy: We use a 2D convolutional neural network to forecast safety envelopes in the next timestep.  ... 
arXiv:2107.04000v1 · fatcat:oovmjm3mpzc3zb3qqpiu2oviju
Fulltext PDF: https://web.archive.org/web/20210714064604/https://arxiv.org/pdf/2107.04000v1.pdf
Showing results 1-15 out of 6,407 results