
Runtime Safety Assurance Using Reinforcement Learning [article]

Christopher Lazarus, James G. Lopez, Mykel J. Kochenderfer
2020 arXiv   pre-print
We can bypass formal verification of non-pedigreed components by incorporating Runtime Safety Assurance (RTSA) as a mechanism to ensure safety.  ...  We frame the design of RTSA with the Markov decision process (MDP) framework and use reinforcement learning (RL) to solve it.  ...  Runtime Safety Assurance (RTSA) aims to do this as a runtime monitoring safeguard that is capable of switching to a safe recovery controller if the vehicle is at risk of operating unsafely.  ... 
arXiv:2010.10618v1 fatcat:ndx4lwhwtreylhmgbjrbh5mp2a
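The switching safeguard this entry describes can be sketched as follows; the function names, the risk measure, and the threshold below are illustrative assumptions, not details from the paper:

```python
# Hypothetical RTSA-style safeguard: a monitor watches the state and
# hands control to a safe recovery controller when risk is too high.

def rtsa_step(state, performant_action, recovery_action, risk, threshold=0.8):
    """Return the action to execute and whether the safeguard engaged."""
    if risk(state) >= threshold:
        return recovery_action(state), True   # switch to the recovery controller
    return performant_action(state), False    # keep the unverified controller

# Toy example: risk grows with |position|; recovery steers back toward zero.
act, engaged = rtsa_step(
    state=0.9,
    performant_action=lambda s: s + 0.1,
    recovery_action=lambda s: -s,
    risk=lambda s: abs(s),
)
```

The MDP framing in the paper would then concern choosing *when* to switch (i.e., learning the threshold policy), rather than hand-coding it as above.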

Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers [article]

Kerstin Eder, Chris Harper, Ute Leonards
2014 arXiv   pre-print
In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines  ...  We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance and indicate opportunities to integrate safety considerations into algorithms "by design".  ...  to shift part of safety assurance to runtime.  ... 
arXiv:1404.2229v3 fatcat:ug5puzhzunduxmzspua62u3vce

Towards the safety of human-in-the-loop robotics: Challenges and opportunities for safety assurance of robotic co-workers

Kerstin Eder, Chris Harper, Ute Leonards
2014 The 23rd IEEE International Symposium on Robot and Human Interactive Communication  
In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines  ...  We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance and outline opportunities to integrate safety considerations into algorithms "by design".  ...  to shift part of safety assurance to runtime.  ... 
doi:10.1109/roman.2014.6926328 dblp:conf/ro-man/EderHL14 fatcat:bd3nylecr5fplojunjoaczgrte

A Verification Framework for Certifying Learning-Based Safety-Critical Aviation Systems [article]

Ali Baheri, Hao Ren, Benjamin Johnson, Pouria Razzaghi, Peng Wei
2022 arXiv   pre-print
We present a safety verification framework for design-time and run-time assurance of learning-based components in aviation systems. Our proposed framework integrates two novel methodologies.  ...  From the run-time assurance perspective, we propose reachability- and statistics-based online monitoring and safety guards for a learning-based decision-making model to complement the offline verification  ...  The preemptive shields and post-posed shields are synthesized to enforce runtime safety for reinforcement learning agents [30] . We use gym-PyBullet-drones [31] as our main simulator platform.  ... 
arXiv:2205.04590v2 fatcat:gq73ubvlajb5dbjpwlrnvur7o4
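The preemptive shielding mentioned in the snippet can be illustrated with a minimal sketch; the interface and the toy safety predicate are assumptions for illustration only:

```python
# Minimal sketch of a preemptive shield for an RL agent: an unsafe action
# chosen by the agent is replaced with a safe alternative before execution.

def shield(state, chosen, actions, is_safe):
    """Return `chosen` if safe, else the first safe alternative."""
    if is_safe(state, chosen):
        return chosen
    for a in actions:
        if is_safe(state, a):
            return a
    raise RuntimeError("no safe action available in this state")

# Toy 1-D grid: positions 0..3 are safe; moving off either edge is not.
def stays_on_grid(x, a):
    return 0 <= x + a <= 3

overridden = shield(3, +1, [-1, 0, +1], stays_on_grid)  # +1 would leave the grid
```

A post-posed shield differs in that it corrects the action *after* the agent commits to it, rather than restricting the choice set up front.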

Keeping intelligence under control

Piergiuseppe Mallozzi, Patrizio Pelliccione, Claudio Menghi
2018 Proceedings of the 1st International Workshop on Software Engineering for Cognitive Services - SE4COG '18  
To overcome this issue, we believe that machine learning techniques should be combined with suitable reasoning mechanisms aimed at assuring that the decisions taken by the machine learning algorithm do  ...  This paper proposes an approach that combines machine learning with runtime monitoring to detect violations of system invariants in the actions execution policies.  ...  In this paper, we use reinforcement learning since it is a powerful machine learning technique for decision making.  ... 
doi:10.1145/3195555.3195558 dblp:conf/icse/MallozziPM18 fatcat:dfzqwvg2vjgyvhmsjerubcu3ui
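A runtime monitor over a policy's actions, in the spirit of the approach described in this entry, might look like the following sketch; the class and invariant names are assumptions, not the paper's implementation:

```python
# Illustrative pairing of a learned policy with a runtime monitor that
# checks system invariants before an action is executed.

class MonitoredPolicy:
    def __init__(self, policy, invariants):
        self.policy = policy          # maps state -> action
        self.invariants = invariants  # predicates over (state, action)
        self.violations = []          # log of blocked (state, action) pairs

    def act(self, state):
        action = self.policy(state)
        for inv in self.invariants:
            if not inv(state, action):
                self.violations.append((state, action))
                return None           # block the violating action
        return action

# Toy invariant: the commanded value must stay below 10.
monitored = MonitoredPolicy(lambda s: s + 5, [lambda s, a: a < 10])
```

The logged violations could then be fed back to the learner as negative reward, which is one way monitoring and RL can be combined.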

Verified Probabilistic Policies for Deep Reinforcement Learning [article]

Edoardo Bacci, David Parker
2022 arXiv   pre-print
In this paper, we tackle the problem of verifying probabilistic policies for deep reinforcement learning, which are used to, for example, tackle adversarial environments, break symmetries and manage trade-offs  ...  We implement our approach and illustrate its effectiveness on a selection of reinforcement learning benchmarks.  ...  There are several approaches to assuring safety in reinforcement learning, often leveraging ideas from formal verification, such as the use of temporal logic to specify safety conditions, or the use of  ... 
arXiv:2201.03698v1 fatcat:6q6tle2d45aphn6gicqci7h5f4

Hardening of Artificial Neural Networks for Use in Safety-Critical Applications – A Mapping Study [article]

Rasmus Adler, Mohammed Naveed Akram, Pascal Bauer, Patrik Feth, Pascal Gerber, Andreas Jedlitschka, Lisa Jöckel, Michael Kläs, Daniel Schneider
2019 arXiv   pre-print
Conclusions: Various methods have been proposed to overcome the specific challenges of using ANNs in safety-critical applications.  ...  Context: Across different domains, Artificial Neural Networks (ANNs) are used more and more in safety-critical applications in which erroneous outputs of such ANN can have catastrophic consequences.  ...  […], […] reinforcement learn[…], […] supervised learn[…], […] unsupervised learn[…] and Topic safety-critical, mission critical, high[…] assur[…], high[…] integ[…], safety, certif[…] and Topic challenge  ... 
arXiv:1909.03036v1 fatcat:v5rkd5w52jhrzhoamt722upndy

Quantifying Assurance in Learning-enabled Systems [article]

Erfan Asaadi, Ewen Denney, Ganesh Pai
2020 arXiv   pre-print
Dependability assurance of systems embedding machine learning (ML) components---so-called learning-enabled systems (LESs)---is a key step for their use in safety-critical applications.  ...  In emerging standardization and guidance efforts, there is a growing consensus in the value of using assurance cases for that purpose.  ...  Quantified and probabilistic guarantees in reinforcement learning have been explored in developing assured ML components in [20].  ... 
arXiv:2006.10345v1 fatcat:zms2cx3c6naalouhvfkf6qdosy

Systems Challenges for Trustworthy Embodied Systems [article]

Harald Rueß
2022 arXiv   pre-print
A new generation of increasingly autonomous and self-learning embodied systems is about to be developed.  ...  We are arguing that traditional systems engineering is coming to a climacteric from embedded to embodied systems, and with assuring the trustworthiness of dynamic federations of situationally aware, intent-driven  ...  AI trained using reinforcement learning can be tricked by … an adversarial policy."  ... 
arXiv:2201.03413v2 fatcat:hwprg3zjhvfuro3etecx2t4qua

Towards a framework of enforcing resilient operation of cyber‐physical systems with unknown dynamics

Luan Nguyen, Vijay Gupta
2021 IET Cyber-Physical Systems  
Current techniques for secure-by-design systems engineering do not provide an end-to-end methodology for a designer to provide real-time assurance for safety-critical CPSs by identifying system dynamics  ...  The decision and control module designs a controller to ensure that the correctness specifications are satisfied at runtime.  ...  Third, the learning-based controller will be constructed using concepts from classical control such as dissipativity, as well as reinforcement learning [61] [62] [63] .  ... 
doi:10.1049/cps2.12009 fatcat:5zqm3pnzrjgrxc6c6ia2wvgul4

Deep Learning and Machine Learning in Robotics [From the Guest Editors]

Fabio Bonsignorio, David Hsu, Matthew Johnson-Roberson, Jens Kober
2020 IEEE robotics & automation magazine  
"Assured Runtime Monitoring and Planning" by Esen Yel et al. addresses one of the biggest open questions in robot learning: safety and verification.  ...  and multifidelity reinforcement learning) to key transversal needs (assurance and verification of autonomous operations, gravity compensation, and human skill decoding) while covering a wide and diverse  ... 
doi:10.1109/mra.2020.2984470 fatcat:jpzcc556kbftne5nrjhjbdthsq

Safe AI – How is this Possible? [article]

Harald Rueß, Simon Burton
2022 arXiv   pre-print
Traditional safety engineering is coming to a turning point, moving from deterministic, non-evolving systems operating in well-defined contexts to increasingly autonomous and learning-enabled AI systems  ...  Runtime monitoring may also be used for measuring uncertainties in input-output behavior of ANNs.  ...  network components for perceptive tasks, and runtime monitoring for central safety properties.  ... 
arXiv:2201.10436v2 fatcat:lu5ibn3qc5hormd4w6zjmszplq

An Empirical Analysis of the Use of Real-Time Reachability for the Safety Assurance of Autonomous Vehicles [article]

Patrick Musau, Nathaniel Hamilton, Diego Manzanas Lopez, Preston Robinette, Taylor T. Johnson
2022 arXiv   pre-print
One approach for providing runtime assurance of systems with components that may not be amenable to formal analysis is the simplex architecture, where an unverified component is wrapped with a safety controller  ...  In this paper, we propose using a real-time reachability algorithm for the implementation of the simplex architecture to assure the safety of a 1/10 scale open source autonomous vehicle platform known  ...  In [65] , the authors utilize a reachability regime to guarantee the safety of an autonomous vehicle that makes use of a reinforcement learning controller for a way-pointfollowing task.  ... 
arXiv:2205.01419v1 fatcat:hphqn72orja4jnweko4cgc6nai
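The simplex-style decision built on real-time reachability can be sketched with a crude interval over-approximation; the 1-D dynamics, horizon, and bounds below are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch of a simplex architecture using interval reachability
# (toy dynamics x' = u with bounded disturbance): if the over-approximated
# reachable set can leave the safe interval within the horizon, control
# switches from the unverified controller to the verified safety controller.

def reachable_interval(x, u, horizon, dt=0.1, disturbance=0.05):
    """Over-approximate the states reachable from x under constant input u."""
    lo = hi = x
    for _ in range(int(horizon / dt)):
        lo += dt * (u - disturbance)
        hi += dt * (u + disturbance)
    return lo, hi

def simplex_step(x, unverified_u, safe_u, safe_set=(-1.0, 1.0), horizon=0.5):
    lo, hi = reachable_interval(x, unverified_u, horizon)
    if lo < safe_set[0] or hi > safe_set[1]:
        return safe_u        # reachability check failed: safety controller acts
    return unverified_u      # learned controller stays in charge
```

Because the interval is an over-approximation, a switch to the safety controller may be conservative but never misses a genuinely unsafe excursion of this toy model.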

Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification [article]

Matt Luckcuck
2021 arXiv   pre-print
Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings.  ...  Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: why use formal methods for autonomous systems?  ...  The WiseML approach [55] is an RV framework that enforces invariants in autonomous systems that use Reinforcement Learning (RL).  ... 
arXiv:2012.00856v2 fatcat:hatdgqwbabbfdbngmjt4q2rroi

Timing Predictability and Security in Safety-Critical Industrial Cyber-Physical Systems: A Position Paper

Saad Mubeen, Elena Lisova, Aneta Vulgarakis Feljan
2020 Applied Sciences  
Many industrial CPSs are subject to timing predictability, security and functional safety requirements, due to which the developers of these systems are required to verify these requirements during the  ...  Moreover, the paper identifies the gaps in the existing frameworks and techniques for the development of time- and safety-critical CPSs and describes our viewpoint on ensuring timing predictability and  ...  Therefore, most approaches towards reinforcement learning provide no guarantee about the safety of the learned controller or about the safety of actions taken during learning, which is against the best  ... 
doi:10.3390/app10093125 fatcat:vjm7uxjvkbfazon6kmsvib53ku
Showing results 1 — 15 out of 1,102 results