A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning
[article] 2021, arXiv pre-print
Deep neural networks (DNNs) are increasingly important in safety-critical systems, for example in their perception layer to analyze images. Unfortunately, there is a lack of methods to ensure the functional safety of DNN-based components. We observe three major challenges with existing practices regarding DNNs in safety-critical systems: (1) scenarios that are underrepresented in the test set may lead to serious safety violation risks but may nevertheless go unnoticed; (2) characterizing such
arXiv:2002.00863v4
fatcat:flsmhjljm5hnjazq2n2uq42gwy