A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Scalable Polyhedral Verification of Recurrent Neural Networks
[chapter]
2021
Lecture Notes in Computer Science
Abstract: We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and non-linear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron. Using Prover, we present the first study
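As a rough illustration of idea (i) — a sketch in the spirit of the abstract, not the authors' actual algorithm — the snippet below bounds a scalar activation (here tanh, as it appears in recurrent updates) from below by a linear plane: it fits a candidate line by least squares over sampled points, then makes the bound sound by shifting the line down by the worst-case violation, which by Fermat's theorem must occur either at an interval endpoint or at an interior point where the derivative equals the line's slope. All function names are hypothetical.

```python
import math

def sound_lower_plane(f, crit_solver, l, u, n_samples=50):
    """Return (a, b) such that a*x + b <= f(x) for all x in [l, u]."""
    # 1. Sample the function and fit a line by ordinary least squares.
    xs = [l + (u - l) * i / (n_samples - 1) for i in range(n_samples)]
    ys = [f(x) for x in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    # 2. Candidate minimizers of f(x) - (a*x + b): the interval endpoints
    #    plus interior critical points where f'(x) = a (Fermat's theorem).
    cands = [l, u] + [x for x in crit_solver(a) if l < x < u]
    # 3. Shift the line down by the worst violation so it lower-bounds f.
    shift = min(f(x) - (a * x + b) for x in cands)
    return a, b + shift

def tanh_crit(a):
    """Solve tanh'(x) = 1 - tanh(x)^2 = a for the critical points."""
    if not 0.0 < a < 1.0:
        return []
    t = math.sqrt(1.0 - a)
    return [math.atanh(t), -math.atanh(t)]
```

An upper bounding plane follows symmetrically by shifting up by the maximum violation; the paper's actual abstraction additionally handles the two-dimensional products of gate activations in LSTM cells, which this one-dimensional sketch only hints at.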
doi:10.1007/978-3-030-81685-8_10
fatcat:kbklnexlzfatdn7wnhljqjsecy