EnnCore: End-to-End Conceptual Guarding of Neural Architectures

Edoardo Manino, Danilo Carvalho, Yi Dong, Julia Rozanova, Xidan Song, Mustafa A. Mustafa, André Freitas, Gavin Brown, Mikel Lujan, Xiaowei Huang, Lucas C. Cordeiro
2022 AAAI Conference on Artificial Intelligence  
The EnnCore project addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. Specifically, EnnCore aims to enable system designers to specify essential conceptual/behavioral properties of neural-based systems, verify them, and thus safeguard the system against unpredictable behavior and attacks. In this respect, EnnCore will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. This paper describes the limitations of existing studies, our research objectives, current achievements, and future trends towards this goal. In particular, we describe the development and evaluation of new methods, algorithms, and tools to achieve fully-verifiable intelligent systems, which are explainable, whose correct behavior is guaranteed, and which are robust against attacks. We also describe how EnnCore will be validated on two diverse and high-impact application scenarios: securing an AI system for (i) cancer diagnosis and (ii) energy demand response.