
Towards Global Explanations of Convolutional Neural Networks With Concept Attribution

Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
With the growing prevalence of convolutional neural networks (CNNs), there is an urgent demand to explain their behaviors.  ...  However, existing methods overwhelmingly conduct separate input attribution or rely on local approximations of models, so they fail to offer faithful global explanations of CNNs.  ...  The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210717 of the General Research Fund and CUHK 2300174 of the Collaborative  ... 
doi:10.1109/cvpr42600.2020.00868 dblp:conf/cvpr/WuSCZKLT20a fatcat:56rjl52ma5ekbo6wz4y2hg4cfy
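
To make the concept-attribution idea above concrete, here is a minimal sketch of fitting a concept activation vector (CAV), the building block behind such global concept explanations. Everything here is synthetic: the activations, the 512-unit layer width, and names like `concept_acts` are illustrative stand-ins, not the paper's implementation.

```python
# Minimal sketch of a Concept Activation Vector (CAV). All data is
# synthetic; in practice the activations come from a chosen CNN layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical layer activations (n_samples x n_units) for images that
# contain the concept (e.g. "striped") and for random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(100, 512))
random_acts = rng.normal(loc=0.0, size=(100, 512))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)

# A linear classifier separates concept from non-concept activations;
# its weight vector, normalized, is the CAV.
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Concept sensitivity of a prediction: directional derivative of the
# class logit along the CAV, approximated here with a synthetic gradient.
grad_of_logit_wrt_acts = rng.normal(size=512)  # placeholder gradient
sensitivity = grad_of_logit_wrt_acts @ cav
print(f"concept sensitivity: {sensitivity:+.3f}")
```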

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [article]

Arun Das, Paul Rad
2020 arXiv   pre-print
However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and judicial concerns and inducing a lack of trust.  ...  explanations of AI decisions.  ...  Methods: BRL (Bayesian Rule List), CaCE (Causal Concept Effect), CAM (Class Activation Mapping), CAV (Concept Activation Vectors), CNN (Convolutional Neural Network), DeConvNet (Deconvolution Neural Network)  ... 
arXiv:2006.11371v2 fatcat:6eaz3rbaenflxchjdynmvwlc4i
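
Of the methods glossed above, Class Activation Mapping (CAM) is the simplest to state in code. A minimal sketch with synthetic shapes (the 7x7 maps, 256 channels, and 1000 classes are arbitrary placeholders):

```python
# Minimal sketch of Class Activation Mapping (CAM). For a CNN ending in
# global average pooling + a fully connected layer, the heatmap for
# class c is the FC-weighted sum of the final convolutional feature maps.
import numpy as np

rng = np.random.default_rng(0)

feature_maps = rng.random((256, 7, 7))  # hypothetical final conv output (C, H, W)
fc_weights = rng.random((1000, 256))    # hypothetical FC weights (classes, C)
target_class = 283

# CAM_c(x, y) = sum_k w_{c,k} * A_k(x, y)
cam = np.einsum("k,khw->hw", fc_weights[target_class], feature_maps)

# Standard post-processing: clip negatives and normalize to [0, 1].
cam = np.maximum(cam, 0)
cam = cam / cam.max()
print(cam.shape)  # (7, 7); upsampled to image size in practice
```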

Drug discovery with explainable artificial intelligence [article]

José Jiménez-Luna, Francesca Grisoni, Gisbert Schneider
2020 arXiv   pre-print
Deep learning bears promise for drug discovery, including advanced image analysis, prediction of molecular structure and function, and automated generation of innovative chemical entities with bespoke  ...  This review summarizes the most prominent algorithmic concepts of explainable artificial intelligence, and dares a forecast of the future opportunities, potential applications, and remaining challenges  ...  This concept is widely used in convolutional neural networks for image analysis.  ... 
arXiv:2007.00523v2 fatcat:vwbm5ctaengetbsrkqjf54hoei

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods [article]

Zohaib Salahuddin, Henry C Woodruff, Avishek Chatterjee, Philippe Lambin
2021 arXiv   pre-print
Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods.  ...  Therefore, there is a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow.  ...  Concept Attribution: Concept attribution provides global explanations for the deep neural network in terms of high-level image concepts [89].  ... 
arXiv:2111.02398v1 fatcat:glrfdkbcqrbqto2nrl7dnlg3gq
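
The concept attribution cited here is commonly quantified with a TCAV-style score: the fraction of class examples whose prediction is positively sensitive to a concept direction. A minimal sketch, with the gradients and the concept vector replaced by synthetic placeholders:

```python
# Minimal sketch of a TCAV-style concept score. Gradients and the CAV
# are synthetic placeholders, not outputs of a real model.
import numpy as np

rng = np.random.default_rng(1)

n_images, n_units = 200, 512
# Hypothetical gradients of the class logit w.r.t. layer activations,
# one row per image of the class under study (e.g. "tumor present").
grads = rng.normal(size=(n_images, n_units))
cav = rng.normal(size=n_units)
cav /= np.linalg.norm(cav)

# Directional derivative per image; positive means the concept pushes
# the prediction up. The TCAV score is the fraction of positives.
sensitivities = grads @ cav
tcav_score = float((sensitivities > 0).mean())
print(f"TCAV score: {tcav_score:.2f}")  # ~0.5 for random data
```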

Drug discovery with explainable artificial intelligence

José Jiménez-Luna, Francesca Grisoni, Gisbert Schneider
2020 Nature Machine Intelligence  
the re-emergence of neural networks in chemistry and healthcare.  ...  This advance is mostly owed to the ability of deep learning algorithms, that is, artificial neural networks with multiple processing layers, to model complex nonlinear input-output relationships, and perform  ...  This concept is widely used in convolutional neural networks for image analysis.  ... 
doi:10.1038/s42256-020-00236-4 fatcat:nlkwpc2jvvhcblmiulbdzzxaiq

Towards Fully Interpretable Deep Neural Networks: Are We There Yet? [article]

Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee
2021 arXiv   pre-print
This paper provides a review of existing methods to develop DNNs with intrinsic interpretability, with a focus on Convolutional Neural Networks (CNNs).  ...  Despite their remarkable performance, Deep Neural Networks (DNNs) behave as black boxes, hindering user trust in Artificial Intelligence (AI) systems.  ...  (Melis & Jaakkola, 2018) propose a Self-Explainable Neural Network (SENN), which is a generalized version of (Li et al., 2018).  ... 
arXiv:2106.13164v1 fatcat:jwfo4qmq6fdm5fu4p6tcbf6lru
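
The SENN mentioned in this snippet is a good example of intrinsic interpretability: the network's output is, by construction, a weighted sum of concept scores. A minimal PyTorch sketch of that structure (the layer sizes are arbitrary, and the paper's stability regularizers are omitted):

```python
# Minimal sketch of a self-explaining network in the spirit of SENN:
# the prediction is an explicit sum f(x) = sum_i theta(x)_i * h(x)_i,
# where h(x) are concept scores and theta(x) their relevances.
import torch
import torch.nn as nn

class TinySENN(nn.Module):
    def __init__(self, in_dim=32, n_concepts=5):
        super().__init__()
        self.concepts = nn.Sequential(nn.Linear(in_dim, n_concepts), nn.Sigmoid())
        self.relevances = nn.Linear(in_dim, n_concepts)

    def forward(self, x):
        h = self.concepts(x)         # interpretable concept activations
        theta = self.relevances(x)   # per-input concept relevances
        y = (theta * h).sum(dim=-1)  # the explanation IS the prediction
        return y, h, theta

model = TinySENN()
x = torch.randn(4, 32)
y, h, theta = model(x)
# The pair (h, theta) is a faithful-by-construction explanation of y.
print(y.shape, h.shape, theta.shape)
```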

Xplique: A Deep Learning Explainability Toolbox [article]

Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poche, Justin Plakoo, Remi Cadene, Mathieu Chalvidal, Julien Colin, Thibaut Boissin, Louis Bethune, Agustin Picard, Claire Nicodeme (+3 others)
2022 arXiv   pre-print
It interfaces with one of the most popular learning libraries, TensorFlow, as well as other libraries including PyTorch, scikit-learn and Theano.  ...  Acknowledgments This work was conducted as part of the DEEL project  ...  Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks.  ... 
arXiv:2206.04394v1 fatcat:2nlymoajwbhy3nnjrvd5y5x6uy
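
For orientation, basic Xplique usage follows the pattern below, as shown in the project's documentation at the time of writing; the exact class names and signatures should be checked against the installed version, and the model and inputs here are placeholders:

```python
# Sketch of basic Xplique usage with a TensorFlow/Keras model.
import tensorflow as tf
from xplique.attributions import GradCAM

# weights=None avoids a download; a real use case loads trained weights.
model = tf.keras.applications.MobileNetV2(weights=None)
images = tf.random.uniform((4, 224, 224, 3))   # placeholder batch
labels = tf.one_hot([1, 2, 3, 4], depth=1000)  # targets to explain

explainer = GradCAM(model)                      # any Xplique attribution class
explanations = explainer.explain(images, labels)  # one heatmap per image
print(explanations.shape)
```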

CXAI: Explaining Convolutional Neural Networks for Medical Imaging Diagnostic

Zakaria Rguibi, Abdelmajid Hajami, Dya Zitouni, Amine Elqaraoui, Anas Bedraoui
2022 Electronics  
To identify specific aspects of explainability that may catalyse building trust in deep learning models, we will use some techniques to demonstrate many aspects of explaining convolutional neural networks  ...  In this paper, we investigated two major directions for explaining convolutional neural networks: feature-based post hoc explanatory methods that try to explain already trained and fixed target models  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/electronics11111775 fatcat:7zcybfaydfgkzhud4eg7cyyr3q
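
The simplest of the feature-based post hoc methods the paper investigates is a vanilla gradient saliency map. A minimal PyTorch sketch, with a tiny untrained network standing in for a trained diagnostic CNN:

```python
# Minimal sketch of a vanilla gradient saliency map:
# |d(class score)/d(input pixel)| for an already trained, fixed model.
import torch
import torch.nn as nn

model = nn.Sequential(                 # placeholder "trained" classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)
score = model(image)[0, 1]             # logit of the class of interest
score.backward()

saliency = image.grad.abs().squeeze()  # per-pixel influence, (64, 64)
print(saliency.shape, saliency.max().item())
```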

A Survey on Neural Network Interpretability [article]

Yu Zhang, Peter Tiňo, Aleš Leonardis, Ke Tang
2021 arXiv   pre-print
Along with the great success of deep neural networks, there is also growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems.  ...  In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability as it has been used in many different contexts.  ...  Active, hidden semantics as explanation (global): another method aims to make a convolutional neural network learn better (disentangled) hidden semantics.  ... 
arXiv:2012.14261v3 fatcat:hrsunbookrhjhbxlmv6pcw44w4

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations [article]

Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau
2019 arXiv   pre-print
As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture.  ...  Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models  ...  To visualize how low-level concepts near early layers of a network combine to form high-level concepts towards later layers, we seek to form a graph from the entire neural network, using the aggregated  ... 
arXiv:1904.02323v3 fatcat:yjfezaon5ngw7et3afvxgmhywm
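
The core move in Summit is aggregation: summarizing activations over an entire dataset rather than explaining one image at a time. A minimal sketch of that step, with synthetic activations and labels (Summit itself also aggregates attribution edges between layers, omitted here):

```python
# Minimal sketch of dataset-level activation aggregation: per class,
# which channels of a layer activate most strongly across many images.
import numpy as np

rng = np.random.default_rng(0)

n_images, n_channels, n_classes = 1000, 256, 10
activations = rng.random((n_images, n_channels))  # max-pooled per channel
labels = rng.integers(0, n_classes, size=n_images)

# Per-class mean activation of every channel (the activation summary).
summary = np.zeros((n_classes, n_channels))
for c in range(n_classes):
    summary[c] = activations[labels == c].mean(axis=0)

# The few most characteristic channels per class seed the visualization.
top_k = np.argsort(summary, axis=1)[:, -5:]
print("top channels for class 0:", top_k[0])
```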

Explaining Neural Networks Semantically and Quantitatively [article]

Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang
2018 arXiv   pre-print
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.  ...  Analyzing the specific rationale of each prediction made by the CNN is a key issue in understanding neural networks, and it is also of significant practical value in certain applications.  ...  Quantitative explanations α_i y_i for the male attribute.  ... 
arXiv:1812.07169v1 fatcat:e3d4cgdc6zhxndndvkkh24hhhq
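
The α_i y_i notation in the snippet denotes an additive decomposition: the CNN's score is approximated as a weighted sum of semantic component scores. A minimal sketch of recovering such weights by least squares, with entirely synthetic scores:

```python
# Minimal sketch of a quantitative-semantic explanation: approximate a
# CNN's output as y_hat ~ sum_i alpha_i * y_i, where the y_i are scores
# of semantic components (e.g. part or attribute detectors).
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_parts = 500, 6
part_scores = rng.normal(size=(n_samples, n_parts))   # y_i per sample
true_alpha = np.array([0.8, 0.1, 0.0, 0.5, -0.3, 0.0])
cnn_scores = part_scores @ true_alpha + 0.05 * rng.normal(size=n_samples)

# A least-squares fit recovers the per-concept contributions alpha_i.
alpha, *_ = np.linalg.lstsq(part_scores, cnn_scores, rcond=None)
for i, a in enumerate(alpha):
    print(f"concept {i}: alpha = {a:+.2f}")
```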

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey [article]

Vanessa Buhrmester, David Münch, Michael Arens
2019 arXiv   pre-print
In this survey we differentiate the mechanisms and properties of explaining systems for Deep Neural Networks for Computer Vision tasks.  ...  As black-box models due to their multilayer nonlinear structure, Deep Neural Networks are often criticized for being non-transparent and for predictions that are not traceable by humans.  ...  With this approach they were able to analyze the neural network by introducing a novel variant of the DeconvNet to visualize the concepts learned by higher network layers of the CNN.  ... 
arXiv:1911.12116v1 fatcat:qgeg6rz6qzgrfikhsgah77yz2a
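
The DeconvNet family mentioned here visualizes what higher layers respond to by modifying the backward pass through ReLUs. A minimal sketch of guided backpropagation, a closely related variant, using a placeholder model:

```python
# Minimal sketch of guided backpropagation: during the backward pass,
# negative gradients are zeroed at every ReLU, yielding sharp,
# edge-like visualizations of what a unit responds to.
import torch
import torch.nn as nn

def guided_relu_hook(module, grad_input, grad_output):
    # Pass back only positive gradients (the "guided" part).
    return (torch.clamp(grad_input[0], min=0.0),)

model = nn.Sequential(                  # placeholder classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_full_backward_hook(guided_relu_hook)

image = torch.randn(1, 3, 32, 32, requires_grad=True)
model(image)[0, 5].backward()           # backprop the class-5 logit
visualization = image.grad.squeeze()    # guided saliency, (3, 32, 32)
print(visualization.shape)
```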

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks [article]

Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Liò
2021 arXiv   pre-print
While graph neural networks (GNNs) have been shown to perform well on graph-based data from a variety of fields, they suffer from a lack of transparency and accountability, which hinders trust and consequently  ...  GCExplainer is an unsupervised approach for post-hoc discovery and extraction of global concept-based explanations for GNNs, which puts the human in the loop.  ...  Traffic prediction with advanced Graph Neural Networks. https://deepmind.com/blog/article/traffic-prediction-with-advanced-graph-neural-networks, Sep 2020.  ... 
arXiv:2107.11889v1 fatcat:zf3ub7mfwrgzteu7jflcr4juli
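
GCExplainer's recipe is short enough to sketch directly: cluster the trained GNN's last-layer activations with k-means and treat each cluster as a candidate concept for human inspection. A minimal sketch with synthetic embeddings standing in for real GNN outputs:

```python
# Minimal sketch of the GCExplainer recipe: k-means over last-layer
# node embeddings; each cluster is a candidate concept, which a human
# then inspects via its nearest-neighbor examples.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

node_embeddings = rng.normal(size=(500, 64))  # hypothetical GNN outputs
k = 10                       # number of concepts (tuned by the human in the loop)

kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(node_embeddings)
concept_of_node = kmeans.labels_

# Representatives of concept 0: the nodes closest to its centroid.
dists = np.linalg.norm(node_embeddings - kmeans.cluster_centers_[0], axis=1)
print("concept 0 exemplars:", np.argsort(dists)[:5])
```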

Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance [article]

Ramprasaath R. Selvaraju, Prithvijit Chattopadhyay, Mohamed Elhoseiny, Tilak Sharma, Dhruv Batra, Devi Parikh, Stefan Lee
2018 arXiv   pre-print
Individual neurons in convolutional neural networks supervised for image-level classification tasks have been shown to implicitly learn semantically meaningful concepts ranging from simple textures and  ...  We demonstrate our approach on a diverse set of semantic inputs as external domain knowledge including attributes and natural language captions.  ...  We thank Yash Goyal and Nirbhay Modhe for help with figures; Peter Vajda and Manohar Paluri for helpful discussions.  ... 
arXiv:1808.02861v1 fatcat:y42sqmtbafcvjmgthtdzyiqi5a
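
The neuron-importance signal this paper builds on can be sketched as a Grad-CAM-style alpha: the spatially averaged gradient of a class score with respect to a channel's activation map. A minimal PyTorch sketch with a placeholder model and input:

```python
# Minimal sketch of gradient-based neuron importance: a channel's
# importance for a class is the spatial mean of the gradient of that
# class's score w.r.t. the channel's activation map.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1)       # layer whose neurons we score
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

image = torch.randn(1, 3, 32, 32)
acts = conv(image)
acts.retain_grad()                          # keep gradients on activations
score = head(torch.relu(acts))[0, 3]        # class-3 logit
score.backward()

# alpha_k: one importance value per channel ("neuron").
importance = acts.grad.mean(dim=(2, 3)).squeeze()
print("most important neurons:", importance.argsort(descending=True)[:5])
```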

Choose Your Neuron: Incorporating Domain Knowledge Through Neuron-Importance [chapter]

Ramprasaath R. Selvaraju, Prithvijit Chattopadhyay, Mohamed Elhoseiny, Tilak Sharma, Dhruv Batra, Devi Parikh, Stefan Lee
2018 Lecture Notes in Computer Science  
Individual neurons in convolutional neural networks supervised for image-level classification tasks have been shown to implicitly learn semantically meaningful concepts ranging from simple textures and  ...  We demonstrate our approach on a diverse set of semantic inputs as external domain knowledge including attributes and natural language captions.  ...  We thank Yash Goyal and Nirbhay Modhe for help with figures; Peter Vajda and Manohar Paluri for helpful discussions.  ... 
doi:10.1007/978-3-030-01261-8_32 fatcat:rlg7b3fh2vfonbqvc2wrmtztna