1,361 Hits in 4.9 sec

Extracting an Explanatory Graph to Interpret a CNN

Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
2020 IEEE Transactions on Pattern Analysis and Machine Intelligence  
This paper introduces an explanatory graph representation to reveal object parts encoded inside convolutional layers of a CNN.  ...  The explanatory graph is constructed to organize each mined part as a graph node.  ... 
doi:10.1109/tpami.2020.2992207 pmid:32386138 fatcat:vfeuf43ganftdn4slv64l5yla4

Interpreting CNN Knowledge via an Explanatory Graph [article]

Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu
2017 arXiv   pre-print
More importantly, we learn the explanatory graph for a pre-trained CNN in an unsupervised manner, i.e., without the need to annotate object parts.  ...  This paper learns a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside a pre-trained CNN.  ...  Supervised-AOG used part annotations to select filters from CNNs to localize parts. Unsup-RL methods include CNN-PDD, CNN-PDDft, and our method.  ... 
arXiv:1708.01785v3 fatcat:vct2ttai75bplhqeghapwwwyli

Explanatory Graphs for CNNs [article]

Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu
2018 arXiv   pre-print
This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside conv-layers of a pre-trained CNN.  ...  More crucially, given a pre-trained CNN, the explanatory graph is learned without the need to annotate object parts.  ...  maps of intermediate conv-layers of a CNN and organize the layerwise knowledge hierarchy using an explanatory graph.  ... 
arXiv:1812.07997v1 fatcat:vq7pzmcqgbhzvlulnt4ypgt5mq

Visual Interpretability for Deep Learning: a Survey [article]

Quanshi Zhang, Song-Chun Zhu
2018 arXiv   pre-print
CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability.  ...  We focus on convolutional neural networks (CNNs), and we revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained  ...  regard an explanatory graph as a compression of feature maps of conv-layers.  ... 
arXiv:1802.00614v2 fatcat:g55ax3lso5axtb6cn7munbaidi

Human-centric Transfer Learning Explanation via Knowledge Graph [Extended Abstract] [article]

Yuxia Geng, Jiaoyan Chen, Ernesto Jimenez-Ruiz, Huajun Chen
2019 arXiv   pre-print
of a target domain predicted by models from multiple source domains in zero-shot learning (ZSL).  ...  The first one explains the transferability of features learned by Convolutional Neural Network (CNN) from one domain to another through pre-training and fine-tuning, while the second justifies the model  ...  Acknowledgments The work is supported by NSFC 91846204/61673338 and the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889).  ... 
arXiv:1901.08547v1 fatcat:52bqpgmp5vemtcker4psazfcwy

Review Study of Interpretation Methods for Future Interpretable Machine Learning

Jian-Xun Mi, An-Di Li, Li-Fang Zhou
2020 IEEE Access  
Visualization of nodes in explanatory graph [78] and filters in interpretable CNN [79]; a) Image patches based on explanatory graph method, b) The visualized filters in interpretable CNN, c) The filters  ...  They proposed to interpret CNN knowledge by explanatory graphs [78]. It introduces a graphical network model to reveal the knowledge hierarchy hidden inside a pretrained CNN.  ... 
doi:10.1109/access.2020.3032756 fatcat:mnxd3se2cnf55hyhscwbpgsu4u

CTformer: Convolution-free Token2Token Dilated Vision Transformer for Low-dose CT Denoising [article]

Dayang Wang, Fenglei Fan, Zhan Wu, Rui Liu, Fei Wang, Hengyong Yu
2022 arXiv   pre-print
We interpret the CTformer by statically inspecting patterns of its internal attention maps and dynamically tracing the hierarchical attention flow with an explanatory graph.  ...  However, unlike CNNs, the potential of vision transformers in LDCT denoising has been little explored so far.  ...  To complement this dynamic information, inspired by [48], we propose to construct an explanatory graph to describe the hierarchical flow of the attention.  ... 
arXiv:2202.13517v1 fatcat:hdzvqexrinanjdqfn3ynom7qju

Explanatory Multi-Scale Adversarial Semantic Embedding Space Learning for Zero-Shot Recognition

Huiting Li
2022 Open Journal of Applied Sciences  
The goal of zero-shot recognition is to classify classes never seen during training, which requires building a bridge between seen and unseen classes through a semantic embedding space.  ...  We also propose to reconstruct semantic patterns produced by explanatory graphs, which can make the semantic embedding space more sensitive to useful semantic information and less sensitive to useless information  ...  The optimization procedure can be summarized as: 1) Building explanatory graphs. We construct an explanatory graph for each training class. We use VGG19 to extract explanatory graphs.  ... 
doi:10.4236/ojapps.2022.123023 fatcat:vg36r25yz5abjkgu2kre66dify

A Survey on Explainability in Machine Reading Comprehension [article]

Mokanarangan Thayaparan, Marco Valentino, André Freitas
2020 arXiv   pre-print
This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).  ...  We also present the evaluation methodologies to assess the performance of explainable systems.  ...  We refer to explainability as a specialisation of the higher level concept of interpretability.  ... 
arXiv:2010.00389v1 fatcat:jzxjysnma5ee5auvplfxxfar2u

Short Text Embedding Autoencoders with Attention-based Neighborhood Preservation

Chao Wei, Lijun Zhu, Jiaoxiang Shi
2020 IEEE Access  
According to the sorting of activations for each hidden unit, we can choose a small subset of the most active or most informative short texts from the whole training set as an interpretation subset.  ...  (CNN) and a subnetwork based on a transformer embedding encoder [9].  ... 
doi:10.1109/access.2020.3042778 fatcat:k4rn6rwurzcpheet74zudnfwfu

"What is relevant in a text document?": An interpretable machine learning approach

Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek, Grigori Sidorov
2017 PLoS ONE  
This enables one to distill relevant information from text documents without an explicit semantic information extraction step.  ...  Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits  ...  Fig 1. Diagram of a CNN-based interpretable machine learning system.  ... 
doi:10.1371/journal.pone.0181142 pmid:28800619 pmcid:PMC5553725 fatcat:juajiti46feijcqusraesxvt6q

CXAI: Explaining Convolutional Neural Networks for Medical Imaging Diagnostic

Zakaria Rguibi, Abdelmajid Hajami, Dya Zitouni, Amine Elqaraoui, Anas Bedraoui
2022 Electronics  
and preliminary analysis and choice of the model architecture with an accuracy of 98% ± 0.156% from 36 CNN architectures with different configurations.  ...  To identify specific aspects of explainability that may catalyse building trust in deep learning models, we will use some techniques to demonstrate many aspects of explaining convolutional neural networks  ... 
doi:10.3390/electronics11111775 fatcat:7zcybfaydfgkzhud4eg7cyyr3q

Interactively Transferring CNN Patterns for Part Localization [article]

Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Edmonds, Ying Nian Wu, Song-Chun Zhu
2017 arXiv   pre-print
Thus, given a CNN pre-trained for object classification, this paper proposes a method that first summarizes the knowledge hidden inside the CNN into a dictionary of latent activation patterns, and then  ...  We use very few (e.g., three) annotations of a semantic object part to retrieve certain latent patterns from conv-layers to represent the target part.  ...  The method of [18] is proposed to learn an explanatory graph to explain the knowledge hierarchy inside a pre-trained CNN.  ... 
arXiv:1708.01783v2 fatcat:iiova7tjdnfovp5aagezcvyfmu

Interpretable Pneumonia Detection by Combining Deep Learning and Explainable Models with Multisource Data

Hao Ren, Aslan B. Wong, Wanmin Lian, Weibin Cheng, Ying Zhang, Jianwei He, Qingfeng Liu, Jiasheng Yang, Chen Zhang, Kaishun Wu, Haodi Zhang
2021 IEEE Access  
We are constructing a large-scale knowledge graph related to pneumonia, so that the classification ability of the model will be further improved. FIGURE 1. An overview of the framework.  ...  DISCUSSION A. PERFORMANCE COMPARISON OF CNN MODELS. In MulNet, the CNN model is an essential part used to learn and analyze chest X-ray images. The CNN model chosen in this paper is DenseNet121.  ... 
doi:10.1109/access.2021.3090215 fatcat:fonmcy37azgctdc5wtpz2yru4a

Towards Interpretable R-CNN by Unfolding Latent Structures [article]

Tianfu Wu, Wei Sun, Xilai Li, Xi Song, Bo Li
2018 arXiv   pre-print
We propose an AOGParsing operator to substitute the RoIPooling operator widely used in R-CNN.  ...  We utilize a top-down hierarchical and compositional grammar model embedded in a directed acyclic AND-OR Graph (AOG) to explore and unfold the space of latent part configurations of regions of interest  ...  Related Work In the literature, many works have focused on post-hoc interpretability of deep neural networks by associating explanatory semantic information with nodes in a deep neural network.  ... 
arXiv:1711.05226v2 fatcat:selrfdgp4rcijingwlfasoa7ge
Showing results 1 — 15 out of 1,361 results