Towards better understanding of gradient-based attribution methods for Deep Neural Networks [article]

Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus Gross
2018 arXiv   pre-print
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years.  ...  In this work, we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them.  ...  We would like to thank Brian McWilliams and David Tedaldi for their helpful feedback.  ... 
arXiv:1711.06104v4 doi:10.3929/ethz-b-000249929 fatcat:berlhp2kqfbubbycnb5oplwaoq
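
As a concrete companion to the attribution family this paper analyzes, here is a minimal PyTorch sketch of the two simplest gradient-based attributions, the plain input gradient (saliency) and gradient * input. The tiny linear model, input shape, and class index are placeholder assumptions, not the authors' setup.

    import torch
    import torch.nn as nn

    # Placeholder model; any differentiable classifier works here.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    x = torch.rand(1, 3, 32, 32, requires_grad=True)  # input must track gradients
    target_class = 5  # arbitrary class index for illustration

    score = model(x)[0, target_class]  # scalar class score to differentiate
    score.backward()                   # fills x.grad with d(score)/d(x)

    saliency = x.grad.detach()                # the plain "gradient" attribution
    grad_times_input = saliency * x.detach()  # the "gradient * input" attribution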


Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [article]

Arun Das, Paul Rad
2020 arXiv   pre-print
However, the black-box nature of deep neural networks challenges their use in mission-critical applications, raising ethical and judicial concerns and inducing a lack of trust.  ...  Nowadays, deep neural networks are widely used in mission-critical systems such as healthcare, self-driving vehicles, and the military, which have a direct impact on human lives.  ...  the compositional nature of deep neural networks to improve attributions.  ... 
arXiv:2006.11371v2 fatcat:6eaz3rbaenflxchjdynmvwlc4i

Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms [article]

Zhong Qiu Lin, Mohammad Javad Shafiee, Stanislav Bochkarev, Michael St. Jules, Xiao Yu Wang, Alexander Wong
2019 arXiv   pre-print
While by no means perfect, the hope is that the proposed machine-centric strategy helps push the conversation forward towards better metrics for evaluating explainability methods and improved trust in deep  ...  In this study, we explore a more machine-centric strategy for quantifying the performance of explainability methods on deep neural networks via the notion of decision-making impact analysis.  ...  process of a deep neural network.  ... 
arXiv:1910.07387v2 fatcat:ldk7v2e55jesdmwzizd4lhpaja

Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks [article]

Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Nidhal Bouaynaya, Ravi P. Ramachandran
2022 arXiv   pre-print
While many methods for explaining the decisions of deep neural networks exist, there is currently no consensus on how to evaluate them.  ...  With the rise of deep neural networks, the challenge of explaining the predictions of these networks has become increasingly recognized.  ...  , but as versatile, efficient, accurate and scalable as deep neural networks.  ... 
arXiv:2107.11400v4 fatcat:lkrqy24ehra7voghpgtqiwohna
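
Since this tutorial highlights the missing consensus on evaluation, a hedged sketch of one widely used perturbation test may help: delete the most-attributed pixels and measure the drop in the class score. The zero baseline and 10% deletion fraction below are arbitrary illustrative choices, not the paper's protocol.

    import torch

    def deletion_score_drop(model, x, attribution, target_class, frac=0.1):
        # x, attribution: (1, C, H, W); sum channels to get per-pixel importance.
        flat = attribution.abs().sum(dim=1).flatten()
        k = int(frac * flat.numel())
        topk = flat.topk(k).indices                 # most important pixels
        mask = torch.ones_like(flat)
        mask[topk] = 0.0                            # delete (zero out) those pixels
        mask = mask.view(1, 1, *x.shape[2:])
        with torch.no_grad():
            before = model(x)[0, target_class]
            after = model(x * mask)[0, target_class]
        # A larger drop suggests the attribution found truly relevant pixels.
        return (before - after).item()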

CXAI: Explaining Convolutional Neural Networks for Medical Imaging Diagnostic

Zakaria Rguibi, Abdelmajid Hajami, Dya Zitouni, Amine Elqaraoui, Anas Bedraoui
2022 Electronics  
In this paper, we investigated two major directions for explaining convolutional neural networks: feature-based post hoc explanatory methods that try to explain already trained and fixed target models  ...  To identify specific aspects of explainability that may catalyse building trust in deep learning models, we will use some techniques to demonstrate many aspects of explaining convolutional neural networks  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/electronics11111775 fatcat:7zcybfaydfgkzhud4eg7cyyr3q

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods [article]

Zohaib Salahuddin, Henry C Woodruff, Avishek Chatterjee, Philippe Lambin
2021 arXiv   pre-print
Finally, we discuss limitations, provide guidelines for using interpretability methods and future directions concerning the interpretability of deep neural networks for medical imaging analysis.  ...  Deep neural networks have shown the same or better performance than clinicians in many tasks owing to the rapid increase in the available data and computational power.  ...  It is difficult to determine which method works better for understanding the deep neural networks for a particular medical imaging application, as explanations can be subjective.  ... 
arXiv:2111.02398v1 fatcat:glrfdkbcqrbqto2nrl7dnlg3gq

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution [article]

Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder
2020 arXiv   pre-print
Integrated gradients as an attribution method for deep neural network models offers simple implementability.  ...  We apply both methods to the image classification problem, using the ILSVRC2012 ImageNet object recognition dataset and a couple of pretrained image models, to generate attribution maps of their predictions  ...  With the aim to better understand the complex input-to-output behavior of a deep neural network, a number of previous works [5]-[16] focus on the problem of attribution.  ... 
arXiv:2004.10484v1 fatcat:eyttvqa2vjcknf2tj2rijo6ahe
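
Integrated gradients itself is compact enough to sketch. The version below is the standard Riemann-sum approximation of the path integral from a baseline to the input; the black baseline and 50 steps are common defaults assumed here, not settings taken from this paper.

    import torch

    def integrated_gradients(model, x, target_class, baseline=None, steps=50):
        if baseline is None:
            baseline = torch.zeros_like(x)  # black-image baseline, a common default
        total_grad = torch.zeros_like(x)
        for i in range(1, steps + 1):
            # Point on the straight-line path from the baseline to the input.
            alpha = i / steps
            point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
            score = model(point)[0, target_class]
            total_grad += torch.autograd.grad(score, point)[0]
        # Scale the averaged path gradients by the input difference.
        return (x - baseline) * total_grad / steps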

Towards Interpretable Attention Networks for Cervical Cancer Analysis [article]

Ruiqi Wang, Mohammad Ali Armin, Simon Denman, Lars Petersson, David Ahmedt-Aristizabal
2021 arXiv   pre-print
Here, we evaluate various state-of-the-art deep learning models and attention-based frameworks for the classification of images of multiple cervical cells.  ...  Many previous works focus on the analysis of isolated cervical cells, or do not offer sufficient methods to explain and understand how the proposed models reach their classification decisions on multi-cell  ...  The integrated gradients method computes the attribution of the prediction of the deep neural network by using the gradient operation.  ... 
arXiv:2106.00557v1 fatcat:o43ogh4jznhgnogfcip3xg5pcq

Fine-grained Interpretation and Causation Analysis in Deep NLP Models [article]

Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani
2021 arXiv   pre-print
This paper is a write-up for the tutorial on "Fine-grained Interpretation and Causation Analysis in Deep NLP Models" that we are presenting at NAACL 2021.  ...  The former introduces methods to analyze individual neurons and a group of neurons with respect to a language property or a task.  ...  These recent works are not only enabling a better understanding of these networks, but are also leading towards better, fairer and more environmentally friendly models, which are all important goals for the  ... 
arXiv:2105.08039v2 fatcat:e7mpttcbrrhrdm2mklmqxoy6km
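
Individual-neuron analysis of the kind this tutorial covers is often operationalized as a probing classifier: freeze the network, collect activations from one layer, and fit a simple classifier for the property of interest. A sketch with synthetic stand-in activations; the 768-dimensional hidden size and the weight-magnitude ranking criterion are illustrative assumptions.

    import torch
    import torch.nn as nn

    hidden = torch.randn(2000, 768)        # stand-in for frozen layer activations
    labels = torch.randint(0, 2, (2000,))  # stand-in binary property labels

    probe = nn.Linear(768, 2)              # linear probe for the property
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(probe(hidden), labels)
        loss.backward()
        opt.step()

    # Neurons with the largest probe weights are the candidates most
    # associated with the property (one common, if debated, criterion).
    importance = probe.weight.abs().sum(dim=0)
    top_neurons = importance.topk(10).indices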

Learning how to explain neural networks: PatternNet and PatternAttribution [article]

Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne
2017 arXiv   pre-print
DeConvNet, Guided BackProp, and LRP were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model.  ...  improved explanations for deep networks.  ...  We are grateful to Chris Olah and Gregoire Montavon for the valuable discussions.  ... 
arXiv:1705.05598v2 fatcat:z4hno2qyk5hdpnrwa6ij74mrb4
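
The linear-model argument in this abstract can be reproduced numerically: for data composed of a signal plus a structured distractor, the gradient of a linear model equals its weight vector, which must point partly against the distractor rather than along the signal direction. A toy sketch under made-up signal and distractor directions:

    import torch

    torch.manual_seed(0)
    a_s = torch.tensor([1.0, 0.0])   # signal direction (assumed)
    a_d = torch.tensor([1.0, 1.0])   # distractor direction (assumed)
    s = torch.randn(10000, 1)        # signal coefficients
    d = torch.randn(10000, 1)        # distractor coefficients
    x = s * a_s + d * a_d            # observed data

    # The optimal linear extractor of s must cancel the distractor:
    # w = [1, -1] recovers s exactly, yet as a gradient "explanation"
    # it points away from the true signal direction a_s = [1, 0].
    w = torch.tensor([1.0, -1.0])
    print(((x @ w) - s.squeeze()).abs().max())  # ~0: w recovers the signal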

Recovering Localized Adversarial Attacks [chapter]

Jan Philip Göpfert, Heiko Wersing, Barbara Hammer
2019 Lecture Notes in Computer Science  
However, their reasoning is hidden inside a black box, in spite of a number of proposed approaches that try to provide human-understandable explanations for the predictions of neural networks.  ...  In this contribution, we focus on the capabilities of explainers for convolutional deep neural networks in an extreme situation: a setting in which humans and networks fundamentally disagree.  ...  In general, we desire a better understanding of adversarial attacks, robustness against them, the certainty of predictions and their explanations, and of how deep convolutional neural networks divide the  ... 
doi:10.1007/978-3-030-30487-4_24 fatcat:sjf5ucvw7jdlnjh3hirvt2choy
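
For orientation, the basic (non-localized) fast gradient sign attack is easy to sketch; localized variants of the kind studied here additionally confine the perturbation to a region, which the optional mask argument below gestures at. The model, labels, and epsilon are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, label, epsilon=0.03, mask=None):
        # x: (1, C, H, W) in [0, 1]; label: (1,) tensor of class indices.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        grad = torch.autograd.grad(loss, x)[0]
        perturbation = epsilon * grad.sign()   # ascend the loss
        if mask is not None:
            perturbation = perturbation * mask # confine attack to a region
        return (x + perturbation).detach().clamp(0, 1)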

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View [article]

Di Jin, Elena Sergeeva, Wei-Hung Weng, Geeticka Chauhan, Peter Szolovits
2021 arXiv   pre-print
Moreover, we discuss how these methods, originally developed for solving general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand  ...  DL-based clinical decision support systems for diagnosis, prognosis, and treatment.  ...  Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations.  ... 
arXiv:2112.02625v1 fatcat:omcm44vj2ffthcpna27typyvau

Interpreting Super-Resolution Networks with Local Attribution Maps [article]

Jinjin Gu, Chao Dong
2021 arXiv   pre-print
However, it is acknowledged that deep learning and deep neural networks are difficult to interpret. SR networks inherit this mysterious nature, and few works have attempted to understand them.  ...  We propose a novel attribution approach called local attribution map (LAM), which inherits the integrated gradients method yet with two unique features.  ...  Different from the aforementioned gradient-based attribution methods, Class Activation Mapping (CAM) [64] generates class activation maps using the global average pooling in convolutional neural networks  ... 
arXiv:2011.11036v2 fatcat:2jlvttelqjg3ti7drn63pi72oy
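
CAM, which this paper contrasts with gradient-based attribution methods, reduces to a weighted sum of feature maps when the network ends in global average pooling followed by one linear layer; the sketch below assumes exactly that architecture, with made-up shapes.

    import torch

    def cam(feature_maps, fc_weight, target_class):
        # feature_maps: (C, H, W) activations before global average pooling
        # fc_weight:    (num_classes, C) weights of the final linear layer
        weights = fc_weight[target_class]                      # (C,)
        heatmap = (weights[:, None, None] * feature_maps).sum(dim=0)
        return torch.relu(heatmap)                             # keep positive evidence

    # Example with made-up shapes: 512 channels of 7x7 feature maps.
    maps = torch.randn(512, 7, 7)
    w = torch.randn(1000, 512)
    heat = cam(maps, w, target_class=283)  # (7, 7); upsample to the input size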

Towards Best Practice in Explaining Neural Network Decisions with LRP [article]

Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, Sebastian Lapuschkin
2020 arXiv   pre-print
Within the last decade, neural-network-based predictors have demonstrated impressive - and at times super-human - capabilities.  ...  In this paper we investigate - and for the first time quantify - the effect of this current best practice on feedforward neural networks in a visual object detection setting.  ...  INTRODUCTION In recent years, deep neural networks (DNN) have become the state-of-the-art method in many different fields, but are mainly applied as black-box predictors.  ... 
arXiv:1910.09840v3 fatcat:m47aoeuopbgwdkfsdlclh2duxe
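
The LRP decomposition behind this paper's best-practice study can be illustrated on a single dense layer with the epsilon rule; the composite rule assignments the paper actually quantifies go beyond this sketch, and the shapes below are made up.

    import torch

    def lrp_epsilon_dense(a, w, b, relevance, eps=1e-6):
        # a: (in,) input activations, w: (in, out) weights, b: (out,) bias.
        # Redistribute output relevance onto inputs in proportion to each
        # input's contribution z_ij = a_i * w_ij (the LRP-epsilon rule).
        z = a @ w + b                         # (out,) pre-activations
        s = relevance / (z + eps * z.sign())  # stabilized relevance ratio
        c = w @ s                             # (in,) backward redistribution
        return a * c                          # relevance of the inputs

    a = torch.rand(64)
    w = torch.randn(64, 10)
    b = torch.zeros(10)
    R_out = torch.rand(10)
    R_in = lrp_epsilon_dense(a, w, b, R_out)
    print(R_in.sum(), R_out.sum())  # approximately conserved when b == 0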