90,324 Hits in 9.3 sec

Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation [article]

Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi
2019 arXiv   pre-print
First, they use first-order approximations of the loss function, neglecting higher-order terms such as the loss curvature.  ...  Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.  ...  Understanding the impact of the group-feature: in this section, we study the impact of the group-feature in deep learning interpretation.  ... 
arXiv:1902.00407v2 fatcat:xamhpudt6bbl5pyu45rftmnqsy
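For context on the distinction this abstract draws: a first-order interpretation scores an input perturbation by the loss gradient alone, while a second-order one adds a curvature (Hessian) term. Below is a minimal illustrative PyTorch sketch of that difference, not the paper's actual method; the toy model, input, and perturbation are all placeholders.

```python
import torch
import torch.nn.functional as F

# Toy model, input, and label; placeholders, not the paper's setup.
net = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3))
x = torch.randn(1, 10, requires_grad=True)
y = torch.tensor([1])

loss = F.cross_entropy(net(x), y)
# First-order term: gradient of the loss w.r.t. the input features.
grad = torch.autograd.grad(loss, x, create_graph=True)[0]

delta = 0.1 * torch.randn_like(x)  # a candidate input perturbation
# Hessian-vector product supplies the curvature term without forming H.
hvp = torch.autograd.grad((grad * delta).sum(), x)[0]

first_order = (grad * delta).sum()                      # grad(L)^T delta
second_order = first_order + 0.5 * (delta * hvp).sum()  # + 1/2 delta^T H delta
print(first_order.item(), second_order.item())
```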

Explainable Artificial Intelligence (XAI) for 6G: Improving Trust between Human and Machine [article]

Weisi Guo
2019 arXiv   pre-print
This loss of trust means we cannot understand the impact of: 1) poor/biased/malicious data, and 2) neural network design on decisions; nor can we explain to the engineer or the public the network's actions  ...  As we migrate from traditional model-based optimisation to deep learning, the trust we have in our optimisation modules decreases.  ...  Acknowledgements: The author wishes to acknowledge EC H2020 grant 778305: DAWN4IoE -Data Aware Wireless Network for Internet-of-Everything, and The Alan Turing Institute under the EPSRC grant EP/N510129  ... 
arXiv:1911.04542v2 fatcat:2lm7iyoyunbkhkfya5txeos3zm

Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations [article]

Michael Harradon, Jeff Druce, Brian Ruttenberg
2018 arXiv   pre-print
Deep neural networks are complex and opaque. As they enter application in a variety of important and safety-critical domains, users seek methods to explain their output predictions.  ...  We then build a Bayesian causal model using these extracted concepts as variables in order to explain image classification.  ...  Beyond manipulating the image space and monitoring the impact on the output, other works have considered an analysis on the learned features of the network in order to glean understanding on how the network  ... 
arXiv:1802.00541v1 fatcat:j2j7uhwre5fjnhxa2ng7yhjnuy
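The pipeline this abstract describes starts by reading out a network's internal activations. A minimal sketch of that first step in PyTorch is below, with illustrative placeholders throughout; the paper's autoencoding of these activations into concepts, and the Bayesian causal model built on top, are not shown.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for the paper's deep image networks.
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))

# Forward hook to intercept hidden-layer activations during inference.
captured = {}
net[1].register_forward_hook(
    lambda module, inp, out: captured.update(h=out.detach()))

_ = net(torch.randn(32, 20))
print(captured["h"].shape)  # (32, 64) activations, raw material for concepts
```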

How can machine learning help measure the physical properties of galaxies?

Viviana Acquaviva
2020 Zenodo  
In this talk, I review some applications of machine learning and deep learning to the problem of measuring galaxy physical properties, and highlight what in my opinion are the most significant challenges  ...  we need to solve in order to be ready for the next generation of surveys.  ... 
doi:10.5281/zenodo.3601337 fatcat:rk7wqlbarra2nnai65jphrkfty

Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning [article]

Jiaheng Xie, Xiao Liu
2022 arXiv   pre-print
Although deep learning champions viewership prediction, it lacks interpretability, which is fundamental to increasing the adoption of predictive models and prescribing measures to improve viewership  ...  Following the design-science paradigm, we propose a novel interpretable IT system, Precise Wide and Deep Learning (PrecWD), to precisely interpret viewership prediction.  ...  The W&D framework is explicitly designed to address low- and high-order feature interactions in interpreting the importance of features [11].  ... 
arXiv:2101.01076v5 fatcat:wzbxm32ndvespfnlgngawqfkyq
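The last snippet refers to the Wide & Deep (W&D) framework of [11], which pairs a linear "wide" component that captures low-order feature interactions with a "deep" MLP that captures high-order ones. A minimal sketch of that pairing, purely illustrative; PrecWD itself adds interpretability machinery not shown here:

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Minimal Wide & Deep: a wide linear part (low-order interactions)
    plus a deep MLP (high-order interactions), summed into one logit."""
    def __init__(self, n_wide, n_deep, hidden=32):
        super().__init__()
        self.wide = nn.Linear(n_wide, 1)
        self.deep = nn.Sequential(
            nn.Linear(n_deep, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x_wide, x_deep):
        return self.wide(x_wide) + self.deep(x_deep)

model = WideAndDeep(n_wide=8, n_deep=16)
pred = model(torch.randn(4, 8), torch.randn(4, 16))  # (4, 1) logits
```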

Deep Learning Interpretation of Echocardiograms [article]

Amirata Ghorbani, David Ouyang, Abubakar Abid, Bryan He, Jonathan H. Chen, Robert A. Harrington, David H. Liang, Euan A. Ashley, James Y. Zou
2019 bioRxiv   pre-print
Echocardiography uses ultrasound technology to capture high temporal and spatial resolution images of the heart and surrounding structures and is the most common imaging modality in cardiovascular medicine  ...  Machine learning on echocardiography images can streamline repetitive tasks in the clinical workflow, standardize interpretation in areas with insufficient qualified cardiologists, and more consistently  ...  Echocardiography is a high-impact and highly tractable application of machine learning in medical imaging.  ... 
doi:10.1101/681676 fatcat:ev5722u2afcrnbyptzb35ccmjq

An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics [article]

Catarina Moreira and Renuka Sindhgatta and Chun Ouyang and Peter Bruza and Andreas Wichert
2020 arXiv   pre-print
This paper explores interpretability techniques for two of the most successful learning algorithms in the medical decision-making literature: deep neural networks and random forests.  ...  In one of the techniques, we intercepted some hidden layers of these neural networks and used autoencoders to learn the representation of the input in the hidden layers.  ...  proposed in order to take into account the decision-maker's ability to interpret and understand the predictions of the deep learning algorithms [6].  ... 
arXiv:2002.09192v1 fatcat:fe7lvotj4zf6lbb2rvhyy7fxqa
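One technique named in this abstract is training an autoencoder on intercepted hidden-layer activations to expose what the layer represents. A bare-bones sketch under that reading; the activations tensor `h` is a random placeholder standing in for hooked activations from a trained network:

```python
import torch
import torch.nn as nn

# Placeholder for hidden activations intercepted from a trained network.
h = torch.randn(256, 64)

# Autoencoder: the 8-dim bottleneck is the learned view of the hidden layer.
ae = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    loss = ((ae(h) - h) ** 2).mean()  # reconstruction loss
    loss.backward()
    opt.step()
print(loss.item())
```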

Review on Interpretable Machine Learning in Smart Grid

Chongchong Xu, Zhicheng Liao, Chaojie Li, Xiaojun Zhou, Renyou Xie
2022 Energies  
In recent years, machine learning, especially deep learning, has developed rapidly and has shown remarkable performance in many tasks of the smart grid field.  ...  The smart grid is a critical infrastructure area, so machine learning models involving it must be interpretable in order to increase user trust and improve system reliability.  ...  Interpretable ML lets people understand the decisions of ML models and track and locate the causes of faults, which helps grid management and reduces losses.  ... 
doi:10.3390/en15124427 fatcat:rl4xx53kjbemphx5vgynfytwli

Demystifying Brain Tumour Segmentation Networks: Interpretability and Uncertainty Analysis [article]

Parth Natekar, Avinash Kori, Ganapathy Krishnamurthi
2020 arXiv   pre-print
Increasing transparency and interpretability of such deep learning techniques is necessary for the complete integration of such methods into medical practice.  ...  We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.  ...  As we have discussed in the respective sections, each of these inferences might have an impact on our understanding of deep learning models in the context of brain tumor segmentation.  ... 
arXiv:1909.01498v3 fatcat:refgs253ffebvhb7tkfs3aamvq

Improving a neural network model by explanation-guided training for glioma classification based on MRI data [article]

Frantisek Sefcik, Wanda Benesova
2021 arXiv   pre-print
Despite the statistically high accuracy of deep learning models, their output is often a "black-box" decision.  ...  Thus, interpretability methods have become a popular way to gain insight into the decision-making process of deep learning models.  ...  These features make them pivotal for the future progress of human society. Despite the statistically high accuracy of deep learning methods, their output is often a "black-box" decision.  ... 
arXiv:2107.02008v1 fatcat:7gva57e6njbe5kqc42m4vr526u

Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis

Parth Natekar, Avinash Kori, Ganapathy Krishnamurthi
2020 Frontiers in Computational Neuroscience  
Increasing transparency and interpretability of such deep learning techniques is necessary for the complete integration of such methods into medical practice.  ...  We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.  ...  As we have discussed in the respective sections, each of these inferences might have an impact on our understanding of deep learning models in the context of brain tumor segmentation.  ... 
doi:10.3389/fncom.2020.00006 pmid:32116620 pmcid:PMC7025464 fatcat:oi4f4egd25h4pli5qf3lyry3wm

Towards Complementary Explanations Using Deep Neural Networks [chapter]

Wilson Silva, Kelwin Fernandes, Maria J. Cardoso, Jaime S. Cardoso
2018 Lecture Notes in Computer Science  
Interpretability is a fundamental property for the acceptance of machine learning models in highly regulated areas.  ...  Recently, deep neural networks gained the attention of the scientific community due to their high accuracy in a vast range of classification problems.  ...  by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).  ... 
doi:10.1007/978-3-030-02628-8_15 fatcat:nulgsh6fjzhshmmkjt2erlufuy

A Simple and Interpretable Predictive Model for Healthcare [article]

Subhadip Maji, Raghav Bali, Sree Harsha Ankem, Kishore V Ayyadevara
2020 arXiv   pre-print
These deep learning models, with trainable parameters running into millions, require huge amounts of compute and data to train and deploy.  ...  We model and showcase our work's results on the task of predicting the first occurrence of a diagnosis, often overlooked in existing works.  ...  Acknowledgements: We would like to thank Vineet Shukla and Saikumar Chintareddy for helpful discussions and inputs to improve the solution, and the whole diagnosis prediction team for their contributions  ... 
arXiv:2007.13351v1 fatcat:wdlv3xf7jvchflg43xdb4e4tce

On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research

Giuseppe Futia, Antonio Vetrò
2020 Information  
Deep learning models contributed to reaching unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems.  ...  In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI—while being less flexible and robust to noise compared to deep learning models  ...  As reported by Adadi [26], since deep learning algorithms are based on high-degree interactions between input features, the disaggregation of such functions in a human-understandable form and with human  ... 
doi:10.3390/info11020122 fatcat:77ni2i6tdrhqxopw25vbybghi4

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods [article]

Zohaib Salahuddin, Henry C Woodruff, Avishek Chatterjee, Philippe Lambin
2021 arXiv   pre-print
In this narrative review, we utilized systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been used for understanding deep learning models  ...  In order to conform to the principles of trustworthy AI, it is essential that the AI system be transparent, robust, and fair, and ensure accountability.  ...  Visualization of the high-dimensional latent space in two dimensions to identify similarities and outliers. Loss of information when the high-dimensional feature space is projected to two dimensions.  ... 
arXiv:2111.02398v1 fatcat:glrfdkbcqrbqto2nrl7dnlg3gq
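The review's final snippet concerns visualizing a high-dimensional latent space in two dimensions. One common choice among the methods such reviews cover is t-SNE; the sketch below uses random placeholder latents and scikit-learn, and the snippet's caveat applies: the projection discards information.

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder latent features; in practice these come from a trained network.
latents = np.random.randn(200, 128)

# Project to 2D for visual inspection of clusters and outliers.
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latents)
print(xy.shape)  # (200, 2); some information is lost in the projection
```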
Showing results 1 — 15 out of 90,324 results