Improving performance of deep learning models with axiomatic attribution priors and expected gradients
[article]
2020
arXiv
pre-print
This improves model performance on many real-world tasks where previous attribution priors fail. ...
Recent research has demonstrated that feature attribution methods for deep networks can themselves be incorporated into training; these attribution priors optimize for a model whose attributions have certain ...
Introduction: Recent work on interpreting machine learning (ML) models focuses on feature attribution methods. ...
arXiv:1906.10670v2
fatcat:xolv2dq7mvb2djvagme6ksn6ka
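The attribution-prior idea in the entry above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy linear model, the all-zero baselines, the Monte-Carlo sample count, and the sparsity penalty weight are all illustrative assumptions; it only shows the shape of expected-gradients attributions plus a penalty on them.

```python
# Hedged sketch: expected-gradients attributions for a toy linear model,
# plus a sparsity "attribution prior" penalty. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, -2.0, 0.0])          # toy linear model parameters

def f(x):
    return float(w @ x)

def expected_gradients(x, baselines, n_samples=200):
    """Monte-Carlo expected gradients: average (x - x') * grad f(x~) over
    random baselines x' and points x~ interpolated between x' and x."""
    total = np.zeros_like(x)
    eps = 1e-5
    for _ in range(n_samples):
        xb = baselines[rng.integers(len(baselines))]
        xi = xb + rng.random() * (x - xb)
        # numerical gradient of f at xi (central differences)
        grad = np.zeros_like(x)
        for j in range(len(x)):
            e = np.zeros_like(x)
            e[j] = eps
            grad[j] = (f(xi + e) - f(xi - e)) / (2 * eps)
        total += (x - xb) * grad
    return total / n_samples

x = np.array([1.0, 2.0, 3.0])
baselines = np.zeros((5, 3))            # all-zero baselines for the sketch
attr = expected_gradients(x, baselines)

# Completeness check: attributions sum to f(x) - E[f(baseline)]
assert abs(attr.sum() - (f(x) - 0.0)) < 1e-3

# An attribution prior would add a term like this to the training loss,
# steering the model toward sparse attributions (weight 0.1 is arbitrary):
prior_penalty = 0.1 * np.abs(attr).sum()
```

For a linear model the gradient is constant, so the Monte-Carlo average collapses to (x - baseline) * w; nonlinear models are where the sampling actually matters.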
Explainable Video Action Reasoning via Prior Knowledge and State Transitions
[article]
2019
arXiv
pre-print
In this work, we propose a novel action reasoning framework that uses prior knowledge to explain semantic-level observations of video state changes. ...
Our method takes advantage of both classical reasoning and modern deep learning approaches. ...
Learning action models from prior knowledge. ...
arXiv:1908.10700v1
fatcat:itbcavk37fgkfmnhn5syzze6iy
Learning Deep Attribution Priors Based On Prior Knowledge
[article]
2020
arXiv
pre-print
Feature attribution methods, which explain an individual prediction made by a model as a sum of attributions for each input feature, are an essential tool for understanding the behavior of complex deep learning models. ...
Figure 1: (a) An attribution method Φ is used to explain the decision of an arbitrary black-box model. (b) Overview of the DAPr framework. ...
arXiv:1912.10065v3
fatcat:n6s3ioxto5e5baelmt37iteb5y
The computational relationship between reinforcement learning, social inference, and paranoia
2022
PLoS Computational Biology
Consistent with prior work we show that paranoia was associated with uncertainty around a partner's behavioural policy and rigidity in harmful intent attributions in the social task. ...
We show relationships between decision temperature in the non-social task and priors over harmful intent attributions and uncertainty over beliefs about partners in the social task. ...
In the social task, attributional model comparison uncovered that a Bayesian-Belief model that used separate weights on harmful intent and self-interest attributions to explain ...
doi:10.1371/journal.pcbi.1010326
pmid:35877675
pmcid:PMC9352206
fatcat:27mexdlavbdnhp4o7srcc5gkli
The Role of College Students' College-attendance Value and Achievement Goals in Desired Learning Outcomes
2018
Eurasia Journal of Mathematics, Science and Technology Education
The hierarchical motivation modeling denotes that college-attendance values explain achievement goals, eliminating the effect of prior academic performance (i.e., the covariate). ...
The purpose of this study is twofold: testing how hierarchical motivation modeling explains college students' academic performance over subsequent semesters, and extending the motivation modeling to the university ...
The study provides novel but useful insights into the effect of hierarchical motivation modeling on the attributes, controlling for prior academic performance. ...
doi:10.29333/ejmste/97196
fatcat:xrrtbtk5dfcvdm5ljklvx5zxci
Explaining Neural Networks Semantically and Quantitatively
[article]
2018
arXiv
pre-print
We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model. ...
In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction. ...
Quantitative explanations α_i y_i for the male attribute. ...
arXiv:1812.07169v1
fatcat:e3d4cgdc6zhxndndvkkh24hhhq
Explainable Machine Learning with Prior Knowledge: An Overview
[article]
2021
arXiv
pre-print
We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models. ...
The complexity of machine learning models has elicited research to make them more explainable. ...
Acknowledgements This research has been funded by the Federal Ministry of Education and Research of Germany as part of the Competence Center Machine Learning Rhine-Ruhr ML2R (01-S18038ABC). ...
arXiv:2105.10172v1
fatcat:x7q3gku75fexhbj6ibvisgshuu
Explainable Person Re-Identification with Attribute-guided Metric Distillation
[article]
2021
arXiv
pre-print
In this paper, we propose a post-hoc method, named Attribute-guided Metric Distillation (AMD), to explain existing ReID models. ...
Moreover, we propose an attribute prior loss to make the interpreter generate attribute-guided attention maps and to eliminate biases caused by the imbalanced distribution of attributes. ...
Firstly, a ReID model F(·) trained on person ReID data is used as the target model and fixed during learning the interpreter G(·). ...
arXiv:2103.01451v2
fatcat:nhtapiezmbfchifvak2n3mbdxy
Privileged Attribution Constrained Deep Networks for Facial Expression Recognition
[article]
2022
arXiv
pre-print
We propose the Privileged Attribution Loss (PAL), a method that directs the attention of the model towards the most salient facial regions by encouraging its attribution maps to correspond to a heatmap ...
Furthermore, we introduce several channel strategies that allow the model to have more degrees of freedom. ...
These methods have originally been used to explain network predictions, but have also recently been used to constrain the learning of these models. ...
arXiv:2203.12905v2
fatcat:fdvxcjk3rndc7grbezrcmasicy
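The entry above describes steering a model's attribution maps toward a prior heatmap of salient regions. A minimal sketch of such an alignment penalty — not the paper's exact Privileged Attribution Loss; the cosine-style form and the toy 2×2 maps are assumptions — might look like:

```python
# Hedged sketch of an attribution-alignment penalty: lower loss when the
# attribution map concentrates where a prior saliency heatmap is high.
import numpy as np

def attribution_alignment_loss(attr_map, prior_heatmap, eps=1e-8):
    """Negative normalized inner product between the attribution map and
    the prior heatmap (a cosine-similarity-style penalty)."""
    a = attr_map / (np.linalg.norm(attr_map) + eps)
    h = prior_heatmap / (np.linalg.norm(prior_heatmap) + eps)
    return -float((a * h).sum())

attr = np.array([[0.9, 0.1], [0.0, 0.0]])
prior = np.array([[1.0, 0.0], [0.0, 0.0]])
aligned = attribution_alignment_loss(attr, prior)
misaligned = attribution_alignment_loss(attr[::-1], prior)
assert aligned < misaligned   # alignment with the prior yields lower loss
```

In training, a term like this would be added to the task loss, with the attribution map produced by a differentiable attribution method so its gradient can flow back into the model.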
Learning Decision Trees Recurrently Through Communication
[article]
2021
arXiv
pre-print
In addition, our model assigns a semantic meaning to each decision in the form of binary attributes, providing concise, semantic and relevant rationalizations to the user. ...
The key aspect of our model is its ability to build a decision tree whose structure is encoded into the memory representation of a Recurrent Neural Network jointly learned by two models communicating through ...
Our model and ablations. Our Explainable Observer-Classifier (XOC) model uses the attribute loss to incorporate explainable binary decisions. ...
arXiv:1902.01780v3
fatcat:urh26lbxgbentlj3xiamx3slwy
Commonsense Justification for Action Explanation
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
In particular, we have developed an approach based on the generative Conditional Variational Autoencoder (CVAE) that models object relations/attributes of the world as latent variables and jointly learns a performer that predicts actions and an explainer that gathers commonsense evidence to justify the action. ...
This approach models the perceived attributes/relations as latent variables and jointly learns a performer, which predicts actions based on attributes/relations, and an explainer, which selects a subset of ...
doi:10.18653/v1/d18-1283
dblp:conf/emnlp/YangGSC18
fatcat:tujk3fb32jdptfzw6vythpalay
Tasks Structure Regularization in Multi-Task Learning for Improving Facial Attribute Prediction
[article]
2021
arXiv
pre-print
To address this problem, we use a new Multi-Task Learning (MTL) paradigm in which a facial attribute predictor uses the knowledge of other related attributes to obtain a better generalization performance ...
Second, it is assumed that the structure of the tasks is unknown, and then structure and parameters of the tasks are learned jointly by using a Laplacian regularization framework. ...
RELATED WORK: In this section, we briefly explain related work on facial attribute prediction and MTL models, and the work that has used the MTL paradigm in deep learning models. ...
arXiv:2108.04353v2
fatcat:mgjkixgh6zatloev3epan3q5na
Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions
[article]
2022
arXiv
pre-print
In this paper, we leverage explainability techniques to effectively predict whether task pairs will be complementary, through comparison of neural network activation between single-task models. ...
However, it is common for prior works to only report where transfer learning was beneficial, ignoring the significant trial-and-error required to find effective settings for transfer. ...
Attribution-based explainability has become especially popular in the literature [5, 6, 12–14]; however, research into explainability for transfer learning is sparse. ...
arXiv:2202.01096v1
fatcat:rswxedomgfgsncdli7kz34cfni
Attribute Learning for Understanding Unstructured Social Activity
[chapter]
2012
Lecture Notes in Computer Science
... propose a novel model for learning the latent attributes, which alleviates the dependence of existing models on exact and exhaustive manual specification of the attribute-space. ...
Recently, attribute learning has emerged as a promising paradigm for transferring learning to sparsely labelled classes in object or single-object short action classification. ...
To learn the latent portion of the attribute-space, we could simply leave the remaining portion α_la of the prior unconstrained; however, while the resulting latent topics/attributes will explain the data, ...
doi:10.1007/978-3-642-33765-9_38
fatcat:nsltcn7qyjcwfmlct67ce6h5du
Explaining Image Classifiers Using Contrastive Counterfactuals in Generative Latent Spaces
[article]
2022
arXiv
pre-print
We introduce a novel method to generate causal and yet interpretable counterfactual explanations for image classifiers using pretrained generative models without any re-training or conditioning. ...
On the task of face attribute classification, we show how different attributes influence the classifier output by providing both causal and contrastive feature attributions, and the corresponding counterfactual ...
[3] use a causal prior graph and existing annotations to explain the predictions. ...
arXiv:2206.05257v1
fatcat:lzmm6qrqcnajbl6mh4tlgcbh64