530,791 Hits in 5.2 sec

On the Robustness of Most Probable Explanations [article]

Hei Chan, Adnan Darwiche
2012 arXiv   pre-print
In Bayesian networks, a Most Probable Explanation (MPE) is a complete variable instantiation with the highest probability given the current evidence.  ...  In this paper, we discuss the problem of finding robustness conditions of the MPE under single parameter changes.  ...  We would also like to thank James Park for reviewing this paper and making the observation on how to compute k(e, u) in Equation 6.  ... 
arXiv:1206.6819v1 fatcat:ewyprj6j3zfwfigc6lgjr27noq

On the Robustness of Interpretability Methods [article]

David Alvarez-Melis, Tommi S. Jaakkola
2018 arXiv   pre-print
We argue that robustness of explanations---i.e., that similar inputs should give rise to similar explanations---is a key desideratum for interpretability.  ...  Finally, we propose ways that robustness can be enforced on existing interpretability approaches.  ...  Robustness The notion of robustness we seek concerns variations of a prediction's "explanation" with respect to changes in the input leading to that prediction.  ... 
arXiv:1806.08049v1 fatcat:jyi2olxa3zbrbfvojwejh4p44e

A Comparison of Explanatory Measures in Abductive Inference [chapter]

Jian-Dong Huang, David H. Glass, Mark McCartney
2020 Communications in Computer and Information Science  
third approach, Most Probable Explanation (MPE).  ...  Experiments on the robustness of the measures with respect to incorrect model assumptions show that ML is more robust in general, but that MPE and PCM are more robust when the degree of competition is  ...  The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.  ... 
doi:10.1007/978-3-030-50153-2_23 fatcat:qrjgckhst5hsxfkia2cl377hee

Classifying and Detecting Plan-Based Misconceptions for Robust Plan Recognition

Randall J. Calistri-Yeh
1991 The AI Magazine  
Unlike some of the earlier, unprincipled classifications, this approach is based on the structure of plans and provides a complete classification of  ...  In addition, the work that has been done with plan-based misconceptions has generally ignored the problem of ambiguity (Pollack 1986; Quilici, Dyer, and Flowers 1988; and van Beek 1987).  ...  Acknowledgments The research described in this thesis was performed at Brown University under the guidance of Eugene Charniak. Some work  ... 
doi:10.1609/aimag.v12i3.911 dblp:journals/aim/Calistri-Yeh91 fatcat:hednmyu2jba7dacdqpp4k5jmmy

Abductively robust inference

Finnur Dellsén
2017 Analysis  
Since this inferential pattern is structurally similar to an argumentative strategy known as Inferential Robustness Analysis (IRA), it effectively combines the most attractive features of IBE and IRA into  ...  Abstract: Inference to the Best Explanation (IBE) is widely criticized for being an unreliable form of ampliative inference -partly because the explanatory hypotheses we have considered at a given time  ...  Second, probability appears to behave very differently from absolute levels of explanatory loveliness in that the probability of one explanatory hypothesis inevitably takes away from the probability of  ... 
doi:10.1093/analys/anx049 fatcat:cc26lsr2svg4bo5bg3hsdo4vka

The reasonable doubt standard as inference to the best explanation

Hylke Jellema
2020 Synthese  
Furthermore, this account is not susceptible to the most important arguments against IBE in criminal trials or to arguments against other, non-explanationist interpretations of the BARD standard.  ...  This article defends an inference to the best explanation (IBE)-based approach on which guilt is only established BARD if (1) the best guilt explanation in a case is substantially more plausible than any  ...  I would also like to thank Pepa Mellema and Stefan Sleeuw for commenting on previous versions of this article.  ... 
doi:10.1007/s11229-020-02743-8 fatcat:4aorhyr2rnb6hj5gmnmayrzwe4

Toward Robust Real-World Inference: A New Perspective on Explanation-Based Learning [chapter]

Gerald DeJong
2006 Lecture Notes in Computer Science  
The statistical paradigm affords a robustness in the real-world that has eluded symbolic logic.  ...  A simple algorithm provides a first illustration of the approach. Some important properties are proven including tractability and robustness with respect to the real world.  ...  Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of NSF or DARPA.  ... 
doi:10.1007/11871842_14 fatcat:mo2d4x6xyna2pafsxqkwskvv6q

Sparse Robust Regression for Explaining Classifiers [chapter]

Anton Björklund, Andreas Henelius, Emilia Oikarinen, Kimmo Kallonen, Kai Puolamäki
2019 Lecture Notes in Computer Science  
Our method extends current state-of-the-art robust regression methods, especially in terms of scalability on large datasets.  ...  Real-world datasets are often characterised by outliers, points far from the majority of the points, which might negatively influence modelling of the data.  ...  Supported by the Academy of Finland (decisions 326280 and 326339). We acknowledge the computational resources provided by Finnish Grid and Cloud Infrastructure [12].  ... 
doi:10.1007/978-3-030-33778-0_27 fatcat:tpcbbyemtvekjpwj63bmniowxa

Evaluations and Methods for Explanation through Robustness Analysis [article]

Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh
2021 arXiv   pre-print
Feature-based explanations, which provide the importance of each feature towards the model prediction, are arguably one of the most intuitive ways to explain a model.  ...  We further extend the explanation to extract the set of features that would move the current prediction to a target class by adopting targeted adversarial attack for the robustness analysis.  ...  of explanations which most of the current explanations do not possess.  ... 
arXiv:2006.00442v2 fatcat:aqhdypkdxnd75jgwjlmroq6hbm

Robust Counterfactual Explanations for Random Forests [article]

Alexandre Forel, Axel Parmentier, Thibaut Vidal
2022 arXiv   pre-print
We show that existing methods give surprisingly low robustness: the validity of naive counterfactuals is below 50% on most data sets and can fall to 20% on large problem instances with many features.  ...  We study the link between the robustness of ensemble models and the robustness of base learners and frame the generation of robust counterfactual explanations as a chance-constrained optimization problem  ...  Perhaps one of the most closely related streams of research investigates the robustness of counterfactuals to distribution shift: a change in the probability distribution underlying the features and labels  ... 
arXiv:2205.14116v1 fatcat:t7ryjkbo6ra3jl6ecz7dh7ynge


Dan Baras
2016 Episteme: A journal of individual and social epistemology  
In this article I offer two closely related accounts for the type of explanation needed in order to address Field's challenge.  ...  Some consider this argument, known as the Benacerraf–Field argument, as the strongest challenge to robust realism about mathematics (Field 1989, 2001), normativity (Enoch 2011), and even logic (Schechter  ...  Acknowledgments Although my name alone stands beside the title of this paper, it is better viewed as the product of a collaboration with many colleagues who have immensely contributed to my  ... 
doi:10.1017/epi.2016.5 fatcat:sajnm7zp65chrdvtjpol4xrox4

Analyzing and Improving the Robustness of Tabular Classifiers using Counterfactual Explanations

Peyman Rasouli, Ingrid Chieh Yu
2021 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)  
Counterfactual explanations are a specific class of post-hoc explanation methods that provide minimal modification to the input features in order to obtain a particular outcome from the model.  ...  In addition to the resemblance of counterfactual explanations to the universal perturbations, the possibility of generating instances from specific classes makes such approaches suitable for analyzing  ...  This is a proper metric for comparing different models trained on the same data set that allows the expert to select the most robust one.  ... 
doi:10.1109/icmla52953.2021.00209 fatcat:j2sss3wihncdvooiodg3o7mkna

A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines [article]

Vadim Borisov, Johannes Meier, Johan van den Heuvel, Hamed Jalali, Gjergji Kasneci
2021 arXiv   pre-print
Many approaches address the issue of interpreting artificial neural networks, but often provide divergent explanations.  ...  In this paper, we propose a technique for aggregating the feature attributions of different explanatory algorithms using Restricted Boltzmann Machines (RBMs) to achieve a more reliable and robust interpretation  ...  All these difficulties resulted in a large number of different explanation methods and in a lack of consensus on which techniques are most reliable.  ... 
arXiv:2111.07379v1 fatcat:v353gwp3wfccxczkfdvayixsga

Unifying Model Explainability and Robustness via Machine-Checkable Concepts [article]

Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar
2020 arXiv   pre-print
Our framework defines a large number of concepts that the DNN explanations could be based on and performs the explanation-conformity check at test time to assess prediction robustness.  ...  In this paper, we propose a robustness-assessment framework, at the core of which is the idea of using machine-checkable concepts.  ...  One of the most important desiderata of explainability is model robustness, whereby explanations are used to assess the extent to which some downstream task could rely on the model's predictions.  ... 
arXiv:2007.00251v2 fatcat:sdoq4c4lb5a2bnrk4vhhmvdnei

Dropping Pixels for Adversarial Robustness

Hossein Hosseini, Sreeram Kannan, Radha Poovendran
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
We argue that subsampling pixels can be thought to provide a set of robust features for the input image and, thus, improves robustness without performing adversarial training.  ...  We show that this approach significantly improves robustness against adversarial examples in all cases of bounded L0, L2 and L∞ perturbations, while reducing the standard accuracy by a small value.  ...  Most explanation methods are based on some form of the gradient of the classifier function with respect to input [21].  ... 
doi:10.1109/cvprw.2019.00017 dblp:conf/cvpr/HosseiniKP19 fatcat:j5p6gv6vqvbt5lugvpb2rixa7a