Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs [article]

Harini Suresh, Kathleen M. Lewis, John V. Guttag, Arvind Satyanarayan
2021 arXiv   pre-print
Here, we present two visual analytics modules that facilitate an intuitive assessment of model reliability.  ...  Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models.  ...
arXiv:2102.08540v2 fatcat:tv5loul3zrdnznzlatiugwtfmq
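
The example-based explanations this entry refers to are often implemented by retrieving the training points nearest to a query input, so a user can judge whether the model's "evidence" looks sensible. A minimal sketch, assuming scikit-learn and placeholder embeddings (the feature space and data are stand-ins, not the authors' system):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def nearest_example_explanations(train_feats, query_feat, k=5):
        """Indices of the k training points closest to the query."""
        nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
        _, idx = nn.kneighbors(query_feat.reshape(1, -1))
        return idx[0]

    train_feats = np.random.rand(100, 16)  # placeholder model embeddings
    query = np.random.rand(16)
    print(nearest_example_explanations(train_feats, query))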

Review on Interpretable Machine Learning in Smart Grid

Chongchong Xu, Zhicheng Liao, Chaojie Li, Xiaojun Zhou, Renyou Xie
2022 Energies  
Unfortunately, the black-box nature of most machine learning models remains unresolved, and many decisions of intelligent systems still lack explanation.  ...  The smart grid is a critical infrastructure area, so machine learning models involving it must be interpretable in order to increase user trust and improve system reliability.  ...  We generally create intrinsically interpretable ML models through constraints such as linearization, rules, examples, sparsity, or causality.  ...
doi:10.3390/en15124427 fatcat:rl4xx53kjbemphx5vgynfytwli
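
Of the constraints listed in this abstract, sparsity is the easiest to show concretely: an L1-penalized linear model keeps only a few nonzero coefficients, each directly readable as a feature's effect. A minimal sketch on synthetic data (not the survey's grid models):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.1, size=200)

    # L1 penalty drives most coefficients to exactly zero
    model = Lasso(alpha=0.1).fit(X, y)
    for i, w in enumerate(model.coef_):
        if abs(w) > 1e-6:
            print(f"feature {i}: weight {w:+.2f}")  # the model is its own explanation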

Machine Learning Interpretability: A Survey on Methods and Metrics

Diogo V. Carvalho, Eduardo M. Pereira, Jaime S. Cardoso
2019 Electronics  
The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years.  ...  However, the emergence of these methods shows there is no consensus on how to assess the explanation quality. Which are the most suitable metrics to assess the quality of an explanation?  ...  The explanations considered in this work are rule-based and example-based.  ...
doi:10.3390/electronics8080832 fatcat:3mcv7lccwrbj5hakti2iwvdtu4

DARPA's Explainable Artificial Intelligence (XAI) Program

David Gunning, David Aha
2019 The AI Magazine  
The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations  ...  Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations.  ...
doi:10.1609/aimag.v40i2.2850 fatcat:2woifhwpdbcvbce5e6dm7xc4vm

Leveraging Explanations in Interactive Machine Learning: An Overview [article]

Stefano Teso, Öznur Alkan, Wolfgang Stammer, Elizabeth Daly
2022 arXiv   pre-print
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities in order to improve model transparency and allow users to form a mental model of a trained ML model  ...  The goal of this paper is to present an overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones  ...  Research in IML explores ways to learn and manipulate models through an intuitive human-computer interface [109] and encompasses a variety of learning and interaction strategies.  ...
arXiv:2207.14526v1 fatcat:ayi4rue365avnekierte6uui7y

CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior [article]

Eldar David Abraham, Karel D'Oosterlinck, Amir Feder, Yair Ori Gat, Atticus Geiger, Christopher Potts, Roi Reichart, Zhengxuan Wu
2022 arXiv   pre-print
In this paper, we cast model explanation as the causal inference problem of estimating causal effects of real-world concepts on the output behavior of ML models given actual input data.  ...  We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP).  ...
arXiv:2205.14140v1 fatcat:l4uw5ny6ijah5edgrqwpgmwyuy
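
CEBaB's framing can be sketched generically: given pairs of inputs that differ only in one real-world concept (e.g., a review rewritten so that only the food sentiment changes), the concept's causal effect on a model is the average change in its output across the pairs. A toy illustration, where `toy_model` is a placeholder scoring function, not CEBaB's API:

    import numpy as np

    def concept_effect(model, factual_texts, counterfactual_texts):
        """Average treatment effect of the edited concept on model output."""
        f = np.array([model(t) for t in factual_texts])
        cf = np.array([model(t) for t in counterfactual_texts])
        return (cf - f).mean(axis=0)

    # Stand-in for an NLP model: scores text by crude sentiment word counts
    toy_model = lambda text: np.array([text.count("good") - text.count("bad")])
    print(concept_effect(toy_model,
                         ["food was bad", "service bad, food bad"],
                         ["food was good", "service bad, food good"]))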

A Short Survey on Machine Learning Explainability: An Application to Periocular Recognition

João Brito, Hugo Proença
2021 Electronics  
Based on these intuitions, the experiments performed show explanations that attempt to highlight the most important periocular components towards a non-match decision.  ...  These kinds of models can be particularly useful to broaden the applicability of machine learning-based systems to domains where, apart from the predictions, appropriate justifications are also required  ...
doi:10.3390/electronics10151861 fatcat:hqsboo6asfekxnrpaslergpjgm

Explainable Machine Learning in Deployment [article]

Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley
2020 arXiv   pre-print
There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones.  ...  Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential  ...
arXiv:1909.06342v4 fatcat:rw2e7lkfazd2lipawpilhuyy6e

Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? [article]

Peter Hase, Mohit Bansal
2020 arXiv   pre-print
Through two kinds of simulation tests involving text and tabular data, we evaluate five explanation methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach  ...  Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains.  ...
arXiv:2005.01831v1 fatcat:atcx2ouwencubo67q3vxa366uy
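
Of the five methods compared here, LIME is the most widely used: it fits a sparse local surrogate model around a single prediction. A minimal sketch, assuming `pip install lime scikit-learn` and a synthetic stand-in for the paper's models:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    clf = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(X, feature_names=list("abcd"),
                                     mode="classification")
    exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
    print(exp.as_list())  # per-feature weights of the local surrogate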

Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability

Lukas-Valentin Herm, Kai Heinrich, Jonas Wanner, Christian Janiesch
2022 International Journal of Information Management  
Machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa.  ...  This stands in stark contrast to assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example due to data complexity.  ...
doi:10.1016/j.ijinfomgt.2022.102538 fatcat:m4niks5k6vehtja4ntkc4jarbu

Explainable Artificial Intelligence for Tabular Data: A Survey

Maria Sahakyan, Zeyar Aung, Talal Rahwan
2021 IEEE Access  
Furthermore, we categorize the references covered in our survey, indicating the type of the model being explained, the approach being used to provide the explanation, and the XAI problem being addressed  ...  Consequently, despite the existing survey articles that cover a wide range of XAI techniques, it remains challenging for researchers working on tabular data to go through all of these surveys and extract  ...  Tree SHAP [39] (2020): a model-specific method; SHAP adapted for tree-based ML models.  ...
doi:10.1109/access.2021.3116481 fatcat:6f2obirtk5byxarg3okhhjm5qm
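
The Tree SHAP entry above refers to the variant of SHAP that computes exact Shapley values for tree ensembles in polynomial time. A minimal sketch, assuming `pip install shap` and a synthetic regression task:

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = 2.0 * X[:, 0] + X[:, 2]
    model = RandomForestRegressor(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:10])  # (10, 5) attribution matrix
    print(shap_values[0])  # each feature's contribution to prediction 0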

Machine Learning – The Results Are Not the only Thing that Matters! What About Security, Explainability and Fairness? [chapter]

Michał Choraś, Marek Pawlicki, Damian Puchalski, Rafał Kozik
2020 Lecture Notes in Computer Science  
The aspects that can hinder practical and trustworthy ML and AI are a lack of security of ML algorithms, as well as a lack of fairness and explainability.  ...  Recent advances in machine learning (ML) and the surge in computational power have opened the way to the proliferation of ML and Artificial Intelligence (AI) in many domains and applications.  ...
doi:10.1007/978-3-030-50423-6_46 fatcat:4r2tbrsc7zh2xc3svsp2wtnds4

Counterfactual Explanations for Models of Code [article]

Jürgen Cito, Isil Dillig, Vijayaraghavan Murali, Satish Chandra
2021 arXiv   pre-print
Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks.  ...  We describe considerations that impact both the ability to find realistic and plausible counterfactual explanations, as well as the usefulness of such explanations to the user of the model.  ...  We walk the reader through examples of counterfactual explanations.  ...
arXiv:2111.05711v1 fatcat:njyvnjlk2jfldjjwo2hmzaf3ly
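
A counterfactual explanation in this setting is a minimal edit to the input program that flips the model's prediction. A toy sketch of the idea (a greedy one-token search, not the paper's algorithm):

    def counterfactual(tokens, predict, mask="<unk>"):
        """Return a one-token edit that changes the prediction, if any."""
        base = predict(tokens)
        for i in range(len(tokens)):
            edited = tokens[:i] + [mask] + tokens[i + 1:]
            if predict(edited) != base:
                return edited
        return None

    # Toy stand-in model that flags code containing `eval` as risky
    toy_predict = lambda toks: "risky" if "eval" in toks else "safe"
    print(counterfactual(["x", "=", "eval", "(", "s", ")"], toy_predict))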

Notions of explainability and evaluation approaches for explainable artificial intelligence

Giulia Vilone, Luca Longo
2021 Information Fusion  
... and reliability assessed.  ...  They have also suggested various approaches to assess to what degree machine-generated explanations meet these demands.  ...  of an ML model [176].  ...
doi:10.1016/j.inffus.2021.05.009 fatcat:pz7ao6nhkngm3osonenbl4e6hy

Adversarial XAI methods in Cybersecurity

Aditya Kuppa, Nhien-An Le-Khac
2021 IEEE Transactions on Information Forensics and Security  
Recent Explainable Artificial Intelligence literature has focused on three main areas: (a) creating and improving explainability methods that help users better understand how the internals of ML models  ...  In this paper, we cover this gap by tackling various cybersecurity properties and threat models related to counterfactual explanations.  ...
doi:10.1109/tifs.2021.3117075 fatcat:q24deiprgbckfmuj6vwmh2dy2a