374 Hits in 6.7 sec

White-box Induction From SVM Models: Explainable AI with Logic Programming [article]

Farhad Shakerin, Gopal Gupta
2020 arXiv   pre-print
We focus on the problem of inducing logic programs that explain models learned by the support vector machine (SVM) algorithm.  ...  The top-down sequential covering inductive logic programming (ILP) algorithms (e.g., FOIL) apply hill-climbing search using heuristics from information theory.  ...  Acknowledgment: The authors acknowledge support from NSF grants IIS 1718945, IIS 1910131, IIP 1916206 and DARPA grant HR001119S0057-ARCOS-FP-036.  ... 
arXiv:2008.03301v1 fatcat:4ckz4hvqnvbi5mrkc5e2k77nue
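The snippet above mentions FOIL's hill-climbing search guided by an information-theoretic heuristic. As an illustrative sketch only (not the authors' method), FOIL's standard information gain for adding a literal to a clause can be written as:

```python
import math

def foil_gain(p0, n0, p1, n1):
    """FOIL information gain for adding a literal to a clause.

    p0, n0: positive/negative examples covered before adding the literal.
    p1, n1: positive/negative examples covered after adding the literal.
    """
    if p1 == 0:
        return 0.0  # a literal that covers no positives gains nothing
    info_before = -math.log2(p0 / (p0 + n0))
    info_after = -math.log2(p1 / (p1 + n1))
    # The reduction in information cost is weighted by the number of
    # positives still covered after adding the literal.
    return p1 * (info_before - info_after)

# A literal that keeps 40 of 50 positives while cutting negatives
# from 50 to 5 yields a clearly positive gain.
print(foil_gain(50, 50, 40, 5))
```

The hill-climbing loop then greedily adds the highest-gain literal until the clause covers no negatives, removes the covered positives, and repeats (sequential covering).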

Maintenance Models Applied to Wind Turbines. A Comprehensive Overview

Yuri Merizalde, Luis Hernández-Callejo, Oscar Duque-Perez, Víctor Alonso-Gómez
2019 Energies  
In this context, this review aims to identify and classify, from a comprehensive perspective, the different types of models used at the strategic, tactical, and operational decision levels of wind turbine  ...  Wind power generation has been the fastest-growing energy alternative in recent years; however, it still has to compete with cheaper fossil energy sources.  ...  Gray-box models combine white- and black-box models; an example is neural networks with fuzzy logic [62] [63] [64].  ... 
doi:10.3390/en12020225 fatcat:4u2tjzbvbrgsdllokoq6lqynnu

Interpretable and Adaptable Early Warning Learning Analytics Model

Shaleeza Sohail, Atif Alvi, Aasia Khanum
2022 Computers Materials & Continua  
Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability is not quantified.  ...  The measure of explainability, the fuzzy index, shows that the model is highly interpretable. This system achieves more than 82% recall in both the classification and the context adaptation stages.  ...  The performance of the multi-view programming approach is consistently better when compared with other white-box rule- and tree-based traditional approaches.  ... 
doi:10.32604/cmc.2022.023560 fatcat:4ku22h6dgjfyrkjwmyomchx7vy

Artificial Intelligence in Software Testing: Impact, Problems, Challenges and Prospect [article]

Zubair Khaliq, Sheikh Umar Farooq, Dawood Ashraf Khan
2022 arXiv   pre-print
Further, the study aims to recognize and explain some of the biggest challenges software testers face while applying AI to testing.  ...  Over time, with the inclusion of the continuous integration and continuous delivery (CI/CD) pipeline, automation tools are becoming less effective.  ...  Model testing is like a black-box technique, where structural or logical information about the model is not a necessity.  ... 
arXiv:2201.05371v1 fatcat:2zwt6e7ojbdgff65fcfzh4o3im

A Survey on the Explainability of Supervised Machine Learning

Nadia Burkart, Marco F. Huber
2021 The Journal of Artificial Intelligence Research  
… artificial neural networks have a high accuracy, but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque to humans.  ...  The decision making behind the black boxes needs to be more transparent, accountable, and understandable for humans.  ...  That is, by solving Problem 3, one aims to learn a white-box model from the hypothesis space of interpretable models I, which is the third way of gaining explainability.  ... 
doi:10.1613/jair.1.12228 fatcat:nd3hfatjknhexb5eabklk657ey

A Survey on the Explainability of Supervised Machine Learning [article]

Nadia Burkart, Marco F. Huber
2020 arXiv   pre-print
… artificial neural networks have a high accuracy, but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque to humans.  ...  The decision making behind the black boxes needs to be more transparent, accountable, and understandable for humans.  ...  That is, by solving Problem 3, one aims to learn a white-box model from the hypothesis space of interpretable models I, which is the third way of gaining explainability.  ... 
arXiv:2011.07876v1 fatcat:ccquewit2jam3livk77l5ojnqq

Classification of Explainable Artificial Intelligence Methods through Their Output Formats

Giulia Vilone, Luca Longo
2021 Machine Learning and Knowledge Extraction  
Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences.  ...  Machine and deep learning have proven their utility for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret.  ...  The black-box model can be explained through the weights of the white-box estimator, which does not need to work globally but should approximate the black box well in the vicinity of a single instance.  ... 
doi:10.3390/make3030032 fatcat:2mf4wusxdnanthfdayt7iejgl4
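The local white-box surrogate idea described in this snippet (fit an interpretable model that approximates the black box only near one instance, as popularized by LIME) can be sketched with a proximity-weighted linear surrogate. This is purely an illustrative sketch: the function name, kernel width, and toy black box below are invented for the example, not taken from the paper.

```python
import numpy as np

def local_surrogate_weights(black_box, x0, n_samples=500, sigma=1.0, seed=0):
    """Fit a weighted linear model around x0 to explain black_box locally.

    Returns per-feature weights of the surrogate; a larger magnitude
    means the feature is more influential near x0. Plain NumPy
    least squares with Gaussian proximity weights.
    """
    rng = np.random.default_rng(seed)
    # Perturb x0 to sample the black box in its neighborhood.
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.size))
    y = np.array([black_box(x) for x in X])
    # Proximity kernel: samples closer to x0 count more.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * Xb, np.sqrt(w) * y, rcond=None)
    return coef[:-1]  # drop the intercept

# Toy "black box": near x0 it depends almost entirely on feature 0.
f = lambda x: 3.0 * x[0] + 0.1 * np.sin(5 * x[1])
weights = local_surrogate_weights(f, np.array([0.0, 0.0]))
```

Here `weights[0]` recovers the locally dominant coefficient on feature 0, while the weight on the oscillating feature stays near zero, which is exactly the "locally faithful, globally agnostic" behavior the snippet describes.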

Generative Models of Brain Dynamics – A review [article]

Mahta Ramezanian Panahi, Germán Abrevaya, Jean-Christophe Gagnon-Audet, Vikram Voleti, Irina Rish, Guillaume Dumas
2021 arXiv   pre-print
By way of conclusion, we present several hybrid generative models from recent literature in scientific machine learning, which can be efficiently deployed to yield interpretable models of neural dynamics  ...  While not all of these models span the intersection of neuroscience, AI, and system dynamics, all of them do or can work in tandem as generative models, which, as we argue, provide superior properties  ...  Mahta Ramezanian Panahi, Jean-Christophe Gagnon-Audet, Vikram Voleti, and Irina Rish acknowledge support from the Canada CIFAR AI Chair Program and from the Canada Excellence Research Chairs (CERC) program  ... 
arXiv:2112.12147v2 fatcat:gg2njt2ks5gudk7ewxype2zvni

Vibration Analysis for Machine Monitoring and Diagnosis: A Systematic Review

Mohamad Hazwan Mohd Ghazali, Wan Rahiman, Gang Tang
2021 Shock and Vibration  
Operators are also provided with early warnings for scheduled maintenance.  ...  It involves data acquisition (instruments applied, such as analyzers and sensors), feature extraction, and fault recognition techniques using artificial intelligence (AI).  ...  Apart from that, unlike other AI methods such as SVM and NN, fuzzy logic does not rely on datasets, as there is no training or testing stage.  ... 
doi:10.1155/2021/9469318 fatcat:4dfinnlm65fz7h2fcto42kwocm

"Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users [article]

Stefano Teso, Kristian Kersting
2018 arXiv   pre-print
… SVMs) and image classification (e.g., neural networks) experiments, as well as a user study.  ...  Although interactive learning puts the user into the loop, the learner remains mostly a black box for the user.  ...  Data Models".  ... 
arXiv:1805.08578v1 fatcat:66fi77hoqbah5imkuyxbzgjuta

Meta-Interpretive Learning from noisy images

Stephen Muggleton, Wang-Zhou Dai, Claude Sammut, Alireza Tamaddoni-Nezhad, Jing Wen, Zhi-Hua Zhou
2018 Machine Learning  
This paper describes an Inductive Logic Programming approach called Logical Vision which overcomes some of these limitations.  ...  LV uses Meta-Interpretive Learning (MIL) combined with low-level extraction of high-contrast points sampled from the image to learn recursive logic programs describing the image.  ...  The authors believe that LV has long-term potential as an AI technology for unifying the disparate areas of logic-based learning and visual perception.  ... 
doi:10.1007/s10994-018-5710-8 fatcat:365vnau5drak5ek4ikc2d2pfze

Pairing Conceptual Modeling with Machine Learning [article]

Wolfgang Maass, Veda C. Storey
2021 arXiv   pre-print
With the increasing emphasis on digitizing and processing large amounts of data for business and other applications, it would be helpful to consider how these areas of research can complement each other  ...  We then examine how conceptual modeling can be applied to machine learning and propose a framework for incorporating conceptual modeling into data science projects.  ...  Acknowledgements This paper was based on a keynote presentation given by the first author at the International Conference on Conceptual Modeling.  ... 
arXiv:2106.14251v1 fatcat:n4kujuzttja67jqjs3vz3bdiba

Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach [article]

Valérie Beaudouin, David Bounie (IP Paris, ECOGE, SES), Stéphan Clémençon, Florence d'Alché-Buc, James Eagan, Jayneel Parekh
2020 arXiv   pre-print
The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning.  ...  The originality of this paper is to combine technical, legal and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context.  ...  has been trained mostly on faces with white skin.  ... 
arXiv:2003.07703v1 fatcat:knsvo6eftzf2fe3eeoekck5xaa

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI [article]

Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
2019 arXiv   pre-print
Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability  ...  Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models.  ...  Javier Del Ser also acknowledges funding support from the Consolidated Research Group MATHMODE (IT1294-19) granted by the Department of Education of the Basque Government.  ... 
arXiv:1910.10045v2 fatcat:hgoi7cvkazdd5jycsdfkn565di

Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications [article]

Philip Matthias Winter, Sebastian Eder, Johannes Weissenböck, Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler
2021 arXiv   pre-print
While certain high-risk areas, such as fully autonomous robots in workspaces shared with humans, are still some time away from certification, we aim to cover low-risk applications with our certification  ...  Our holistic approach attempts to analyze Machine Learning applications from multiple perspectives to evaluate and verify the aspects of secure software development, functional requirements, data quality  ...  However, white box testing for ML cannot be directly compared to white box testing for classical software.  ... 
arXiv:2103.16910v1 fatcat:xd37dtaxr5brjmljzvp3sr6lqa
Showing results 1 – 15 out of 374 results