Identifying the Machine Learning Family from Black-Box Models
[chapter]
2018
Lecture Notes in Computer Science
The other approach, based on machine learning, consists in learning a meta-model that is able to predict the model family of a new black-box model. ...
We address the novel question of determining which kind of machine learning model is behind the predictions when we interact with a black-box model. ...
Conclusions and Future Work In this work we addressed the problem of identifying the model family of a black-box learning model. ...
doi:10.1007/978-3-030-00374-6_6
fatcat:c2abzp4eojcvxl3w224eky4xfe
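As a rough sketch of the meta-model idea summarized in this entry, the snippet below trains several black-box classifiers of known families, queries each one on random probe points, summarizes the responses into simple meta-features, and fits a classifier that predicts the family. The probe scheme and the two meta-features are illustrative assumptions, not the ones used in the paper.

```python
# Sketch of the meta-model idea: probe several black-box classifiers,
# summarize their predictions into meta-features, and learn to predict
# the model family. The meta-features chosen here are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FAMILIES = {"tree": DecisionTreeClassifier,
            "linear": LogisticRegression,
            "knn": KNeighborsClassifier}

def meta_features(black_box, n_probes=500, n_features=5):
    """Query the black box on random probes and summarize its behaviour."""
    probes = rng.normal(size=(n_probes, n_features))
    preds = black_box.predict(probes)
    # Smoothness proxy: how often a small perturbation flips the prediction.
    flipped = black_box.predict(probes + 0.05 * rng.normal(size=probes.shape))
    return [preds.mean(), (preds != flipped).mean()]

X_meta, y_meta = [], []
for _ in range(60):  # train many black boxes per family on varied datasets
    X, y = make_classification(n_samples=200, n_features=5,
                               random_state=rng.integers(10_000))
    for name, cls in FAMILIES.items():
        model = cls().fit(X, y)
        X_meta.append(meta_features(model))
        y_meta.append(name)

Xtr, Xte, ytr, yte = train_test_split(X_meta, y_meta, random_state=0)
meta_model = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("meta-model accuracy:", meta_model.score(Xte, yte))
```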
Hybrid machine learning model-based approach for Intelligent Grinding
2019
Zenodo
In fact, black box machine learning models (typically neural networks) are currently being used for high-stakes decision making throughout many industrial sectors. ...
Creating methods for explaining these black box models could alleviate some of the problems [3], but trying to explain black box models, rather than creating models that are interpretable in the first ...
AI methods often exploit black box models. ...
doi:10.5281/zenodo.4789782
fatcat:iva3u2kdpzfgxlsyzo3shh3y3m
Programs as Black-Box Explanations
[article]
2016
arXiv
pre-print
Recent work in model-agnostic explanations of black-box machine learning has demonstrated that interpretability of complex models does not have to come at the cost of accuracy or model flexibility. ...
However, it is not clear what kind of explanations, such as linear models, decision trees, and rule lists, are the appropriate family to consider, and different tasks and models may benefit from different ...
Local, Model-Agnostic Explanations: Our goal here is to explain individual predictions of a complex machine learning system, by treating them in a black-box manner. ...
arXiv:1611.07579v1
fatcat:j5ebojy225h6ja6mzz5vgmclhm
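The local, model-agnostic setting described in this entry is commonly sketched with a surrogate fitted to perturbed samples around one instance (in the spirit of LIME). The example below uses a linear surrogate rather than the program-based explanations the paper proposes; the dataset, perturbation scale, and surrogate choice are illustrative.

```python
# Minimal sketch of a local, model-agnostic explanation: perturb one instance,
# query the black box, and fit a simple linear surrogate to its outputs.
# This is a LIME-style recipe, not the program-based explanations of the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = X[0]                                   # instance to explain
rng = np.random.default_rng(0)
neighbors = x0 + rng.normal(scale=0.1 * X.std(axis=0), size=(1000, X.shape[1]))
bb_scores = black_box.predict_proba(neighbors)[:, 1]

surrogate = Ridge(alpha=1.0).fit(neighbors, bb_scores)
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
print("locally most influential feature indices:", top)
```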
Discovery of moiety preference by Shapley value in protein kinase family using random forest models
2022
BMC Bioinformatics
Using more than 200,000 bioactivity test data points, we classified inhibitors as kinase family inhibitors or non-inhibitors with machine learning. ...
The results showed that our RF models achieved good accuracy (> 0.8) for the 10 kinase families. ...
The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-23-supplement-4. ...
doi:10.1186/s12859-022-04663-5
pmid:35428180
pmcid:PMC9011936
fatcat:xr6kfl6pqvf2pdnqtceuar7dnm
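A minimal sketch of the Shapley-value workflow mentioned in this entry, using the shap package's TreeExplainer on a random forest; synthetic features stand in for the kinase bioactivity data, so the numbers are purely illustrative.

```python
# Sketch: Shapley values for a random forest classifier via the shap package.
# Synthetic features stand in for the compound data used in the paper.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(rf)
sv = np.asarray(explainer.shap_values(X))
# Depending on the shap version this is (n_classes, n_samples, n_features) or
# (n_samples, n_features, n_classes); reduce over everything except features.
feature_axis = list(sv.shape).index(X.shape[1])
importance = np.abs(sv).mean(axis=tuple(ax for ax in range(sv.ndim)
                                        if ax != feature_axis))
print("features ranked by mean |SHAP|:", np.argsort(importance)[::-1][:5])
```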
False perfection in machine prediction: Detecting and assessing circularity problems in machine learning
[article]
2021
arXiv
pre-print
This paper is an excerpt of an early version of Chapter 2 of the book "Validity, Reliability, and Significance. ...
Please see the book's homepage at https://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1688 for a more recent and comprehensive discussion. ...
Acknowledgments This research has been conducted in project SCIDATOS (Scientific Computing for Improved Detection and Therapy of Sepsis), funded by the Klaus Tschira Foundation, Germany (Grant number 00.0277.2015 ...
arXiv:2106.12417v2
fatcat:z4hmggdimfgt5hp6ovyhwd5qry
Explaining Black-box Android Malware Detection
[article]
2018
arXiv
pre-print
In this work, we generalize this approach to any black-box machine-learning model, by leveraging a gradient-based approach to identify the most influential local features. ...
To mitigate this issue, the most popular Android malware detectors use linear, explainable machine-learning models to easily identify the most influential features contributing to each decision. ...
ACKNOWLEDGMENTS This work was partly supported by the EU H2020 project ALOHA, under the European Union's Horizon 2020 research and innovation programme (grant no. 780788), and by the PIS-DAS project, funded ...
arXiv:1803.03544v2
fatcat:3oxzllcpsndcpic6a5oe2kgloe
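The gradient-based attribution described in this entry ranks a sample's features by how strongly the detector's output responds to them. For a model exposed only as a black box, the gradient can be approximated with finite differences, as sketched below; the classifier and synthetic data are stand-ins for the Android malware detectors in the paper.

```python
# Sketch: approximate gradient-based feature attribution for a black-box
# detector using finite differences on predict_proba. Synthetic data stands
# in for Android app features (permissions, API calls, ...).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
detector = SVC(probability=True, random_state=0).fit(X, y)   # the "black box"

def local_attributions(f, x, eps=1e-3):
    """Finite-difference estimate of d f(x) / d x_i for each feature i."""
    base = f(x.reshape(1, -1))[0, 1]
    grads = np.zeros_like(x)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        grads[i] = (f(x_pert.reshape(1, -1))[0, 1] - base) / eps
    return grads

grads = local_attributions(detector.predict_proba, X[0])
print("most influential features for sample 0:",
      np.argsort(np.abs(grads))[::-1][:5])
```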
Improving Current Glycated Hemoglobin Prediction in Adults: Use of Machine Learning Algorithms With Electronic Health Records
2021
JMIR Medical Informatics
Explainable methods were employed to interpret the decisions made by the black box models. ...
When coupled with longitudinal data, the machine learning models outperformed the multiple logistic regression model used in the comparative study. ...
Diagnostic Code: Description
E11: Type 2 Diabetes Mellitus
E14: Diabetes Mellitus
E10: Type 1 Diabetes Mellitus
E139: Familial Diabetes Mellitus
R73: Hyperglycemia
O24: Gestational diabetes ...
doi:10.2196/25237
pmid:34028357
fatcat:rcslqutemfgbnf2jcrl4tnlkeq
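The diagnostic codes listed above are typically applied as ICD-10 prefix filters when selecting diabetes-related encounters from EHR data. The sketch below shows one such filter in pandas; the column names and example records are hypothetical and not taken from the study.

```python
# Sketch: flag diabetes-related encounters by ICD-10 code prefix, using the
# diagnostic codes listed in the entry above. Column names and example rows
# are hypothetical.
import pandas as pd

DIABETES_CODES = {
    "E11": "Type 2 Diabetes Mellitus",
    "E14": "Diabetes Mellitus",
    "E10": "Type 1 Diabetes Mellitus",
    "E139": "Familial Diabetes Mellitus",
    "R73": "Hyperglycemia",
    "O24": "Gestational diabetes",
}

encounters = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "icd10_code": ["E11.9", "I10", "O24.4"],
})
prefixes = tuple(DIABETES_CODES)
encounters["diabetes_related"] = encounters["icd10_code"].str.startswith(prefixes)
print(encounters)
```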
Understanding the efficacy, reliability and resiliency of computer vision techniques for malware detection and future research directions
[article]
2019
arXiv
pre-print
My research lies in the intersection of security and machine learning. ...
Via understanding the attack surfaces of machine learning models used for malware detection, we can greatly improve the robustness of the algorithms to combat malware adversaries in the wild. ...
... efficacy of black-box ransomware classifiers. ...
arXiv:1904.10504v1
fatcat:qqu2nyn2wbfaroltdhqmu7lpv4
A Literature Review and Research Agenda on Explainable Artificial Intelligence (XAI)
2022
Zenodo
We cannot accept the black box nature of AI models as we encounter the consequences of those decisions. ...
Machine learning models are used to make important decisions in critical areas such as medical diagnosis and financial transactions. ...
• In medical diagnosis, how can we trust the machine learning model to treat the patients as instructed by a black-box model? ...
doi:10.5281/zenodo.5998487
fatcat:cyaqpxofivamrnzoe5bepe4m5m
Toward the transparency of deep learning in radiological imaging: beyond quantitative to qualitative artificial intelligence
2019
Journal of Medical Artificial Intelligence
Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved ...
Footnote
Conflicts of Interest: The author has no conflicts of interests to declare. ...
These "black box" problems lead to opaqueness in DL. The aim of this editorial commentary is to help realize the transparency of "black box" machine learning for radiologic imaging. ...
doi:10.21037/jmai.2019.09.06
fatcat:54pgaegp5vbvfolhefyza5hh24
On the Correspondence Between Conformance Testing and Regular Inference
[chapter]
2005
Lecture Notes in Computer Science
Conformance testing for finite state machines and regular inference both aim at identifying the model structure underlying a black box system on the basis of a limited set of observations. ...
Whereas the former technique checks for equivalence with a given conjecture model, the latter technique addresses the corresponding synthesis problem by means of techniques adopted from automata learning ...
Both techniques aim at identifying the model structure underlying a black box system on the basis of a limited set of observations. ...
doi:10.1007/978-3-540-31984-9_14
fatcat:wamsgtuvrfbdpbudq7dipxavhq
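Conformance testing in the sense of this entry checks whether a conjectured finite-state machine agrees with a black-box system on a finite set of observations. The toy sketch below runs a conjectured DFA and a simulated black box over a small test suite and reports counterexamples; the automaton, the black box, and the test suite are invented for illustration.

```python
# Sketch: conformance check of a conjectured DFA against a black-box system
# on a finite test suite. The automata and the test suite are toy examples.

def run_dfa(transitions, accepting, word, start="q0"):
    """Run a DFA given as {(state, symbol): state} and return acceptance."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# Conjectured model: accepts words over {a, b} with an even number of 'a'.
conjecture_t = {("q0", "a"): "q1", ("q0", "b"): "q0",
                ("q1", "a"): "q0", ("q1", "b"): "q1"}
conjecture_acc = {"q0"}

# Black-box system: observable only through queries, simulated here.
def black_box(word):
    return word.count("a") % 2 == 0

test_suite = ["", "a", "b", "aa", "ab", "ba", "bb", "aab", "abab", "aaab"]
mismatches = [w for w in test_suite
              if run_dfa(conjecture_t, conjecture_acc, w) != black_box(w)]
print("counterexamples:", mismatches or "none, conjecture passes the test suite")
```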
A study about Explainable Artificial Intelligence: using decision tree to explain SVM
2020
Revista Brasileira de Computação Aplicada
Most of these recent models are used as black boxes without a partial or even complete understanding of how different features influence the model prediction, which undermines algorithmic transparency. ...
We also outline the main challenges for such methods and conclude that model-agnostic interpretability is a key component in making machine learning more trustworthy. ...
By fitting machine learning models based on the Data layer, we get the Black-Box Model layer. Figure 1: The big picture of explainable machine learning. ...
doi:10.5335/rbca.v12i1.10247
fatcat:vyp7u2j7j5bdlndtialsvki4zy
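The technique named in this entry's title (explaining an SVM with a decision tree) is usually implemented as a global surrogate: the tree is trained to mimic the SVM's predictions rather than the true labels, and its agreement with the SVM (fidelity) is reported. A minimal sketch, with an illustrative dataset and tree depth:

```python
# Sketch: explain an SVM with a decision tree by training the tree on the
# SVM's own predictions (a global surrogate). Dataset and depth are illustrative.
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
svm = SVC(kernel="rbf").fit(X, y)                           # the opaque model
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, svm.predict(X))                            # mimic the SVM, not the labels

fidelity = (surrogate.predict(X) == svm.predict(X)).mean()  # agreement with the SVM
print(f"surrogate fidelity to the SVM: {fidelity:.2f}")
print(export_text(surrogate))
```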
Fairwashing: the risk of rationalization
[article]
2019
arXiv
pre-print
Black-box explanation is the problem of explaining how a machine learning model -- whose internal logic is hidden to the auditor and generally complex -- produces its outcomes. ...
We empirically evaluate our rationalization technique on black-box models trained on real-world datasets and show that one can obtain rule lists with high fidelity to the black-box model while being considerably ...
Sébastien Gambs is supported by the Canada Research Chair program as well as by a Discovery Grant and a Discovery Accelerator Supplement Grant from NSERC. ...
arXiv:1901.09749v3
fatcat:dpzzjeobdnaubgcrdhstzkp7lq
Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning
[article]
2019
arXiv
pre-print
In this paper, we outline a data-driven, iterative procedure that allows cognitive scientists to use machine learning to generate models that are both interpretable and accurate. ...
The recently released Moral Machine dataset allows us to build a powerful model that can predict the outcomes of these conflicts while remaining simple enough to explain the basis behind human decisions ...
We thank Edmond Awad for providing guidance on navigating the Moral Machine dataset. ...
arXiv:1902.06744v3
fatcat:m6e4vjwo7nfkrhcltncwlhqkiq
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
[article]
2019
arXiv
pre-print
... identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare ...
Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. ...
I would like to acknowledge funding from the Laura and John Arnold Foundation, NIH, NSF, DARPA, the Lord Foundation of North Carolina, and MIT-Lincoln Laboratory. ...
arXiv:1811.10154v3
fatcat:2xqiy3n4irczza67iiuvxyrt7a
Showing results 1 — 15 out of 53,209 results