2,380 Hits in 10.9 sec

Opening the Black Box: Discovering and Explaining Hidden Variables in Type 2 Diabetic Patient Modelling

Leila Yousefi, Stephen Swift, Mahir Arzoky, Lucia Sacchi, Luca Chiovato, Allan Tucker
2018 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)  
However, the overuse of hidden variables can lead to complex models that can overfit and are not well understood (being 'black box' in nature).  ...  We exploit Dynamic Time Warping and hierarchical clustering to cluster patients based upon these hidden variables to uncover their meaning with respect to the complications of Type 2 Diabetes Mellitus  ...  INTRODUCTION Type 2 Diabetes is traditionally known as Type 2 Diabetes Mellitus (T2DM), and has been known for thousands of years.  ... 
doi:10.1109/bibm.2018.8621484 dblp:conf/bibm/YousefiSASCT18 fatcat:q3k6f4pw6jfk5gumjjkas77iha

Opening the black box: Personalizing type 2 diabetes patients based on their latent phenotype and temporal associated complication rules

Leila Yousefi, Stephen Swift, Mahir Arzoky, Lucia Sacchi, Luca Chiovato, Allan Tucker
2020 Computational Intelligence  
Many techniques utilized in modeling diseases are often in the form of a "black box" where the internal workings and complexities are extremely difficult to understand, both from practitioners' and patients  ...  It is widely considered that approximately 10% of the population suffers from type 2 diabetes. Unfortunately, the impact of this disease is underestimated.  ...  For being able to explain the black-box model and hidden variables, we attempted to explore a well-known group of patients.  ... 
doi:10.1111/coin.12313 fatcat:t42aoje46nfqvfwtqo4rdcwmwu

Predicting Type 2 Diabetes Complications and Personalising Patient Using Artificial Intelligence Methodology [chapter]

Leila Yousefi, Allan Tucker
2020 Type 2 Diabetes [Working Title]  
Scholars share a common argument that many Artificial Intelligence techniques that successfully model disease are often in the form of a "black box" where the internal workings and complexities are extremely  ...  The proposed strategy builds probabilistic graphical models for prediction with the inclusion of informative hidden variables.  ...  Acknowledgements I thank the following individuals for their expertise and assistance throughout all aspects of this study and for their insightful suggestions and careful reading of the manuscript.  ... 
doi:10.5772/intechopen.94228 fatcat:dkhosuu5xbhuxp4sszjpcvpmga

Evolving explainable rule sets

Hormoz Shahrzad, Babak Hodjat, Risto Miikkulainen
2022 Proceedings of the Genetic and Evolutionary Computation Conference Companion  
In this work, we present a method which evolves explainable rule-sets using inherently transparent ordinary logic to make models.  ...  Many domains, however, have explainablity and trustworthiness requirements not fulfilled by these approaches. Various methods exist to analyze or interpret black-box models post training.  ...  the black-box model.  ... 
doi:10.1145/3520304.3534023 fatcat:fldi4cmxtjfixozlmjds6buqau

Opening the machine learning black box with Layer-wise Relevance Propagation [article]

Sebastian Lapuschkin, Technische Universität Berlin, Klaus-Robert Müller
2019
This black box character hampers acceptance and application of non-linear methods in many application domains, where understanding individual model predictions and thus trust in the model's decisions are  ...  However, due to the nested complex and non-linear structure of many machine learning models, this comes with the disadvantage of them acting as a black box, providing little or no information about the  ...  Acknowledgements Experimental Setup We demonstrate the results on a classifier for the MIT Places data set (Zhou et al., 2014b) provided by the authors of this data set and the Caffe reference model  ... 
doi:10.14279/depositonce-7942 fatcat:2ca4ccz5bzb2lojrqrpsxhmzve

A Comparative Study of Two Rule-Based Explanation Methods for Diabetic Retinopathy Risk Assessment

Najlaa Maaroof, Antonio Moreno, Aida Valls, Mohammed Jabreel, Marcin Szeląg
2022 Applied Sciences  
Local explanation models analyse a decision on a single instance, by using the responses of the system to the points in its neighbourhood to build a surrogate model.  ...  Understanding the reasons behind the decisions of complex intelligent systems is crucial in many domains, especially in healthcare.  ...  We also thank the collaboration of the Ophthalmology department of Hospital Sant Joan de Reus in Catalonia.  ... 
doi:10.3390/app12073358 doaj:6f6dfa21c0bd4f1db3f9b7ae5b54176a fatcat:ok3o5agi2vep5abjfm4sftsqhq

Machine Learning Techniques for Screening and Diagnosis of Diabetes: a Survey

2019 Tehnički Vjesnik  
According to the International Diabetes Federation report, this figure is expected to rise to more than 642 million in 2040, so early screening and diagnosis of diabetes patients have great significance in detecting and treating diabetes on time.  ...  Acknowledgements This work was supported in part by the Fundamental Research Funds for the Central Universities (Grant No. 18CX02019A).  ... 
doi:10.17559/tv-20190421122826 fatcat:4ez7t2d3zzawdlmpcz6ecwwwda

Interpretable machine learning: Fundamental principles and 10 grand challenges

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
2022 Statistics Surveys  
; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning.  ...  These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and  ...  Thank you to David Page for providing useful references on early explainable ML. Thank you to the anonymous reviewers that made extremely helpful comments.  ... 
doi:10.1214/21-ss133 fatcat:ahzfoilhmfa2rd4hcauvsn3eyy

Circadian oscillations of cytosine modification in humans contribute to epigenetic variability, aging, and complex disease

Gabriel Oh, Karolis Koncevičius, Sasha Ebrahimi, Matthew Carlucci, Daniel Erik Groot, Akhil Nair, Aiping Zhang, Algimantas Kriščiūnas, Edward S. Oh, Viviane Labrie, Albert H. C. Wong, Juozas Gordevičius (+3 others)
2019 Genome Biology  
Maintenance of physiological circadian rhythm plays a crucial role in human health.  ...  Numerous studies have shown that disruption of circadian rhythm may increase risk for malignant, psychiatric, metabolic, and other diseases.  ...  Chambers and B. Lehne for providing type II diabetes data.  ... 
doi:10.1186/s13059-018-1608-9 pmid:30606238 pmcid:PMC6317262 fatcat:dokao33chjgihojmeigyxs5qbm

Use of a neural network as a predictive instrument for length of stay in the intensive care unit following cardiac surgery

J V Tu, M R Guerriere
1992 Proceedings. Symposium on Computer Applications in Medical Care  
We trained a neural network with a database of 713 patients and 15 input variables to predict patients who would have a prolonged ICU length of stay, which we defined as a stay greater than 2 days.  ...  The performance of the network was also evaluated by calculating the area under the Receiver Operating Characteristic (ROC) curve in the training set, 0.7094 (SE 0.0224), and in the test set, 0.6960 (SE  ...  "Black-box" - Relationships and interactions between input variables and predicted outputs cannot be explained. Requires significant computing power and longer computational times.  ... 
pmid:1482955 pmcid:PMC2248140 fatcat:3h557nsnunbljdb4lbyvvsyuyi

Controlling Safety of Artificial Intelligence-Based Systems in Healthcare

Mohammad Reza Davahli, Waldemar Karwowski, Krzysztof Fiok, Thomas Wan, Hamid R. Parsaei
2021 Symmetry  
The main objective was to propose safety guidelines for implementing AI black-box models to reduce the risk of potential healthcare-related incidents and accidents.  ...  In response to the need to address the safety challenges in the use of artificial intelligence (AI), this research aimed to develop a framework for a safety controlling system (SCS) to address the AI black-box  ...  To address the AI black-box challenge, a considerable amount of research has focused on developing explainable AI to open the black-box [23] .  ... 
doi:10.3390/sym13010102 fatcat:byydrbai6jbddmwryghqwd2tre

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges [article]

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
2021 arXiv   pre-print
; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning.  ...  These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and  ...  Thank you to David Page for providing useful references on early explainable ML. Thank you to the anonymous reviewers that made extremely helpful comments.  ... 
arXiv:2103.11251v2 fatcat:52llnswt3ze5rl3zhbai5bscce

Rule extraction using Recursive-Rule extraction algorithm with J48graft combined with sampling selection techniques for the diagnosis of type 2 diabetes mellitus in the Pima Indian dataset

Yoichi Hayashi, Shonosuke Yukita
2016 Informatics in Medicine Unlocked  
Diabetes is a complex disease that is increasing in prevalence around the world. Type 2 diabetes mellitus (T2DM) accounts for about 90-95% of all diagnosed adult cases of diabetes.  ...  Most present diagnostic methods for T2DM are black-box models, which are unable to provide the reasons underlying diagnosis to physicians; therefore, algorithms that can provide further insight are needed  ...  However, most present diagnostic methods for T2DM are black-box models. A drawback of black-box models is that they cannot adequately reveal information that may be hidden in the data.  ... 
doi:10.1016/j.imu.2016.02.001 fatcat:jd65j2j7vzf2xitrvq2ipsjcda

Analysis and Prediction of Unplanned Intensive Care Unit Readmission using Recurrent Neural Networks with Long Short-Term Memory [article]

Yu-Wei Lin, Yuqian Zhou, Faraz Faghri, Michael J Shaw, Roy H Campbell
2018 bioRxiv   pre-print
We also illustrate the importance of each portion of the features and different combinations of the models to verify the effectiveness of the proposed model.  ...  Identifying high-risk patients likely to suffer from readmission before release benefits both the patients and the medical providers.  ...  Few of the works attempt to understand and interpret the predictive model, especially the approaches that build "black-box" like neural networks. In this study, we focus on the analysis and  ... 
doi:10.1101/385518 fatcat:hbablgbpcjhlxis3akwywcrdme

New unified insights on deep learning in radiological and pathological images: Beyond quantitative performances to qualitative interpretation

Yoichi Hayashi
2020 Informatics in Medicine Unlocked  
We first describe the "black box" problem of shallow NNs, the concept of rule extraction, the renewed attack of the "black box" problem in DNN architectures, and the paradigm shift regarding the transparency  ...  Deep learning (DL) has become the main focus of research in the field of artificial intelligence, despite its lack of explainability and interpretability.  ...  In predictive models, interpretability is important, and thus, the "black box" nature of DL has been severely criticized in medical settings [6] .  ... 
doi:10.1016/j.imu.2020.100329 fatcat:ka6e7bs3h5e7lpnueup5obfrzm