17,329 Hits in 4.2 sec

Transparent Neural Networks [chapter]

Claes Strannegård, Olle Häggström, Johan Wessberg, Christian Balkenius
2012 Lecture Notes in Computer Science  
Thus we automatically obtain a monolithic computational model which integrates concept formation with deductive, inductive, and abductive reasoning.  ...  We present the transparent neural networks, a graph-based computational model that was designed with the aim of facilitating human understanding.  ...  Conclusion We presented a developmental model, which integrates concept formation and basic deduction, induction, and abduction.  ... 
doi:10.1007/978-3-642-35506-6_31 fatcat:tzgt4cfg75ejbh4f5i44lderdi

Transparency and Explanation in Deep Reinforcement Learning Neural Networks [article]

Rahul Iyer, Yuezhang Li, Huao Li, Michael Lewis, Ramitha Sundar, Katia Sycara
2018 arXiv   pre-print
However, deep neural networks are opaque. In this paper, we report on work in transparency in Deep Reinforcement Learning Networks (DRLN).  ...  Transparency is important not only for user trust, but also for software debugging and certification. In recent years, Deep Neural Networks have made great advances in multiple application areas.  ...  Acknowledgement This research was supported by awards W911NF-13-1-0416 and FA9550-15-1-0442.  ... 
arXiv:1809.06061v1 fatcat:o4qe4ykrl5emba53sxchvmen5q

Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond [article]

Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller
2020 arXiv   pre-print
Interpretability and explanation methods for gaining a better understanding of the problem-solving abilities and strategies of nonlinear Machine Learning such as Deep Learning (DL), LSTMs, and kernel  ...  With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for explainable AI.  ...  binary for predictors based on the Caffe neural network format.  ... 
arXiv:2003.07631v1 fatcat:pvjjzqns2bdtxlvganye4yipey

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods [article]

Zohaib Salahuddin, Henry C Woodruff, Avishek Chatterjee, Philippe Lambin
2021 arXiv   pre-print
Deep neural networks have shown the same or better performance than clinicians in many tasks, owing to the rapid increase in the available data and computational power.  ...  Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical imaging analysis.  ...  Discussion Transparency of deep neural networks is an essential clinical, legal, and ethical requirement. We have identified nine different categories of interpretability methods for DL methods.  ... 
arXiv:2111.02398v1 fatcat:glrfdkbcqrbqto2nrl7dnlg3gq

Neural network based 3D tracking with a graphene transparent focal stack imaging system

Dehui Zhang, Zhen Xu, Zhengyu Huang, Audrey Rose Gutierrez, Cameron J Blocker, Che-Hung Liu, Miao-Bin Lien, Gong Cheng, Zhe Liu, Il Yong Chun, Jeffrey A Fessler, Zhaohui Zhong (+1 others)
2021 Nature Communications  
learning including the development of powerful neural networks.  ...  This paper demonstrates 3D tracking of point-like objects with multilayer feedforward neural networks and the extension to tracking positions of multi-point objects.  ...  Keck Foundation and National Science Foundation grants IIS 1838179.  ... 
doi:10.1038/s41467-021-22696-x pmid:33893300 fatcat:xqvrrzc3lrcu7f5n65mliycadm

An uncertainty-aware, shareable, and transparent neural network architecture for brain-age modeling

Tim Hahn, Jan Ernsting, Nils R. Winter, Vincent Holstein, Ramona Leenings, Marie Beisemann, Lukas Fisch, Kelvin Sarink, Daniel Emden, Nils Opel, Ronny Redlich, Jonathan Repple (+21 others)
2022 Science Advances  
DISCUSSION We trained an uncertainty-aware, shareable, and transparent MCCQR Neural Network on N = 10,691 samples from the GNC.  ...  For comparison, we also evaluated a version of our neural network model without uncertainty quantification but with an otherwise identical network structure and hyperparameters [artificial neural network  ...  Data access and responsibility: All PIs take responsibility for the integrity of the respective study data and their components. All authors and coauthors had full access to all study data.  ... 
doi:10.1126/sciadv.abg9471 pmid:34985964 pmcid:PMC8730629 fatcat:zeupkpspqjahfptcixzbz6rt3a

An Uncertainty-Aware, Shareable and Transparent Neural Network Architecture for Brain-Age Modeling [article]

Tim Hahn, Jan Ernsting, Nils R. Winter, Vincent Holstein, Ramona Leenings, Marie Beisemann, Lukas Fisch, Kelvin Sarink, Daniel Emden, Nils Opel, Ronny Redlich, Jonathan Repple (+22 others)
2021 arXiv   pre-print
Here, we introduce an uncertainty-aware, shareable, and transparent Monte-Carlo Dropout Composite-Quantile-Regression (MCCQR) Neural Network trained on N=10,691 datasets from the German National Cohort  ...  However, Machine Learning models underlying the field do not consider uncertainty, thereby confounding results with training data density and variability.  ...  Data access and responsibility: All PIs take responsibility for the integrity of the respective study data and their components. All authors and coauthors had full access to all study data.  ... 
arXiv:2107.07977v1 fatcat:fdx6dgeahba2xmc3ik7hvzemou
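
The Monte-Carlo Dropout Composite-Quantile-Regression (MCCQR) approach named in this entry can be illustrated, in broad strokes, by a network trained with a pinball (quantile) loss whose dropout layers are kept active at prediction time, so that repeated stochastic forward passes yield both quantile estimates and an uncertainty spread. The sketch below is only an assumed, minimal PyTorch rendering of that idea; the layer sizes, quantile grid, and function names are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class QuantileNet(nn.Module):
    """Small MLP predicting several conditional quantiles at once.
    Dropout regularizes training and, kept active at test time, provides
    Monte-Carlo uncertainty estimates (assumed setup, not the authors')."""
    def __init__(self, n_features, quantiles=(0.1, 0.5, 0.9), p_drop=0.2):
        super().__init__()
        self.quantiles = torch.tensor(quantiles)
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, len(quantiles)),
        )

    def forward(self, x):
        return self.net(x)

def pinball_loss(pred, target, quantiles):
    """Composite quantile ('pinball') loss averaged over all quantiles."""
    diff = target.unsqueeze(1) - pred            # (batch, n_quantiles)
    q = quantiles.unsqueeze(0)                   # (1, n_quantiles)
    return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active (model.train()) and repeat stochastic forward
    passes; the spread across passes serves as the uncertainty estimate."""
    model.train()
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)
```

Training would minimize pinball_loss(model(x), y, model.quantiles); at test time mc_dropout_predict returns per-quantile means together with their Monte-Carlo spread.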

Completion Reasoning Emulation for the Description Logic EL+ [article]

Aaron Eberhart, Monireh Ebrahimi, Lu Zhou, Cogan Shimizu, Pascal Hitzler
2019 arXiv   pre-print
We demonstrate that this idea is feasible by training a long short-term memory (LSTM) artificial neural network to learn EL+ reasoning patterns with two different data sets.  ...  We present a new approach to integrating deep learning with knowledge-based systems that we believe shows promise.  ...  Source code and experiment data is available on GitHub https://github.com/aaronEberhart/PySynGenReas.  ... 
arXiv:1912.05063v1 fatcat:bcddshwbqbc2xl3g3a3wxfnwgq

The International Radiomics Platform – An Initiative of the German and Austrian Radiological Societies
Die Internationale Radiomics-Plattform – eine Initiative der Deutschen und Österreichischen Röntgengesellschaften

Daniel Overhoff, Peter Kohlmann, Alex Frydrychowicz, Sergios Gatidis, Christian Loewe, Jan Moltz, Jan-Martin Kuhnigk, Matthias Gutberlet, H. Winter, Martin Völker, Horst Hahn, Stefan O. Schoenberg (+3 others)
2020 RöFo. Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren (Print)  
In the proof-of-concept study, the neural network demonstrated a Dice coefficient of 0.813 compared to the expert's segmentation of the myocardium.  ...  The execution of quantitative analyses with artificial intelligence methods is greatly facilitated by the platform approach of the DRG-ÖRG IRP, since pre-trained neural networks can be integrated and scientific  ...  This is a software solution developed by Fraunhofer MEVIS for training, testing, and applying deep neural networks. ▪ The DRG-ÖRG IRP offers the possibility of integrating pre-trained neural networks and  ... 
doi:10.1055/a-1244-2775 pmid:33242898 fatcat:pyb7lswtb5heva4pymf5rcziku
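
The Dice coefficient of 0.813 quoted in this entry is a standard overlap measure between a predicted segmentation mask and a reference mask. A minimal sketch of how such a score is typically computed is given below; the array shapes, threshold, and stand-in masks are assumptions for illustration, not data from the study.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Illustrative use: threshold a network's probability map and compare it
# with an expert's binary myocardium mask of the same shape (placeholders).
prob_map = np.random.rand(256, 256)             # stand-in for model output
expert_mask = np.zeros((256, 256), dtype=bool)
expert_mask[100:160, 90:170] = True             # stand-in expert annotation
score = dice_coefficient(prob_map > 0.5, expert_mask)
print(f"Dice: {score:.3f}")
```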

Visualizing Uncertainty and Saliency Maps of Deep Convolutional Neural Networks for Medical Imaging Applications [article]

Jae Duk Seo
2019 arXiv   pre-print
Not only do we want the models to generalize well, but we also want to know the model's confidence with respect to its decision and which features matter the most.  ...  Acknowledgements This research was supported by Ryerson Vision lab, and the author of this paper wishes to express appreciation toward Dr. Neil Bruce and Dr.  ...  Problem Description Statistical models, such as deep neural networks, that are used in medical imaging domain must be transparent and interpretable.  ... 
arXiv:1907.02940v1 fatcat:rwgdwbztqnfinlcs6drjanvgne
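
One widely used way to produce the saliency maps this entry refers to is to backpropagate the class score to the input image and take the per-pixel gradient magnitude. The sketch below shows that vanilla-gradient variant under assumed input-shape conventions; the paper may combine it with other techniques, so treat this as an illustrative baseline rather than the author's method.

```python
import torch

def vanilla_gradient_saliency(model, image, target_class):
    """Gradient of the target class score w.r.t. the input pixels.
    Large magnitudes mark pixels the prediction is most sensitive to."""
    model.eval()
    image = image.clone().requires_grad_(True)    # assumed shape (1, C, H, W)
    score = model(image)[0, target_class]         # assumes (1, n_classes) output
    score.backward()
    # Max over channels collapses the gradient to a single (H, W) saliency map.
    return image.grad.detach().abs().max(dim=1)[0].squeeze(0)
```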

Integrative Modeling and the Role of Neural Constraints

Daniel A. Weiskopf
2016 Philosophy of Science  
Here I consider whether mechanistic analysis provides a useful way to integrate models of cognitive and neural structure.  ...  Cognitive and neural models may depict different, but equally real, causal structures within the mind/brain.  ... 
doi:10.1086/687854 fatcat:xg6bkloasvcn3d2ykcb7gb2rpu

Data science, big data and granular mining

Sankar K. Pal, Saroj K. Meher, Andrzej Skowron
2015 Pattern Recognition Letters  
In the recent past, evolving research interest has given rise to a relatively new area called granular computing (GrC), driven by the needs and challenges arising from various application domains, such  ...  The said framework can be modelled with principles of neural networks, interval analysis, fuzzy sets and rough sets, both in isolation and in integration, among other theories.  ...  Fuzzy sets, rough sets, neural networks, interval analysis and their synergistic integrations in a granular computing framework have been found to be successful in most of these tasks.  ... 
doi:10.1016/j.patrec.2015.08.001 fatcat:32pup546rbhtbdsk33kxry6yvq

The design and implementation of Language Learning Chatbot with XAI using Ontology and Transfer Learning [article]

Nuobei Shi, Qin Zeng, Raymond Lee
2020 arXiv   pre-print
of neural network in bionics, and explain the output sentence from the language model.  ...  From an implementation perspective, our Language Learning agent integrated a WeChat mini-program as the front-end and a fine-tuned GPT-2 transfer-learning model as the back-end to interpret the responses by  ...  ACKNOWLEDGEMENTS The authors would like to thank UIC DST for the provision of computer equipment and facilities. This project is supported by UIC research grant R202008.  ... 
arXiv:2009.13984v1 fatcat:jrt6rykpsngejnvndgjzbcfxqa
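
As a rough illustration of the kind of back-end described in this entry (a fine-tuned GPT-2 model that generates replies), the sketch below uses the Hugging Face transformers library. The checkpoint name, prompt, and sampling settings are placeholders chosen for the example, not the authors' actual configuration or WeChat integration.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical checkpoint; the stock "gpt2" weights work the same way,
# just without any domain-specific fine-tuning.
checkpoint = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(checkpoint)
model = GPT2LMHeadModel.from_pretrained(checkpoint)

def reply(prompt, max_new_tokens=40):
    """Generate a continuation of the learner's prompt with sampling."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(reply("How do I use the past perfect tense?"))
```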

The Design of Artificial Neural Memories

Jiatai Deng
2020 figshare.com  
evolving weight that is defined by a simple function and reset after each significant event.  ...  pyramids, a virtual gate carrying contingent weight sets the priority and dominance of the upstream pyramid.  ...  problem for machine learning and deep neural networks [28].  ... 
doi:10.6084/m9.figshare.12886883.v1 fatcat:tjqzixycbbcerahs6tbzbhlwja

OXlearn: A new MATLAB-based simulation tool for connectionist models

Nicolas Ruh, Gert Westermann
2009 Behavior Research Methods  
behavior in lesioned networks with human breakdown patterns has been one major line of enquiry in neural network research.  ...  Conclusion In this article, we have presented OXlearn, new MATLAB-based simulation software for the implementation and analysis of neural network models.  ...  for maximum transparency and can be directly inspected and manipulated.  ... 
doi:10.3758/brm.41.4.1138 pmid:19897821 fatcat:javupxyvxzchtewtb25xl4js5y
Showing results 1 — 15 out of 17,329 results