Deep Learning of Explainable EEG Patterns as Dynamic Spatiotemporal Clusters and Rules in a Brain-Inspired Spiking Neural Network
2021
Sensors
The paper proposes a new method for deep learning and knowledge discovery in a brain-inspired Spiking Neural Network (SNN) architecture that enhances the model's explainability while learning from streaming ...
rules to support model explainability; and (4) a better understanding of the dynamics in spatiotemporal brain data (STBD) in terms of feature interaction. ...
• Extracting spatiotemporal rules of spike occurrence during the dynamic clustering, which enhanced the interpretability and explainability of SNN learning behavior. ...
doi:10.3390/s21144900
fatcat:cvojatg3ozejzlyxm6gqjnokvm
Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning
[article]
2019
arXiv
pre-print
and explainable models for such systems. ...
allowing for the construction of explainable AI systems. ...
After that, we show the capabilities and applications of neural-symbolic systems for learning, reasoning, and explainability. ...
arXiv:1905.06088v1
fatcat:gm4f3ncukrbevpd7nq5yr75ar4
A Neurophysicist's view to the evolving brain based on neural physiological measurements; diagnosis, development, and accelerated cognitive processing
2013
Frontiers in Computational Neuroscience
Theories such as the Hebbian rule for learning, "Cells that fire together wire together," are famous for their considerations in foundational cognitive neuroscience and address simplistic rules for cognitive ...
We intend to create computational models for learning and neurodevelopment toward explaining complex cognitive decisions and improving cognitive function with emerging brain computing interfaces. ...
doi:10.3389/fncom.2013.00092
pmid:23847527
pmcid:PMC3702022
fatcat:yj6t44zuffcylmkiqjsjw5uxly
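The Hebbian rule quoted in this abstract can be sketched as a toy weight update in which a connection strengthens in proportion to correlated pre- and postsynaptic activity. The network size, learning rate, and activity patterns below are invented for illustration and are not taken from the paper:

```python
# Minimal sketch of the Hebbian learning rule: the weight between two
# units grows in proportion to their correlated activity.
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: dw = lr * (post outer pre)."""
    return w + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
w = np.zeros((2, 3))            # 3 presynaptic -> 2 postsynaptic units
for _ in range(100):
    pre = rng.integers(0, 2, 3).astype(float)   # binary presynaptic activity
    post = np.array([pre[0] * pre[1], pre[2]])  # postsynaptic activity correlated with inputs
    w = hebbian_update(w, pre, post)

# Weights end up largest where pre/post activity co-occurred most often.
print(np.round(w, 2))
```

Note that this plain rule only strengthens weights; plasticity rules such as BCM (mentioned in a later entry) add a sliding threshold so that weights can also depress.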
Understanding neural networks with neural-symbolic integration
2021
Research Outreach
The convolutional neural network is a type of machine learning paradigm. ...
Together these form rules that explain the network's behaviour. These atoms and rules are equivalent to the words and sentences we use in our everyday language. ...
The research team are planning to explore ways of embedding rules into convolutional neural networks. ...
doi:10.32907/ro-126-1906089938
fatcat:tjlmpt7pobarzbmwoukg4mmxme
Neuro-Symbolic Interpretable Collaborative Filtering for Attribute-based Recommendation
2022
Proceedings of the ACM Web Conference 2022
Thanks to the recent advance on neuro-symbolic computation for automatic rule learning, NS-ICF learns interpretable recommendation rules (consisting of user and item attributes) based on neural networks ...
, especially for the era of deep learning based recommendation. ...
Neural Collaborative Reasoning (NCR) uses logical rules to guide neural network learning and is partially explainable. ...
doi:10.1145/3485447.3512042
fatcat:3svywihtgfgnhnkulboadnqog4
DIAGNOSIS WINDOWS PROBLEMS BASED ON HYBRID INTELLIGENCE SYSTEMS
2013
Journal of Engineering Science and Technology
The expert system has the ability to explain and give recommendations by using rules and, in some conditions, the human expert. Therefore, we have combined the two technologies. ...
The neural network has a unique characteristic: it can complete incomplete data, which the expert system cannot handle; however, using the neural network alone has some disadvantages ...
A neural expert system as shown in Fig. 1 explains the basic structure. ...
doaj:963e0d0c48234d4393a1bb23b6038e8e
fatcat:ogltwbrfvraongtms74pasg3um
The Idea of Knowledge Supplementation and Explanation Using Neural Networks to Support Decisions in Construction Engineering
2013
Procedia Engineering
In order to ensure more complete knowledge and to explain the mechanism of inference, the KBANN (Knowledge-Based Artificial Neural Network) algorithm was used, which enables extracting rules that are not part of the original state of knowledge from trained neural networks. ...
According to the KBANN algorithm, the 164 rules should be used in the neural network's learning process. ...
doi:10.1016/j.proeng.2013.04.041
fatcat:45qbgr7ku5ds3foxvmcgetzvl4
Exact and Approximate Rule Extraction from Neural Networks with Boolean Features
2019
International Joint Conference on Computational Intelligence
Rule extraction from classifiers treated as black boxes is an important topic in explainable artificial intelligence (XAI). ...
This paper presents a technique to extract rules from a neural network where the feature space is Boolean, without looking at the inner structure of the network. ...
There is growing interest in being able to explain the decision making resulting from machine learning models. ...
doi:10.5220/0008362904240433
dblp:conf/ijcci/MereaniH19
fatcat:j4faopm6ljcypp2wb5wqs4qm4e
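The general idea described in this abstract, treating the classifier as a black box over Boolean features, can be illustrated naively by enumerating every input vector, querying the model, and collecting the positively classified vectors as a DNF rule. This is a sketch of the black-box setting only, not the paper's actual algorithm, and the toy model is invented; exhaustive enumeration is of course feasible only for small feature counts:

```python
# Naive exact rule extraction from a black-box classifier over Boolean
# features: enumerate all inputs and emit the positive region as DNF.
from itertools import product

def toy_model(x):
    # Stand-in black box: fires when (x0 AND x1) OR x2.
    return (x[0] and x[1]) or x[2]

def extract_dnf(model, n_features):
    """Return the model's positive region as a list of minterms (DNF)."""
    terms = []
    for bits in product([0, 1], repeat=n_features):
        if model(bits):
            terms.append(" AND ".join(
                f"x{i}" if b else f"NOT x{i}" for i, b in enumerate(bits)))
    return terms

rules = extract_dnf(toy_model, 3)
print(" OR\n".join(f"({t})" for t in rules))
```

Approximate methods, as the title suggests, trade this exactness for scalability by sampling the input space or simplifying the extracted rule set.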
Does computational neuroscience need new synaptic learning paradigms?
2016
Current Opinion in Behavioral Sciences
in terms of neural implementation. ...
We take learning and synaptic plasticity as an example and point to open questions, such as one-shot learning and acquiring internal representations of the world for flexible planning. ...
A classic paper for explaining receptive field development with a synaptic plasticity (BCM) rule. Still many modern plasticity rules with spiking neurons relate to the BCM rule [41-43].
doi:10.1016/j.cobeha.2016.05.012
fatcat:ybxhnabw7vbgza7hcdorelmxre
A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence
2020
Irish Conference on Artificial Intelligence and Cognitive Science
Scholars have proposed several methods for extracting rules from data-driven machine-learned models. However, limited work exists on their comparison. ...
The ultimate goal of Explainable Artificial Intelligence is to build models that possess both high accuracy and a high degree of explainability. ...
Rule Extraction From Neural Network Ensemble (REFNE) was originally designed to extract symbolic rules from trained neural network ensembles, but its application can be also extended to other learning ...
dblp:conf/aics/ViloneRL20
fatcat:yhzkbjbwcfdxpdlqbcexaek4wq
Artificial Neural Networks in Biomedical Engineering: A Review
[chapter]
2001
Computational Mechanics–New Frontiers for the New Millennium
Artificial neural networks in general are explained; some limitations and some proven benefits of neural networks are discussed. ...
Use of artificial neural network techniques in various biomedical engineering applications is summarised. A case study is used to demonstrate the efficacy of artificial neural networks in this area. ...
The next section explains artificial neural networks in general, their rule learning process, their applications, and the need for using them in the biomedical engineering domain. ...
doi:10.1016/b978-0-08-043981-5.50132-2
fatcat:3yekheij4rap3nhyfg4bvztimq
Multiobjective Genetic Fuzzy Systems
[chapter]
2015
Springer Handbook of Computational Intelligence
A large number of neural and genetic learning methods have been proposed since the early 1990s [3, 4] in order to fully utilize their approximation ability. ...
Mendel, Fuzzy basis functions, universal approximation, and orthogonal least-squares learning. IEEE Trans. on Neural Networks 3: 807-814 (1992). [2] K. ...
doi:10.1007/978-3-662-43505-2_77
fatcat:eiektlxwwbctfjc2j555vcl4la
Fuzzy control of pH using genetic algorithms
1993
IEEE transactions on fuzzy systems
A large number of neural and genetic learning methods have been proposed since the early 1990s [3, 4] in order to fully utilize their approximation ability. ...
Mendel, Fuzzy basis functions, universal approximation, and orthogonal least-squares learning. IEEE Trans. on Neural Networks 3: 807-814 (1992). [2] K. ...
doi:10.1109/tfuzz.1993.390283
fatcat:kano6qvlmvcmfesuv6cetyqf2a
A New Concept for Explaining Graph Neural Networks
2021
International Workshop on Neural-Symbolic Learning and Reasoning
To overcome this problem, we introduce a conceptual approach: extracting model-level explanation rules from the generated importance masks using a standard white-box learning method. ...
Graph neural networks (GNNs), similarly to other connectionist models, lack transparency in their decision-making. ...
This includes being able to explain the decision making processes of GNNs in learning from complex graph structure. ...
dblp:conf/nesy/HimmelhuberZGRJ21
fatcat:l7cra4glvbde7nea6oymhmprzy
Page 382 of Journal of Cognitive Neuroscience Vol. 16, Issue 3
[page]
2004
Journal of Cognitive Neuroscience
Here, we show that neural computation based on least-square-error learning between populations of intensity-coded neurons can explain interpolation and extrapolation. ...
There are many circumstances ...
This single mechanism can explain interpolation and extrapolation capacities of humans in function learning (Busemeyer et al., 1997; DeLosh et al., 1997; Koh & Meyer, 1991), auditory-visual alignment ( ...
Showing results 1 — 15 out of 190,733 results