Learning neural connectivity from firing activity: efficient algorithms with provable guarantees on topology
2018
Journal of Computational Neuroscience
on large-scale datasets of recorded firing activities. ...
In this paper, we formulate the neural network reconstruction as an instance of a graph learning problem, where we observe the behavior of nodes/neurons (i.e., firing activities) and aim to find the links ...
Wulfram Gerstner for their kind and helpful discussions on neural models and the proposed approach. We deeply thank Dr. Yury Zaytsev, Prof. Abigail Morrison and Dr. ...
doi:10.1007/s10827-018-0678-8
pmid:29464489
pmcid:PMC5851696
fatcat:tyf6zrzzwrcijbddzpjxlq6uie
Learning network structures from firing patterns
2016
2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
We develop an iterative NEUral INFerence algorithm NEUINF to identify the type of effective neural connections (i.e. excitatory/inhibitory) based on the Perceptron learning rule. ...
In this paper, we consider this problem over a neural network where our aim is to reconstruct the connectivity between neurons merely by observing their firing activity. ...
Related Work Identifying neural connections from a set of recorded neural activities is an instance of network tomography [13] and has been extensively studied in the past. ...
doi:10.1109/icassp.2016.7471765
dblp:conf/icassp/KarbasiSV16
fatcat:xivdbm5qljedzpqvx24kymktgy
Approaches to Information-Theoretic Analysis of Neural Activity
2006
Biological Theory
Understanding how neurons represent, process, and manipulate information is one of the main goals of neuroscience. ...
However, application of information theory to experimental data is fraught with many challenges. ...
Despite this topological difference, the highly efficient dynamic programming algorithms developed by Sellers (1974) for genetic sequences can be adapted to spike train metrics, so that the calculations ...
doi:10.1162/biot.2006.1.3.302
pmid:19606267
pmcid:PMC2709861
fatcat:cajtrhkvfvbsrpo5klrtymbjb4
Dictionary Learning by Dynamical Neural Networks
[article]
2018
arXiv
pre-print
learning are provably computable by individual neurons. ...
Using spiking neurons to construct our dynamical network, we present a learning process, its rigorous mathematical analysis, and numerical results on several dictionary learning problems. ...
learn a dictionary with provable guarantees. ...
arXiv:1805.08952v1
fatcat:6bnq44tm5nddjlu5ucjowwuqwm
Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks
2015
Neural Networks
Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. ...
Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. ...
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (HBP). ...
doi:10.1016/j.neunet.2015.07.004
pmid:26422422
fatcat:qbpqali565blfphyprdvy6o3lm
Reservoir computing approaches to recurrent neural network training
2009
Computer Science Review
Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is ...
It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the ...
Acknowledgments This work is partially supported by Planet Intelligent Systems GmbH, a private company with an inspiring interest in fundamental research. ...
doi:10.1016/j.cosrev.2009.03.005
fatcat:5572on2nqfcubewvymnjlvok6m
Genesis of Organic Computing Systems: Coupling Evolution and Learning
[chapter]
2009
Understanding Complex Systems
It is demonstrated how neural structures can be evolved that efficiently learn solutions for problems from a particular problem class. ...
The emphasis is put on the influence of evolution and development on the structure of neural systems. ...
Acknowledgments Christian Igel acknowledges support from the German Federal Ministry of Education and Research within the Bernstein group "The grounding of higher brain function in dynamic neural fields ...
doi:10.1007/978-3-540-77657-4_7
fatcat:v54a6do2zzhvhm2677wgpz6vhe
Extraction of spatio-temporal primitives of emotional body expressions
2007
Neurocomputing
hardware neural processor with a guaranteed hard-real-time performance. ...
We show that this solution based on neural computations produces an adaptive algorithm for efficient representations in V1. ...
doi:10.1016/j.neucom.2006.10.100
fatcat:esivhljzsbcihdl5qbp756josu
Decoupling the Depth and Scope of Graph Neural Networks
[article]
2022
arXiv
pre-print
Theoretically, decoupling improves the GNN expressive power from the perspectives of graph signal processing (GCN), function approximation (GraphSAGE) and topological learning (GIN). ...
State-of-the-art Graph Neural Networks (GNNs) have limited scalability with respect to the graph and model sizes. ...
Minimal variance sampling with provable guarantees for fast training of graph neural networks. ...
arXiv:2201.07858v1
fatcat:lzoilhqdrnbefntai4onjjoie4
A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
[article]
2020
arXiv
pre-print
Research to address these concerns is particularly active, with a significant number of papers released in the past few years. ...
In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks. ...
This approach is validated on neural networks with fewer than 10 neurons, with a logistic activation function. SMT solvers for neural networks. ...
arXiv:1812.08342v5
fatcat:awndtbca4jbi3pcz5y2d4ymoja
A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
2020
Computer Science Review
This approach is validated on neural networks with fewer than 10 neurons, with a logistic activation function. SMT solvers for neural networks. ...
... the computation is conducted over a set of inputs simultaneously, instead of one by one, to enable efficient GPU computation; and has provable guarantees, i.e., both the bounds and the robustness estimates ...
doi:10.1016/j.cosrev.2020.100270
fatcat:biji56htvnglfhl7n3jnuelu2i
Universal Memory Architectures for Autonomous Machines
[article]
2015
arXiv
pre-print
The architecture is simple enough to ensure (1) a quadratic bound (in the number of available sensors) on space requirements, and (2) a quadratic bound on the time-complexity of the update-execute cycle ...
We propose a self-organizing memory architecture for perceptual experience, capable of supporting autonomous learning and goal-directed problem solving in the absence of any prior information about the ...
This analogy with neural networks is not a coincidence: estimating arbitrary intersections from near-synchronous activation of sensors in a planar sensor field has been explored as a means for topological ...
arXiv:1502.06132v1
fatcat:26d6vgn7qbgfxl64xip6irhebu
Biologically inspired intelligent decision making
2013
Bioengineered
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. ...
An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. ...
The back propagation of errors (backpropagation, "Back-Prop" or BP) is a common mathematically provable gradient descent learning algorithm which uses differentiable activation functions to calculate the ...
doi:10.4161/bioe.26997
pmid:24335433
pmcid:PMC4049912
fatcat:6yz3o5eijvbwrmagbibox6slci
Privacy-Preserving ECG Classification With Branching Programs and Neural Networks
2011
IEEE Transactions on Information Forensics and Security
Sadeghi is with CASED (TU Darmstadt and Fraunhofer SIT), ...
often called NN with fired output. ...
The protocol guarantees that obtains only the chosen string while learns no information on . ...
doi:10.1109/tifs.2011.2108650
fatcat:dehqbq6xifhsrefvseuhbmgt54
2019 Index IEEE Transactions on Industrial Informatics Vol. 15
2019
IEEE Transactions on Industrial Informatics
Zhao, S., +, TII Jan. 2019 139-147
Fires
Efficient Fire Detection for Uncertain Surveillance Environment. ..., +, TII Sept. 2019 5194-5203 ...
doi:10.1109/tii.2020.2968165
fatcat:utk3ywxc6zgbdbfsys5f4otv7u
Showing results 1 — 15 out of 194 results