34,301 Hits in 4.4 sec

Hybrid artificial neural network

Nadia Nedjah, Ajith Abraham, Luiza M. Mourelle
2007 Neural Computing & Applications (Print)  
Artificial neural networks (ANNs) or simply neural networks (NNs) are now a consolidated technique in computational intelligence.  ...  In the following, we outline the contribution of each included paper. In the first paper, entitled "Using evolution to improve neural network learning: pitfalls and solutions", J.  ... 
doi:10.1007/s00521-007-0083-0 fatcat:ktd2xql7ybbuvaccf2z42duydm

Comparison of Methods for Improving the Quality of Prediction Using Artificial Neural Networks

Kirill Uryvaev, Alena Rusak
2019 Majorov International Conference on Software Engineering and Computer Systems  
There are a number of methods aimed at improving the quality of training of artificial neural networks.  ...  In this paper, the methods for improving the quality of forecasting using artificial neural networks are investigated.  ...  Currently, many approaches have been developed to solve this problem, among which are artificial neural networks (ANNs).  ... 
dblp:conf/micsecs/UryvaevR19 fatcat:bwl5wpgel5d57cgnmqrnvjwsiy

Modeling Sparse Data as Input for Weightless Neural Network

Luis Filipe Kopp, José Barbosa-Filho, Priscila Machado Vieira Lima, Claudio M. de Farias
2019 The European Symposium on Artificial Neural Networks  
In this paper we propose aggregating features into groups at random, a simple method for coping with sparse inputs to Weightless Neural Networks (WiSARD) that would reduce the input size.  ...  In Natural Language Processing (NLP), such a challenge is typically faced by bag-of-words solutions wherein the number of useful words is a tiny fraction of the size of the dictionary, leading to sparse input  ...  WNNs are memory-oriented Artificial Neural Networks for pattern recognition applications.  ... 
dblp:conf/esann/KoppBLF19 fatcat:4qkbmnhh55enrg56t45gf3j76q
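
The random grouping idea in the entry above can be sketched in a few lines. The following Python snippet is only an illustration of the general mechanism (OR-aggregating randomly grouped features of a sparse binary vector); the grouping rule, sizes, and function names are assumptions, not the authors' exact method.

```python
# Illustrative sketch only: random feature grouping for sparse binary inputs.
# The OR-aggregation rule and all sizes/names are assumptions.
import numpy as np

def random_feature_groups(n_features, n_groups, seed=0):
    """Assign every input feature to one of n_groups uniformly at random."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_groups, size=n_features)

def aggregate(x_sparse_binary, groups, n_groups):
    """A group fires (bit = 1) if any of its member features is active."""
    out = np.zeros(n_groups, dtype=np.uint8)
    active = np.flatnonzero(x_sparse_binary)
    out[np.unique(groups[active])] = 1
    return out

# Example: a 10,000-dimensional bag-of-words vector with 12 active words
# becomes a 256-bit input for a WiSARD-style weightless network.
rng = np.random.default_rng(1)
x = np.zeros(10_000, dtype=np.uint8)
x[rng.choice(10_000, size=12, replace=False)] = 1
groups = random_feature_groups(10_000, 256)
print(aggregate(x, groups, 256).sum(), "active groups out of 256")
```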

Sparse LS-SVMs using additive regularization with a penalized validation criterion

Kristiaan Pelckmans, Johan A. K. Suykens, Bart De Moor
2004 The European Symposium on Artificial Neural Networks  
The model, regularization constants and sparseness follow from a convex quadratic program in this case.  ...  In this paper we show that this framework allows one to consider a penalized validation criterion that leads to sparse LS-SVMs.  ... 
dblp:conf/esann/PelckmansSM04 fatcat:qcxwyux4y5eonkv5htgw3xdg3y
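
For context on the entry above, a plain (non-sparse) LS-SVM already reduces training to solving a single linear system; the paper's additive regularization and penalized validation criterion are not reproduced here. The sketch below is a hedged baseline only: the RBF kernel, gamma, and all names are illustrative assumptions.

```python
# Baseline LS-SVM regression sketch (standard formulation, not the paper's
# additive-regularization variant). Hyperparameters are assumptions.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual: one (n+1)x(n+1) linear system for (b, alpha)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(Xtest, Xtrain, alpha, b, sigma=1.0):
    return rbf_kernel(Xtest, Xtrain, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
alpha, b = lssvm_fit(X, y)
print("train RMSE:", np.sqrt(np.mean((lssvm_predict(X, X, alpha, b) - y) ** 2)))
```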

Computational Neuroscience Offers Hints for More General Machine Learning [chapter]

David Rawlinson, Gideon Kowadlo
2017 Lecture Notes in Computer Science  
Machine Learning has traditionally focused on narrow artificial intelligence: solutions for specific problems.  ...  Despite this, we observe two trends in the state-of-the-art: one, increasing architectural homogeneity in algorithms and models.  ...  Biological neural networks continue to outperform artificial ones in several learning characteristics.  ... 
doi:10.1007/978-3-319-63703-7_12 fatcat:lwpi5ne2nneajdmbjxx53z3q3y

Phase transition in sparse associative neural networks

Oleksiy Dekhtyarenko, Valery Tereshko, Colin Fyfe
2005 The European Symposium on Artificial Neural Networks  
We study the phenomenon of phase transition occurring in sparse associative neural networks, which is characterized by the abrupt emergence of associative properties with the growth of network connectivity  ...  We consider a Hopfield-type sparse associative neural network, consisting of n neurons.  ... 
dblp:conf/esann/DekhtyarenkoTF05 fatcat:6krq2bqez5hotcpkxlnci4uuqm
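
The abstract above describes an abrupt appearance of associative recall as connectivity grows in a Hopfield-type network of n neurons. The toy experiment below is a hedged illustration of that setup, not the authors' model or parameter values: Hebbian weights are restricted to a random symmetric mask and recall of a corrupted pattern is measured at several connectivity levels.

```python
# Toy illustration of associative recall vs. connectivity in a sparse
# Hopfield-type network. All parameter values are assumptions.
import numpy as np

def recall_overlap(n=200, n_patterns=10, connectivity=0.1, steps=20, seed=0):
    rng = np.random.default_rng(seed)
    patterns = rng.choice([-1, 1], size=(n_patterns, n))
    mask = np.triu(rng.random((n, n)) < connectivity, 1)
    mask = mask | mask.T                         # symmetric sparse topology, no self-loops
    W = (patterns.T @ patterns) / n * mask       # Hebbian weights kept only on the mask
    state = patterns[0].astype(float).copy()
    state[: n // 10] *= -1                       # corrupt 10% of the bits
    for _ in range(steps):                       # synchronous sign updates
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return float(np.mean(state == patterns[0]))  # fraction of bits recovered

for c in (0.02, 0.05, 0.1, 0.2, 0.5):
    print(f"connectivity={c:.2f}  overlap={recall_overlap(connectivity=c):.2f}")
```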

Sparse Factorization Layers for Neural Networks with Limited Supervision [article]

Parker Koch, Jason J. Corso
2016 arXiv   pre-print
Using our derivations, these layers can be dropped in to existing CNNs, trained together in an end-to-end fashion with back-propagation, and leverage semisupervision in ways classical CNNs cannot.  ...  Whereas CNNs have demonstrated immense progress in many vision problems, they suffer from a dependence on monumental amounts of labeled training data.  ...  In contrast, both of our new layers can be dropped-in to various artificial (and convolutional) neural networks and trained in the same architecture with back-propagation.  ... 
arXiv:1612.04468v1 fatcat:eq537teoofb2tfr2uq5qfikbki
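
The abstract above does not spell out the layer mathematics, so the following is only a generic sketch of the sparse-coding computation a "sparse factorization layer" typically performs: infer a sparse code for the input under a dictionary, here via plain ISTA. The dictionary, step size, penalty, and names are assumptions, not the paper's definitions.

```python
# Generic ISTA sparse-coding sketch; not the paper's layer definition.
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=50):
    """Approximately solve min_z 0.5*||x - D z||^2 + lam*||z||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)
        z = z - grad / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))                # overcomplete dictionary (layer weights)
x = D[:, [3, 87, 200]] @ np.array([1.0, -0.5, 2.0])   # input built from 3 atoms
z = ista_sparse_code(x, D)
print("nonzero codes:", int((np.abs(z) > 1e-3).sum()), "of", z.size)
```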

Knowledge accumulating: The general pattern of learning [article]

Zhuoran Xu, Hao Liu
2021 arXiv   pre-print
Theoretically speaking, an artificial neural network can fit any function and reinforcement learning can learn from any delayed reward.  ...  But in solving real-world tasks, we still need to spend a lot of effort to adjust algorithms to fit each task's unique features.  ...  In image classification, using deep artificial neural networks and end-to-end training, current algorithms (Krizhevsky et al., 2012; He et al., 2016) can achieve human performance on the ImageNet dataset.  ... 
arXiv:2108.03988v1 fatcat:677igmbasrefhezipeue5w2klm

Truly Sparse Neural Networks at Scale [article]

Selima Curci, Decebal Constantin Mocanu, Mykola Pechenizkiy
2022 arXiv   pre-print
Recently, sparse training methods have started to be established as a de facto approach for training and inference efficiency in artificial neural networks. Yet, this efficiency is just in theory.  ...  In this paper, we take an orthogonal approach, and we show that we can train truly sparse neural networks to harvest their full potential.  ...  Acknowledgement: We thank the Google Cloud Platform Research Credits program for granting us the necessary resources to run the Extreme large sparse MLPs experiments.  ... 
arXiv:2102.01732v2 fatcat:xw4pnoj5zfafvilmk34odczt5m
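
As a hedged illustration of what "truly sparse" (as opposed to dense-plus-mask) can mean in practice, the sketch below stores each layer's weights in SciPy CSR format, so memory and the forward pass scale with the number of nonzero weights. Layer sizes, densities, and names are arbitrary assumptions, not the paper's implementation.

```python
# Hedged sketch: an MLP whose weights are genuinely sparse data structures
# (SciPy CSR), not dense matrices with binary masks. Sizes are assumptions.
import numpy as np
import scipy.sparse as sp

def sparse_layer(n_in, n_out, density, seed):
    """A random sparse weight matrix stored in CSR format."""
    rng = np.random.default_rng(seed)
    return sp.random(n_out, n_in, density=density, format="csr",
                     random_state=seed, data_rvs=rng.standard_normal)

def forward(x, weights):
    """Forward pass through ReLU layers; CSR @ dense vector stays cheap."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return weights[-1] @ h

weights = [sparse_layer(784, 4000, 0.01, 0),
           sparse_layer(4000, 4000, 0.005, 1),
           sparse_layer(4000, 10, 0.05, 2)]
x = np.random.default_rng(3).standard_normal(784)
print("output shape:", forward(x, weights).shape)
dense_count = 784 * 4000 + 4000 * 4000 + 4000 * 10
print(sum(W.nnz for W in weights), "nonzero weights vs", dense_count, "dense")
```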

Semi-Supervised Extreme Learning Machine using L1-Graph

Hongwei Zhao
2018 International Journal of Performability Engineering  
This paper proposes a SELM algorithm based on L1-Graph, which requires no parameters to be specified, is robust against noise, and yields a sparse solution.  ...  If the data contain noise or are unevenly distributed, the results are not very good.  ...  The artificial neural network algorithm [7] was introduced in 1943 by W. McCulloch and W.H. Pitts.  ... 
doi:10.23940/ijpe.18.04.p2.603610 fatcat:jxrzzhx6jbdulihsv6qozyodra

Sparsity through evolutionary pruning prevents neuronal networks from overfitting

Richard C. Gerum, André Erpenbeck, Patrick Krauss, Achim Schilling
2020 Neural Networks  
To make progress in decoding the structural basis of biological neural networks, we here chose a bottom-up approach, where we evolutionarily trained small neural networks in performing a maze task.  ...  This could be the case because the brain is not a randomly initialized neural network, which has to be trained from scratch by simply investing a lot of computational power, but has from birth some fixed  ...  However, in this study we investigate the development of model sparsity in artificial neural networks, the analogue of a sparse connectome in biology.  ... 
doi:10.1016/j.neunet.2020.05.007 pmid:32454374 fatcat:hjx2sv7ejnfx3jfulafdyaqsf4
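
The mechanism named in the title, sparsity emerging through evolutionary pruning, can be illustrated on a toy problem. The sketch below is not the authors' maze task or algorithm: it evolves binary connection masks on a tiny regression problem with a small cost per surviving connection, so unneeded connections tend to be pruned away. All rates and sizes are assumptions.

```python
# Toy sketch of evolutionary pruning: connection masks are mutated and
# selected; a per-connection cost lets sparsity emerge. Not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = X[:, 0] - 2.0 * X[:, 3]                      # only inputs 0 and 3 actually matter

def fitness(mask):
    """Least-squares fit on the unmasked inputs, minus a cost per connection."""
    if mask.sum() == 0:
        return -np.inf
    w, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    mse = np.mean((X[:, mask] @ w - y) ** 2)
    return -mse - 0.05 * mask.sum()

population = [rng.random(8) < 0.9 for _ in range(30)]   # start densely connected
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]
    # Each parent produces offspring with a few connections randomly flipped.
    population = [p ^ (rng.random(8) < 0.05) for p in parents for _ in range(3)]
print("surviving connections:", ranked[0].astype(int))
```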

Dendritic normalisation improves learning in sparsely connected artificial neural networks [article]

Alexander D Bird, Hermann Cuntz
2020 bioRxiv   pre-print
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications.  ...  Here we introduce such a normalisation, where the strength of a neuron's afferents is divided by their number, to various sparsely-connected artificial networks.  ...  Figure 1: Dendritic normalisation improves learning in sparse artificial neural networks. A, Schematic of a sparsely-connected artificial neural network.  ... 
doi:10.1101/2020.01.14.906537 fatcat:54sararqlnetfhgolfbiizioey
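
The normalisation described above, dividing the strength of a neuron's afferents by their number, is simple enough to show directly. The sketch below applies it in a single sparsely connected layer; the shapes, the ReLU, and the placement inside a forward pass are assumptions for illustration rather than the authors' exact formulation.

```python
# Hedged sketch of dividing each neuron's incoming weights by its afferent
# count under a sparse connectivity mask. Shapes and names are assumptions.
import numpy as np

def dendritic_normalised_forward(x, W, mask):
    """x: (n_in,), W: (n_out, n_in) weights, mask: (n_out, n_in) binary connectivity."""
    fan_in = mask.sum(axis=1, keepdims=True)      # number of afferents per neuron
    fan_in = np.maximum(fan_in, 1)                # avoid division by zero
    W_eff = (W * mask) / fan_in                   # normalise by afferent number
    return np.maximum(W_eff @ x, 0.0)             # ReLU output

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))
mask = (rng.random((64, 256)) < 0.1).astype(float)    # ~10% connectivity
x = rng.standard_normal(256)
print(dendritic_normalised_forward(x, W, mask).shape)  # (64,)
```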

The curious case of developmental BERTology: On sparsity, transfer learning, generalization and the brain [article]

Xin Wang
2020 arXiv   pre-print
Just like perceptual and cognitive neurophysiology has inspired effective deep neural network architectures, which in turn make a useful model for understanding the brain, here we explore how biological neural development might inspire efficient and robust optimization procedures, which in turn serve as a useful model for the maturation and aging of the brain.  ...  percent to over half of pre-trained weights, with good sparse solutions existing everywhere in between (Figure 5, left).  ... 
arXiv:2007.03774v1 fatcat:jn6pwzdh2rcv3c3iazjcdx2ifa
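
The snippet above refers to sparse sub-networks spanning a few percent to over half of a model's pre-trained weights. As a hedged, generic stand-in for how such sub-networks are commonly obtained, the sketch below performs global magnitude pruning to a target keep-fraction; it is not claimed to be the paper's exact procedure, and the toy weight matrices are assumptions.

```python
# Generic global magnitude pruning: keep only the largest-magnitude fraction
# of weights across all matrices. Toy data; not the paper's recipe.
import numpy as np

def magnitude_prune(weights, keep_fraction):
    """Zero out all but roughly the top keep_fraction of weights by |value|."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    k = int(len(flat) * keep_fraction)
    threshold = np.partition(flat, len(flat) - k)[len(flat) - k] if k > 0 else np.inf
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

rng = np.random.default_rng(0)
pretrained = [rng.standard_normal((128, 128)) for _ in range(4)]
for keep in (0.05, 0.2, 0.5):                    # "a few percent to over half"
    pruned = magnitude_prune(pretrained, keep)
    density = sum((w != 0).sum() for w in pruned) / sum(w.size for w in pruned)
    print(f"kept {keep:.0%} -> actual density {density:.2%}")
```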

Artificial Neural Networks in Biomedical Engineering: A Review [chapter]

R. Nayak, L.C. Jain, B.K.H. Ting
2001 Computational Mechanics–New Frontiers for the New Millennium  
Artificial neural networks in general are explained; some limitations and some proven benefits of neural networks are discussed.  ...  Use of artificial neural network techniques in various biomedical engineering applications is summarised. A case study is used to demonstrate the efficacy of artificial neural networks in this area.  ...  Since then, the wide interest in artificial neural networks, both among researchers and in areas of various applications, has resulted in more powerful networks, better training algorithms and improved  ... 
doi:10.1016/b978-0-08-043981-5.50132-2 fatcat:3yekheij4rap3nhyfg4bvztimq

Sparsity through evolutionary pruning prevents neuronal networks from overfitting [article]

Richard C. Gerum, André Erpenbeck, Patrick Krauss, Achim Schilling
2020 arXiv   pre-print
To make progress in decoding the structural basis of biological neural networks, we here chose a bottom-up approach, where we evolutionarily trained small neural networks in performing a maze task.  ...  This could be the case because the brain is not a randomly initialized neural network, which has to be trained by simply investing a lot of computational power, but has from birth some fixed hierarchical  ...  However, in this study we investigate the development of model sparsity in artificial neural networks, the analogue of a sparse connectome in biology.  ... 
arXiv:1911.10988v2 fatcat:d46yeyvjlfad7be6ht3wa47sti
Showing results 1 to 15 of 34,301