5,068 Hits in 5.1 sec

An Improved Algorithm for Elman Neural Network to Avoid the Local Minima Problem

Zhiqiang Zhang, Guofeng Tang, Zheng Tang
2007 Third International Conference on Natural Computation (ICNC 2007)  
To solve this problem and speed up the process of the convergence, we propose an improved algorithm which includes two phases, a backpropagation phase and a gradient ascent phase.  ...  The simulation result shows that the proposed method can avoid the local minima problem and largely accelerate the speed of the convergence and get good results for the prediction tasks.  ...  Acknowledgment I would like to thank Professor Tang Zheng, my supervisor, for his many suggestions and constant support in the whole course of this research.  ... 
doi:10.1109/icnc.2007.203 dblp:conf/icnc/ZhangTT07 fatcat:xiuqu3ugjnbkdmcjgnt72suqky
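The two-phase idea in this abstract lends itself to a compact sketch: run ordinary gradient descent, and when the loss stagnates (a symptom of a local minimum), switch briefly to gradient ascent to climb out of the basin. This is a minimal illustration, not the paper's exact procedure; the stagnation test, the phase-switching rule, and all constants below are assumptions.

```python
def two_phase_step(w, grad, loss_history, lr=0.01, ascent_lr=0.001,
                   window=10, tol=1e-5):
    """One update of a two-phase scheme: descend normally, but ascend
    briefly once the loss has stagnated (a possible local minimum)."""
    stuck = (len(loss_history) >= window and
             abs(loss_history[-1] - loss_history[-window]) < tol)
    if stuck:
        return w + ascent_lr * grad   # gradient ascent phase: climb out of the basin
    return w - lr * grad              # backpropagation phase: usual descent
```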

Modified Error Function with Added Terms for the Backpropagation Algorithm [chapter]

Weixing Bi, Xugang Wang, Ziliang Zong, Zheng Tang
2004 Lecture Notes in Computer Science  
We have noted that the local minima problem in the backpropagation algorithm is usually caused by update disharmony between weights connected to the hidden layer and the output layer.  ...  Thus, it can avoid the local minima problem caused by such disharmony. Moreover, some new learning parameters introduced for the added term are easy to select.  ...  avoid the local minima problem that occurs due to neuron saturation in the hidden layer.  ... 
doi:10.1007/978-3-540-28647-9_57 fatcat:bg5lyscsibdhzgwrx5nipjvy74
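The snippet states the goal (avoid saturation-driven update disharmony in the hidden layer) but not the exact added terms, so the penalty in this sketch is an assumption: for sigmoidal hidden units, h(1-h) vanishes as h saturates at 0 or 1, so penalising its inverse keeps hidden activations in their active region.

```python
import numpy as np

def modified_error(y, t, hidden_act, lam=0.1):
    """MSE plus an added term discouraging hidden-neuron saturation.
    The penalty form and the weight lam are illustrative assumptions."""
    mse = 0.5 * np.mean((y - t) ** 2)
    saturation = np.mean(1.0 / (hidden_act * (1.0 - hidden_act) + 1e-8))
    return mse + lam * saturation
```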

Page 370 of Neural Computation Vol. 4, Issue 3 [page]

1992 Neural Computation  
No fundamental problems were encountered, apart from a tendency to get stuck in local minima, in exactly the same way as in backpropagation learning.  ...  Somewhat surprisingly, learning on these conceptually simple "hard" problems was actually improved by the presence of high levels of noise, and the network became stuck in local minima less frequently  ... 

Modified Backpropagation with Added White Gaussian Noise in Weighted Sum for Convergence Improvement

Ashwini Sapkal, U V Kulkarni
2018 Procedia Computer Science  
Here, a modified backpropagation algorithm is proposed in which white Gaussian noise is added to the weighted sum in backpropagation.  ...  It is observed that the proposed modified backpropagation requires fewer epochs than standard backpropagation to converge when applied to the 2-bit parity and Iris datasets.  ...  According to the research carried out in [9] and [10], the backpropagation (BP) convergence rate is very poor due to local minima and the flat-spot problem.  ... 
doi:10.1016/j.procs.2018.10.401 fatcat:5qc2qnk6cjgijn5ls7aw7xepbi
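The mechanism here is explicit: noise enters the weighted sum (the pre-activation), not the weights or the inputs. A minimal sketch, with the noise level sigma as an assumption since the paper's noise schedule is not given in the snippet:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W, b, sigma=0.05):
    """Forward pass with white Gaussian noise added to the weighted sum."""
    z = W @ x + b + rng.normal(0.0, sigma, size=b.shape)  # perturbed pre-activation
    return 1.0 / (1.0 + np.exp(-z))                       # sigmoid activation
```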

Avoiding local minima in Variational Quantum Algorithms with Neural Networks [article]

Javier Rivera-Dean, Patrick Huembeli, Antonio Acín, Joseph Bowles
2021 arXiv   pre-print
The effect of this neural network is to perturb the cost landscape as a function of its parameters, so that local minima can be escaped or avoided via a modification to the cost landscape itself.  ...  Although such algorithms are powerful in principle, the non-convexity of the associated cost landscapes and the prevalence of local minima mean that local optimization methods such as gradient descent  ...  ACKNOWLEDGEMENTS We would like to thank the PennyLane developers for helping solve the issues regarding the numerical implementation.  ... 
arXiv:2104.02955v2 fatcat:wejdqrtzpren5km6cyazjdq2ga
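One way to read the abstract's mechanism: the circuit parameters are produced by a small neural network, so optimisation runs over the network's inputs and weights, and the effective landscape seen by the optimiser is a reshaped version of the original one. The sketch below uses a toy non-convex cost in place of an actual variational circuit; all shapes and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(theta):
    """Toy non-convex stand-in for a variational-circuit cost."""
    return np.sum(np.sin(3.0 * theta) + 0.1 * theta ** 2)

# Reparameterise theta through a one-hidden-layer network, theta = W2 @ tanh(W1 @ z).
# Optimising over z (and optionally W1, W2) instead of theta perturbs the
# effective cost landscape, which is the escape mechanism described above.
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(2, 8))
z = rng.normal(size=4)

def effective_cost(z):
    return cost(W2 @ np.tanh(W1 @ z))
```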

A Hybrid Method for Training Convolutional Neural Networks [article]

Vasco Lopes, Paulo Fazendeiro
2020 arXiv   pre-print
local minima and fine-tune the weights, so that the network achieves higher accuracy results.  ...  In this paper, we propose a hybrid method that uses both backpropagation and evolutionary strategies to train Convolutional Neural Networks, where the evolutionary strategies are used to help to avoid  ...  pausing to perform the evolutionary algorithm on the weights of the last layer, in order to avoid local minima.  ... 
arXiv:2005.04153v1 fatcat:26aswpy7ufa6tavm3oxnmutq34
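Per the snippet, gradient training pauses periodically and an evolutionary step is applied to the last layer's weights only. A minimal sketch of that step; the population size, mutation scale, and keep-the-best rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def evolve_last_layer(w_last, loss_fn, pop=10, sigma=0.01):
    """Mutate the last-layer weights, evaluate, keep the best candidate."""
    candidates = [w_last] + [w_last + rng.normal(0.0, sigma, w_last.shape)
                             for _ in range(pop)]
    return min(candidates, key=loss_fn)
```

Calling this every few backpropagation epochs gives the alternation the abstract describes.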

Neural Network Hybrid Learning: Genetic Algorithms & Levenberg-Marquardt [chapter]

Ricardo B. C. Prudêncio, Teresa B. Ludermir
2003 Studies in Classification, Data Analysis, and Knowledge Organization  
However, in many cases, these algorithms are very slow and susceptible to the local minimum problem.  ...  The GA-LM algorithm was used to train a Time-Delay Neural Network for river flow prediction.  ...  Our algorithm aims to combine the capacity of GAs in avoiding local minima and the fast execution of the LM algorithm.  ... 
doi:10.1007/978-3-642-18991-3_53 fatcat:gv5hmvarfvbkzbg4vyd3vr46ky
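The division of labour stated in the snippet is: GA for global exploration of weight space, Levenberg-Marquardt for fast local refinement. A toy sketch, with the GA reduced to truncation selection plus mutation and a single-layer model standing in for the network:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def residuals(w, x, t):
    """Residual vector of a toy single-layer model."""
    return np.tanh(x @ w) - t

def ga_then_lm(x, t, dim, pop=30, gens=20, sigma=0.1):
    sse = lambda w: np.sum(residuals(w, x, t) ** 2)
    population = [rng.normal(size=dim) for _ in range(pop)]
    for _ in range(gens):                      # GA phase: global exploration
        population.sort(key=sse)
        parents = population[:pop // 2]
        population = parents + [p + rng.normal(0.0, sigma, dim) for p in parents]
    best = min(population, key=sse)
    # LM phase: fast local refinement from the best GA candidate.
    return least_squares(residuals, best, args=(x, t), method="lm").x
```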

Multiple Layer Perceptron training using genetic algorithms

Udo Seiffert
2001 The European Symposium on Artificial Neural Networks  
Multiple Layer Perceptron networks trained with backpropagation algorithm are very frequently used to solve a wide variety of real-world problems.  ...  This paper describes an approach to substitute it completely by a genetic algorithm.  ...  The numbers given in this paper are strongly dependent on many simulation conditions. Other problems under different circumstances may lead to slightly different results.  ... 
dblp:conf/esann/Seiffert01 fatcat:ixcarurfzzghdd23iovnxu3lim
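Here the GA is not a warm start but the whole trainer: flat weight vectors are the genomes and no gradients are computed at all. A generic sketch with selection, uniform crossover, and Gaussian mutation; all settings are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

def train_mlp_ga(fitness, dim, pop=50, gens=100, sigma=0.05):
    """Evolve flat MLP weight vectors; `fitness` maps a weight vector to
    a score to maximise, e.g. negative training MSE of the decoded MLP."""
    population = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(w) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]   # selection
        mates = parents[rng.permutation(len(parents))]
        mask = rng.random(parents.shape) < 0.5                 # uniform crossover
        children = np.where(mask, parents, mates)
        children += rng.normal(0.0, sigma, children.shape)     # mutation
        population = np.vstack([parents, children])
    return max(population, key=fitness)
```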

An Adaptive Penalty-Based Learning Extension for Backpropagation and its Variants

B. Jansen, K. Nakayama
2006 The 2006 IEEE International Joint Conference on Neural Network Proceedings  
Over the years, many improvements and refinements of the backpropagation learning algorithm have been reported.  ...  In this study, the new approach has been applied to the backpropagation learning algorithm as well as the RPROP learning algorithm and simulations have been performed.  ...  In general, the RPROP learning algorithm converges faster to global or local minima.  ... 
doi:10.1109/ijcnn.2006.247341 dblp:conf/ijcnn/JansenN06 fatcat:zuqdjxi3hzbeboznb2rrmtwvdm

Faster Learning for Dynamic Recurrent Backpropagation

Yan Fang, Terrence J. Sejnowski
1990 Neural Computation  
The backpropagation learning algorithm for feedforward networks (Rumelhart et al. 1986) has recently been generalized to recurrent networks (Pineda 1989).  ...  In this note, we report a modification of the delta weight update rule that significantly improves both the performance and the speed of the original Pearlmutter learning algorithm.  ...  The conjugate gradient method converged very quickly, but always to local minima (Figure 1c).  ... 
doi:10.1162/neco.1990.2.3.270 fatcat:odthh343mrceliandn3or6ffia
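The snippet does not reproduce the authors' specific change to the delta rule, so the sketch below shows a generic momentum-accelerated delta update as a stand-in for the kind of update-rule modification involved; it is not their method.

```python
def delta_update(w, grad, velocity, lr=0.1, momentum=0.9):
    """Generic delta rule with momentum: the velocity term carries past
    updates forward, which typically speeds convergence."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```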

Global Optimisation of Neural Networks Using a Deterministic Hybrid Approach [chapter]

Gleb Beliakov, Ajith Abraham
2002 Hybrid Information Systems  
Selection of the topology of a neural network and correct parameters for the learning algorithm is a tedious task for designing an optimal artificial neural network, which is smaller, faster and with a  ...  Our preliminary experimentation results show that the proposed deterministic approach could provide near optimal results much faster than the evolutionary approach.  ...  In this connection, the primitive backpropagation may result in a better solution than more sophisticated methods, because its disadvantages turn to the benefits of avoiding some shallow local minima  ... 
doi:10.1007/978-3-7908-1782-9_8 fatcat:etyuvlv54fbkngk7b6vfscbhqq

Page 290 of The Journal of the Operational Research Society Vol. 60, Issue 2 [page]

2009 The Journal of the Operational Research Society  
A modified back propagation method to avoid false local minima. Neural Networks 11: 1059-1072. Glover F (1986). Future paths for integer programming and links to artificial intelligence.  ...  Although BP is the most popular training algorithm for NNs, it has two shortcomings: potential to get trapped in local minima and slow convergence rate to high-quality solutions.  ... 

Learning with Product Units

Laurens R. Leerink, C. Lee Giles, Bill G. Horne, Marwan A. Jabri
1994 Neural Information Processing Systems  
In an attempt to increase prognostic accuracy, many putative prognostic factors have been identified.  ...  The TNM staging system has been used since the early 1960's to predict breast cancer patient outcome.  ...  As mentioned earlier, for PUs a global search is required to solve the local-minima problems.  ... 
dblp:conf/nips/LeerinkGHJ94 fatcat:axry2ehsmnar7lwmhqiganwf7e
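A product unit (Durbin & Rumelhart) computes a weighted product rather than a weighted sum, which is why its error surface is rugged and, as the snippet notes, gradient descent tends to need help from a global search. A minimal implementation, assuming positive inputs so the logarithm stays real-valued:

```python
import numpy as np

def product_unit(x, w):
    """y = prod_j x_j ** w_j = exp(sum_j w_j * log x_j); the exponents w
    are the learned weights."""
    return np.exp(np.sum(w * np.log(x)))
```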

An Efficient Patient Inflow Prediction Model For hospital Resource Management

Kottalanka Srikanth, D. Arivazhagan
2017 Indonesian Journal of Electrical Engineering and Computer Science  
To overcome the local minima error, this work presents a patient inflow prediction model by adopting a resilient backpropagation neural network.  ...  To address this, artificial intelligence is adopted. The issue with existing prediction models is that training suffers from local optima error.  ...  The other problem with the backpropagation algorithm is that its weight updates can become trapped in local minima during optimization.  ... 
doi:10.11591/ijeecs.v7.i3.pp809-817 fatcat:i4ejabowsjftrmvspefbgrffi4
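Resilient backpropagation (RPROP) sidesteps the magnitude of the gradient entirely: each weight keeps its own step size, grown while the gradient sign is stable and shrunk when it flips. A sketch of one step of the iRPROP- variant, using the commonly cited default constants:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """Per-weight step sizes adapt to gradient-sign agreement; updates
    use only the sign of the gradient, not its magnitude."""
    same_sign = grad * prev_grad
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(same_sign < 0, 0.0, grad)   # iRPROP-: drop flipped gradients
    w = w - np.sign(grad) * step
    return w, grad, step
```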

Online Handwritten Character Recognition Using an Optical Backpropagation Neural Network

Walid A. Salameh, Mohammed A. Otair
2005 Issues in Informing Science and Information Technology  
One of the possible remedies to escape from local minima is using a very small learning rate, but this will slow the learning process.  ...  Learning often takes an insupportably long time to converge, and it may fall into local minima.  ...  Small values for the learning rate were used to avoid local minima; the values range from 0.1 to 0.4.  ... 
doi:10.28945/866 fatcat:zf4342yegfbqzfwbkt3hfdlody
Showing results 1 — 15 out of 5,068 results