To solve this problem and speed up convergence, we propose an improved algorithm that includes two phases: a backpropagation phase and a gradient ascent phase. ... The simulation results show that the proposed method can avoid the local minima problem, greatly accelerate convergence, and achieve good results on the prediction tasks. ... Acknowledgment I would like to thank Professor Tang Zheng, my supervisor, for his many suggestions and constant support throughout this research. ... doi:10.1109/icnc.2007.203 dblp:conf/icnc/ZhangTT07 fatcat:xiuqu3ugjnbkdmcjgnt72suqky
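A rough sketch of the two-phase idea in this entry (not the authors' algorithm; the toy objective, step sizes, and stall test are all illustrative assumptions): descend by gradient until progress stalls, then climb uphill over the nearest barrier before descending again, tracking the best minimum seen.

```python
import numpy as np

def loss(w):   # toy non-convex objective: shallow local minimum near w ~ 1.5,
    return np.sin(3.0 * w) + 0.1 * w ** 2   # deeper minimum near w ~ -0.5

def grad(w):
    return 3.0 * np.cos(3.0 * w) + 0.2 * w

w, lr = 1.0, 0.02
best_w, best_loss = w, loss(w)
for phase in range(4):                       # a few descent/ascent rounds
    prev = np.inf
    while abs(prev - loss(w)) > 1e-9:        # descent phase, until it stalls
        prev = loss(w)
        w -= lr * grad(w)
    if loss(w) < best_loss:                  # remember the best minimum found
        best_w, best_loss = w, loss(w)
    d = np.sign(grad(w)) or 1.0              # uphill direction at the stall
    while loss(w + lr * d) > loss(w):        # ascent phase: climb the barrier
        w += lr * d
    w += lr * d                              # step past the top, then descend

print(f"best w = {best_w:.3f}, best loss = {best_loss:.3f}")
```

Keeping the ascent direction fixed until the loss starts falling again is what lets the search cross the barrier instead of sliding back into the same basin.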
Lecture Notes in Computer Science
We have noted that the local minima problem in the backpropagation algorithm is usually caused by update disharmony between the weights connected to the hidden layer and those connected to the output layer. ... Thus, it can avoid the local minima problem caused by such disharmony. Moreover, the new learning parameters introduced for the added term are easy to select. ... avoid the local minima problem that occurs due to neuron saturation in the hidden layer. ... doi:10.1007/978-3-540-28647-9_57 fatcat:bg5lyscsibdhzgwrx5nipjvy74
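A hedged sketch of one way an "added term" can fight hidden-layer saturation (the paper's exact term is not reproduced here; the penalty form, lam, and the XOR setup are assumptions): augment the squared error with a penalty that rewards unsaturated hidden activations, so hidden weights keep receiving useful gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # XOR task
y = np.array([[0], [1], [1], [0]], float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, lam = 0.5, 0.1                      # lam weights the anti-saturation term

for epoch in range(5000):
    h = sig(X @ W1 + b1)                # hidden activations
    out = sig(h @ W2 + b2)
    # loss = 0.5 * mean((out - y)^2) - lam * mean(h * (1 - h)); the product
    # h*(1-h) peaks when a unit is unsaturated, so the extra term pulls the
    # hidden activations away from the flat regions near 0 and 1
    d_out = (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    d_h += -lam * (1 - 2 * h) * h * (1 - h) / h.size   # gradient of penalty
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(sig(sig(X @ W1 + b1) @ W2 + b2).ravel(), 2))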
No fundamental problems were encountered, apart from a tendency to get stuck in local minima, in exactly the same way as in backpropagation learning. ... Somewhat surprisingly, learning on these conceptually simple “hard” problems was actually improved by the presence of high levels of noise, and the network became stuck in local minima less frequently ...
Here, a modified backpropagation algorithm is proposed, in which white Gaussian noise is added to the weighted sum of the network during backpropagation training. ... It is observed that the proposed modified backpropagation requires fewer epochs than standard backpropagation to converge when applied to the 2-bit parity and Iris datasets. ... According to the research carried out in [ ] and [ ], the backpropagation (BP) convergence rate is very poor due to local minima and the flat-spot problem. ... doi:10.1016/j.procs.2018.10.401 fatcat:5qc2qnk6cjgijn5ls7aw7xepbi
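A minimal sketch of the noise-injection idea (sigma, the layer shapes, and the function names are assumptions, not the paper's code): during training, white Gaussian noise is added to each neuron's weighted sum before the activation, which jitters the search and can shake it out of shallow local minima; at test time the forward pass is deterministic.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_forward(x, W, b, sigma=0.05, training=True):
    z = x @ W + b                       # ordinary weighted sum
    if training:
        z = z + rng.normal(0.0, sigma, size=z.shape)  # white Gaussian noise
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid activation

x = rng.normal(size=(4, 3))             # a small batch of 3-dim inputs
W, b = rng.normal(size=(3, 2)), np.zeros(2)
print(noisy_forward(x, W, b))                  # noisy, used while training
print(noisy_forward(x, W, b, training=False))  # deterministic at test time
```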
The effect of this neural network is to perturb the cost landscape as a function of its parameters, so that local minima can be escaped or avoided via a modification to the cost landscape itself. ... Although such algorithms are powerful in principle, the non-convexity of the associated cost landscapes and the prevalence of local minima mean that local optimization methods such as gradient descent ... ACKNOWLEDGEMENTS We would like to thank the PennyLane developers for helping to solve the issues regarding the numerical implementation. ... arXiv:2104.02955v2 fatcat:wejdqrtzpren5km6cyazjdq2ga
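A generic, hedged stand-in for the landscape-modification idea (the paper trains a neural network to do the perturbing; the Gaussian-bump deposit below, with assumed height A and width s, is a much simpler substitute): whenever the optimizer settles, a bump is deposited at that point, filling in the minimum so descent on the modified cost is pushed elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(t):                            # toy non-convex cost
    return np.sin(3.0 * t) + 0.1 * t ** 2

bumps = []                              # centers of deposited Gaussian bumps
A, s = 2.0, 0.5                         # bump height and width (assumptions)

def mod_cost(t):                        # original cost plus all bumps
    return cost(t) + sum(A * np.exp(-(t - c) ** 2 / (2 * s ** 2)) for c in bumps)

def mod_grad(t, eps=1e-5):              # numeric gradient of the modified cost
    return (mod_cost(t + eps) - mod_cost(t - eps)) / (2 * eps)

t, lr = 1.0, 0.05
best_t, best_c = t, cost(t)
for phase in range(6):
    for _ in range(300):                # descend on the modified landscape
        t -= lr * mod_grad(t)
    if cost(t) < best_c:                # score candidates on the true cost
        best_t, best_c = t, cost(t)
    bumps.append(t)                     # fill in the minimum just found
    t += rng.normal(0.0, 0.1)           # small nudge off the new bump's top

print(f"best t = {best_t:.3f}, true cost = {best_c:.3f}")
```

The key design point the entry describes is that the escape mechanism lives in the cost itself, not in the optimizer: plain gradient descent is reused unchanged on the reshaped landscape.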
local minima and fine-tune the weights, so that the network achieves higher accuracy. ... In this paper, we propose a hybrid method that uses both backpropagation and evolutionary strategies to train Convolutional Neural Networks, where the evolutionary strategies are used to help avoid ... pausing to run the evolutionary algorithm on the weights of the last layer, in order to avoid local minima. ... arXiv:2005.04153v1 fatcat:26aswpy7ufa6tavm3oxnmutq34
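A small sketch of that pause-and-evolve pattern on a toy regression network rather than a CNN (the data, architecture, mutation scale, and schedule are all assumptions): gradient training runs as usual, and every few hundred steps a (1+1)-style evolutionary loop mutates only the last layer's weights, keeping a mutant only if it lowers the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                      # toy regression data
y = (X @ np.array([1.0, -2.0, 0.5]))[:, None] + 0.1 * rng.normal(size=(64, 1))

W1 = rng.normal(0, 0.5, (3, 8)); W2 = rng.normal(0, 0.5, (8, 1))
relu = lambda z: np.maximum(z, 0.0)
loss = lambda W2_: np.mean((relu(X @ W1) @ W2_ - y) ** 2)

lr = 0.01
for step in range(2000):
    h = relu(X @ W1)                              # backpropagation phase
    err = (h @ W2 - y) / len(X)
    gW2 = h.T @ err
    gW1 = X.T @ ((err @ W2.T) * (h > 0))
    W1 -= lr * 2 * gW1; W2 -= lr * 2 * gW2
    if step % 200 == 199:                         # pause: evolutionary phase
        for _ in range(20):                       # mutate last-layer weights
            cand = W2 + rng.normal(0, 0.05, W2.shape)
            if loss(cand) < loss(W2):             # keep improving mutants only
                W2 = cand

print(f"final loss = {loss(W2):.4f}")
```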
However, in many cases these algorithms are very slow and susceptible to the local minimum problem. ... The GA-LM algorithm was used to train a Time-Delay Neural Network for river flow prediction. ... Our algorithm aims to combine the capacity of GAs to avoid local minima with the fast execution of the LM algorithm. ... doi:10.1007/978-3-642-18991-3_53 fatcat:gv5hmvarfvbkzbg4vyd3vr46ky
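A hedged sketch of the GA-then-LM division of labor on a tiny curve fit instead of a Time-Delay Neural Network (the model y = a*exp(b*x), population sizes, and damping schedule are assumptions): the genetic phase searches globally for a good starting point, and a damped Gauss-Newton (Levenberg-Marquardt) loop then converges quickly from it.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 40)
y = 2.0 * np.exp(-1.5 * x) + 0.02 * rng.normal(size=x.size)  # data to fit

def residuals(p):                          # model y = a * exp(b * x)
    a, b = p
    return a * np.exp(b * x) - y

def sse(p):
    r = residuals(p); return float(r @ r)

# GA phase: evolve a small population of (a, b) candidates globally
pop = rng.uniform(-3, 3, (30, 2))
for gen in range(25):
    fit = np.array([sse(q) for q in pop])
    parents = pop[np.argsort(fit)[:10]]                 # truncation selection
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.3, (20, 2))
    pop = np.vstack([parents, children])                # elitism + mutation
p = pop[np.argmin([sse(q) for q in pop])]

# LM phase: damped Gauss-Newton refinement from the GA's best candidate
lam = 1e-3
for it in range(50):
    a, b = p
    r = residuals(p)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # Jacobian
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if sse(p + step) < sse(p):
        p, lam = p + step, lam * 0.5       # accept step, trust the model more
    else:
        lam *= 2.0                         # reject step, increase damping

print(f"fitted a, b = {p[0]:.3f}, {p[1]:.3f}")
```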
Multiple Layer Perceptron networks trained with the backpropagation algorithm are very frequently used to solve a wide variety of real-world problems. ... This paper describes an approach that replaces it completely with a genetic algorithm. ... The numbers given in this paper are strongly dependent on many simulation conditions. Other problems under different circumstances may lead to slightly different results. ... dblp:conf/esann/Seiffert01 fatcat:ixcarurfzzghdd23iovnxu3lim
Over the years, many improvements and refinements of the backpropagation learning algorithm have been reported. ... In this study, the new approach has been applied to the backpropagation learning algorithm as well as the RPROP learning algorithm and simulations have been performed. ... In general, the RPROP learning algorithm converges faster to global or local minima. ...doi:10.1109/ijcnn.2006.247341 dblp:conf/ijcnn/JansenN06 fatcat:zuqdjxi3hzbeboznb2rrmtwvdm
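For reference, the RPROP rule this entry builds on is compact enough to sketch directly (the constants below are the commonly cited defaults, but the whole snippet is an illustrative assumption, not the paper's code): each weight keeps its own step size, grown while the gradient keeps its sign and shrunk when the sign flips, and only the sign of the gradient, never its magnitude, drives the update.

```python
import numpy as np

def rprop_update(w, g, g_prev, delta,
                 eta_plus=1.2, eta_minus=0.5, dmax=50.0, dmin=1e-6):
    same = g * g_prev > 0                 # gradient kept its sign
    flip = g * g_prev < 0                 # gradient changed sign
    delta = np.where(same, np.minimum(delta * eta_plus, dmax), delta)
    delta = np.where(flip, np.maximum(delta * eta_minus, dmin), delta)
    g = np.where(flip, 0.0, g)            # skip the update after a sign flip
    w = w - np.sign(g) * delta            # sign-only, per-weight step
    return w, g, delta

# toy usage: minimize f(w) = sum(w^2) from a random start
rng = np.random.default_rng(0)
w = rng.normal(size=5)
g_prev, delta = np.zeros(5), np.full(5, 0.1)
for _ in range(100):
    g = 2 * w                             # gradient of sum(w^2)
    w, g_prev, delta = rprop_update(w, g, g_prev, delta)
print(np.round(w, 4))
```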
The backpropagation learning algorithm for feedforward networks (Rumelhart et al. 1986) has recently been generalized to recurrent networks (Pineda 1989). ... In this note, we report a modification of the delta weight update rule that significantly improves both the performance and the speed of the original Pearlmutter learning algorithm. ... The conjugate gradient method converged very quickly, but always to local minima (Figure 1c). ... doi:10.1162/neco.19188.8.131.520 fatcat:odthh343mrceliandn3or6ffia
Hybrid Information Systems
Selecting the topology of a neural network and the correct parameters for the learning algorithm is a tedious task when designing an optimal artificial neural network that is smaller, faster, and with a ... Our preliminary experimental results show that the proposed deterministic approach can provide near-optimal results much faster than the evolutionary approach. ... In this connection, primitive backpropagation may produce a better solution than more sophisticated methods, because its disadvantages turn into the benefit of avoiding some shallow local minima ... doi:10.1007/978-3-7908-1782-9_8 fatcat:etyuvlv54fbkngk7b6vfscbhqq
The Journal of the Operational Research Society
A modified back propagation method to avoid false local minima. Neural Networks 11: 1059-1072. Glover F (1986). Future paths for integer programming and links to artificial intelligence. ... Although BP is the most popular training algorithm for NNs, it has two shortcomings: a tendency to get trapped in local minima and a slow convergence rate to high-quality solutions. ...
In an attempt to increase prognostic accuracy, many putative prognostic factors have been identified. ... The TNM staging system has been used since the early 1960s to predict breast cancer patient outcome. ... As mentioned earlier, for PUs a global search is required to solve the local-minima problem. ... dblp:conf/nips/LeerinkGHJ94 fatcat:axry2ehsmnar7lwmhqiganwf7e
To overcome the local minima error, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. ... To address this, artificial intelligence is adopted. The issue with existing prediction methods is that their training suffers from the local optima error. ... The other problem of the backpropagation algorithm is that its weight updates can become trapped in local minima. ... doi:10.11591/ijeecs.v7.i3.pp809-817 fatcat:i4ejabowsjftrmvspefbgrffi4
One possible remedy for escaping local minima is using a very small learning rate, but this slows the learning process. ... Learning often takes an insupportably long time to converge, and it may still fall into local minima. ... Small values of the learning rate, ranging from 0.1 to 0.4, were used to avoid local minima. ... doi:10.28945/866 fatcat:zf4342yegfbqzfwbkt3hfdlody