In this paper we propose a nonmonotone approach to training recurrent neural networks for temporal sequence processing applications. The approach allows the learning performance to deteriorate on some iterations while the network's performance improves over time. A self-scaling BFGS method is equipped with an adaptive nonmonotone technique that employs approximations of the Lipschitz constant, and is tested on a set of sequence processing problems. Simulation results show that the proposed […]

doi:10.1016/j.amc.2010.12.012
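To make the two ingredients named in the abstract concrete, below is a minimal Python sketch combining a nonmonotone (Grippo-Lampariello-Lucidi-style) Armijo acceptance test, which lets the objective rise on some iterations, with a self-scaling (Oren-Luenberger) BFGS update. This is not the paper's algorithm: the paper's adaptive nonmonotone window based on Lipschitz-constant approximations is simplified here to a fixed window M, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def self_scaling_bfgs_nonmonotone(f, grad, x0, max_iter=200, M=10,
                                  sigma=1e-4, tol=1e-6):
    """Sketch: self-scaling BFGS with a nonmonotone Armijo line search.
    M is a fixed nonmonotone window; the paper instead adapts this
    window using local Lipschitz-constant approximations."""
    n = x0.size
    x = x0.copy()
    H = np.eye(n)                     # inverse-Hessian approximation
    g = grad(x)
    f_hist = [f(x)]                   # recent objective values
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                    # quasi-Newton search direction
        # Nonmonotone Armijo test: compare against the max of the last
        # (up to) M objective values instead of the current one, so the
        # objective is allowed to deteriorate on some iterations.
        f_ref = max(f_hist[-M:])
        alpha = 1.0
        while f(x + alpha * d) > f_ref + sigma * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                # curvature condition holds
            # Self-scaling factor tau = s'y / y'Hy rescales H before the
            # standard inverse-BFGS update (Oren-Luenberger scaling).
            tau = sy / (y @ H @ y)
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = tau * (V @ H @ V.T) + rho * np.outer(s, s)
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

# Usage on the Rosenbrock function (a stand-in for a network's loss):
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                              200 * (x[1] - x[0]**2)])
x_star = self_scaling_bfgs_nonmonotone(rosen, rosen_g, np.array([-1.2, 1.0]))
```

In a training setting, f would be the network's error over the sequence set and grad its gradient from backpropagation through time; a common local Lipschitz estimate that could drive the adaptive window is ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||, though the paper's exact adaptation rule is not reproduced here.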