A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2018; you can also visit the original URL.
The file type is application/pdf.
Convergence of Gradient Descent Algorithm with Penalty Term for Recurrent Neural Networks
2014
International Journal of Multimedia and Ubiquitous Engineering
This paper investigates a gradient descent algorithm with a penalty term for a recurrent neural network. The penalty considered here is a term proportional to the norm of the weights; its primary role in the method is to control the magnitude of the weights. After proving that all of the weights remain automatically bounded during the iteration process, we also present some deterministic convergence results for this learning method, indicating that the gradient of the error function goes to zero.
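The abstract does not reproduce the paper's formulas, but the method it describes has a standard form. A minimal sketch, assuming the common squared-norm (weight-decay) variant of a penalty "proportional to the norm of the weights", with E(w) the error function, lambda > 0 the penalty coefficient, and eta > 0 the learning rate; all symbols here are assumptions, not taken from the paper:

% Penalized error function (assumed squared-norm / weight-decay form):
E_\lambda(\mathbf{w}) = E(\mathbf{w}) + \frac{\lambda}{2}\,\|\mathbf{w}\|^{2}
% Gradient descent update with learning rate \eta:
\mathbf{w}^{k+1} = \mathbf{w}^{k} - \eta\,\nabla E_\lambda(\mathbf{w}^{k})
                 = \mathbf{w}^{k} - \eta\left(\nabla E(\mathbf{w}^{k}) + \lambda\,\mathbf{w}^{k}\right)
% A convergence result of the kind the abstract describes:
\lim_{k\to\infty} \big\|\nabla E_\lambda(\mathbf{w}^{k})\big\| = 0

Under this reading, the extra term -\eta\lambda\,\mathbf{w}^{k} shrinks the weights toward the origin at every step, which is the usual mechanism behind the boundedness claim in the abstract.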
doi:10.14257/ijmue.2014.9.9.17
fatcat:hwadqnxnnfhyfltoqxuzqebyza