A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021.
The file type is application/pdf.
Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification
2021
Neural Computation
Abstract: Supervised learning corresponds to minimizing a loss or cost function expressing the differences between model predictions y_n and the target values t_n given by the training data. In neural networks, this means backpropagating error signals through the transposed weight matrices from the output layer toward the input layer. For this, error signals in the output layer are typically initialized by the difference y_n - t_n, which is optimal for several commonly used loss functions like [...]
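As a minimal sketch (not taken from the paper, all names and values illustrative), the following NumPy snippet numerically checks the standard fact referenced in the abstract: for a softmax output layer trained with cross-entropy loss, the gradient of the loss with respect to the output pre-activations equals y_n - t_n, which is why that difference is the usual error initialization for backpropagation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(y, t):
    # Cross-entropy loss for a single example with one-hot target t.
    return -np.sum(t * np.log(y + 1e-12))

# Hypothetical example: 4-class logits z and a one-hot target t.
rng = np.random.default_rng(0)
z = rng.normal(size=4)
t = np.eye(4)[2]

y = softmax(z)

# Analytic output-layer error signal: y - t.
analytic = y - t

# Numerical gradient of the loss with respect to the logits z,
# via central finite differences.
eps = 1e-6
numeric = np.zeros_like(z)
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    numeric[i] = (cross_entropy(softmax(zp), t)
                  - cross_entropy(softmax(zm), t)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```

The same difference y_n - t_n also arises for a linear output layer with sum-of-squared-errors loss; the paper's contribution concerns alternative (power function) initializations of this output-layer error signal.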
doi:10.1162/neco_a_01407
pmid:34310673
fatcat:2gru2jkkuvbrzomrsf3jhgaime