Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification

Andreas Knoblauch
Neural Computation, 2021
Abstract
Supervised learning corresponds to minimizing a loss or cost function expressing the differences between model predictions y_n and the target values t_n given by the training data. In neural networks, this means backpropagating error signals through the transposed weight matrices from the output layer toward the input layer. For this, error signals in the output layer are typically initialized by the difference y_n - t_n, which is optimal for several commonly used loss functions like cross-entropy or sum of squared errors. Here I evaluate a more general error initialization method using power functions |y_n - t_n|^q for q > 0, corresponding to a new family of loss functions that generalize cross-entropy. Surprisingly, experiments on various learning tasks reveal that a proper choice of q can significantly improve the speed and convergence of backpropagation learning, in particular in deep and recurrent neural networks. The results suggest two main reasons for the observed improvements. First, compared to cross-entropy, the new loss functions provide better fits to the distribution of error signals in the output layer and therefore maximize the model's likelihood more efficiently. Second, the new error initialization procedure may often provide a better gradient-to-loss ratio over a broad range of neural output activity, thereby avoiding flat loss landscapes with vanishing gradients.
doi:10.1162/neco_a_01407 pmid:34310673
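The abstract describes replacing the usual output-layer error signal y_n - t_n with a power function of the difference. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the function and network names (power_error_init, backprop_step, a single logistic hidden layer with a softmax output) are illustrative assumptions, and only the output-layer rule delta = sign(y - t) * |y - t|^q reflects the abstract. For q = 1 it reduces to the standard y - t initialization used with cross-entropy or sum of squared errors.

```python
import numpy as np

def power_error_init(y, t, q=1.0):
    """Generalized output-layer error signal: sign(y - t) * |y - t|**q.

    q = 1 recovers the usual difference y - t that is optimal for
    cross-entropy (with softmax outputs) or sum of squared errors.
    """
    d = y - t
    return np.sign(d) * np.abs(d) ** q

def backprop_step(x, t, W1, W2, q=1.0, lr=0.1):
    """One illustrative gradient step for a two-layer classifier (hypothetical setup)."""
    # Forward pass: logistic hidden layer, softmax output layer.
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))
    z = h @ W2
    y = np.exp(z - z.max(axis=1, keepdims=True))
    y /= y.sum(axis=1, keepdims=True)

    # Error initialization in the output layer (the generalized power rule).
    delta_out = power_error_init(y, t, q)

    # Backpropagate through the transposed weight matrix toward the input.
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ delta_out)
    W1 -= lr * (x.T @ delta_hid)
    return W1, W2

# Toy usage with made-up shapes and data.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
t = np.eye(3)[rng.integers(0, 3, size=8)]          # one-hot targets
W1 = rng.normal(scale=0.1, size=(4, 16))
W2 = rng.normal(scale=0.1, size=(16, 3))
W1, W2 = backprop_step(x, t, W1, W2, q=0.5)         # q != 1 activates the power rule
```

In this sketch only the first line of the backward pass changes relative to standard backpropagation; the rest of the weight updates are the usual chain-rule computations, which matches the abstract's framing of the method as an alternative error initialization rather than a new optimizer.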