Almost Sure Convergence of Dropout Algorithms for Neural Networks
[article]
2023 · arXiv pre-print
We investigate the convergence and convergence rate of stochastic training algorithms for Neural Networks (NNs) that have been inspired by Dropout (Hinton et al., 2012). With the goal of avoiding overfitting during the training of NNs, dropout algorithms in practice consist of multiplying the weight matrices of an NN componentwise by independently drawn random matrices with {0, 1}-valued entries during each iteration of Stochastic Gradient Descent (SGD). This paper presents a probability …
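To make the mechanism described in the abstract concrete, below is a minimal sketch (not the authors' code) of one dropout-style SGD iteration on a toy single-layer linear model with squared-error loss: at each step the weight matrix is multiplied componentwise by a freshly drawn {0, 1}-valued Bernoulli mask before the forward pass and gradient computation. The model, loss, target, learning rate, and retention probability are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer linear model: y_hat = (M * W) x, with squared-error loss.
# Hyperparameters below are hypothetical, chosen only for illustration.
d_in, d_out = 4, 3
lr, keep_prob = 1e-2, 0.8  # learning rate; probability an entry is kept (mask = 1)
W = rng.normal(size=(d_out, d_in))

def dropout_sgd_step(W, x, y):
    """One SGD iteration with a freshly drawn {0, 1}-valued mask on W."""
    mask = rng.binomial(1, keep_prob, size=W.shape)  # independent Bernoulli entries
    residual = (mask * W) @ x - y                    # forward pass with masked weights
    # Gradient of 0.5 * ||(M * W) x - y||^2 w.r.t. W is M componentwise-times (residual x^T).
    grad = mask * np.outer(residual, x)
    return W - lr * grad

for _ in range(1000):
    x = rng.normal(size=d_in)
    y = np.ones(d_out)  # arbitrary fixed target, purely for illustration
    W = dropout_sgd_step(W, x, y)
```

Note that a fresh mask is drawn at every iteration, so the gradient is taken through the masked weights; this is the componentwise multiplication of weight matrices by random {0, 1}-valued matrices that the abstract describes.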
arXiv:2002.02247v2