A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization
[article]
2021 · arXiv pre-print
Many supervised machine learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many mathematical guarantees exist. Models which are non-linear in their parameters, such as neural networks, lead to non-convex optimization problems for which guarantees are harder to obtain. In this review paper, we consider two-layer neural networks with homogeneous activation functions where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived.
arXiv:2110.08084v1
fatcat:stye5jkm5fhyjiclvmz6olxtly
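The abstract describes gradient descent on a wide two-layer network with a positively homogeneous activation. Below is a minimal, illustrative sketch of that setting, not code from the paper: a ReLU network f(x) = (1/m) Σ_j a_j max(w_j·x, 0) trained by full-batch gradient descent on toy data. The data, width, step size, and the mean-field-style scaling of the step size with the width m are all assumptions made for illustration.

```python
# Illustrative sketch (not from the paper): gradient descent on a wide
# two-layer ReLU network in the mean-field parameterization
#   f(x) = (1/m) * sum_j a_j * max(w_j . x, 0).
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration)
n, d, m = 200, 5, 2000            # samples, input dim, hidden width (large m ~ "wide")
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])               # arbitrary smooth target

# Two-layer network with positively homogeneous (ReLU) activation
W = rng.standard_normal((m, d))   # hidden-layer weights, one row per neuron
a = rng.choice([-1.0, 1.0], size=m)  # output weights

def predict(X, W, a):
    return (np.maximum(X @ W.T, 0.0) @ a) / m

# Step size scaled with the width m, so per-neuron updates stay O(1)
# as m grows (mean-field-style scaling; an assumption of this sketch).
lr = 0.5 * m

for step in range(500):
    h = np.maximum(X @ W.T, 0.0)      # hidden activations, shape (n, m)
    r = predict(X, W, a) - y          # residuals, shape (n,)
    # Gradients of the mean squared error (1/2n) * ||f(X) - y||^2
    grad_a = h.T @ r / (n * m)
    grad_W = ((r[:, None] * (h > 0.0)) * a[None, :]).T @ X / (n * m)
    a -= lr * grad_a
    W -= lr * grad_W

print("final MSE:", np.mean((predict(X, W, a) - y) ** 2))
```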