The file type is application/pdf.
The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning
[article] 2018, arXiv pre-print
In this paper we aim to formally explain the phenomenon of fast convergence of SGD observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent.
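The interpolation setting the abstract describes can be illustrated with a small experiment. The sketch below (not the authors' code; NumPy is assumed, and the dimensions, batch size, and step size are illustrative placeholders, not the paper's critical batch size or optimal step size) runs mini-batch SGD on an over-parametrized linear regression problem, where more parameters than samples guarantee that an interpolating solution exists and the training loss can be driven to essentially zero:

```python
# Minimal sketch: mini-batch SGD on an over-parametrized linear model.
# Because d > n, a weight vector with X @ w == y exists (interpolation),
# and the stochastic gradient noise vanishes at that solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                      # n samples, d parameters (d > n)
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)          # arbitrary labels; still interpolable

w = np.zeros(d)
batch_size = 8                      # illustrative, not the paper's m*
lr = 1.0                            # illustrative step size

for step in range(2000):
    idx = rng.choice(n, size=batch_size, replace=False)
    residual = X[idx] @ w - y[idx]
    # Gradient of 0.5 * mean squared error on the mini-batch.
    grad = X[idx].T @ residual / batch_size
    w -= lr * grad

train_loss = 0.5 * np.mean((X @ w - y) ** 2)
print(f"final training loss: {train_loss:.2e}")  # near zero: interpolation
```

Intuitively, since an interpolating solution zeroes the loss on every individual sample, the variance of the mini-batch gradient vanishes as SGD approaches it, which is why convergence in this regime can be exponentially fast, comparable to full gradient descent.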
arXiv:1712.06559v3
fatcat:cgm2ieqksfa3bcza3zb3fgn52e