File type: application/pdf
Rethinking Adam: A Twofold Exponential Moving Average Approach
[article] · 2022 · arXiv pre-print
Adaptive gradient methods, e.g. Adam, have achieved tremendous success in machine learning. By scaling the learning rate element-wise with a certain form of second-moment estimate of the gradients, such methods attain rapid training of modern deep neural networks. Nevertheless, they are observed to suffer from compromised generalization ability compared with stochastic gradient descent (SGD), and they tend to be trapped in local minima at an early stage of training. Intriguingly, we discover …
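For reference, the element-wise scaling the abstract describes is the standard Adam update (Kingma & Ba, 2015), sketched below in LaTeX. The first four lines are the standard Adam equations; the final line, which re-estimates the second moment from the momentumized gradient m_t instead of the raw gradient g_t, is only one plausible reading of the "twofold exponential moving average" in the title, since the abstract is cut off before the method is stated.

  m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t                                    % EMA of gradients (momentum)
  v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2                                  % EMA of squared gradients
  \hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)    % bias correction
  \theta_t = \theta_{t-1} - \alpha\, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)   % element-wise scaled step

  % Hypothetical "twofold EMA" variant (an assumption inferred from the title, not stated in the truncated abstract):
  v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, m_t^2

Under this reading, both the numerator and the denominator of the step are driven by exponential moving averages, which is consistent with the title but should be checked against the full paper.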
arXiv:2106.11514v3
fatcat:pxrvkdrvfjcqhjka2w6svpe4uq