An Adaptive Remote Stochastic Gradient Method for Training Neural Networks [article]

Yushu Chen, Hao Jing, Wenlai Zhao, Zhiqiang Liu, Ouyi Li, Liang Qiao, Wei Xue, Guangwen Yang
2020 arXiv preprint
We present the remote stochastic gradient (RSG) method, which computes gradients at configurable remote observation points in order to improve the convergence rate and suppress gradient noise simultaneously across different curvatures. RSG is further combined with adaptive methods to construct ARSG for acceleration. The method is efficient in computation and memory, and is straightforward to implement. We analyze the convergence properties by modeling the training process as a dynamic system, which provides a guideline for selecting the configurable observation factor without grid search. ARSG attains an O(1/√T) convergence rate in non-convex settings, which can be further improved to O(log(T)/T) in strongly convex settings. Numerical experiments demonstrate that ARSG achieves both faster convergence and better generalization than popular adaptive methods such as ADAM, NADAM, AMSGRAD, and RANGER on the tested problems. In particular, when training ResNet-50 on ImageNet, ARSG outperforms ADAM in convergence speed while surpassing SGD in generalization.
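
The remote-observation idea can be illustrated with a minimal sketch. This is an assumption-laden illustration rather than the paper's exact ARSG update: here the observation point is taken a configurable step (factor kappa) along the momentum direction, in the spirit of Nesterov-style lookahead, and the adaptive part is assumed to be an Adam-style second-moment rescaling. The names arsg_step, grad_fn, and kappa are hypothetical and not taken from the paper.

    import numpy as np

    def arsg_step(params, grad_fn, state, lr=1e-3, mu=0.9, kappa=0.5,
                  beta2=0.999, eps=1e-8):
        """One sketch step: the gradient is evaluated at a 'remote'
        observation point shifted along the momentum direction, then
        combined with an Adam-style adaptive rescaling (assumed here)."""
        m = state.setdefault("m", np.zeros_like(params))   # momentum buffer
        v = state.setdefault("v", np.zeros_like(params))   # second-moment buffer
        t = state["t"] = state.get("t", 0) + 1             # step counter

        # Remote observation point: a configurable step ahead along the
        # (negative) momentum direction; kappa plays the role of the
        # observation factor mentioned in the abstract.
        obs_point = params - lr * kappa * m
        g = grad_fn(obs_point)

        # Heavy-ball momentum accumulated on the remote gradient.
        m[:] = mu * m + g

        # Adam-style second-moment estimate with bias correction
        # (an assumption for the "adaptive" part of ARSG).
        v[:] = beta2 * v + (1.0 - beta2) * g * g
        v_hat = v / (1.0 - beta2 ** t)

        return params - lr * m / (np.sqrt(v_hat) + eps), state

    # Toy usage on a quadratic: the gradient of f(x) = 0.5 * ||x||^2 is x.
    x, st = np.ones(3), {}
    for _ in range(100):
        x, st = arsg_step(x, lambda p: p, st)

Evaluating the gradient at a point shifted along the momentum direction, rather than at the current iterate, is what distinguishes this sketch from a plain adaptive update; the paper's analysis of the observation factor is what removes the need to tune kappa by grid search.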
arXiv:1905.01422v8