Riemannian Stochastic Recursive Momentum Method for non-Convex Optimization

Andi Han, Junbin Gao
2021 Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
We propose a stochastic recursive momentum method for Riemannian non-convex optimization that achieves a nearly optimal complexity for finding an ε-approximate solution. The algorithm requires only a single stochastic gradient evaluation per iteration and does not need restarting with a large-batch gradient, which is commonly required to obtain a faster rate. Extensive experimental results demonstrate the superiority of the proposed algorithm. Extensions to nonsmooth and constrained optimization settings are also discussed.
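To illustrate the recursive-momentum idea the abstract describes, here is a minimal Euclidean sketch of a STORM-style estimator on a toy least-squares problem. This is an assumption-laden simplification, not the paper's algorithm: the Riemannian method operates on a manifold, where the gradient step becomes a retraction and the estimator correction requires a vector transport, both omitted here. The problem setup, step size, and momentum parameter are all hypothetical.

```python
import numpy as np

# Hypothetical toy problem: least squares 0.5 * ||A x - b||^2 with
# ground truth x* ≈ ones(5). This flat-space sketch drops the
# retraction and vector transport that the Riemannian method needs.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
b = A @ np.ones(5) + 0.1 * rng.standard_normal(200)

def stoch_grad(x, idx):
    """Single-sample stochastic gradient at the sampled row(s) idx."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi)

x = np.zeros(5)
idx = rng.integers(0, len(b), size=1)
d = stoch_grad(x, idx)       # initialize estimator from ONE sample,
                             # not a large batch gradient
eta, a = 0.01, 0.1           # step size and momentum parameter (assumed)

for t in range(2000):
    x_new = x - eta * d      # gradient step (a retraction, on a manifold)
    idx = rng.integers(0, len(b), size=1)   # one fresh sample per iteration
    # Recursive momentum: correct the running estimator with the gradient
    # difference evaluated at the SAME sample at both iterates, instead of
    # periodically restarting with a large-batch gradient.
    d = stoch_grad(x_new, idx) + (1 - a) * (d - stoch_grad(x, idx))
    x = x_new
```

The key line is the update of `d`: it blends the new one-sample gradient with a variance-reducing correction term, which is what allows a single gradient evaluation per iteration without batch restarts.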
doi:10.24963/ijcai.2021/345