Zeroth-order Asynchronous Doubly Stochastic Algorithm with Variance Reduction [article]

Bin Gu and Zhouyuan Huo and Heng Huang
2016 arXiv   pre-print
Zeroth-order (derivative-free) optimization attracts considerable attention in machine learning because explicit gradient calculations may be computationally expensive or infeasible. To handle problems that are large in both volume and dimension, asynchronous doubly stochastic zeroth-order algorithms were recently proposed. The convergence rate of existing asynchronous doubly stochastic zeroth-order algorithms is O(1/√T) (as it is for the sequential stochastic zeroth-order optimization algorithms). In this paper, we focus on finite sums of smooth but not necessarily convex functions, and propose an asynchronous doubly stochastic zeroth-order optimization algorithm using the acceleration technique of variance reduction (AsyDSZOVR). Rigorous theoretical analysis shows that the convergence rate can be improved from O(1/√T), the best result of existing algorithms, to O(1/T). Our theoretical results also improve on those of the sequential stochastic zeroth-order optimization algorithms.
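The core idea the abstract describes, applying SVRG-style variance reduction to zeroth-order gradient estimates over a finite sum, can be illustrated with a minimal sequential sketch. The paper's AsyDSZOVR additionally runs these updates asynchronously across workers; the function names, the coordinate-wise central-difference estimator, and the toy quadratic problem below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def zo_grad(f, x, mu=1e-5):
    """Coordinate-wise two-point (central-difference) zeroth-order
    gradient estimate: only function values of f are used."""
    g = np.empty_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = mu
        g[j] = (f(x + e) - f(x - e)) / (2 * mu)
    return g

def zo_svrg(fs, x0, lr=0.1, mu=1e-5, epochs=20, inner=None, seed=0):
    """Sequential zeroth-order SVRG sketch on the finite sum
    F(x) = (1/n) * sum_i f_i(x), f_i given by fs."""
    rng = np.random.default_rng(seed)
    n = len(fs)
    inner = inner or n
    x = x0.copy()
    for _ in range(epochs):
        snap = x.copy()
        # Full zeroth-order gradient at the snapshot (control variate).
        full = np.mean([zo_grad(f, snap, mu) for f in fs], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic zeroth-order gradient:
            # its variance shrinks as x approaches the snapshot.
            v = zo_grad(fs[i], x, mu) - zo_grad(fs[i], snap, mu) + full
            x -= lr * v
    return x

# Toy finite sum: f_i(x) = 0.5 * ||x - a_i||^2, minimized at mean(a_i).
rng = np.random.default_rng(1)
anchors = rng.standard_normal((5, 3))
fs = [lambda x, a=a: 0.5 * np.sum((x - a) ** 2) for a in anchors]
x_star = zo_svrg(fs, np.zeros(3))
# Distance to the true minimizer; shrinks geometrically on this quadratic,
# since the central-difference estimate is exact for quadratic functions.
print(np.max(np.abs(x_star - anchors.mean(axis=0))))
```

The snapshot term `zo_grad(fs[i], snap, mu) - full` acts as a control variate: it keeps the per-iteration update unbiased while driving its variance to zero near the snapshot, which is what allows the O(1/T) rate instead of the O(1/√T) rate of plain stochastic zeroth-order methods.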
arXiv:1612.01425v1