Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems [article]

Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang
2020 arXiv pre-print
We consider nonconvex-concave minimax optimization problems of the form min_x max_{y ∈ 𝒴} f(x, y), where f is strongly-concave in y but possibly nonconvex in x and 𝒴 is a convex and compact set. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of f at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an 𝒪(ε)-stationary point of the function Φ(·) = max_{y ∈ 𝒴} f(·, y). The most popular algorithm to solve this problem is stochastic gradient descent ascent, which requires 𝒪(κ^3ε^-4) stochastic gradient evaluations, where κ is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of 𝒪(κ^3ε^-3), and its dependency on ε is optimal for this problem.
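To make the variance-reduction idea concrete, here is a minimal sketch (not the authors' implementation of SREDA) of a SARAH/SPIDER-style recursive gradient estimator driving simultaneous descent in x and ascent in y on a toy stochastic quadratic saddle problem. The toy objective, step sizes, batch sizes, and epoch length below are all illustrative assumptions; the projection onto 𝒴 and SREDA's inner concave-maximization steps are omitted.

```python
# Illustrative sketch only (assumption, not SREDA as published): a SARAH-style
# recursive stochastic gradient estimator inside a descent (x) / ascent (y) loop
# on the toy problem  f(x, y; xi) = 0.5*||x - xi||^2 + x^T y - 0.5*||y||^2,
# with xi ~ N(0, I); in expectation the saddle point is (x, y) = (0, 0).
import numpy as np

rng = np.random.default_rng(0)
d = 5                        # problem dimension (illustrative)
lr_x, lr_y = 0.05, 0.05      # step sizes (illustrative)
q, T = 10, 500               # epoch length and iteration budget (illustrative)
B_big, B_small = 256, 8      # checkpoint and recursive batch sizes (illustrative)

def stoch_grads(x, y, xis):
    """Unbiased stochastic gradients of f at (x, y), averaged over a batch of xi."""
    gx = (x - xis.mean(axis=0)) + y   # grad_x f
    gy = x - y                        # grad_y f (deterministic for this toy)
    return gx, gy

x, y = rng.standard_normal(d), rng.standard_normal(d)
x_prev, y_prev = x.copy(), y.copy()
vx = vy = None
for t in range(T):
    if t % q == 0:
        # Periodic large-batch gradient to reset the recursive estimator.
        vx, vy = stoch_grads(x, y, rng.standard_normal((B_big, d)))
    else:
        # SARAH-style recursion: evaluate the same small batch at the current
        # and previous iterates and add the difference to the running estimate.
        xis = rng.standard_normal((B_small, d))
        gx_new, gy_new = stoch_grads(x, y, xis)
        gx_old, gy_old = stoch_grads(x_prev, y_prev, xis)
        vx, vy = vx + (gx_new - gx_old), vy + (gy_new - gy_old)
    x_prev, y_prev = x.copy(), y.copy()
    x = x - lr_x * vx    # descent step in x
    y = y + lr_y * vy    # ascent step in y (projection onto a compact 𝒴 omitted)

print("distance to the toy saddle point:", np.linalg.norm(x) + np.linalg.norm(y))
```

The key design point the sketch tries to convey is that between occasional large-batch "checkpoint" gradients, each iteration only needs a small batch evaluated at two consecutive iterates, which is what lets recursive estimators of this type reduce variance and improve the dependence on ε from ε^-4 to ε^-3.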
arXiv:2001.03724v2