Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems
arXiv pre-print, 2020
We consider nonconvex-concave minimax optimization problems of the form min_x max_{y∈𝒴} f(x, y), where f is strongly-concave in y but possibly nonconvex in x, and 𝒴 is a convex and compact set. We focus on the stochastic setting, where only an unbiased stochastic gradient estimate of f is accessible at each iteration. This formulation covers many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an 𝒪(ε)-stationary point.
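The problem class in the abstract can be illustrated with a minimal stochastic gradient descent ascent (GDA) sketch. This is not the paper's recursive-gradient algorithm, only the plain GDA baseline it builds on; the toy objective f(x, y) = (x³ − x)·y − y², the set 𝒴 = [−1, 1], the step sizes, and the Gaussian gradient noise are all illustrative assumptions (f is nonconvex in x and 2-strongly-concave in y, and only noisy gradients are used, matching the setting described above):

```python
import numpy as np

def grad_f(x, y, rng, noise=0.1):
    """Unbiased stochastic gradient of the hypothetical toy objective
    f(x, y) = (x^3 - x) * y - y^2  (nonconvex in x, strongly concave in y).
    Gaussian noise models the stochastic oracle."""
    gx = (3 * x**2 - 1) * y + noise * rng.standard_normal()
    gy = (x**3 - x) - 2 * y + noise * rng.standard_normal()
    return gx, gy

def sgda(x0, y0, steps=2000, lr_x=0.01, lr_y=0.05, seed=0):
    """Plain stochastic GDA: descend on x, ascend on y, and project y
    back onto the convex compact set Y = [-1, 1] after each ascent step."""
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_f(x, y, rng)
        x = x - lr_x * gx                       # descent step on x
        y = np.clip(y + lr_y * gy, -1.0, 1.0)   # projected ascent step on y
    return x, y
```

Because f is strongly concave in y, the inner maximizer y*(x) = (x³ − x)/2 is unique, and after enough iterations the y iterate tracks it while x drifts toward a stationary point of Φ(x) = max_y f(x, y); variance-reduced methods like the one in the title improve on this baseline's sample complexity.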
arXiv:2001.03724v2