Extragradient method with variance reduction for stochastic variational inequalities [article]

Alfredo Iusem, Alejandro Jofré, Roberto I. Oliveira, Philip Thompson
2017 arXiv preprint
We propose an extragradient method with stepsizes bounded away from zero for stochastic variational inequalities, requiring only pseudo-monotonicity. We provide convergence and complexity analyses that allow for an unbounded feasible set, an unbounded operator, and non-uniform oracle variance, and that require no regularization. Alongside the stochastic approximation procedure, we iteratively reduce the variance of the stochastic error. Our method attains the optimal oracle complexity O(1/ϵ^2) (up to a logarithmic term) and a faster rate O(1/K) in terms of the mean (quadratic) natural residual and the D-gap function, where K is the number of iterations required for a given tolerance ϵ > 0. This convergence rate represents an acceleration with respect to the stochastic error. The generated sequence also enjoys a new feature: it is bounded in L^p whenever the stochastic error has a finite p-th moment. Explicit estimates for the convergence rate, the oracle complexity, and the p-moments are given in terms of problem parameters and the distance of the initial iterate to the solution set. Moreover, sharper constants are possible if the variance is uniform over the solution set or the feasible set. Our results provide new classes of stochastic variational inequalities for which a convergence rate of O(1/K) holds in terms of the mean-squared distance to the solution set. Our analysis includes the distributed solution of pseudo-monotone Cartesian variational inequalities under partial coordination of parameters among users of a network.
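To make the scheme concrete, the sketch below illustrates one common way to combine the extragradient iteration with variance reduction via growing mini-batches; it is a minimal illustration under assumed details, not the authors' exact algorithm. It assumes a stochastic oracle `oracle(x, rng)` returning a noisy evaluation of the operator, a Euclidean projection onto the feasible set (a ball here, chosen for the demo), a constant stepsize `alpha` bounded away from zero, and a batch size `batch_size(k)` that grows with the iteration count k so that the averaged oracle's variance shrinks over iterations. The affine operator and all parameter values in the demo are hypothetical.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius} (hypothetical feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def stochastic_extragradient(oracle, proj, x0, alpha, batch_size, num_iters, rng):
    """Variance-reduced stochastic extragradient (illustrative sketch).

    Each iteration averages batch_size(k) oracle calls, so the variance of the
    operator estimate decreases as k grows while the stepsize alpha stays
    bounded away from zero.
    """
    x = x0
    for k in range(num_iters):
        n_k = batch_size(k)
        # Extrapolation step: mini-batch average of the oracle at x.
        g1 = np.mean([oracle(x, rng) for _ in range(n_k)], axis=0)
        z = proj(x - alpha * g1)
        # Update step: fresh mini-batch evaluated at the extrapolated point z.
        g2 = np.mean([oracle(z, rng) for _ in range(n_k)], axis=0)
        x = proj(x - alpha * g2)
    return x

# Hypothetical demo: affine monotone operator F(x) = A x + b observed through
# additive Gaussian noise.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [-1.0, 2.0]])  # positive-definite symmetric part => monotone
b = np.array([-1.0, 0.5])
oracle = lambda x, rng: A @ x + b + 0.1 * rng.standard_normal(x.shape)

x_star = stochastic_extragradient(
    oracle, proj_ball, x0=np.zeros(2), alpha=0.1,
    batch_size=lambda k: 1 + k,          # linearly growing mini-batch
    num_iters=200, rng=rng,
)
print("approximate solution:", x_star)
```

In this kind of scheme, the growing mini-batch is what allows a constant stepsize and the accelerated O(1/K) rate in the residual, at the price of more oracle calls per iteration; the paper's O(1/ϵ^2) oracle complexity (up to a logarithmic term) accounts for this total sampling cost.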
arXiv:1703.00260v1