One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods [article]

Filip Hanzely, Peter Richtárik
2020, arXiv pre-print
We propose a remarkably general variance-reduced method suitable for solving regularized empirical risk minimization problems with either a large number of training examples, a large model dimension, or both. In special cases, our method reduces to several known methods previously thought to be unrelated, such as SAGA, LSVRG, JacSketch, SEGA and ISEGA, along with their arbitrary-sampling and proximal generalizations. However, we also highlight a large number of new specific algorithms with interesting properties. We provide a single theorem establishing linear convergence of the method under smoothness and quasi-strong convexity assumptions. With this theorem we recover the best-known, and in some cases improved, rates for the known methods arising as special cases. As a by-product, we provide the first unified method and theory for stochastic gradient and stochastic coordinate descent type methods.
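To make the variance-reduction idea concrete, here is a minimal sketch of SAGA, one of the special cases the abstract mentions, applied to a least-squares instance of empirical risk minimization. This is an illustration of the classical SAGA estimator only, not the authors' general method; the function name, problem setup, and step-size choice are assumptions for the example.

```python
import numpy as np

def saga_least_squares(A, b, n_iters=10000, seed=0):
    """Minimize (1/2n) * sum_i (a_i^T x - b_i)^2 with the SAGA estimator.

    Maintains a table of the most recent stochastic gradient per example;
    the update g_new - table[i] + mean(table) is unbiased and its variance
    shrinks as the iterates converge, giving a linear rate.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Conservative step size 1/(3L), with L the max per-example smoothness.
    lr = 1.0 / (3.0 * np.max(np.sum(A * A, axis=1)))
    # Gradient table: grad_i = a_i * (a_i^T x - b_i), one row per example.
    table = A * (A @ x - b)[:, None]
    g_avg = table.mean(axis=0)
    for _ in range(n_iters):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ x - b[i])
        # Variance-reduced gradient estimate (uses the old table entry).
        x -= lr * (g_new - table[i] + g_avg)
        # Update the running average and the table in O(d) time.
        g_avg += (g_new - table[i]) / n
        table[i] = g_new
    return x
```

On a consistent system (b = A @ x_star), the iterates converge to x_star; unlike plain SGD with a constant step size, no averaging or step-size decay is needed.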
arXiv:1905.11266v2