Stochastic subGradient Methods with Linear Convergence for Polyhedral Convex Optimization [article]

Tianbao Yang, Qihang Lin
2016 arXiv   pre-print
In this paper, we show that simple Stochastic subGradient Descent methods with multiple Restarting, named RSGD, can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems in which the epigraph of the objective function is a polyhedron, which we refer to as polyhedral convex optimization. Its applications in machine learning include ℓ_1-constrained or ℓ_1-regularized piecewise linear loss minimization and submodular function minimization. To the best of our knowledge, this is the first result on the linear convergence rate of stochastic subgradient methods for non-smooth and non-strongly convex optimization problems.
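
The abstract does not spell out the restart schedule or step sizes, so the following is only a minimal sketch of a restarted stochastic subgradient loop applied to ℓ_1-regularized hinge-loss minimization (a polyhedral convex problem of the kind mentioned above). The function name rsgd, the initial step size eta0, the halving schedule, and the stage length iters_per_stage are illustrative assumptions, not the constants or interface prescribed in the paper.

```python
import numpy as np

def rsgd(X, y, lam=0.1, stages=5, iters_per_stage=1000, eta0=1.0, seed=0):
    """Hedged sketch of restarted stochastic subgradient descent for
    min_w  (1/n) * sum_i max(0, 1 - y_i <x_i, w>) + lam * ||w||_1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)          # starting point of the first stage
    eta = eta0
    for _stage in range(stages):
        w_sum = np.zeros(d)
        for _ in range(iters_per_stage):
            i = rng.integers(n)                               # sample one example
            margin = y[i] * X[i].dot(w)
            g = -y[i] * X[i] if margin < 1 else np.zeros(d)   # hinge-loss subgradient
            g = g + lam * np.sign(w)                          # subgradient of lam*||w||_1
            w = w - eta * g                                   # stochastic subgradient step
            w_sum += w
        w = w_sum / iters_per_stage   # restart the next stage from the stage average
        eta *= 0.5                    # shrink the step size each stage (assumed schedule)
    return w

if __name__ == "__main__":
    # toy usage on synthetic data
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 10))
    y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
    print(rsgd(X, y))
```

The restart-from-the-average step is what distinguishes this scheme from plain stochastic subgradient descent; the geometric decrease of the step size across stages is one common choice for such restart schemes and is used here purely for illustration.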
arXiv:1510.01444v5 fatcat:3u3w4374e5cqhgjfa6aan6oyue