Accelerating Minibatch Stochastic Gradient Descent using Stratified Sampling

Peilin Zhao, Tong Zhang
2014, arXiv preprint
Stochastic Gradient Descent (SGD) is a popular optimization method that has been applied to many important machine learning tasks, such as Support Vector Machines and Deep Neural Networks. To parallelize SGD, minibatch training is often employed. The standard approach is to sample a minibatch uniformly at each step, which often leads to high variance. In this paper we propose a stratified sampling strategy: the whole dataset is divided into clusters with low within-cluster variance, and examples are then drawn from these clusters using a stratified sampling technique. We show that this algorithm can significantly improve the convergence rate. Encouraging experimental results confirm the effectiveness of the proposed method.
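The idea in the abstract can be illustrated with a small sketch. The code below is not the authors' implementation; it is a toy example, with made-up data and function names, showing why drawing a fixed quota from each low-variance cluster (stratified sampling) yields a lower-variance minibatch estimate than uniform sampling. The minibatch mean stands in for the minibatch gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: two well-separated clusters, each with low
# within-cluster variance (the setting the paper assumes).
X = np.concatenate([rng.normal(-5.0, 0.5, size=500),
                    rng.normal(+5.0, 0.5, size=500)])
labels = np.concatenate([np.zeros(500, dtype=int),
                         np.ones(500, dtype=int)])

def uniform_minibatch(X, b):
    """Standard approach: sample b points uniformly without replacement."""
    idx = rng.choice(len(X), size=b, replace=False)
    return X[idx]

def stratified_minibatch(X, labels, b):
    """Allocate minibatch slots to each cluster in proportion to its size."""
    parts = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        n_c = max(1, round(b * len(members) / len(X)))
        parts.append(rng.choice(members, size=n_c, replace=False))
    return X[np.concatenate(parts)]

# Compare the variance of the minibatch mean (a stand-in for the
# minibatch gradient estimate) under the two sampling schemes.
b = 20
uni = [uniform_minibatch(X, b).mean() for _ in range(2000)]
strat = [stratified_minibatch(X, labels, b).mean() for _ in range(2000)]
print("uniform var:   ", np.var(uni))
print("stratified var:", np.var(strat))
```

Under uniform sampling the estimator's variance is driven by the large between-cluster spread, whereas the stratified minibatch fixes each cluster's quota, so only the small within-cluster variance remains; running the script shows a variance reduction of roughly two orders of magnitude on this toy data.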
arXiv:1405.3080v1