An inexact subsampled proximal Newton-type method for large-scale machine learning [article]

Xuanqing Liu, Cho-Jui Hsieh, Jason D. Lee, Yuekai Sun
2017 · arXiv preprint
We propose a fast proximal Newton-type algorithm for minimizing regularized finite sums that returns an ϵ-suboptimal point in Õ(d(n + √(κ d))(1/ϵ)) FLOPS, where n is the number of samples, d is the feature dimension, and κ is the condition number. As long as n > d, the proposed method is more efficient than state-of-the-art accelerated stochastic first-order methods for non-smooth regularizers, which require Õ(d(n + √(κ n))(1/ϵ)) FLOPS. The key idea is to form the subsampled Newton subproblem in a way that preserves the finite-sum structure of the objective, thereby allowing us to leverage recent developments in stochastic first-order methods to solve the subproblem. Experimental results verify that the proposed algorithm outperforms previous algorithms for ℓ_1-regularized logistic regression on real datasets.
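To make the idea concrete, below is a minimal sketch of one inexact subsampled proximal Newton step for ℓ_1-regularized logistic regression. This is an illustrative reconstruction, not the paper's implementation: the subsample size `m`, the inner iteration count, and the use of plain proximal gradient as the inner solver are all assumptions (the paper advocates an accelerated stochastic first-order method for the subproblem, which is where the finite-sum structure pays off).

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def subsampled_prox_newton_step(X, y, x, lam, m=64, inner_iters=50, rng=None):
    """One inexact subsampled proximal Newton step for
        min_x (1/n) sum_i log(1 + exp(-y_i a_i^T x)) + lam * ||x||_1.
    Hypothetical sketch: the inner solver here is plain proximal
    gradient for clarity, not the accelerated stochastic method
    described in the abstract."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    # Full gradient of the smooth part at x
    sig = 1.0 / (1.0 + np.exp(y * (X @ x)))        # sigmoid(-y_i a_i^T x)
    g = -(X.T @ (y * sig)) / n
    # Subsampled Hessian: H_S = (1/|S|) sum_{i in S} w_i a_i a_i^T,
    # with logistic weights w_i = p_i (1 - p_i)
    S = rng.choice(n, size=min(m, n), replace=False)
    p = 1.0 / (1.0 + np.exp(-(X[S] @ x)))
    w = p * (1.0 - p)
    H = (X[S].T * w) @ X[S] / len(S)
    # Inexactly solve the subproblem:
    #   min_d  g^T d + 0.5 d^T H d + lam * ||x + d||_1
    # via proximal gradient with step 1/L, L = ||H||_2
    L = np.linalg.norm(H, 2) + 1e-8
    d_ = np.zeros(d)
    for _ in range(inner_iters):
        grad_model = g + H @ d_
        d_ = soft_threshold(x + d_ - grad_model / L, lam / L) - x
    return x + d_
```

Note that only the Hessian is subsampled; the gradient is exact, which is what keeps the outer iteration a (proximal) Newton-type method rather than a purely stochastic one.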
arXiv:1708.08552v1