Faster p-Norm Regression Using Sparsity [article]

Mehrdad Ghadiri, Richard Peng, Santosh S. Vempala
2021 arXiv pre-print
For a matrix A ∈ ℝ^{n×d} with n ≥ d, we consider the dual problems min_x ‖Ax−b‖_p^p, b ∈ ℝ^n, and min_{A^⊤x=b} ‖x‖_p^p, b ∈ ℝ^d. We improve the runtimes for solving these problems to high accuracy for every p > 1 for sufficiently sparse matrices. We show that recent progress on fast sparse linear solvers can be leveraged to obtain faster-than-matrix-multiplication algorithms for any p > 1, i.e., in time Õ(pn^θ) for some θ < ω, where ω is the matrix multiplication constant. We give the first high-accuracy input-sparsity-time p-norm regression algorithm for solving min_x ‖Ax−b‖_p^p with 1 < p ≤ 2, via a new row sampling theorem for the smoothed p-norm function. This algorithm runs in time Õ(nnz(A) + d^4) for any 1 < p ≤ 2, and in time Õ(nnz(A) + d^θ) for p close to 2, improving on the previous best bound, in which the exponent of d grows with max{p, p/(p−1)}.
arXiv:2109.11537v2 fatcat:arut4ej46zexzdg2rxpi4l7c5e
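To make the problem the abstract studies concrete, here is a minimal sketch of p-norm regression, min_x ‖Ax−b‖_p^p, solved by plain iterative reweighted least squares (IRLS). This is not the paper's algorithm (which uses row sampling and fast sparse solvers to get the stated runtimes); the function name, iteration count, and clamping constant below are illustrative assumptions.

```python
# Minimal IRLS sketch for min_x ||Ax - b||_p^p with 1 < p <= 2.
# NOT the paper's method; only illustrates the objective being solved.
import numpy as np

def p_norm_regression(A, b, p=1.5, iters=50, eps=1e-8):
    """Approximately solve min_x ||Ax - b||_p^p by iterative reweighting."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the p = 2 (least-squares) solution
    for _ in range(iters):
        r = A @ x - b
        # IRLS weights |r_i|^(p-2), clamped away from zero for numerical stability
        w = np.maximum(np.abs(r), eps) ** (p - 2)
        WA = A * w[:, None]
        # Weighted normal equations: (A^T W A) x = A^T W b
        x = np.linalg.solve(A.T @ WA, WA.T @ b)
    return x

# Tiny usage example on synthetic data (sizes are arbitrary)
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
x = p_norm_regression(A, b, p=1.5)
print(np.sum(np.abs(A @ x - b) ** 1.5))  # objective value ||Ax - b||_p^p
```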