A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Fast Proximal Gradient Descent for A Class of Non-convex and Non-smooth Sparse Learning Problems
2019
Conference on Uncertainty in Artificial Intelligence
Non-convex and non-smooth optimization problems are important in statistics and machine learning, but solving such problems is challenging. In this paper, we propose fast proximal gradient descent based methods to solve a class of non-convex and non-smooth sparse learning problems, namely ℓ0 regularization problems. We prove an improved convergence rate of proximal gradient descent on ℓ0 regularization problems, and propose two accelerated versions based on support projection. …
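The abstract only outlines the method at a high level. As a rough illustration of plain (unaccelerated) proximal gradient descent on an ℓ0-regularized objective, here is a minimal sketch; the function names (prox_l0, proximal_gradient_l0, grad_f) are hypothetical, and the paper's improved analysis, accelerated variants, and support projection step are not reproduced here. The proximal operator of the ℓ0 penalty is hard thresholding.

```python
import numpy as np

def prox_l0(v, thresh):
    """Proximal operator of eta * lam * ||x||_0: hard thresholding.

    Entries with magnitude at most `thresh` are set to zero.
    """
    out = v.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

def proximal_gradient_l0(grad_f, x0, lam, eta, n_iters=500):
    """Proximal gradient descent for min_x f(x) + lam * ||x||_0.

    grad_f  : callable returning the gradient of the smooth part f
    eta     : step size (e.g. 1/L for an L-smooth f)
    lam     : ℓ0 regularization weight
    """
    x = x0.copy()
    thresh = np.sqrt(2.0 * lam * eta)  # hard-thresholding level for this prox
    for _ in range(n_iters):
        # gradient step on f, then hard thresholding for the ℓ0 penalty
        x = prox_l0(x - eta * grad_f(x), thresh)
    return x
```

For example, on a sparse least-squares problem one would pass grad_f = lambda x: A.T @ (A @ x - b) with eta set below the reciprocal of the largest eigenvalue of A.T @ A. This is a sketch under stated assumptions, not the authors' implementation.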
dblp:conf/uai/YangY19
fatcat:zpzotiiysnbizi54mrrs7up6ei