Fast projections onto mixed-norm balls with applications
[article]
2012
arXiv
pre-print
We address this deficiency by presenting batch and online (stochastic-gradient) optimization methods, both of which rely on efficient projections onto mixed-norm balls. ...
Joint sparsity offers powerful structural cues for feature selection, especially for variables that are expected to demonstrate a "grouped" behavior. ...
For solving an overall regression problem involving mixed norms, we suggested two main algorithms: spectral projected gradient and stochastic gradient (for separable losses). ...
arXiv:1204.1437v1
fatcat:myndmgnwkjhedopoa4v26ynd7a
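The methods in this entry hinge on fast projections onto mixed-norm balls. As an illustration (not the paper's code), here is a minimal numpy sketch of the standard reduction for the ℓ1,2 case: projecting onto {X : Σ_g ||X_g||₂ ≤ τ} reduces to projecting the vector of group norms onto an ℓ1 ball and rescaling each group; rows of W are taken as the groups for simplicity.

```python
import numpy as np

def project_l1(v, tau):
    # Euclidean projection onto {x : ||x||_1 <= tau} (Duchi et al., 2008).
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def project_l12(W, tau):
    # Projection onto the mixed l1,2 ball: project the group norms onto
    # the l1 ball, then rescale every group to its projected norm.
    norms = np.linalg.norm(W, axis=1)
    new_norms = project_l1(norms, tau)
    scale = np.where(norms > 0, new_norms / np.maximum(norms, 1e-12), 0.0)
    return W * scale[:, None]
```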
Optimization in learning and data analysis
2013
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '13
L-BFGS): use gradient information to maintain an approximation to ∇²f(x); inexact Newton: use an iterative method for linear equations to solve for d. ...
Most famously: sparsity of the unknown vector x (few nonzeros). Low-rank and/or sparsity of an unknown matrix X. "Naturalness" of an image vector u_{ij}, for i, j = 1, 2, ..., N. ...
Given some estimate x̄ of the primal solution and ū of the dual, get a better approximation x(β) by solving a convex quadratic program with bounds, where β is a penalty parameter. ...
doi:10.1145/2487575.2492149
dblp:conf/kdd/Wright13
fatcat:j6edo6tj65fhjil5sg2hqkpvly
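The snippet above contrasts quasi-Newton approximations with inexact Newton steps. A minimal sketch of the latter, assuming only that Hessian-vector products hess_vec(x, v) are available and that ∇²f(x) is positive definite; the linear system is deliberately solved only loosely, which is what makes the step "inexact":

```python
import numpy as np

def cg_solve(hv, b, tol=0.1, maxiter=50):
    # Loose conjugate-gradient solve of H d = b via Hessian-vector products hv.
    d = np.zeros_like(b)
    if np.linalg.norm(b) == 0.0:
        return d                      # zero gradient: already stationary
    r = b.copy()                      # residual b - H d (d = 0 initially)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Hp = hv(p)
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break                     # loose relative-residual stopping rule
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def inexact_newton_step(grad, hess_vec, x):
    # One truncated-Newton direction: solve H(x) d = -grad f(x) loosely.
    g = grad(x)
    return cg_solve(lambda v: hess_vec(x, v), -g)
```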
Chapter 37: Methodologies and Software for Derivative-Free Optimization
[chapter]
2017
Advances and Trends in Optimization with Engineering Applications
Good numerical results on unconstrained problems were also reported for the BC-DFO code [76], an interpolation-based trust-region method developed for bound-constrained optimization (see Section 37.3 ...
An RBF is defined by the composition of a univariate function and a function measuring the distance to a sample point. ...
doi:10.1137/1.9781611974683.ch37
fatcat:v7uiauqijvfmvbtobxpcqyx5py
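To make the RBF definition in the snippet concrete, here is a minimal interpolation sketch (not the BC-DFO code): a Gaussian univariate function composed with the Euclidean distance to each sample. The Gaussian is chosen because its interpolation matrix is nonsingular for distinct points; production DFO solvers typically add a polynomial tail and embed the surrogate in a trust-region loop.

```python
import numpy as np

def rbf_fit(X, f, phi=lambda r: np.exp(-r**2)):
    # Fit s(x) = sum_i lam_i * phi(||x - x_i||) through the samples (X, f).
    # A unit length scale is assumed inside the Gaussian for simplicity.
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(phi(R), f)

def rbf_eval(x, X, lam, phi=lambda r: np.exp(-r**2)):
    # Evaluate the fitted surrogate at a new point x.
    return phi(np.linalg.norm(X - x, axis=-1)) @ lam
```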
Metric Learning from Relative Comparisons by Minimizing Squared Residual
2012
2012 IEEE 12th International Conference on Data Mining
Experimental results suggest that our method consistently outperforms existing methods in terms of clustering accuracy. ...
We also extend our model and algorithm to promote sparsity in the learned metric matrix. ...
Acknowledgement We would like to thank Yunchao Gong for help on conducting experiments. This work was partially supported by NSF IIS-0812464 and NIH R01HG006703. ...
doi:10.1109/icdm.2012.38
dblp:conf/icdm/LiuGZJW12
fatcat:lfi5dvg7b5cb5nixf2lwkc6ehq
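A schematic numpy sketch of the ingredients this entry names (squared residuals of relative comparisons plus a positive-semidefinite metric matrix). This is a generic gradient step, not the paper's exact algorithm, and its sparsity-promoting extension is omitted.

```python
import numpy as np

def mahalanobis_sq(M, x, y):
    # Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y).
    d = x - y
    return d @ M @ d

def psd_project(M):
    # Keep the learned metric valid: clip negative eigenvalues to zero.
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def metric_step(M, comparisons, lr=0.01):
    # comparisons: tuples (a, b, c, d) encoding "d_M(a,b) should be < d_M(c,d)".
    # Each violated comparison contributes a squared residual r^2 to the loss.
    G = np.zeros_like(M)
    for a, b, c, d in comparisons:
        r = mahalanobis_sq(M, a, b) - mahalanobis_sq(M, c, d)
        if r > 0:
            u, v = a - b, c - d
            G += 2.0 * r * (np.outer(u, u) - np.outer(v, v))
    return psd_project(M - lr * G)
```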
Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization
2013
International Conference on Machine Learning
We provide stronger and more general primal-dual convergence results for Frank-Wolfe-type algorithms (a.k.a. conditional gradient) for constrained convex optimization, enabled by a simple framework of ...
Our analysis also holds if the linear subproblems are only solved approximately (as well as if the gradients are inexact), and is proven to be worst-case optimal in the sparsity of the obtained solutions ...
helpful discussions and remarks, and Robert Carnecky for the 3d-visualization. ...
dblp:conf/icml/Jaggi13
fatcat:sjrno5eynbf6jcn5o4ckcdf344
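The "projection-free" and "sparse" claims are easy to see in code. A minimal Frank-Wolfe sketch over an ℓ1 ball, assuming only a gradient oracle: the linear subproblem has a closed-form vertex solution, so no projection is ever computed, and after t steps the iterate has at most t + 1 nonzeros.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, tau, iters=200):
    # Conditional gradient over {x : ||x||_1 <= tau}.
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(g[i])       # vertex minimizing <g, s> over the ball
        x += (2.0 / (t + 2.0)) * (s - x)  # classic 2/(t+2) step size
    return x
```

For least squares, for instance, grad = lambda x: A.T @ (A @ x - b).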
Second-order Conditional Gradient Sliding
[article]
2020
arXiv
pre-print
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly. ...
Constrained second-order convex optimization algorithms are the method of choice when a high-accuracy solution to a problem is needed, due to their local quadratic convergence. ...
We would like to thank Gábor Braun for the helpful discussions, and the anonymous reviewers for their suggestions and comments. ...
arXiv:2002.08907v2
fatcat:dzqmj4bkzjdmxppukm6f35iyli
Nearly Non-Expansive Bounds for Mahalanobis Hard Thresholding
2020
Annual Conference Computational Learning Theory
sparsity level. ...
We further show that such a bound extends to an approximate version of H_{A,k}(w) estimated by the Hard Thresholding Pursuit (HTP) algorithm. ...
Xiao-Tong Yuan would also like to acknowledge the partial support from National Major Project of China for New Generation of AI under Grant No.2018AAA0100400 and Natural Science Foundation of China (NSFC ...
dblp:conf/colt/Yuan020
fatcat:uu4lxden4rayznizreumewvp7y
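For intuition about the operators in this entry, a numpy sketch of hard thresholding and HTP in the plain Euclidean case (metric matrix equal to the identity; A below is a design matrix, not the paper's metric). The Mahalanobis operator H_{A,k}(w) the paper studies is harder to evaluate exactly, which is why the approximate HTP version matters.

```python
import numpy as np

def hard_threshold(w, k):
    # H_k(w): keep the k largest-magnitude entries of w, zero the rest.
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    out[idx] = w[idx]
    return out

def htp(A, b, k, iters=100):
    # Hard Thresholding Pursuit for min ||Ax - b||^2 s.t. ||x||_0 <= k:
    # gradient step, keep the top-k support, then debias on that support.
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        supp = np.flatnonzero(hard_threshold(x - step * A.T @ (A @ x - b), k))
        x = np.zeros(n)
        x[supp] = np.linalg.lstsq(A[:, supp], b, rcond=None)[0]
    return x
```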
An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization
2014
SIAM Journal on Optimization
We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case convergence as well as the computational complexity of this framework, and also analyze its behavior ...
For our analysis, we also use two simple convex functions ω(t) := t − ln(1 + t) for t ≥ 0 and ω∗(t) := −t − ln(1 − t) for t ∈ [0, 1), which are strictly increasing in their domain. ...
We refer to Q^{t_k}_β as the quadratic convergence region of the inexact proximal-Newton iterations (3.3) for solving (1.2). ...
doi:10.1137/130944539
fatcat:x524dg3vbzc3depm2im4khg2la
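The monotonicity claim in the snippet follows directly from the derivatives:

```latex
\omega'(t) = 1 - \frac{1}{1+t} = \frac{t}{1+t} \ge 0 \quad (t \ge 0), \qquad
\omega_*'(t) = -1 + \frac{1}{1-t} = \frac{t}{1-t} \ge 0 \quad (t \in [0,1)),
```

with equality only at t = 0, so both functions are strictly increasing on their domains.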
Non-smooth Variable Projection
[article]
2020
arXiv
pre-print
We propose an inexact adaptive algorithm for solving such problems and analyze its computational complexity. ...
Finally, we show how the theory can be used to design methods for selected problems occurring frequently in machine-learning and inverse problems. ...
We consider an inexact proximal gradient method of the form (D.1)
Algorithm 1.1: Prototype VP algorithm for solving (1.1)
Convex case: To achieve sublinear convergence of the outer iterations for ...
arXiv:1601.05011v6
fatcat:clzrc3peindabob6axrwgp6o7a
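The classic (smooth) variable projection pattern behind this line of work, as a runnable toy: the linear variable is eliminated in closed form by the inner least-squares solve, leaving an outer problem in the nonlinear variable only. The exponential model and all names here are illustrative, not the paper's; the paper's contribution is handling nonsmooth terms and inexact inner solves, which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def vp_objective(theta, t, b):
    # Reduced objective for the toy model b ~ x * exp(-theta * t):
    # the inner linear variable x is projected out in closed form.
    a = np.exp(-theta * t)
    x = (a @ b) / (a @ a)        # inner solve: argmin_x ||a * x - b||^2
    r = a * x - b
    return 0.5 * (r @ r)

# Outer minimization over the nonlinear variable alone.
t = np.linspace(0.0, 1.0, 50)
b = 2.0 * np.exp(-3.0 * t) + 0.01 * np.random.default_rng(0).standard_normal(50)
res = minimize_scalar(lambda th: vp_objective(th, t, b), bounds=(0.0, 10.0), method="bounded")
print(res.x)  # recovered decay rate, close to 3.0
```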
Tailoring optimization algorithms to process applications
1992
Computers and Chemical Engineering
Also, these methods are briefly compared to general purpose algorithms in order to demonstrate the effectiveness of this approach. ...
In particular, the Successive Quadratic Programming (SQP) algorithm has been successful over the past decade; it has also been extended in a number of ways to optimization problems involving several thousand ...
In addition, I am grateful to my colleagues, Jeff Logsdon, Claudia Schmid and Iauw-Bhieng Tjoa for numerous discussions, and for demonstrating this approach on the examples outlined above. ...
doi:10.1016/s0098-1354(09)80011-2
fatcat:iuvs3t4gknd37pugnxtm7yvg64
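SQP remains available off the shelf; a minimal sketch using SciPy's SLSQP implementation on a toy equality-constrained problem with bounds (the problem itself is illustrative, not one of the process examples in the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a smooth cost subject to a nonlinear equality and simple bounds.
obj = lambda x: x[0] ** 2 + x[1] ** 2 + x[0] * x[1]
cons = [{"type": "eq", "fun": lambda x: x[0] * x[1] - 1.0}]
res = minimize(obj, x0=np.array([1.5, 1.5]), method="SLSQP",
               bounds=[(0.1, 10.0), (0.1, 10.0)], constraints=cons)
print(res.x)  # symmetric solution near (1, 1)
```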
An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization
[article]
2014
arXiv
pre-print
We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case analytical complexity of this framework when the proximal subproblems are solved inexactly. ...
To show the merits of our framework, we apply its instances to both synthetic and real-world applications, where it shows advantages over standard interior point methods. ...
When g is a smooth term and projections on Ω are expensive to compute, a sequential convex programming approach such as sequential quadratic programming (SQP) constitutes an efficient strategy for solving ...
arXiv:1311.1756v2
fatcat:wgsfcmke7vcjzjg4lypsx5uh2q
Variable Projection for Nonsmooth Problems
2021
SIAM Journal on Scientific Computing
We extend the approach to problems that include nonsmooth terms, develop an inexact adaptive algorithm that solves projection subproblems inexactly by iterative methods, and analyze its computational complexity ...
Classic examples have exploited closed-form projections and smoothness of the objective function. ...
In particular, we give sufficient conditions for the applicability of an inner-outer proximal gradient method. ...
doi:10.1137/20m1348650
fatcat:zxml6lekfzhexeekgyx3bbvbkq
Proximal-Gradient Algorithms for Tracking Cascades Over Social Networks
2014
IEEE Journal on Selected Topics in Signal Processing
To this end, solvers with complementary strengths are developed by leveraging (pseudo) real-time sparsity-promoting proximal gradient iterations, the improved convergence rate of accelerated variants, or the reduced computational complexity of stochastic gradient descent. ...
In a linear regression context, a related EWLSE was put forth in [1] for adaptive estimation of sparse signals; see also [18] for a projection-based adaptive algorithm. ...
doi:10.1109/jstsp.2014.2317284
fatcat:oqngnyyszngmpi76yxx7vqq7pm
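The first of the complementary solvers mentioned above is a sparsity-promoting proximal gradient iteration; in its simplest (ISTA-style, lasso) form, assuming a fixed design matrix rather than the paper's streaming setting:

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, b, lam, iters=500):
    # Proximal gradient (ISTA) for 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), lam * step)
    return x
```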
Level-set methods for convex optimization
[article]
2016
arXiv
pre-print
A zero-finding procedure, based on inexact function evaluations and possibly inexact derivative information, leads to an efficient solution scheme for the original problem. ...
Convex optimization problems arising in applications often have favorable objective functions and complicated constraints, thereby precluding first-order methods from being immediately applicable. ...
In this case, projected (sub)gradient methods require a least-squares solve for the projection step. ...
arXiv:1602.01506v1
fatcat:i7nvu2eifnedxfk3f624bwr2qm
Level-set methods for convex optimization
2018
Mathematical programming
A zero-finding procedure, based on inexact function evaluations and possibly inexact derivative information, leads to an efficient solution scheme for the original problem. ...
Convex optimization problems arising in applications often have favorable objective functions and complicated constraints, thereby precluding first-order methods from being immediately applicable. ...
In this case, projected (sub)gradient methods require a least-squares solve for the projection step. ...
doi:10.1007/s10107-018-1351-8
fatcat:yyvlqxdt5nfltat6icogcc56vu
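A runnable caricature of the level-set idea in these two entries, for the basis-pursuit-denoise pair: root-find the radius τ at which the value function v(τ) = min_{||x||₁ ≤ τ} ||Ax − b||₂ hits the target σ, evaluating v only inexactly with a projected-gradient inner solver. The papers use more refined (Newton-like) updates with inexactness guarantees; plain bisection is shown here for clarity.

```python
import numpy as np

def project_l1(v, tau):
    # Euclidean projection onto {x : ||x||_1 <= tau} (Duchi et al., 2008).
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - tau)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def value_fn(tau, A, b, iters=300):
    # Inexact evaluation of v(tau) = min ||Ax - b||_2 s.t. ||x||_1 <= tau,
    # by projected gradient; this is the least-squares-like inner solve.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        x = project_l1(x - step * A.T @ (A @ x - b), tau)
    return np.linalg.norm(A @ x - b)

def level_set_root(A, b, sigma, tol=1e-3):
    # v is nonincreasing in tau, so bisection brackets the root v(tau) = sigma.
    lo = 0.0
    hi = np.abs(np.linalg.lstsq(A, b, rcond=None)[0]).sum()  # assumes v(hi) <= sigma
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if value_fn(mid, A, b) > sigma else (lo, mid)
    return 0.5 * (lo + hi)
```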