2,755 Hits in 5.3 sec

Scaling the Convex Barrier with Sparse Dual Algorithms [article]

Alessandro De Palma, Harkirat Singh Behl, Rudy Bunel, Philip H. S. Torr, M. Pawan Kumar
2021
We alleviate this deficiency by presenting two novel dual algorithms: one operates a subgradient method on a small active set of dual variables; the other exploits the sparsity of Frank-Wolfe type optimizers  ...  Tight and efficient neural network bounding is crucial to the scaling of neural network verification systems.  ...  However, the convex relaxation considered in the dual solvers is itself very weak (Ehlers, 2017), hitting what is now commonly referred to as the "convex barrier" (Salman et al., 2019).  ... 
doi:10.48550/arxiv.2101.05844 fatcat:oh57lv3kffgrre44bmf7zqridu
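
As a rough, self-contained illustration of the active-set idea in the snippet (a sketch only, not the authors' solver: A, b, the step schedule, and the active-set rule are all invented here), the following Python runs projected gradient ascent, a smooth stand-in for the paper's subgradient method, on the dual of min_x ½‖x‖² s.t. Ax ≤ b, updating only a small set of "active" dual variables per iteration:

    # Sketch: ascent on the dual g(lam) = -0.5*||A^T lam||^2 - b^T lam
    # of min_x 0.5*||x||^2 s.t. Ax <= b, touching only an "active set" of duals.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 200, 50
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m) + 1.0

    lam = np.zeros(m)                        # dual variables, lam >= 0
    for t in range(100):
        grad = -A @ (A.T @ lam) - b          # gradient of the concave dual
        active = np.flatnonzero((lam > 0) | (grad > 0))   # duals worth moving
        lam[active] = np.maximum(0.0, lam[active] + 0.1 / (t + 1) * grad[active])

    x = -A.T @ lam                           # primal point recovered from duals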

Page 8092 of Mathematical Reviews Vol. , Issue 99k [page]

1999 Mathematical Reviews  
We provide an alternative definition of self-scaled barriers and then conclude with a discussion of the scalings of the variables which keep the underlying convex cone invariant.”  ...  Tichatschke (Trier) 99k:90119 90C25 52A41 90C60 Tunçel, Levent (3-WTRLM-CB; Waterloo, ON) Primal-dual symmetry and scale invariance of interior-point algorithms for convex optimization.  ... 

Linear matrix inequalities with chordal sparsity patterns and applications to robust quadratic optimization

Martin S. Andersen, Lieven Vandenberghe, Joachim Dahl
2010 2010 IEEE International Symposium on Computer-Aided Control System Design  
The algorithms take advantage of fast recursive algorithms for evaluating the function values and derivatives for the logarithmic barrier functions of the cone of positive semidefinite matrices with a given chordal sparsity pattern, and of the corresponding dual cone.  ...  This is an important property of chordal graphs, and it is the basis of the chordal matrix algorithms for the problems described henceforth. 2) Value and gradient of dual barrier: The barrier for the cone  ... 
doi:10.1109/cacsd.2010.5612788 dblp:conf/cacsd/AndersenVD10 fatcat:effa7wzdynenvd6ljrc5jmvl3u
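
For the standard log-det barrier, the function value and gradient the snippet mentions come straight from a Cholesky factorization; a minimal dense Python sketch (a random positive definite matrix stands in for one with a chordal sparsity pattern, and numpy replaces the paper's fast recursive routines) is:

    # -log det X via Cholesky: log det X = 2 * sum(log(diag(L))) where X = L L^T.
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((6, 6))
    X = M @ M.T + 6 * np.eye(6)                 # positive definite test matrix

    L = np.linalg.cholesky(X)
    barrier = -2.0 * np.log(np.diag(L)).sum()   # value of -log det X
    grad = -np.linalg.inv(X)                    # gradient of -log det X is -X^{-1}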

An Interior-Point Method for Large-Scale ℓ1-Regularized Least Squares

Seung-Jean Kim, K. Koh, M. Lustig, Stephen Boyd, Dimitry Gorinevsky
2007 IEEE Journal on Selected Topics in Signal Processing  
It can efficiently solve large dense problems that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms.  ...  His current research interests include convex optimization with engineering applications, large-scale optimization, robust optimization, computational finance, machine learning, and statistics.  ...  ACKNOWLEDGMENT The authors are grateful to the anonymous reviewers, E. Candès, M. Calder, J. Duchi, and M. Grant, for helpful comments.  ... 
doi:10.1109/jstsp.2007.910971 fatcat:g4yvsd5uhbfc7oivrwbbtwmpna
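
The problem this entry refers to is minimize ‖Ax − b‖² + λ‖x‖₁. Purely for illustration (this is proximal gradient / ISTA on synthetic data, not the paper's truncated-Newton interior-point method):

    # ISTA for min ||A x - b||^2 + lam * ||x||_1 on a small synthetic instance.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 300))
    x_true = np.zeros(300); x_true[:5] = 1.0        # sparse ground truth
    b = A @ x_true + 0.01 * rng.standard_normal(100)

    lam = 1.0
    step = 0.5 / np.linalg.norm(A, 2) ** 2          # 1/L with L = 2*||A||_2^2
    x = np.zeros(300)
    for _ in range(500):
        z = x - step * (2 * A.T @ (A @ x - b))      # gradient step on ||Ax-b||^2
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold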

A sparse proximal implementation of the LP dual active set algorithm

Timothy A. Davis, William W. Hager
2006 Mathematical programming  
We present an implementation of the LP Dual Active Set Algorithm (LP DASA) based on a quadratic proximal approximation, a strategy for dropping inactive equations from the constraints, and recently developed algorithms for updating a sparse Cholesky factorization after a low-rank change.  ...  In particular, the elegant reformulation of the LP version of Algorithm 2 in terms of least squares problems was suggested by a referee.  ... 
doi:10.1007/s10107-006-0017-0 fatcat:4sndvoe7pzflnevhd4etgpllte
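
The "low-rank change" machinery the abstract mentions can be illustrated with the textbook dense rank-one Cholesky update; Davis and Hager's contribution is the sparse analogue, whereas this O(n²) sketch ignores sparsity entirely:

    # Given A = L L^T, compute the Cholesky factor of A + w w^T in O(n^2).
    import numpy as np

    def chol_update(L, w):
        L, w = L.copy(), w.copy()
        for k in range(w.size):
            r = np.hypot(L[k, k], w[k])               # new diagonal entry
            c, s = r / L[k, k], w[k] / L[k, k]
            L[k, k] = r
            L[k+1:, k] = (L[k+1:, k] + s * w[k+1:]) / c
            w[k+1:] = c * w[k+1:] - s * L[k+1:, k]
        return L

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
    w = rng.standard_normal(5)
    L2 = chol_update(np.linalg.cholesky(A), w)
    assert np.allclose(L2 @ L2.T, A + np.outer(w, w))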

Page 1878 of Mathematical Reviews Vol. , Issue 95c [page]

1995 Mathematical Reviews  
The algorithms are designed to exploit features of primal and dual decomposability of the Lagrangian, which are typically available in a large-scale setting, and they are open to considerable parallelization  ...  The key assumption there is that for problems with sparse structure P is reasonably well approximated by ∑ᵢ Pᵢ. N. I.

Page 668 of Mathematical Reviews Vol. , Issue 2004a [page]

2004 Mathematical Reviews  
Convex optimization deals with the minimization of a convex function over a convex set.  ...  The algorithm is motivated by function approximation using sparse combinations of basis functions as well as some of its variants.  ... 

Page 4394 of Mathematical Reviews Vol. , Issue 95g [page]

1995 Mathematical Reviews  
First, we introduce a primal-dual algorithmic framework based on the logarithmic barrier function method, where the solution of the linear systems is performed by a Krylov-subspace method.  ...  H. (4-LNDIC-PR; London) Computational experience with several methods for large sparse convex quadratic programming.  ... 

Implementation of nonsymmetric interior-point methods for linear optimization over sparse matrix cones

Martin S. Andersen, Joachim Dahl, Lieven Vandenberghe
2010 Mathematical Programming Computation  
...  pattern and its dual cone, the cone of chordal sparse matrices that have a positive semidefinite completion.  ...  The implementation takes advantage of fast recursive algorithms for evaluating the function values and derivatives of the logarithmic barrier functions for these cones.  ... 
doi:10.1007/s12532-010-0016-2 fatcat:2nrkpvzsfrb47cu2m6bnl5soua

A primal-dual interior-point algorithm for nonsymmetric exponential-cone optimization

Joachim Dahl, Erling D. Andersen
2021 Mathematical programming  
We specialize Tunçel's primal-dual scalings for the important case of 3-dimensional exponential cones, resulting in a practical algorithm with good numerical performance, on level with standard symmetric  ...  It is a generalization of the famous algorithm suggested by Nesterov and Todd for the symmetric conic case, and uses primal-dual scalings for nonsymmetric cones proposed by Tunçel.  ...  One such example is the nonsymmetric cone of semidefinite matrices with sparse chordal structure [32], which could extend primal-dual solvers like MOSEK with the ability to solve large sparse semidefinite  ... 
doi:10.1007/s10107-021-01631-4 fatcat:akr3wt4zengfdeurlakyxyir7m
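
For reference, the 3-dimensional exponential cone that the scalings specialize to is commonly written as follows (one standard convention; coordinate orderings differ between references, and this definition is not quoted from the paper):

    K_{\exp} = \operatorname{cl}\left\{ x \in \mathbb{R}^3 \;:\; x_1 \ge x_2\, e^{x_3 / x_2},\ x_2 > 0 \right\}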

An Efficient Message Filtering Strategy Based on Asynchronous ADMM with L1 Regularization

Jiafeng Zhang
2019 Journal of Physics, Conference Series  
With the growth of data scale, distributed machine learning has received more and more attention.  ...  Experiments on large-scale sparse data show that our algorithm can effectively reduce message traffic and reach convergence in a shorter time.  ...  message filtering strategy: In distributed optimization with a large-scale sparse dataset, the dimensions of the model parameters will be very high.  ... 
doi:10.1088/1742-6596/1284/1/012066 fatcat:q4pkhpcwendufpcrstv3gmvxou
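
For orientation, the three standard (serial, synchronous) ADMM steps that the paper's asynchronous, message-filtered variant builds on look like this for an ℓ1-regularized least-squares objective; the data and penalty ρ below are invented:

    # Serial ADMM for min 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x = z.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 40)); b = rng.standard_normal(80)
    lam, rho = 0.5, 1.0

    x = np.zeros(40); z = np.zeros(40); u = np.zeros(40)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(40))   # cache the x-update solve
    for _ in range(200):
        x = Q @ (A.T @ b + rho * (z - u))           # x-update: ridge solve
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # z-update
        u = u + x - z                               # scaled dual update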

Page 4076 of Mathematical Reviews Vol. , Issue 92g [page]

1992 Mathematical Reviews  
The method also makes use of sparse matrix technology. In Section 6 the numerical experience with the algorithm is documented.  ...  Summary: “We present a primal polynomial-time barrier function algorithm for convex quadratic programming.  ... 

Recent Developments in Interior-Point Methods [chapter]

Stephen J. Wright
2000 IFIP Advances in Information and Communication Technology  
In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite  ...  The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming.  ...  Powell and the other organizers of the IFIP TC7 '99 conference for arranging a most enjoyable and interesting meeting, and for a close reading of the paper which resulted in many improvements.  ... 
doi:10.1007/978-0-387-35514-6_14 fatcat:chzlfupe7bbhbdyynhatlckjcu

On Optimal Frame Conditioners [article]

Chae A. Clark, Kasso A. Okoudjou
2015 arXiv   pre-print
In this paper we reformulate the scalability problem as a convex optimization question.  ...  In particular, we present examples of various formulations of the problem along with numerical results obtained by using our methods on randomly generated frames.  ...  Forming the gradient descent algorithm with step size η results in Algorithm 1 (gradient descent w.r.t. µ): while not converged do µ_{k+1} ← µ_k − η·(Lµ_k − b) end while; and Algorithm 2 (gradient descent w.r.t. λ): while  ... 
arXiv:1501.06494v1 fatcat:spqxypmnabestjx3p476p6o7oy
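
The µ-iteration quoted in the snippet is plain gradient descent on the quadratic ½µᵀLµ − bᵀµ; a runnable toy version (L and b are invented stand-ins, since the snippet does not define them) is:

    # Toy version of the snippet's Algorithm 1: mu <- mu - eta*(L @ mu - b).
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((10, 10))
    L = M @ M.T + 10 * np.eye(10)        # positive definite stand-in
    b = rng.standard_normal(10)

    eta = 1.0 / np.linalg.norm(L, 2)     # safe step size, below 2/lambda_max
    mu = np.zeros(10)
    for _ in range(1000):
        mu = mu - eta * (L @ mu - b)     # converges to the solution of L mu = b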

Preface

Alexandre d'Aspremont, Francis Bach, Inderjit S. Dhillon, Bin Yu
2010 Mathematical programming  
At the same time, machine learning provides optimization with an ever larger array of new problems and challenging data sets: ℓ1-penalized least-squares and the NETFLIX problem being two recent examples  ...  that significant advances in computing power have allowed mathematical programming to start attacking realistically large statistical problems, and statisticians to consider sophisticated optimization algorithms  ...  primal-dual subgradient method for nonsmooth convex optimization problems, where the feasible set is described by a self-concordant barrier.  ... 
doi:10.1007/s10107-010-0424-0 fatcat:vg2jfbgvt5b2hnat7emkej7miy
Showing results 1 — 15 out of 2,755 results