A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
We alleviate this deficiency by presenting two novel dual algorithms: one operates a subgradient method on a small active set of dual variables, the other exploits the sparsity of Frank-Wolfe type optimizers ... Tight and efficient neural network bounding is crucial to the scaling of neural network verification systems. ... However, the convex relaxation considered in the dual solvers is itself very weak (Ehlers, 2017), hitting what is now commonly referred to as the "convex barrier" (Salman et al., 2019). ...doi:10.48550/arxiv.2101.05844 fatcat:oh57lv3kffgrre44bmf7zqridu
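The snippet describes the active-set subgradient scheme only in outline. As a generic illustration (not the paper's actual algorithm), a projected supergradient ascent on the Lagrangian dual of min ½‖x‖² s.t. Ax ≤ b that updates only an "active" subset of multipliers — those currently positive or with an ascent direction — might look like:

```python
import numpy as np

def active_set_subgradient_dual(A, b, eta=0.5, iters=100):
    """Projected (super)gradient ascent on the Lagrangian dual of
        min 0.5*||x||^2  s.t.  A x <= b,
    touching only a small 'active set' of dual coordinates per step.
    Illustrative sketch; problem and names are chosen for the example."""
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x = -A.T @ lam                  # inner minimizer of the Lagrangian
        g = A @ x - b                   # dual supergradient
        active = (lam > 0) | (g > 0)    # coordinates worth updating
        lam[active] = np.maximum(0.0, lam[active] + eta * g[active])
    return lam, -A.T @ lam
```

With A = I and b = (−1, −1) the iteration recovers λ = (1, 1) and x = (−1, −1), consistent with the KKT conditions of this toy problem.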
We provide an alternative definition of self-scaled barriers and then conclude with a discussion of the scalings of the variables which keep the underlying convex cone invariant.” ... Tichatschke (Trier) 99k:90119 90C25 52A41 90C60 Tunçel, Levent (3-WTRLM-CB; Waterloo, ON) Primal-dual symmetry and scale invariance of interior-point algorithms for convex optimization. ...
The algorithms take advantage of fast recursive algorithms for evaluating the function values and derivatives for the logarithmic barrier functions of the cone of positive semidefinite matrices with a ... given chordal sparsity pattern, and of the corresponding dual cone. ... This is an important property of chordal graphs, and it is the basis of the chordal matrix algorithms for the problems described henceforth. 2) Value and gradient of dual barrier: The barrier for the cone ...doi:10.1109/cacsd.2010.5612788 dblp:conf/cacsd/AndersenVD10 fatcat:effa7wzdynenvd6ljrc5jmvl3u
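In the dense case, the barrier value and gradient that the snippet refers to reduce to a log-determinant and a matrix inverse. The following baseline (a sketch, not the paper's fast recursive chordal algorithm) shows the quantities being computed:

```python
import numpy as np

def psd_log_barrier(X):
    """Value and gradient of the log-det barrier F(X) = -log det X for the
    PSD cone. Dense baseline via a Cholesky factorization; the chordal
    algorithms in the paper evaluate the same quantities recursively
    without forming X^{-1} explicitly."""
    L = np.linalg.cholesky(X)                   # raises if X is not PD
    value = -2.0 * np.sum(np.log(np.diag(L)))   # log det X = 2*sum(log L_ii)
    grad = -np.linalg.inv(X)                    # grad F(X) = -X^{-1}
    return value, grad
```

For X = diag(1, 2) this returns −log 2 and −diag(1, 1/2), which is easy to verify by hand.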
It can efficiently solve large dense problems, that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. ... His current research interests include convex optimization with engineering applications, large-scale optimization, robust optimization, computational finance, machine learning, and statistics. ... ACKNOWLEDGMENT The authors are grateful to the anonymous reviewers, E. Candès, M. Calder, J. Duchi, and M. Grant, for helpful comments. ...doi:10.1109/jstsp.2007.910971 fatcat:g4yvsd5uhbfc7oivrwbbtwmpna
We present an implementation of the LP Dual Active Set Algorithm (LP DASA) based on a quadratic proximal approximation, a strategy for dropping inactive equations from the constraints, and recently developed ... algorithms for updating a sparse Cholesky factorization after a low-rank change. ... In particular, the elegant reformulation of the LP version of Algorithm 2 in terms of least squares problems was suggested by a referee. ...doi:10.1007/s10107-006-0017-0 fatcat:4sndvoe7pzflnevhd4etgpllte
The algorithms are designed to exploit features of primal and dual decomposability of the Lagrangian, which are typically available in a large-scale setting, and they are open to considerable parallelization ... The key assumption there is that for problems with sparse structure P is reasonably well approximated by ...
Convex optimization deals with the minimization of a convex function over a convex set. ... The algorithm is motivated by function approximation using sparse combinations of basis functions, as well as some of its variants. ...
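The snippet does not name the algorithm, but "sparse combinations of basis functions" is the hallmark of Frank-Wolfe (conditional gradient) methods: each step adds at most one vertex, so k steps yield a k-sparse combination. A minimal sketch over an ℓ1-ball, with the objective and all names chosen purely for illustration:

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=200):
    """Frank-Wolfe for min 0.5*||A x - b||^2 over the l1-ball ||x||_1 <= tau.
    Each step moves toward a single vertex +-tau*e_i, so after k steps the
    iterate is a sparse combination of at most k basis columns of A."""
    x = np.zeros(A.shape[1])
    for k in range(iters):
        grad = A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))     # linear minimization oracle
        s = np.zeros(A.shape[1])
        s[i] = -tau * np.sign(grad[i])  # best vertex of the l1-ball
        gamma = 2.0 / (k + 2.0)         # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x
```

The iterate always stays feasible (a convex combination of ℓ1-ball vertices), which is what makes the method attractive when projections are expensive.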
First, we introduce a primal-dual algorithmic framework based on the logarithmic barrier function method, where the solution of the linear systems is performed by a Krylov-subspace method. ... H. (4-LNDIC-PR; London) Computational experience with several methods for large sparse convex quadratic programming. ...
pattern and its dual cone, the cone of chordal sparse matrices that have a positive semidefinite completion. ... The implementation takes advantage of fast recursive algorithms for evaluating the function values and derivatives of the logarithmic barrier functions for these cones. ... the original author(s) and source are credited. ...doi:10.1007/s12532-010-0016-2 fatcat:2nrkpvzsfrb47cu2m6bnl5soua
We specialize Tunçel's primal-dual scalings for the important case of 3 dimensional exponential-cones, resulting in a practical algorithm with good numerical performance, on level with standard symmetric ... It is a generalization of the famous algorithm suggested by Nesterov and Todd for the symmetric conic case, and uses primal-dual scalings for nonsymmetric cones proposed by Tunçel. ... One such example is the nonsymmetric cone of semidefinite matrices with sparse chordal structure, which could extend primal-dual solvers like MOSEK with the ability to solve large sparse semidefinite ...doi:10.1007/s10107-021-01631-4 fatcat:akr3wt4zengfdeurlakyxyir7m
With the increase in data scale, distributed machine learning has received more and more attention. ... Experiments on large-scale sparse data show that our algorithm can effectively reduce the traffic of messages and make the algorithm reach convergence in a shorter time. ... message filtering strategy: In distributed optimization with a large-scale sparse dataset, the dimensions of the model parameters will be very high. ...doi:10.1088/1742-6596/1284/1/012066 fatcat:q4pkhpcwendufpcrstv3gmvxou
The method also makes use of sparse matrix technology. In Section 6 the numerical experience with the algorithm is documented. ... Summary: “We present a primal polynomial-time barrier function algorithm for convex quadratic programming. ...
IFIP Advances in Information and Communication Technology
In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite ... The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. ... Powell and the other organizers of the IFIP TC7 '99 conference for arranging a most enjoyable and interesting meeting, and for a close reading of the paper which resulted in many improvements. ...doi:10.1007/978-0-387-35514-6_14 fatcat:chzlfupe7bbhbdyynhatlckjcu
In this paper we reformulate the scalability problem as a convex optimization question. ... In particular, we present examples of various formulations of the problem along with numerical results obtained by using our methods on randomly generated frames. ... Forming the gradient descent algorithm with step size η results in:

Algorithm 1 Gradient Descent w.r.t. µ
    while not converged do
        µ_{k+1} ← µ_k − η · (Lµ_k − b)
    end while

Algorithm 2 Gradient Descent w.r.t. λ
    while ...

arXiv:1501.06494v1 fatcat:spqxypmnabestjx3p476p6o7oy
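Assuming L is symmetric positive definite (an assumption for this sketch, not stated in the snippet), the update µ_{k+1} = µ_k − η(Lµ_k − b) is plain gradient descent on ½µᵀLµ − bᵀµ and converges to the solution of Lµ = b whenever η < 2/λ_max(L). It can be run directly:

```python
import numpy as np

def gradient_descent_mu(L, b, eta, tol=1e-10, max_iter=10000):
    """Gradient descent matching the update mu_{k+1} = mu_k - eta*(L mu_k - b)
    from Algorithm 1 of the snippet. For symmetric positive definite L this
    converges to the solution of L mu = b when eta < 2/lambda_max(L)."""
    mu = np.zeros_like(b)
    for _ in range(max_iter):
        r = L @ mu - b                 # gradient of 0.5*mu'L mu - b'mu
        if np.linalg.norm(r) < tol:    # the 'while not converged' test
            break
        mu = mu - eta * r
    return mu
```

For L = diag(2, 1), b = (2, 1), and η = 0.5 the iterates converge to µ = (1, 1), the solution of Lµ = b.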
At the same time, machine learning provides optimization with an ever larger array of new problems and challenging data sets: ℓ1-penalized least-squares and the NETFLIX problem being two recent examples ... that significant advances in computing power have allowed mathematical programming to start attacking realistically large statistical problems, and statisticians to consider sophisticated optimization algorithms ... primal-dual subgradient method for nonsmooth convex optimization problems, where the feasible set is described by a self-concordant barrier. ...doi:10.1007/s10107-010-0424-0 fatcat:vg2jfbgvt5b2hnat7emkej7miy
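As a concrete instance of the ℓ1-penalized least-squares problem mentioned in the snippet, ISTA (a standard proximal-gradient method, chosen here purely for illustration) alternates a gradient step on the smooth part with soft-thresholding:

```python
import numpy as np

def ista(A, b, lam, step, iters=500):
    """ISTA for the l1-penalized least-squares problem
        min_x 0.5*||A x - b||^2 + lam*||x||_1.
    'step' should be at most 1/||A'A||_2 for guaranteed convergence."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return x
```

For orthogonal A the method reduces to soft-thresholding the back-projected data, which makes small cases easy to check by hand.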
Showing results 1 — 15 out of 2,755 results