
Scalable Preconditioning of Block-Structured Linear Algebra Systems using ADMM [article]

Jose S. Rodriguez, Carl D. Laird, Victor M. Zavala
2019 arXiv   pre-print
We study the solution of block-structured linear algebra systems arising in optimization by using iterative solution techniques.  ...  Our approach uses a Krylov solver (GMRES) that is preconditioned with the alternating direction method of multipliers (ADMM).  ...  In this work, we provide a detailed derivation of this ADMM-GMRES approach and test its performance in the context of block-structured linear algebra systems.  ... 
arXiv:1904.11003v1 fatcat:fis24khdjja3xl5f252ywr6nse
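The snippet above describes using an inner splitting method as a preconditioner for GMRES. The following is a minimal, hypothetical sketch of that general pattern in SciPy, not the authors' ADMM-GMRES implementation: a block-wise (block-Jacobi) solve stands in for the ADMM sweep, and the problem data are synthetic.

```python
# Hedged sketch: GMRES preconditioned by an inexpensive inner solve.
# A block-Jacobi solve stands in here for the ADMM sweep described in the paper.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
nb, bs = 8, 50                      # 8 diagonal blocks of size 50
blocks = [sp.random(bs, bs, density=0.2, random_state=rng) + 10 * sp.eye(bs)
          for _ in range(nb)]
A = sp.block_diag(blocks, format="csr")
A = A + 0.1 * sp.random(nb * bs, nb * bs, density=0.001, random_state=rng)  # weak coupling
b = rng.standard_normal(nb * bs)

# Factor each block once; the preconditioner applies block-wise solves.
lus = [spla.splu(blk.tocsc()) for blk in blocks]

def apply_prec(r):
    """Approximate A^{-1} r by ignoring the off-block coupling."""
    out = np.empty_like(r)
    for i, lu in enumerate(lus):
        out[i * bs:(i + 1) * bs] = lu.solve(r[i * bs:(i + 1) * bs])
    return out

M = spla.LinearOperator(A.shape, matvec=apply_prec)
x, info = spla.gmres(A, b, M=M, restart=50, maxiter=200)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```

The point of the sketch is only the wiring: any approximate solve, including one sweep of an ADMM-type scheme on the block-structured system, can be exposed to GMRES through the M operator.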

PRESAS: Block-Structured Preconditioning of Iterative Solvers within a Primal Active-Set Method for fast MPC [article]

Rien Quirynen, Stefano Di Cairano
2020 arXiv   pre-print
Model predictive control (MPC) for linear dynamical systems requires solving an optimal control structured quadratic program (QP) at each sampling instant.  ...  Three different block-structured preconditioning techniques are presented and their numerical properties are studied further.  ...  Namely, PRESAS enjoys a favorable computational complexity per iteration based on its exploitation of the block sparsity structure and its use of iterative linear algebra routines.  ... 
arXiv:1912.02122v2 fatcat:bvdj3eglhjaijnjokpbgapbhxy
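To make the "optimal control structured QP" concrete, here is a small illustrative sketch of how the block-diagonal Hessian and banded dynamics constraints of a linear MPC problem can be assembled; all dimensions and matrices are made up, and this is not the PRESAS code.

```python
# Hedged sketch: assembling the block-sparse QP that arises in linear MPC,
#   min 0.5 z'Hz + g'z  s.t.  Gz = c (stacked dynamics),
# whose stage-wise structure solvers like PRESAS exploit.
import numpy as np
import scipy.sparse as sp

nx, nu, N = 4, 2, 20                      # states, inputs, horizon
A = np.eye(nx) + 0.01 * np.random.randn(nx, nx)
B = 0.01 * np.random.randn(nx, nu)
Q, R = np.eye(nx), 0.1 * np.eye(nu)

# Decision vector z = (x_0, u_0, x_1, u_1, ..., x_N): block-diagonal Hessian.
H = sp.block_diag([sp.block_diag([Q, R]) for _ in range(N)] + [Q], format="csr")

# Equality constraints x_{k+1} = A x_k + B u_k give a banded constraint matrix.
rows = []
for k in range(N):
    row = sp.lil_matrix((nx, H.shape[0]))
    row[:, k * (nx + nu):(k + 1) * (nx + nu)] = np.hstack([A, B])
    row[:, (k + 1) * (nx + nu):(k + 1) * (nx + nu) + nx] = -np.eye(nx)
    rows.append(row)
G = sp.vstack(rows, format="csr")
print("Hessian nnz:", H.nnz, " constraint matrix shape:", G.shape)
```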

Fast Solution Methods for Convex Quadratic Optimization of Fractional Differential Equations [article]

Spyridon Pougkakiotis, John W. Pearson, Santolo Leveque, Jacek Gondzio
2020 arXiv   pre-print
Discretized versions of FDEs involve large dense linear systems. In order to overcome this difficulty, we design a recursive linear algebra, which is based on the Fast Fourier Transform (FFT).  ...  We develop an Alternating Direction Method of Multipliers (ADMM) framework, which uses preconditioned Krylov subspace solvers for the resulting sub-problems.  ...  SL acknowledges financial support from a School of Mathematics PhD studentship at the University of Edinburgh, and JG acknowledges support from the EPSRC grant EP/N019652/1.  ... 
arXiv:1907.13428v4 fatcat:5ytqsreurbedbjgqu2pkunucem
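The dense linear systems mentioned above typically have Toeplitz structure after discretizing the fractional operator, which is what makes FFT-based linear algebra possible. Below is a standard, self-contained sketch of a Toeplitz matrix-vector product via circulant embedding and the FFT; it illustrates the general technique, not the paper's specific recursive scheme.

```python
# Hedged sketch: the FFT-based Toeplitz matrix-vector product that underlies
# fast linear algebra for discretized fractional differential operators.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r by x,
    by embedding it in a circulant matrix and using the FFT (O(n log n))."""
    n = len(x)
    # First column of the size-2n circulant embedding (the extra entry is free).
    col = np.concatenate([c, [0.0], r[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Quick check against the dense product.
rng = np.random.default_rng(1)
c = rng.standard_normal(6)
r = np.concatenate([[c[0]], rng.standard_normal(5)])
x = rng.standard_normal(6)
assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```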

Solving Variational Inequalities and Cone Complementarity Problems in Non‐Smooth Dynamics using the Alternating Direction Method of Multipliers

Alessandro Tasora, Dario Mangoni, Simone Benatti, Rinaldo Garziera
2021 International Journal for Numerical Methods in Engineering  
We ground our algorithm on the Alternating Direction Method of Multipliers (ADMM), an efficient and robust optimization method that draws on few computational primitives.  ...  To improve computational performance, we reformulated the original ADMM scheme to exploit the sparsity of constraint Jacobians, and we added optimizations such as warm starting and adaptive  ...  that is the solution of the linear system.  ... 
doi:10.1002/nme.6693 fatcat:o4hs7hwhofbt7l2fljlggu5xqm
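As a reference point for the ADMM primitives the abstract refers to, here is a textbook ADMM loop for a convex QP with a cone constraint (the nonnegative orthant); the problem data are synthetic, and the sketch omits the paper's sparsity exploitation, warm starting, and adaptive updates.

```python
# Hedged sketch: plain ADMM for  min 0.5 x'Px + q'x  s.t. x >= 0,
# via the splitting x = z with z constrained to the cone.
import numpy as np

def admm_qp_nonneg(P, q, rho=1.0, iters=200):
    n = len(q)
    x = z = u = np.zeros(n)
    K = np.linalg.cholesky(P + rho * np.eye(n))      # factor once, reuse
    for _ in range(iters):
        rhs = -q + rho * (z - u)
        x = np.linalg.solve(K.T, np.linalg.solve(K, rhs))   # (P + rho I) x = rhs
        z = np.maximum(x + u, 0.0)                   # projection onto the cone
        u = u + x - z                                # scaled dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
P = A @ A.T + np.eye(5)
q = rng.standard_normal(5)
print(admm_qp_nonneg(P, q))
```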

GPU acceleration of ADMM for large-scale quadratic programming

Michel Schubiger, Goran Banjac, John Lygeros
2020 Journal of Parallel and Distributed Computing  
The alternating direction method of multipliers (ADMM) is a powerful operator splitting technique for solving structured convex optimization problems.  ...  We build our solver on top of OSQP, a state-of-the-art implementation of ADMM for quadratic programming.  ...  Acknowledgment We are grateful to Samuel Balula for helpful discussions and managing the hardware used in this work.  ... 
doi:10.1016/j.jpdc.2020.05.021 fatcat:jqgc6h34jvdrtoawi57shq7wse
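OSQP, the CPU solver the paper builds on, is a real open-source package; the following minimal usage sketch (the standard small demo QP) assumes the osqp Python package is installed. The GPU implementation described in the paper targets the same QP formulation.

```python
# Hedged sketch: solving a small QP of the form
#   min 0.5 x'Px + q'x  s.t.  l <= Ax <= u
# with OSQP's Python interface (assumes `pip install osqp`).
import numpy as np
import scipy.sparse as sp
import osqp

P = sp.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sp.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u, verbose=False)
res = prob.solve()
print(res.x, res.info.status)
```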

Nonlinear programming strategies on high-performance computers

Jia Kang, Naiyuan Chiang, Carl D. Laird, Victor M. Zavala
2015 2015 54th IEEE Conference on Decision and Control (CDC)  
We discuss structured nonlinear programming problems arising in control applications, and we review software and hardware capabilities that enable the efficient exploitation of such structures.  ...  We focus on linear algebra parallelization strategies and discuss how these interact and influence high-level algorithmic design elements required to enforce global convergence and deal with negative curvature  ...  Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.  ... 
doi:10.1109/cdc.2015.7402938 dblp:conf/cdc/KangCLZ15 fatcat:3tx4wnaeargflbzbpvmgubn2ka

High-performance Kernel Machines with Implicit Distributed Optimization and Randomization [article]

Vikas Sindhwani, Haim Avron
2015 arXiv   pre-print
general purpose convex optimization, and (ii) the use of randomization to improve the scalability of kernel methods.  ...  Our approach is based on a block-splitting variant of the Alternating Directions Method of Multipliers, carefully reconfigured to handle very large random feature matrices, while exploiting hybrid parallelism  ...  to highly tuned parallel basic linear algebra subprograms (BLAS) implementations.  ... 
arXiv:1409.0940v3 fatcat:dw3lzspvmnbyhouwgieqeij6sy
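The "randomization" the abstract points to is typically a random-feature approximation of the kernel. The sketch below shows standard random Fourier features for a Gaussian kernel; it is illustrative only and is not tied to the paper's block-splitting ADMM solver.

```python
# Hedged sketch: random Fourier features approximating a Gaussian kernel
# k(x, y) = exp(-gamma * ||x - y||^2); illustrative parameters only.
import numpy as np

def random_fourier_features(X, D=500, gamma=1.0, seed=0):
    """Map X (n x d) to z(X) (n x D) so that z(X) z(X)' approximates the kernel matrix."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))   # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                   # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(3).standard_normal((4, 3))
Z = random_fourier_features(X)
K_approx = Z @ Z.T          # approximate Gaussian kernel matrix
print(K_approx.shape)
```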

GPU Acceleration of ADMM for Large-Scale Quadratic Programming [article]

Michel Schubiger, Goran Banjac, John Lygeros
2019 arXiv   pre-print
The alternating direction method of multipliers (ADMM) is a powerful operator splitting technique for solving structured convex optimization problems.  ...  We build our solver on top of OSQP, a state-of-the-art implementation of ADMM for quadratic programming.  ...  Acknowledgements We are grateful to Samuel Balula for helpful discussions and managing the hardware used in this work.  ... 
arXiv:1912.04263v1 fatcat:wlqxmc6uu5h3rfykxfxbkdfkbm

Leveraging GPU batching for scalable nonlinear programming through massive Lagrangian decomposition [article]

Youngdae Kim and François Pacaud and Kibaek Kim and Mihai Anitescu
2021 arXiv   pre-print
Using the application of distributed control of alternating current optimal power flow, where a large problem is decomposed into many smaller nonlinear programs through a Lagrangian approach, we demonstrate  ...  Our numerical results show linear scaling with respect to the batch size and the number of GPUs, and a speedup of more than 35 times on 6 GPUs compared with 40 CPUs available on a single node.  ...  For example, GPUs have been used to accelerate the solution of linear systems arising in convex optimization algorithms [30, 36, 37] and the KKT system of an augmented Lagrangian of nonlinear programming  ... 
arXiv:2106.14995v1 fatcat:jmqmz2ckhbdf3cc332zw6sonfa
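A toy version of the Lagrangian decomposition pattern referenced above: a coupled problem is split into independent subproblems coordinated by a dual (multiplier) update, which is the structure that lends itself to batched GPU solves. The problem, step size, and closed-form subproblem solver here are invented purely for illustration.

```python
# Hedged sketch: dual (Lagrangian) decomposition of the toy problem
#   min (x1 - 2)^2 + (x2 + 1)^2   s.t.  x1 + x2 = 0,
# split into two independent subproblems coordinated by the multiplier lam.
import numpy as np

def solve_sub(c, lam):
    # Subproblem: min_x (x - c)^2 + lam * x, solved in closed form.
    return c - lam / 2.0

lam = 0.0
for _ in range(100):
    x1 = solve_sub(2.0, lam)        # independent solves (batchable in parallel)
    x2 = solve_sub(-1.0, lam)
    lam += 0.5 * (x1 + x2 - 0.0)    # subgradient ascent on the dual
print(x1, x2, x1 + x2)
```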

A Survey of Recent Scalability Improvements for Semidefinite Programming with Applications in Machine Learning, Control, and Robotics [article]

Anirudha Majumdar, Georgina Hall, Amir Ali Ahmadi
2019 arXiv   pre-print
trade off scalability with conservatism (e.g., by approximating semidefinite programs with linear and second-order cone programs).  ...  Historically, scalability has been a major challenge to the successful application of semidefinite programming in fields such as machine learning, control, and robotics.  ...  This work is partially supported by the DARPA Young Faculty Award, the CAREER Award of the NSF, the Google Faculty Award, the Innovation Award of the School of Engineering and Applied Sciences at Princeton  ... 
arXiv:1908.05209v3 fatcat:g2vqfhv27vgddbv7l4xciywf4u

Convex Optimization for Big Data: Scalable, randomized, and parallel algorithms for big data analytics

Volkan Cevher, Stephen Becker, Mark Schmidt
2014 IEEE Signal Processing Magazine  
We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed  ...  linear systems.  ...  We describe three impacts of randomizing linear algebra routines in optimization here.  ... 
doi:10.1109/msp.2014.2329397 fatcat:7np3knuhena2fd5o6tqjtpbzai
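One of the randomized linear-algebra primitives such surveys discuss is sketch-and-solve least squares. The snippet below is a generic illustration using a Gaussian sketch (structured sketches are preferred in practice for speed); the problem sizes are arbitrary.

```python
# Hedged sketch: sketch-and-solve least squares with a Gaussian sketching matrix.
import numpy as np

rng = np.random.default_rng(6)
n, d, s = 10000, 20, 400
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

S = rng.standard_normal((s, n)) / np.sqrt(s)      # random sketching matrix
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```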

A semi-proximal augmented Lagrangian based decomposition method for primal block angular convex composite quadratic conic programming problems [article]

Xin-Yee Lam, Defeng Sun, Kim-Chuan Toh
2018 arXiv   pre-print
We propose a semi-proximal augmented Lagrangian based decomposition method for convex composite quadratic conic programming problems with primal block angular structures.  ...  Numerical results show that our algorithms can perform well even for very large instances of primal block angular convex QP problems.  ...  Acknowledgements We would like to thank Professor Jordi Castro for sharing with us his solver BlockIP so that we are able to evaluate the performance of our algorithm more comprehensively.  ... 
arXiv:1812.04941v1 fatcat:jjah6n7onrd6xihrzr7pvrofua

Faster Kernel Ridge Regression Using Sketching and Preconditioning

Haim Avron, Kenneth L. Clarkson, David P. Woodruff
2017 SIAM Journal on Matrix Analysis and Applications  
In this paper, we propose a preconditioning technique for accelerating the solution of the aforementioned linear system.  ...  Kernel ridge regression is a simple yet powerful technique for nonparametric regression whose computation amounts to solving a linear system. This system is usually dense and highly ill-conditioned.  ...  Fast numerical linear algebra using sketching.  ... 
doi:10.1137/16m1105396 fatcat:tmkn47vimvhrllkzsbze3za4ne
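The abstract describes preconditioning the dense, ill-conditioned kernel ridge regression system. As a rough, hypothetical sketch of that idea (not the authors' sketching-based preconditioner), the code below builds a low-rank Nystrom approximation of the kernel, turns it into a preconditioner via the Woodbury identity, and hands it to a conjugate gradient solve.

```python
# Hedged sketch: kernel ridge regression reduces to (K + mu*I) alpha = y;
# a low-rank Nystrom approximation of K gives a cheap preconditioner.
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(4)
n, d, m, lam = 500, 5, 50, 1e-3
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def gauss_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gauss_kernel(X, X)
mu = lam * n

# Nystrom sketch K ~ C W^+ C' from m landmark columns.
idx = rng.choice(n, m, replace=False)
C, W = K[:, idx], K[np.ix_(idx, idx)]

# Preconditioner: apply (C W^+ C' + mu*I)^{-1} via the Woodbury identity.
Winv_small = np.linalg.pinv(W + C.T @ C / mu)
def apply_prec(r):
    return r / mu - C @ (Winv_small @ (C.T @ r)) / mu**2

M = spla.LinearOperator((n, n), matvec=apply_prec)
alpha, info = spla.cg(K + mu * np.eye(n), y, M=M, maxiter=200)
print("cg info:", info)
```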

CP2K on the road to exascale [article]

Thomas D. Kühne, Christian Plessl, Robert Schade, Ole Schütt
2022 arXiv   pre-print
The CP2K program package, which can be considered the Swiss army knife of atomistic simulations, is presented with a special emphasis on ab-initio molecular dynamics using the second-generation Car-Parrinello  ...  After outlining current and near-term development efforts with regard to massively parallel low-scaling post-Hartree-Fock and eigenvalue solvers, novel approaches by which we plan to take full advantage of  ...  In combination with the purification scheme of Eq. 14, we were able to perform record-sized linear-scaling electronic structure computations on systems with more than 100 million atoms, thereby achieving  ... 
arXiv:2205.14741v1 fatcat:pgeghelnxbdcro3isifi2mjuwu

Adaptive Consensus ADMM for Distributed Optimization [article]

Zheng Xu, Gavin Taylor, Hao Li, Mario Figueiredo, Xiaoming Yuan, Tom Goldstein
2017 arXiv   pre-print
The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters.  ...  We study distributed ADMM methods that boost performance by using different fine-tuned algorithm parameters on each worker node.  ...  without providing a rate, and (Banert et al., 2016; Goldstein et al., 2015) prove convergence for some particular variants of ADMM ("linearized" or "preconditioned").  ... 
arXiv:1706.02869v2 fatcat:sqfnhn5tgvcgxhacqhaownpt3q
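For context on penalty adaptation in consensus ADMM, here is a small synthetic sketch of consensus ADMM for distributed least squares using the classical residual-balancing rule for the penalty rho; the paper studies per-node adaptation, which this simple global rule does not implement.

```python
# Hedged sketch: consensus ADMM for  min sum_i 0.5*||A_i x - b_i||^2,
# with residual balancing of the penalty rho (a simpler relative of the
# adaptive schemes studied in the paper).
import numpy as np

def consensus_admm_ls(As, bs, rho=1.0, iters=100):
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]          # local primal variables
    us = [np.zeros(n) for _ in As]          # scaled dual variables
    z = np.zeros(n)                         # consensus variable
    for _ in range(iters):
        xs = [np.linalg.solve(A.T @ A + rho * np.eye(n),
                              A.T @ b + rho * (z - u))
              for A, b, u in zip(As, bs, us)]
        z_old, z = z, np.mean([x + u for x, u in zip(xs, us)], axis=0)
        us = [u + x - z for x, u in zip(xs, us)]
        # Residual balancing: keep primal and dual residuals comparable.
        r = np.linalg.norm(np.concatenate([x - z for x in xs]))
        s = rho * np.sqrt(len(As)) * np.linalg.norm(z - z_old)
        if r > 10 * s:
            rho *= 2.0
            us = [u / 2.0 for u in us]      # rescale scaled duals with rho
        elif s > 10 * r:
            rho /= 2.0
            us = [u * 2.0 for u in us]
    return z

rng = np.random.default_rng(5)
As = [rng.standard_normal((30, 4)) for _ in range(3)]
bs = [rng.standard_normal(30) for _ in range(3)]
print(consensus_admm_ls(As, bs))
```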
Showing results 1–15 of 79.