
Warmstarting column generation for unit commitment [article]

Nagisa Sugishita, Andreas Grothey, Ken McKinnon
2021 arXiv   pre-print
See Takriti, Birge, and Long (1996) and Schulze, Grothey, and McKinnon (2017) for instance.  ...  However, as Schulze, Grothey, and McKinnon (2017) reported, solving such an approximation may require a nontrivial amount of time.  ... 
arXiv:2110.06872v1 fatcat:2n2zephxyjc47dounip3pjstly

Incremental cutting-plane method and its application [article]

Nagisa Sugishita, Andreas Grothey, Ken McKinnon
2021 arXiv   pre-print
We consider regularized cutting-plane methods to minimize a convex function that is the sum of a large number of component functions. One important example is the dual problem obtained from Lagrangian relaxation on a decomposable problem. In this paper, we focus on an incremental variant of the regularized cutting-plane methods, which only evaluates a subset of the component functions in each iteration. We first consider a limited-memory setup where the method deletes cuts after a finite number of iterations. The convergence properties of the limited-memory methods are studied under various conditions on regularization. We then provide numerical experiments where the incremental method is applied to the dual problems derived from large-scale unit commitment problems. In many settings, the incremental method is able to find a solution of high precision in a shorter time than the non-incremental method.
arXiv:2110.12533v1 fatcat:jx4mjxtzpje2bbrdcqnfoqsiaa
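
As a quick illustration of the kind of method this abstract describes (a minimal sketch under assumed toy data, not the authors' implementation): the component functions are taken to be f_i(x) = |x - a_i|, only a random subset of them is evaluated per iteration, each component keeps at most a few cuts (the limited-memory setup), and the regularized master problem is solved with cvxpy. A real bundle method would also test for sufficient descent before moving the proximal center.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
a = rng.normal(size=10)              # data defining the toy component functions
m, max_cuts, rho = len(a), 5, 1.0    # number of components, cut memory, prox weight

def f_and_subgrad(i, x):
    # Value and one subgradient of f_i(x) = |x - a[i]|.
    return abs(x - a[i]), float(np.sign(x - a[i]))

cuts = [[] for _ in range(m)]        # per-component list of cuts (x_k, f_k, g_k)
center = 0.0                         # proximal center

for it in range(50):
    # Incremental step: evaluate only a random subset of the components.
    for i in rng.choice(m, size=3, replace=False):
        fi, gi = f_and_subgrad(i, center)
        cuts[i].append((center, fi, gi))
        cuts[i] = cuts[i][-max_cuts:]            # limited memory: drop old cuts
    # Regularized master: piecewise-linear component models plus a proximal term.
    x, theta = cp.Variable(), cp.Variable(m)
    cons = [theta[i] >= fk + gk * (x - xk)
            for i in range(m) for (xk, fk, gk) in cuts[i]]
    cons += [theta[i] >= 0 for i in range(m) if not cuts[i]]  # valid since f_i >= 0
    cp.Problem(cp.Minimize(cp.sum(theta) + rho / 2 * cp.square(x - center)),
               cons).solve()
    center = float(x.value)          # take every step (no descent test in this sketch)

print("approximate minimizer:", center, " true minimizer (median of a):", np.median(a))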

On the Effectiveness of Sequential Linear Programming for the Pooling Problem [article]

Andreas Grothey, Ken McKinnon
2020 arXiv   pre-print
The aim of this paper is to compare the performance of a local solution technique -- namely Sequential Linear Programming (SLP) employing random starting points -- with state-of-the-art global solvers such as Baron and more sophisticated local solvers such as Sequential Quadratic Programming and Interior Point for the pooling problem. These problems can have many local optima, and we present a small example that illustrates how this can occur. We demonstrate that SLP -- usually deemed obsolete since the arrival of fast reliable QP solvers, Interior Point Methods and sophisticated global solvers -- is still the method of choice for an important class of pooling problem when the criterion is the quality of the solution found within a given acceptable time budget. In addition we introduce a new formulation, the qq-formulation, for the case of fixed demands, that exclusively uses proportional variables. We compare the performance of SLP and the global solver Baron on the qq-formulation and other common formulations. While Baron with the qq-formulation generates weaker bounds than with the other formulations tested, for both SLP and Baron the qq-formulation finds the best solutions within a given time budget. The qq-formulation can be strengthened by pq-like cuts, in which case the same bounds as for the pq-formulation are obtained. However, the associated time penalty due to the additional constraints results in poorer solution quality within the time budget.
arXiv:2002.10899v1 fatcat:n6ivgzxfuja3jiroa5cch7etfm
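
To make the mechanics of SLP concrete, here is a minimal sketch on an invented toy bilinear problem of the kind pooling models contain (maximize x + y subject to x*y <= 4 and 0 <= x, y <= 3); it is illustrative only and uses scipy rather than the solvers from the paper. The bilinear constraint is linearized at the current point and an LP is solved inside a trust-region box, with a crude rule that shrinks the box when the LP step violates the true constraint.

import numpy as np
from scipy.optimize import linprog

xk = np.array([0.5, 0.5])     # starting point (the paper uses random restarts)
delta = 1.0                   # trust-region radius

for it in range(30):
    x0, y0 = xk
    # First-order model of x*y <= 4 at (x0, y0):  y0*x + x0*y <= 4 + x0*y0
    A_ub = [[y0, x0]]
    b_ub = [4.0 + x0 * y0]
    bounds = [(max(0.0, x0 - delta), min(3.0, x0 + delta)),
              (max(0.0, y0 - delta), min(3.0, y0 + delta))]
    res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=bounds, method="highs")
    x_new = res.x
    if x_new[0] * x_new[1] <= 4.0 + 1e-6:        # step is feasible for the true problem
        if np.linalg.norm(x_new - xk) < 1e-8:
            break
        xk = x_new
    else:                                        # otherwise shrink the trust region
        delta *= 0.5

print("SLP point:", xk, "objective:", xk.sum())  # global optimum of this toy is 13/3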

Massively Parallel Asset and Liability Management [chapter]

Andreas Grothey
2011 Lecture Notes in Computer Science  
Multistage Stochastic Programming is a popular method to solve financial planning problems such as Asset and Liability Management (ALM). The desire to have future scenarios match static and dynamic correlations between assets leads to problems of truly enormous sizes (often reaching tens of millions of unknowns or more). Clearly parallel processing becomes mandatory to deal with such problems. Solution approaches for these problems include nested Decomposition and Interior Point Methods. The latter class in particular is appealing due to its flexibility with regard to model formulation and its amenability to parallelisation on massively parallel architectures. We review some of the results and challenges in this approach, demonstrate how popular risk measures can be integrated into the framework and address the issue of modelling for High Performance Computing.
doi:10.1007/978-3-642-21878-1_52 fatcat:vrmx35lmfbfxxbj563eww4shi4
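
For orientation (a standard schematic, not a figure from the chapter): with first-stage decisions x and one recourse vector y_s per scenario s, the constraint matrix of the deterministic equivalent has the block-angular form

\[
\begin{pmatrix}
A_0    &        &        &        \\
T_1    & W_1    &        &        \\
\vdots &        & \ddots &        \\
T_S    &        &        & W_S
\end{pmatrix}
\begin{pmatrix} x \\ y_1 \\ \vdots \\ y_S \end{pmatrix}
=
\begin{pmatrix} b \\ h_1 \\ \vdots \\ h_S \end{pmatrix},
\]

and in the multistage case each block W_s repeats the same pattern for the subtree rooted at node s. Structure-exploiting interior point solvers factorize and parallelise the linear algebra block by block along this nesting.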

Top-percentile traffic routing problem by dynamic programming

Andreas Grothey, Xinan Yang
2011 Optimization and Engineering  
Multi-homing is a technology used by Internet Service Providers (ISPs) to connect to the Internet via different network providers. To make full use of the underlying networks with minimum cost, an optimal routing strategy is required by ISPs. This study investigates the optimal routing strategy in the case where network providers charge ISPs according to top-percentile pricing. We call this problem the Top-percentile Traffic Routing Problem (TpTRP). The TpTRP is a multistage stochastic optimisation problem in which routing decisions must be made before knowing the amount of traffic that is to be routed in the following time period. The stochastic nature of the problem forms the critical difficulty of this study. In this paper several approaches to modelling and solving the problem are investigated. We begin by modelling the TpTRP as a multistage stochastic programming problem, which is hard to solve due to the integer variables introduced by top-percentile pricing. Several simplifications of the original TpTRP are then explored in the second part of this work. Some of these allow analytical solutions which lead to bounds on the achievable optimal solution. We also establish bounds by investigating several "naive" routing policies. In the end, we explore the solution of the TpTRP as a stochastic dynamic programming problem by a discretization of the state space. This allows us to solve medium-size instances of the TpTRP to optimality and to improve on any naive routing policy.
doi:10.1007/s11081-010-9130-2 fatcat:xk4ope7fvbedxhcoyex44f6hp4
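
For readers unfamiliar with the pricing scheme, the following small sketch (my own illustration under a common billing convention, not code or data from the paper) computes a top-percentile charge: the provider discards the busiest (1 - q) fraction of the measurement intervals in a billing period and charges for the largest remaining traffic volume, so occasional bursts above that level are free.

import numpy as np

def top_percentile_charge(traffic, q=0.95, unit_price=1.0):
    # Charge for one billing period given per-interval traffic volumes.
    traffic = np.sort(np.asarray(traffic, dtype=float))
    k = int(np.ceil(q * len(traffic))) - 1       # index of the q-th percentile
    return unit_price * traffic[k]

rng = np.random.default_rng(1)
volumes = rng.gamma(shape=2.0, scale=5.0, size=100)   # hypothetical traffic trace
print("95th-percentile charge:", top_percentile_charge(volumes))
print("a peak-based charge would be:", volumes.max())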

A decomposition-based crash-start for stochastic programming

Marco Colombo, Andreas Grothey
2013 Computational optimization and applications  
In this paper we propose a crash-start technique for interior point methods applicable to multi-stage stochastic programming problems. The main idea is to generate an initial point for the interior point solver by decomposing the barrier problem associated with the deterministic equivalent at the second stage and using a concatenation of the solutions of the subproblems as a warm-starting point for the complete instance. We analyse this scheme and produce theoretical conditions under which the warm-start iterate is successful. We describe the implementation within the OOPS solver and the results of the numerical tests we performed.
doi:10.1007/s10589-012-9530-7 fatcat:miozkyodtjbthdb2r5k2icuxca
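
Read in standard two-stage notation (my paraphrase, not the paper's own notation), the idea is that the log-barrier problem for the deterministic equivalent,

\[
\min_{x,\,y_s}\; c^{\top}x+\sum_{s}p_{s}q_{s}^{\top}y_{s}
-\mu\Big(\sum_{j}\ln x_{j}+\sum_{s,j}\ln y_{s,j}\Big)
\quad\text{s.t.}\quad Ax=b,\;\; T_{s}x+W_{s}y_{s}=h_{s}\ \ \forall s,
\]

separates, once a first-stage guess \bar{x} is fixed, into one small barrier subproblem per scenario in the variables y_s alone; the concatenated point (\bar{x}, y_1^*, ..., y_S^*) then serves as the starting iterate for the interior point solve of the complete instance.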

Primal heuristics for Dantzig-Wolfe decomposition for unit commitment [article]

Nagisa Sugishita, Andreas Grothey, Ken McKinnon
2022 arXiv   pre-print
The unit commitment problem is a short-term planning problem in the energy industry. Dantzig-Wolfe decomposition is a popular approach to solve the problem. This paper focuses on primal heuristics used with Dantzig-Wolfe decomposition. We propose two primal heuristics: one based on decomposition and one based on machine learning. The first one uses the fractional solution to the restricted master problem to fix a subset of the integer variables. In each iteration of the column generation procedure, the primal heuristic obtains the fractional solution, checks whether each binary variable satisfies the integrality constraint and fixes those which do. The remaining variables are then optimised quickly by a solver to find a feasible, near-optimal solution to the original instance. The second primal heuristic, based on machine learning, is of interest when the problems are to be solved repeatedly with different demand data but with the same problem structure. This primal heuristic uses a pre-trained neural network to fix a subset of the integer variables. In the training phase, a neural network is trained to predict, for any demand data and for each binary variable, how likely it is that the variable takes each of its two possible values. After the training, given an instance to be solved, the prediction of the model is used with a rounding threshold to fix some binary variables. Our numerical experiments compare our methods with solving the undecomposed problem and also with other primal heuristics from the literature. The experiments reveal that the primal heuristic based on machine learning is superior when the suboptimality tolerance is relatively large, such as 0.5% or 0.25%, while the decomposition-based heuristic is best when the tolerance is small, for example 0.1%.
arXiv:2110.12531v2 fatcat:vklobxn325dmhlhig5oinlox2y
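
A minimal sketch of the fixing step both heuristics share (hypothetical variable names and a hand-written helper; this is not the authors' code, and the follow-up reduced MIP solve is left to whatever solver is in use):

def choose_fixings(values, tol=1e-6):
    # Return {var: 0 or 1} for binary variables whose value is within tol of an
    # integer; the remaining binaries are left free for the follow-up solve.
    fixings = {}
    for var, v in values.items():
        if v <= tol:
            fixings[var] = 0
        elif v >= 1.0 - tol:
            fixings[var] = 1
    return fixings

# Decomposition-based variant: `values` holds the fractional restricted-master
# solution and tol is tiny.  Machine-learning variant: `values` holds the
# network's predicted probabilities and tol is one minus the rounding threshold.
rmp_solution = {"on[g1,t3]": 1.0, "on[g2,t3]": 0.37, "on[g3,t3]": 0.0}
print(choose_fixings(rmp_solution))   # {'on[g1,t3]': 1, 'on[g3,t3]': 0}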

A warm-start approach for large-scale stochastic linear programs

Marco Colombo, Jacek Gondzio, Andreas Grothey
2009 Mathematical programming  
Gondzio and Grothey [10] analyse the same system, but are concerned with absorbing primal and dual infeasibility separately by splitting (17) into two separate directions.  ...  Gondzio and Grothey [10] measure perturbations by a relative measure of implied primal and dual infeasibilities, and analyse recovery steps in the primal and the dual spaces independently.  ... 
doi:10.1007/s10107-009-0290-9 fatcat:bdu2bq3rafgtfgv6p5sdm3g7tm

Reoptimization With the Primal-Dual Interior Point Method

Jacek Gondzio, Andreas Grothey
2002 SIAM Journal on Optimization  
Re-optimization techniques for an interior point method applied to solve a sequence of linear programming problems are discussed. Conditions are given for problem perturbations that can be absorbed in merely one Newton step. The analysis is performed for both the short-step and the long-step feasible path-following method. A practical procedure is then derived for an infeasible path-following method. It is applied in the context of crash start for several large-scale structured linear programs. Numerical results with OOPS, a new object-oriented parallel solver, demonstrate the efficiency of the approach. For large structured linear programs, crash start leads to about a 40% reduction in the number of iterations, which translates into a 25% reduction in solution time. The crash procedure parallelizes well, and speed-ups between 3.1 and 3.8 on 4 processors are achieved.
doi:10.1137/s1052623401393141 fatcat:es63titut5emfg3nzvmpgm5yom
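
For context, in standard primal-dual notation (my sketch, not the paper's statement of the result), the single Newton step in question is the usual path-following step: after the problem data change, the current point (x, y, s) is re-centred by solving

\[
\begin{pmatrix} A & 0 & 0 \\ 0 & A^{\top} & I \\ S & 0 & X \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix}
=
\begin{pmatrix} b-Ax \\ c-A^{\top}y-s \\ \sigma\mu e - XSe \end{pmatrix},
\qquad X=\operatorname{diag}(x),\ S=\operatorname{diag}(s),
\]

and the analysis bounds how large the induced residuals b - Ax and c - A^T y - s may be for the point produced by this one step to remain in a neighbourhood of the central path.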

Benders decomposition with adaptive oracles for large scale optimization

Nicolò Mazzi, Andreas Grothey, Ken McKinnon, Nagisa Sugishita
2020 Mathematical Programming Computation  
This paper proposes an algorithm to efficiently solve large optimization problems which exhibit a column bounded block-diagonal structure, where subproblems differ in right-hand side and cost coefficients. Similar problems are often tackled using cutting-plane algorithms, which allow for an iterative and decomposed solution of the problem. When solving subproblems is computationally expensive and the set of subproblems is large, cutting-plane algorithms may slow down severely. In this context we propose two novel adaptive oracles that yield inexact information, potentially much faster than solving the subproblem. The first adaptive oracle is used to generate inexact but valid cutting planes, and the second adaptive oracle gives a valid upper bound of the true optimal objective. These two oracles progressively "adapt" towards the true exact oracle if provided with an increasing number of exact solutions, stored throughout the iterations. These adaptive oracles are embedded within a Benders-type algorithm able to handle inexact information. We compare the Benders algorithm with adaptive oracles against a standard Benders algorithm on a stochastic investment planning problem. The proposed algorithm shows the capability to substantially reduce the computational effort to obtain an ε-optimal solution: an illustrative case is 31.9 times faster for a 1.00% convergence tolerance and 15.4 times faster for a 0.01% tolerance.
doi:10.1007/s12532-020-00197-0 fatcat:lqflo3jtc5evpl62mjw6ky7ef4
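
Schematically (my paraphrase of the setting, not the paper's notation), with value functions \phi_s(x) for the subproblems, the oracles plug into the usual Benders master problem

\[
\min_{x\in X,\,\theta}\; c^{\top}x+\sum_{s}\theta_{s}
\quad\text{s.t.}\quad
\theta_{s}\;\ge\;\underline{\phi}_{s}(x_{k})+g_{s,k}^{\top}(x-x_{k})
\qquad\forall s,\,k,
\]

where the first oracle returns a valid under-estimate \underline{\phi}_s(x_k) together with a subgradient g_{s,k} without solving subproblem s exactly, so every cut stays valid, while the second oracle returns a valid over-estimate of \phi_s(x_k) so that an upper bound, and hence the optimality gap, can still be computed; both estimates tighten as more exactly solved points are stored.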

Optimizing the COVID-19 Intervention Policy in Scotland and the Case for Testing and Tracing [article]

Andreas Grothey, Kenneth I.M. McKinnon
2020 medRxiv   pre-print
Unlike other European countries, the UK abandoned widespread testing and tracing of known SARS-CoV-2 carriers in mid-March. The reason given was that the pandemic was out of control and that, with wide community-based spread, it would no longer be possible to contain it by tracing. Like other countries, the UK has since relied on a lockdown as the main measure to contain the virus (or, more precisely, the reproduction number R), at significant economic and social cost. It is clear that this level of lockdown cannot be sustained until a vaccine is available, yet it is not clear what an exit strategy would look like that avoids the danger of a second (or subsequent) wave. In this paper we argue that, when used within a portfolio of intervention strategies, widespread testing and tracing leads to significant cost savings compared to using lockdown measures alone. While the effect is most pronounced if a large proportion of the infectious population can be identified and their contacts traced, under reasonable assumptions there are still significant savings even if the fraction of infectious people found by tracing is small. We also present a policy optimization model that finds, for given assumptions on the disease parameters, the best intervention strategy to contain the virus by varying the degree of tracing and lockdown measures (and vaccination once that option is available) over time. We run the model on data fitted to the published COVID-19 outbreak figures for Scotland. The model suggests an intervention strategy that keeps the number of COVID-19 deaths low using a combination of tracing and lockdown. This strategy would require only lockdown measures equivalent to a reduction of R to about 1.8--2.0 if lockdown were used alone, at acceptable economic cost, while the model finds no such strategy without tracing enabled.
doi:10.1101/2020.06.11.20128173 fatcat:byghpiqowjgf3ctcg6ndtnyc64

Asset Liability Management Modelling with Risk Control by Stochastic Dominance [chapter]

Xi Yang, Jacek Gondzio, Andreas Grothey
2011 Asset and Liability Management Handbook  
doi:10.1057/9780230307230_5 fatcat:3daganvrnbhhdl3nfjlqwb4sbm

Exploiting structure in parallel implementation of interior point methods for optimization

Jacek Gondzio, Andreas Grothey
2008 Computational Management Science  
Gondzio, J & Grothey, A 2009 'Exploiting structure in parallel implementation of interior point methods for optimization, ' Computational Managment Science, vol. 6,  ... 
doi:10.1007/s10287-008-0090-3 fatcat:ogkohifljbfvjmr32winnntuvi

Asset liability management modelling with risk control by stochastic dominance

Xi Yang, Jacek Gondzio, Andreas Grothey
2010 Journal of Asset Management  
An Asset-Liability Management model with a novel strategy for controlling the risk of underfunding is presented in this paper. The basic model involves multiperiod decisions (portfolio rebalancing) and deals with the usual uncertainty of investment returns and future liabilities. Therefore it is well-suited to a stochastic programming approach. A stochastic dominance concept is applied to measure (and control) the risk of underfunding. A small numerical example is provided to demonstrate the advantages of this new model, which includes stochastic dominance constraints, over the basic model. Adding stochastic dominance constraints comes with a price: it complicates the structure of the underlying stochastic program. Indeed, the new constraints create a link between variables associated with different scenarios of the same time stage. This destroys the usual tree structure of the constraint matrix in the stochastic program and prevents the application of standard stochastic programming approaches such as (nested) Benders decomposition. A structure-exploiting interior point method is applied to this problem. The specialized interior point solver OOPS can deal efficiently with such problems and outperforms the industrial-strength commercial solver CPLEX. Computational results on medium-scale problems with sizes reaching about one million variables demonstrate the efficiency of the specialized solution technique. The solution time for these nontrivial asset liability models seems to grow sublinearly with the key parameters of the model, such as the number of assets and the number of realizations of the benchmark portfolio, and this makes the method applicable to truly large-scale problems.
doi:10.1057/jam.2010.8 fatcat:ef7prcndbzagle4as6fr3335km
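
One common linear formulation of a second-order stochastic dominance constraint (my sketch; the paper's exact variant may differ) shows where the scenario coupling comes from: with fund value V_i(x) and benchmark realization y_i in scenario i of probability p_i, dominance over the benchmark requires

\[
\sum_{i} p_{i}\,\big[y_{j}-V_{i}(x)\big]_{+}
\;\le\;
\sum_{i} p_{i}\,\big[y_{j}-y_{i}\big]_{+}
\qquad\text{for every benchmark realization } y_{j},
\]

linearized with shortfall variables s_{ij} >= y_j - V_i(x), s_{ij} >= 0. Each such constraint sums over every scenario i of the same stage, so it couples variables across scenarios and destroys the tree structure that nested Benders decomposition relies on.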

Parallel interior-point solver for structured quadratic programs: Application to financial planning problems

Jacek Gondzio, Andreas Grothey
2006 Annals of Operations Research  
The approach presented in this paper is an extension of that implemented in OOPS (Gondzio and Sarkissian 2003, Gondzio and Grothey 2003).  ... 
doi:10.1007/s10479-006-0139-z fatcat:23wnhcuxtrdoraiiaayw62dwxi