IA Scholar Query: Linear Convergence of First- and Zeroth-Order Primal-Dual Algorithms for Distributed Nonconvex Optimization.
https://scholar.archive.org/
Internet Archive Scholar query results feed (fatcat-scholar). info@archive.org. Fri, 25 Nov 2022 00:00:00 GMT

Conditional Gradient Methods
https://scholar.archive.org/work/b2imrksvmfclhaik7ghfh6bcte
The purpose of this survey is to serve both as a gentle introduction and a coherent overview of state-of-the-art Frank–Wolfe algorithms, also called conditional gradient algorithms, for function minimization. These algorithms are especially useful in convex optimization when linear optimization is cheaper than projections. The selection of the material has been guided by the principle of highlighting crucial ideas as well as presenting new approaches that we believe might become important in the future, with ample citations even of older works imperative in the development of newer methods. Yet our selection is sometimes biased and need not reflect the consensus of the research community, and we have certainly missed recent important contributions. After all, the research area of Frank–Wolfe is very active, making it a moving target. We apologize sincerely in advance for any such distortions, and we fully acknowledge: we stand on the shoulders of giants.
Gábor Braun, Alejandro Carderera, Cyrille W. Combettes, Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Sebastian Pokutta. Fri, 25 Nov 2022 00:00:00 GMT
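Since many entries in this feed build on the conditional gradient template, a minimal textbook sketch may help orient the reader (generic illustration code, not code from the survey): on the probability simplex, the linear minimization oracle reduces to picking the vertex with the smallest gradient entry, so no projection is ever needed.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Textbook Frank-Wolfe (conditional gradient) over the probability simplex."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # LMO: the best simplex vertex
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy problem: least-squares projection of b onto the simplex.
b = np.array([0.6, 0.5, -0.2])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(3) / 3.0)
# x_star approaches [0.55, 0.45, 0.0], the Euclidean projection of b.
```

Every iterate is a convex combination of simplex vertices, so feasibility holds by construction; this is the property that makes the method attractive whenever projections are expensive.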

An Empirical Quantile Estimation Approach to Nonlinear Optimization Problems with Chance Constraints
https://scholar.archive.org/work/ktzkvim36bfdlmyeufvppnzesy
We investigate an empirical quantile estimation approach to solve chance-constrained nonlinear optimization problems. Our approach is based on the reformulation of the chance constraint as an equivalent quantile constraint to provide stronger signals on the gradient. In this approach, the value of the quantile function is estimated empirically from samples drawn from the random parameters, and the gradient of the quantile function is estimated via a finite-difference approximation on top of the quantile-function-value estimation. We establish a convergence theory of this approach within the framework of an augmented Lagrangian method for solving general nonlinear constrained optimization problems. The foundation of the convergence analysis is a concentration property of the empirical quantile process, and the analysis is divided based on whether or not the quantile function is differentiable. In contrast to the sampling-and-smoothing approach used in the literature, the method developed in this paper does not involve any smoothing function; hence the quantile-function gradient approximation is easier to implement, and there are fewer accuracy-control parameters to tune. Numerical investigation shows that our approach can also identify high-quality solutions, especially with a relatively large step size for the finite-difference estimation, which works intuitively as an implicit smoothing. Thus, the possibility exists that an explicit smoothing is not always necessary to handle the chance constraints. Improving the estimation of the quantile-function value and gradient itself could likely already lead to high performance for solving chance-constrained nonlinear programs.
Fengqiao Luo, Jeffrey Larson. Thu, 03 Nov 2022 00:00:00 GMT
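The two estimation layers described above are easy to sketch generically. The toy constraint below (a linear function of Gaussian parameters) and all names are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def empirical_quantile(x, p, xi, cons):
    """Empirical p-quantile of the random constraint value cons(x, xi)."""
    return np.quantile(cons(x, xi), p)

def quantile_grad_fd(x, p, xi, cons, h=0.1):
    """Central finite-difference gradient of the empirical quantile.

    Reusing the same sample set xi on both sides of the difference keeps the
    estimator stable; a relatively large step h acts as an implicit smoothing
    of the (possibly nondifferentiable) quantile function.
    """
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (empirical_quantile(x + e, p, xi, cons)
                - empirical_quantile(x - e, p, xi, cons)) / (2.0 * h)
    return g

rng = np.random.default_rng(0)
xi = rng.normal(size=(10000, 2))       # samples of the random parameter
cons = lambda x, xi: xi @ x            # hypothetical constraint value
x = np.array([1.0, 2.0])
q = empirical_quantile(x, 0.95, xi, cons)   # roughly 1.645 * ||x|| here
g = quantile_grad_fd(x, 0.95, xi, cons)
```

In an augmented Lagrangian loop, `q` would feed the quantile constraint value and `g` its gradient surrogate.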

A Riemannian ADMM
https://scholar.archive.org/work/oiwcabydunholatbut4rqmpgja
We consider a class of Riemannian optimization problems where the objective is the sum of a smooth function and a nonsmooth function considered in the ambient space. This class of problems finds important applications in machine learning and statistics, such as sparse principal component analysis, sparse spectral clustering, and orthogonal dictionary learning. We propose a Riemannian alternating direction method of multipliers (ADMM) to solve this class of problems. Our algorithm adopts easily computable steps in each iteration. The iteration complexity of the proposed algorithm for obtaining an ϵ-stationary point is analyzed under mild assumptions. To the best of our knowledge, this is the first Riemannian ADMM with a provable convergence guarantee for solving Riemannian optimization problems with nonsmooth objectives. Numerical experiments are conducted to demonstrate the advantage of the proposed method.
Jiaxiang Li, Shiqian Ma, Tejes Srivastava. Thu, 03 Nov 2022 00:00:00 GMT

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
https://scholar.archive.org/work/s4giehkxxnfafp6lsihw3onhf4
In this work, we study distributed optimization algorithms that reduce the high communication costs of synchronization by allowing clients to perform multiple local gradient steps in each communication round. Recently, Mishchenko et al. (2022) proposed a new type of local method, called ProxSkip, that enjoys an accelerated communication complexity without any data similarity condition. However, their method requires all clients to call local gradient oracles with the same frequency. Because of statistical heterogeneity, we argue that clients with well-conditioned local problems should compute their local gradients less frequently than clients with ill-conditioned local problems. Our first contribution is the extension of the original ProxSkip method to the setup where clients are allowed to perform a different number of local gradient steps in each communication round. We prove that our modified method, GradSkip, still converges linearly, has the same accelerated communication complexity, and the required frequency for local gradient computations is proportional to the local condition number. Next, we generalize our method by extending the randomness of probabilistic alternations to arbitrary unbiased compression operators and considering a generic proximable regularizer. This generalization, GradSkip+, recovers several related methods in the literature. Finally, we present an empirical study to confirm our theoretical claims.
Artavazd Maranjyan, Mher Safaryan, Peter Richtárik. Fri, 28 Oct 2022 00:00:00 GMT

Communication-Efficient Stochastic Zeroth-Order Optimization for Federated Learning
https://scholar.archive.org/work/msjyabfk2jampe64zhgvvwb4fe
Federated learning (FL), as an emerging edge artificial intelligence paradigm, enables many edge devices to collaboratively train a global model without sharing their private data. To enhance the training efficiency of FL, various algorithms have been proposed, ranging from first-order to second-order methods. However, these algorithms cannot be applied in scenarios where the gradient information is not available, e.g., federated black-box attack and federated hyperparameter tuning. To address this issue, in this paper we propose a derivative-free federated zeroth-order optimization (FedZO) algorithm, which performs multiple local updates based on stochastic gradient estimators in each communication round and enables partial device participation. Under non-convex settings, we derive the convergence performance of the FedZO algorithm on non-independent and identically distributed data and characterize the impact of the numbers of local iterates and participating edge devices on the convergence. To enable communication-efficient FedZO over wireless networks, we further propose an over-the-air computation (AirComp) assisted FedZO algorithm. With an appropriate transceiver design, we show that the convergence of AirComp-assisted FedZO can still be preserved under certain signal-to-noise ratio conditions. Simulation results demonstrate the effectiveness of the FedZO algorithm and validate the theoretical observations.
Wenzhi Fang, Ziyi Yu, Yuning Jiang, Yuanming Shi, Colin N. Jones, Yong Zhou. Mon, 10 Oct 2022 00:00:00 GMT
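The building block behind such derivative-free methods, a two-point stochastic gradient estimator, can be sketched generically (this illustrates the estimator only, not the FedZO algorithm or its federated averaging):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimator via Gaussian smoothing.

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random directions u,
    an unbiased estimate of the gradient of a smoothed version of f.
    """
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_dirs

# Derivative-free minimization of a toy smooth objective.
f = lambda x: np.sum((x - 1.0) ** 2)
rng = np.random.default_rng(0)
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x, rng=rng)
```

Only function evaluations of `f` are used, which is exactly what makes such estimators applicable to black-box attack and hyperparameter-tuning settings.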

Stochastic Zeroth-order Functional Constrained Optimization: Oracle Complexity and Applications
https://scholar.archive.org/work/3ck67oh2efdk5jld57jti4fuve
Functionally constrained stochastic optimization problems, where neither the objective function nor the constraint functions are analytically available, arise frequently in machine learning applications. In this work, assuming we only have access to noisy evaluations of the objective and constraint functions, we propose and analyze stochastic zeroth-order algorithms for solving the above class of stochastic optimization problems. When the domain of the functions is ℝ^n, assuming there are m constraint functions, we establish oracle complexities of order 𝒪((m+1)n/ϵ^2) and 𝒪((m+1)n/ϵ^3) respectively in the convex and nonconvex settings, where ϵ represents the accuracy of the solutions required in appropriately defined metrics. The established oracle complexities are, to our knowledge, the first such results in the literature for functionally constrained stochastic zeroth-order optimization problems. We demonstrate the applicability of our algorithms by illustrating their superior performance on the problem of hyperparameter tuning for sampling algorithms and neural network training.
Anthony Nguyen, Krishnakumar Balasubramanian. Sun, 09 Oct 2022 00:00:00 GMT

Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning
https://scholar.archive.org/work/tlu2uyqxfzbjvbpjzy7bv32hni
Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics, and many other domains. Often, constraints on safe inputs are unknown ahead of time, and we only obtain noisy information indicating how close we are to violating the constraints. Yet, safety must be guaranteed at all times, not only for the final output of the algorithm. We introduce a general approach for seeking a stationary point in high-dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial. Our approach, called LB-SGD, is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem. We provide a complete convergence analysis for non-convex, convex, and strongly-convex smooth constrained problems, with first-order and zeroth-order feedback. Our approach yields efficient updates and scales better with dimensionality compared to existing approaches. We empirically compare the sample complexity and the computational cost of our method with existing safe learning approaches. Beyond synthetic benchmarks, we demonstrate the effectiveness of our approach on minimizing constraint violation in policy search tasks in safe reinforcement learning (RL).
Ilnura Usmanova, Yarden As, Maryam Kamgarpour, Andreas Krause. Thu, 21 Jul 2022 00:00:00 GMT
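The barrier construction at the core of this approach can be illustrated with a deterministic toy version (a sketch assuming a known gradient and a single smooth constraint; it omits LB-SGD's stochastic feedback and adaptive step-size rule):

```python
import numpy as np

def lb_gd(f_grad, g, g_grad, x0, eta=0.05, n_iters=500):
    """Gradient descent on a log-barrier surrogate of a constrained problem.

    Minimizes f(x) s.t. g_i(x) <= 0 via B(x) = f(x) - eta * sum_i log(-g_i(x)).
    The step is halved until the next iterate is strictly feasible, which is
    the safety property the barrier is meant to preserve at all times.
    """
    x = x0.copy()
    for _ in range(n_iters):
        gx = g(x)
        grad_B = f_grad(x) + eta * np.sum(g_grad(x) / (-gx)[:, None], axis=0)
        step = 0.01
        while np.any(g(x - step * grad_B) >= 0):   # backtrack to stay feasible
            step /= 2.0
        x = x - step * grad_B
    return x

# Toy problem: minimize (x - 2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
f_grad = lambda x: 2.0 * (x - 2.0)
g = lambda x: np.array([x[0] - 1.0])
g_grad = lambda x: np.array([[1.0]])
x = lb_gd(f_grad, g, g_grad, np.array([0.0]))
# x stays strictly feasible and settles near the constrained optimum at 1.
```

Since the barrier blows up at the boundary, every iterate produced this way is feasible by construction; shrinking eta trades conservatism for accuracy.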

Zeroth-Order Optimization for Composite Problems with Functional Constraints
https://scholar.archive.org/work/tgnuarruyvf4fok2am5npm3waa
In many real-world problems, first-order (FO) derivative evaluations are too expensive or even inaccessible. For solving these problems, zeroth-order (ZO) methods that only need function evaluations are often more efficient than FO methods, or are sometimes the only option. In this paper, we propose a novel zeroth-order inexact augmented Lagrangian method (ZO-iALM) to solve black-box optimization problems, which involve a composite (i.e., smooth+nonsmooth) objective and functional constraints. This appears to be the first work that develops an iALM-based ZO method for functional constrained optimization and meanwhile achieves query complexity results matching the best-known FO complexity results up to a factor of the variable dimension. With an extensive experimental study, we show the effectiveness of our method. The applications of our method span from classical optimization problems to practical machine learning examples such as resource allocation in sensor networks and adversarial example generation.
Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu. Tue, 28 Jun 2022 00:00:00 GMT

Model-Free Feedback Constrained Optimization Via Projected Primal-Dual Zeroth-Order Dynamics
https://scholar.archive.org/work/5g6njhq7k5dvxhc5nxr44ryvnq
In this paper, we propose a model-free feedback solution method to solve generic constrained optimization problems without knowing the specific formulations of the objective and constraint functions. This solution method is termed projected primal-dual zeroth-order dynamics (P-PDZD) and is developed based on projected primal-dual gradient dynamics and extremum seeking control. In particular, the P-PDZD method can be interpreted as a model-free controller that autonomously drives an unknown system to the solution of the optimization problem using only output feedback. The P-PDZD can properly handle both hard and asymptotic constraints, and we develop a decentralized version of P-PDZD for application to multi-agent systems. Moreover, we prove that the P-PDZD achieves semi-global practical asymptotic stability and structural robustness. We then apply the decentralized P-PDZD to the optimal voltage control problem in power distribution systems with square probing signals, and the simulation results verify the optimality, robustness, and adaptivity of the P-PDZD method.
Xin Chen, Jorge I. Poveda, Na Li. Wed, 22 Jun 2022 00:00:00 GMT

Accelerated first-order methods for a class of semidefinite programs
https://scholar.archive.org/work/urjvao37q5ge5ioqcdv36lolxm
This paper introduces a new storage-optimal first-order method (FOM), CertSDP, for solving a special class of semidefinite programs (SDPs) to high accuracy. The class of SDPs that we consider, the exact QMP-like SDPs, is characterized by low-rank solutions, a priori knowledge of the restriction of the SDP solution to a small subspace, and standard regularity assumptions such as strict complementarity. Crucially, we show how to use a certificate of strict complementarity to construct a low-dimensional strongly convex minimax problem whose optimizer coincides with a factorization of the SDP optimizer. From an algorithmic standpoint, we show how to construct the necessary certificate and how to solve the minimax problem efficiently. We accompany our theoretical results with preliminary numerical experiments suggesting that CertSDP significantly outperforms current state-of-the-art methods on large sparse exact QMP-like SDPs.
Alex L. Wang, Fatma Kilinc-Karzan. Wed, 01 Jun 2022 00:00:00 GMT

Level Constrained First Order Methods for Function Constrained Optimization
https://scholar.archive.org/work/upe5xvnrqveuzhl7y5mwlwo6wi
We present a new feasible proximal gradient method for constrained optimization where both the objective and constraint functions are given by the sum of a smooth, possibly nonconvex function and a convex simple function. The algorithm converts the original problem into a sequence of convex subproblems. Either exact or approximate solutions of convex subproblems can be computed efficiently in many cases. For the inexact case, computing the solution of the subproblem requires evaluation of at most one gradient/function-value of the original objective and constraint functions. An important feature of the algorithm is the constraint level parameter. By carefully increasing this level for each subproblem, we provide a simple solution to overcome the challenge of bounding the Lagrangian multipliers, and show that the algorithm follows a strictly feasible solution path until convergence to a stationary point. Finally, we develop a simple, proximal gradient descent type analysis, showing that the complexity bound of this new algorithm is comparable to that of gradient descent in the unconstrained setting, which is new in the literature. Exploiting this new design and analysis technique, we extend our algorithms to some more challenging constrained optimization problems where 1) the objective is a stochastic or finite-sum function, and 2) structured nonsmooth functions replace smooth components of both objective and constraint functions. Complexity results for these problems also seem to be new in the literature. We also show that our method can be applied to convex function constrained problems, where we obtain complexities similar to the proximal gradient method.
Digvijay Boob, Qi Deng, Guanghui Lan. Mon, 16 May 2022 00:00:00 GMT

Efficient Algorithms for Minimizing Compositions of Convex Functions and Random Functions and Its Applications in Network Revenue Management
https://scholar.archive.org/work/w46pk45a4ngttktbdirvn3zoz4
In this paper, we study a class of nonconvex stochastic optimization in the form of min_x∈𝒳 F(x):=𝔼_ξ [f(ϕ(x,ξ))], where the objective function F is a composition of a convex function f and a random function ϕ. Leveraging an (implicit) convex reformulation via a variable transformation u=𝔼[ϕ(x,ξ)], we develop stochastic gradient-based algorithms and establish their sample and gradient complexities for achieving an ϵ-global optimal solution. Interestingly, our proposed Mirror Stochastic Gradient (MSG) method operates only in the original x-space using gradient estimators of the original nonconvex objective F and achieves 𝒪̃(ϵ^-2) sample and gradient complexities, which matches the lower bounds for solving stochastic convex optimization problems. Under booking limits control, we formulate the air-cargo network revenue management (NRM) problem with random two-dimensional capacity, random consumption, and routing flexibility as a special case of the stochastic nonconvex optimization, where the random function ϕ(x,ξ)=x∧ξ, i.e., the random demand ξ truncates the booking limit decision x. Extensive numerical experiments demonstrate the superior performance of our proposed MSG algorithm for booking limit control, with higher revenue and lower computation cost than state-of-the-art bid-price-based control policies, especially when the variance of random capacity is large. Keywords: stochastic nonconvex optimization, hidden convexity, air-cargo network revenue management, gradient-based algorithms.
Xin Chen, Niao He, Yifan Hu, Zikun Ye. Tue, 03 May 2022 00:00:00 GMT

A Survey of Decentralized Online Learning
https://scholar.archive.org/work/6rxqmz5o2nhx3gkyn7imgwcxde
Decentralized online learning (DOL) has been increasingly researched in the last decade, mostly motivated by its wide applications in sensor networks, commercial buildings, robotics (e.g., decentralized target tracking and formation control), smart grids, deep learning, and so forth. In this problem, there is a network of agents who may be cooperative (i.e., decentralized online optimization) or noncooperative (i.e., online game) through local information exchanges, and the local cost function of each agent is often time-varying in dynamic and even adversarial environments. At each time, a decision must be made by each agent based on historical information at hand, without knowing future information on cost functions. Although this problem has been extensively studied in the last decade, a comprehensive survey is lacking. Therefore, this paper provides a thorough overview of DOL from the perspective of problem settings, communication, computation, and performance. In addition, some potential future directions are also discussed in detail.
Xiuxian Li, Lihua Xie, Na Li. Sun, 01 May 2022 00:00:00 GMT

Statistical Game Theory
https://scholar.archive.org/work/ncsivrjg5bhllotot6q7s7d6na
Game theory and statistics are two huge scientific disciplines that have played a significant role in the development of a wide variety of fields, including computer science, natural sciences, and social sciences. Traditionally, game theory has been used for decision making in strategic environments where multiple agents interact with each other. Statistics, on the other hand, is traditionally used for reasoning in non-adversarial settings where the samples are assumed to be generated by some stationary non-reactive source. Due to the contrasting settings in which game theory and statistics are often studied, these two disciplines have traditionally been regarded as disparate research areas. However, there is a great degree of commonality between the two fields. A surprisingly wide range of problems in classical and modern statistics have a game theoretic component to them. Classically, the mathematical philosophy of statistics, particularly frequentist statistics, posits that the source of samples is potentially adversarial. This resulted in the rich theory of minimax statistical games and estimation. Boosting algorithms, which are often regarded as best off-the-shelf classifiers, can be viewed as playing a zero-sum game against a weak learner. To allow for various departures of "test environment" from "train environments", the emerging field of robust machine learning allows for adversarial manipulation of the train or test environments. Finally, an emerging class of density estimators in modern machine learning use an adversarial "critic" of the density estimator to improve the final density estimation. The common theme among these classical and modern developments is an interplay between statistical estimation and multiplayer games. Statistical game theory is a unified analytical and algorithmic framework underlying all these classical and modern developments. 
This thesis aims to lay the foundations of statistical game theory to address the above-mentioned (and many more) statistical problems. While our prima [...]
Arun Sai Suggala. Thu, 21 Apr 2022 00:00:00 GMT

Distributed Nonconvex Optimization: Gradient-free Iterations and ϵ-Globally Optimal Solution
https://scholar.archive.org/work/zd5yo263g5hdvafmwsthitohqy
Distributed optimization utilizes local computation and communication to realize a global aim of optimizing the sum of local objective functions. It has gained wide attention for a variety of applications in networked systems. This paper addresses a class of constrained distributed nonconvex optimization problems involving univariate objectives, aiming to achieve global optimization without requiring local evaluations of gradients at every iteration. We propose a novel algorithm named CPCA, which combines Chebyshev polynomial approximation, average consensus, and polynomial optimization. The proposed algorithm is i) able to obtain ϵ-globally optimal solutions for any arbitrarily small given accuracy ϵ, ii) efficient in terms of both zeroth-order queries (i.e., evaluations of function values) and inter-agent communication, and iii) terminable in a distributed fashion when the specified precision requirement is met. The key insight is to use polynomial approximations to substitute for general objective functions, distribute these approximations via average consensus, and turn to solving an easier approximate version of the original problem. Due to the nice analytic properties of polynomials, this approximation not only facilitates efficient global optimization but also allows the design of gradient-free iterations to reduce cumulative costs of queries and achieve geometric convergence when nonconvex problems are solved. We provide a comprehensive analysis of the accuracy and complexities of the proposed algorithm.
Zhiyu He, Jianping He, Cailian Chen, Xinping Guan. Thu, 31 Mar 2022 00:00:00 GMT
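The Chebyshev-proxy idea for univariate objectives is simple to demonstrate with off-the-shelf tools. The sketch below shows only the approximate-then-optimize step on a single machine, not the distributed consensus part of CPCA:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_global_min(f, a, b, deg=50):
    """Globally minimize a univariate f on [a, b] via a Chebyshev proxy.

    Substitute a degree-`deg` Chebyshev interpolant for f, then minimize the
    polynomial exactly: the candidate minimizers are the real roots of its
    derivative plus the interval endpoints.
    """
    p = Chebyshev.interpolate(f, deg, domain=[a, b])
    r = p.deriv().roots()
    r = np.real(r[np.isreal(r)])
    r = r[(r >= a) & (r <= b)]
    cands = np.concatenate([r, [a, b]])
    return cands[np.argmin(f(cands))]

# Nonconvex toy objective with several local minima on [0, 4].
f = lambda x: np.sin(3.0 * x) + 0.3 * (x - 2.0) ** 2
x_star = cheb_global_min(f, 0.0, 4.0)   # global minimizer near x = 1.6
```

Because polynomials can be optimized globally and exchanged as coefficient vectors, this substitution is what lets agents trade gradient iterations for a one-shot consensus on approximations.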

Private and Robust Distributed Nonconvex Optimization via Polynomial Approximation
https://scholar.archive.org/work/zakqxzffmrgwtlagvjkcxemb74
There has been work that exploits polynomial approximation to solve distributed nonconvex optimization problems involving univariate objectives. This idea facilitates arbitrarily precise global optimization without requiring local evaluations of gradients at every iteration. Nonetheless, there remains a gap between existing theoretical guarantees and diverse practical requirements, e.g., privacy preservation and robustness to network imperfections. To fill this gap and keep the above strengths, we propose a Private and Robust Chebyshev-Proxy-based distributed Optimization Algorithm (PR-CPOA). Specifically, to ensure both accuracy of solutions and privacy of local objectives, we design a new privacy-preserving mechanism. This mechanism leverages the randomness in blockwise insertions of perturbed vector states and hence provides an improved privacy guarantee in the scope of (α,β)-data-privacy. Furthermore, to gain robustness against various network imperfections, we use the push-sum consensus protocol as a backbone, discuss its specific enhancements, and evaluate the performance of the proposed algorithm accordingly. Thanks to the purely consensus-type iterations, we avoid the privacy-accuracy trade-off and the burden of selecting appropriate step sizes in different settings. We provide a rigorous analysis of the accuracy, privacy, and complexity. It is shown that the advantages brought by the idea of polynomial approximation are maintained when all the above requirements exist.
Zhiyu He, Jianping He, Cailian Chen, Xinping Guan. Thu, 31 Mar 2022 00:00:00 GMT

Time-Varying Optimization of Networked Systems with Human Preferences
https://scholar.archive.org/work/rntnu7qdkbczneldkju7gyg6ue
This paper considers a time-varying optimization problem associated with a network of systems, with each of the systems shared by (and affecting) a number of individuals. The objective is to minimize cost functions associated with the individuals' preferences, which are unknown, subject to time-varying constraints that capture physical or operational limits of the network. To this end, the paper develops a distributed online optimization algorithm with concurrent learning of the cost functions. The cost functions are learned on-the-fly based on the users' feedback (provided at irregular intervals) by leveraging tools from shape-constrained Gaussian Processes. The online algorithm is based on a primal-dual method, and acts effectively in a closed-loop fashion where: i) users' feedback is utilized to estimate the cost, and ii) measurements from the network are utilized in the algorithmic steps to bypass the need for sensing of (unknown) exogenous inputs of the network. The performance of the algorithm is analyzed in terms of dynamic network regret and constraint violation. Numerical examples are presented in the context of real-time optimization of distributed energy resources.
Ana M. Ospina, Andrea Simonetto, Emiliano Dall'Anese. Fri, 11 Mar 2022 00:00:00 GMT

A Novel Convergence Analysis for Algorithms of the Adam Family and Beyond
https://scholar.archive.org/work/3i6j6snx7fbivbg33mrikvjbma
Why does the original analysis of Adam fail, yet Adam still converges very well in practice on a broad range of problems? There are still some mysteries about Adam that have not been unraveled. This paper provides a novel non-convex analysis of Adam and its many variants to uncover some of these mysteries. Our analysis shows that an increasing or large enough "momentum" parameter for the first-order moment, as used in practice, is sufficient to ensure Adam and its many variants converge under a mild boundedness condition on the adaptive scaling factor of the step size. In contrast, the original problematic analysis of Adam uses a momentum parameter that decreases to zero, which is the key reason it diverges on some problems. To the best of our knowledge, this is the first time the gap between analysis and practice is bridged. Our analysis also yields more insights for practical implementations of Adam, e.g., increasing the momentum parameter in a stagewise manner in accordance with a stagewise decreasing step size would help improve convergence. Our analysis of the Adam family is modular, so it can be (and has been) extended to solving other optimization problems, e.g., compositional, min-max, and bi-level problems. As an interesting yet non-trivial use case, we present an extension for solving non-convex min-max optimization in order to address a gap in the literature that either requires a large batch or has double loops. Our empirical studies corroborate the theory and also demonstrate the effectiveness in solving min-max problems.
Zhishuai Guo, Yi Xu, Wotao Yin, Rong Jin, Tianbao Yang. Tue, 22 Feb 2022 00:00:00 GMT
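For reference, the Adam iteration under discussion maintains exponential moving averages of the gradient and its square. The sketch below uses the large constant first-moment parameter (beta1 = 0.9) that the abstract argues is sufficient, on a toy deterministic problem of our own choosing:

```python
import numpy as np

def adam(grad, x0, lr=0.02, beta1=0.9, beta2=0.999, eps=1e-8, n_iters=2000):
    """Adam with a large, constant first-moment parameter beta1."""
    x = x0.copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g          # first moment (momentum)
        v = beta2 * v + (1.0 - beta2) * g ** 2     # second moment (scaling)
        m_hat = m / (1.0 - beta1 ** t)             # bias corrections
        v_hat = v / (1.0 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy deterministic problem: minimize (x - 3)^2; x drifts toward 3.
x = adam(lambda x: 2.0 * (x - 3.0), np.zeros(1))
```

Note the contrast with the original analysis, which would require beta1 to decay to zero over the iterations rather than stay at 0.9.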

Failure Probability Constrained AC Optimal Power Flow
https://scholar.archive.org/work/cgreeql4xbgq5fbufd5twpcfgm
Despite cascading failures being the central cause of blackouts in power transmission systems, existing operational and planning decisions are made largely by ignoring their underlying cascade potential. This paper posits a reliability-aware AC Optimal Power Flow formulation that seeks to design a dispatch point which has a low operator-specified likelihood of triggering a cascade starting from any single component outage. By exploiting a recently developed analytical model of the probability of component failure, our Failure Probability-constrained ACOPF (FP-ACOPF) utilizes the system's expected first failure time as a smoothly tunable and interpretable signature of cascade risk. We use techniques from bilevel optimization and numerical linear algebra to efficiently formulate and solve the FP-ACOPF using off-the-shelf solvers. Extensive simulations on the IEEE 118-bus case show that, when compared to the unconstrained and N-1 security-constrained ACOPF, our probability-constrained dispatch points can significantly lower the probabilities of long severe cascades and of large demand losses, while incurring only minor increases in total generation costs.
Anirudh Subramanyam, Jacob Roth, Albert Lam, Mihai Anitescu. Mon, 24 Jan 2022 00:00:00 GMT

Model-Free Nonlinear Feedback Optimization
https://scholar.archive.org/work/l3vhvt6ke5bdjb7mu3meeo5n3y
Feedback optimization is a control paradigm that enables physical systems to autonomously reach efficient operating points. Its central idea is to interconnect optimization iterations in closed loop with the physical plant. Since iterative gradient-based methods are extensively used to achieve optimality, feedback optimization controllers typically require knowledge of the steady-state sensitivity of the plant, which may not be easily accessible in some applications. In contrast, in this paper we develop a model-free feedback controller for efficient steady-state operation of general dynamical systems. The proposed design consists of updating control inputs via gradient estimates constructed from evaluations of the nonconvex objective at the current input and at the measured output. We study the dynamic interconnection of the proposed iterative controller with a stable nonlinear discrete-time plant. For this setup, we characterize the optimality and the stability of the closed-loop behavior as functions of the problem dimension, the number of iterations, and the rate of convergence of the physical plant. To handle general constraints that affect multiple inputs, we enhance the controller with Frank-Wolfe type updates.
Zhiyu He, Saverio Bolognani, Jianping He, Florian Dörfler, Xinping Guan. Fri, 07 Jan 2022 00:00:00 GMT
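The closed-loop principle can be caricatured in a few lines. The static plant and quadratic cost below are hypothetical stand-ins (the paper treats dynamic nonlinear plants); the point is that the controller touches only measured outputs at probed inputs, never the plant model:

```python
import numpy as np

def feedback_step(u, measure, cost, delta=1e-3, lr=0.1):
    """One model-free feedback optimization step.

    measure(u) returns the plant output for input u; the plant map itself is
    unknown to the controller. The gradient of cost(measure(u)) is estimated
    coordinate-wise from probed measurements only.
    """
    grad = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u)
        e[i] = delta
        grad[i] = (cost(measure(u + e)) - cost(measure(u - e))) / (2.0 * delta)
    return u - lr * grad

# Hypothetical unknown static plant and a known steady-state cost.
plant = lambda u: np.array([2.0 * u[0] + u[1], u[1] - 1.0])
cost = lambda y: np.sum(y ** 2)
u = np.zeros(2)
for _ in range(200):
    u = feedback_step(u, plant, cost)
# u approaches (-0.5, 1.0), which drives the plant output to zero.
```

In a true feedback optimization loop the probes would be applied to the running plant and the measurements would implicitly account for disturbances, which is what removes the need for sensing exogenous inputs.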