Theory of Randomized Optimization Heuristics (Dagstuhl Reports 19431)
Carola Doerr, Carlos M. Fonseca, Tobias Friedrich, Xin Yao, Michael Wagner
2020
Dagstuhl Reports
This report documents the activities of Dagstuhl Seminar 19431 on "Theory of Randomized Optimization Heuristics". 46 researchers from Europe, Australia, Asia, and North America came together to discuss ongoing research. This tenth edition of the seminar series had three focus topics: (1) the relation between optimal control and heuristic optimization, (2) benchmarking optimization heuristics, and (3) the interfaces between continuous and discrete optimization. Several breakout sessions provided ample opportunity to brainstorm on recent developments in the research landscape, to discuss and solve open problems, and to kick-start new research initiatives.

Seminar: October 20-25, 2019 - http://www.dagstuhl.de/19431
2012 ACM Subject Classification: Theory of computation → Bio-inspired optimization; Theory of computation → Evolutionary algorithms; Theory of computation → Optimization with randomized search heuristics
License: Creative Commons BY 3.0 Unported

Efficient optimization techniques affect our personal, industrial, and academic environments through the supply of well-designed processes that enable a best-possible use of our limited resources. Despite significant research efforts, most real-world problems remain too complex to admit exact analytical or computational solutions. Therefore, heuristic approaches are required that trade the accuracy of a solution for a simple algorithmic structure, fast running times, or an otherwise efficient use of computational resources. Randomized optimization heuristics form a highly successful and thus frequently applied class of such problem solvers. Among the best-known representatives of this class are stochastic local search methods, Monte Carlo techniques, genetic and evolutionary algorithms, and swarm intelligence techniques. The theory of randomized optimization heuristics strives to set heuristic approaches on firm ground by providing a sound mathematical foundation for this important class of algorithms.
Key challenges in this research area comprise optimization under uncertainty, parameter selection (most randomized optimization heuristics are parametrized), the role and usefulness of so-called crossover operations (i.e., the idea of creating high-quality solution candidates by recombining previously evaluated ones), and, more generally, performance guarantees for advanced heuristics such as population-based techniques, estimation-of-distribution algorithms, differential evolution, and others. Dagstuhl Seminar 19431 on "Theory of Randomized Optimization Heuristics" continued a seminar series originally devoted to the "Theory of Evolutionary Algorithms". Today the field extends far beyond evolutionary algorithms, a development that previous Dagstuhl seminars have significantly influenced. While the previous seminar 17191 had a very strong focus on methodological questions and on the techniques needed to analyze stochastic optimization heuristics, two of the three main focus topics of the present seminar were chosen to foster interaction with two strongly linked research communities that had not previously been represented in the seminar series: stochastic control theory and empirical benchmarking of randomized optimization heuristics. Recent work has shown that there is a very close link between the theory of randomized optimization heuristics and stochastic control theory, both regarding the nature of the "systems" of interest and the analytical techniques that have been developed in the two communities.
At the seminar, we explored these affinities through the invited presentations of Luc Pronzato and Vivek Borkar, through contributed talks highlighting different aspects studied in both communities (e.g., the presentation on one-shot optimization by Olivier Teytaud), and through focused breakout sessions, in particular the one fully dedicated to the connection between the analysis of evolution strategies and estimation-of-distribution algorithms on the one hand and the analysis of stochastic approximation and ordinary differential equations on the other, in which interesting similarities and differences between the two fields were identified. The second focus topic of Dagstuhl Seminar 19431 was benchmarking of optimization heuristics. Benchmarking plays a central role in empirical performance assessment. However, it can also be an essential tool for theoreticians to develop their mathematically derived ideas into practical algorithms, thereby encouraging a principled discussion between empirically driven and theoretically driven researchers. Benchmarking was a central topic in several breakout sessions, for example those on "Competitions and Benchmarking" and on "Algorithm Selection and Configuration", but also in the breakout session on "Multi-Objective Optimization". A survey of best practices in empirical benchmarking was kick-started in the breakout session on "Benchmarking: Best Practices and Open Issues". Discussing the mathematical challenges arising in the performance analysis of randomized heuristics has always been a central topic of this Dagstuhl seminar series. Among other achievements, important connections between continuous and discrete optimization have been established, most notably in the form of drift theorems, which are typically applicable regardless of the nature of the search space.
Apart from such methodological advances, we have also observed two other trends bridging discrete and continuous optimization: (i) an increased interest in analyzing parameter-dependent performance guarantees, and (ii) recent advances in the study of estimation-of-distribution algorithms, which borrow techniques from both discrete and continuous optimization theory. These topics were discussed in the invited talk by Youhei Akimoto, in several contributed presentations, and in the breakout sessions on "Measuring Optimization Progress in an Invariant Way for Comparison-Based Algorithms" and on "Mixed-Integer Optimization".
doi:10.4230/dagrep.9.10.61
dblp:journals/dagstuhl-reports/DoerrFFY19