IA Scholar Query: Branch-and-reduce exponential/FPT algorithms in practice: A case study of vertex cover.
https://scholar.archive.org/
Internet Archive Scholar query results feed (info@archive.org), Thu, 15 Sep 2022 00:00:00 GMT

Parameterized algorithmics for time-evolving structures: temporalizing and multistaging
https://scholar.archive.org/work/uyaoernzvvb7rmwysi2x6b5yzi
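The results in this entry revolve around temporal paths: paths whose edges appear in non-decreasing time order. As a minimal illustration of that notion, temporal reachability from a source can be computed by scanning time-edges; the representation and function name below are illustrative, not taken from the thesis.

```python
# Temporal reachability: which vertices does `source` reach via paths
# whose edge timestamps are non-decreasing along the path?
def temporal_reachable(time_edges, source):
    # earliest[v] = smallest arrival time at v found so far
    earliest = {source: 0}
    edges = sorted(time_edges, key=lambda e: e[2])
    changed = True
    while changed:  # repeat to handle several edges sharing a timestamp
        changed = False
        for u, v, t in edges:
            if u in earliest and earliest[u] <= t and earliest.get(v, float("inf")) > t:
                earliest[v] = t
                changed = True
    return set(earliest)

# Directed time-edges (u, v, t); for undirected graphs add both directions.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1)]   # (2, 3, 1) appears too early to be used
print(temporal_reachable(edges, 0))          # -> {0, 1, 2}
```

Note that vertex 3 is unreachable: the only edge into it exists at time 1, before any path arrives at vertex 2.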
The thesis studies temporal graph problems and multistage problems. Since these problems are typically computationally hard, the focus is on developing fast exact (FPT) algorithms.

Temporal graph problems. A temporal graph is a graph whose edge set changes over time. An edge present at a specific time step is called a time-edge. One of our main contributions is the introduction of a set of parameters tailored to temporal graph problems. We focus mainly on four problems on temporal graphs.

Minimizing Reachability by Delaying. Given a temporal graph, a set of source vertices, and three integers k, r, and δ, the problem Minimizing Temporal Reachability by Delaying asks whether we can delay at most k time-edges by δ time steps (i.e., move those edges δ time steps into the future) such that the sources can reach at most r vertices via temporal paths (i.e., paths using edges appearing in non-decreasing time order). Our main contribution here is an algorithm running in O(r! · k · |G|) time, where |G| is the size of the temporal graph. This stands in contrast to the W[1]-hardness, when parameterized by r, of the problem of deleting instead of delaying time-edges.

Restless Temporal Paths. A restless temporal path is a temporal path that may stay only a bounded amount of time at any one vertex. Our main contribution here is a randomized algorithm that finds a restless temporal path of length at most k from vertex s to vertex z in 4^ℓ · |G|^O(1) time, where ℓ is the difference between k and the length of a shortest temporal path from s to z. Moreover, we show that finding restless temporal paths is fixed-parameter tractable when parameterized by the timed feedback vertex number (a temporal version of the classical feedback vertex number, introduced in this thesis). This stands in contrast to the W[1]-hardness when parameterized by the feedback vertex number of the underlying graph.

Temporal Separation.
A temporal separator is a vertex set that intersects all temporal paths between two distinguished vertices. We co [...]

Philipp Zschoche (Technische Universität Berlin), Rolf Niedermeier. Thu, 15 Sep 2022 00:00:00 GMT

Makespan Scheduling of Unit Jobs with Precedence Constraints in O(1.995^n) time
https://scholar.archive.org/work/jjyjuhkpzjd4vi4np24iwrrdye
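As context for this entry: a brute-force baseline in the spirit of the O^*(2^n) algorithms the paper improves upon can be sketched as a memoized dynamic program over subsets of completed jobs. The sketch below is an exhaustive baseline for tiny acyclic instances only, not the paper's algorithm; for unit jobs, an exchange argument lets each time slot run a maximum number of available jobs. The encoding is illustrative.

```python
from functools import lru_cache
from itertools import combinations

def makespan(n, prec, m):
    """Minimum makespan for n unit jobs on m identical machines.
    prec = [(a, b), ...] means job a must finish before job b starts;
    prec must be acyclic. Exhaustive baseline for tiny instances only."""
    preds = [0] * n                      # bitmask of predecessors of each job
    for a, b in prec:
        preds[b] |= 1 << a
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def best(done):                      # done = bitmask of finished jobs
        if done == full:
            return 0
        avail = [j for j in range(n)
                 if not done >> j & 1 and preds[j] & done == preds[j]]
        k = min(m, len(avail))           # fill the slot maximally (exchange argument)
        return 1 + min(best(done | sum(1 << j for j in batch))
                       for batch in combinations(avail, k))
    return best(0)

# Two independent chains 0->2 and 1->3 on two machines need two rounds.
print(makespan(4, [(0, 2), (1, 3)], m=2))  # -> 2
```

Which jobs to run in each slot is what makes the problem hard; the DP simply tries every maximal choice.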
In a classical scheduling problem, we are given a set of n jobs of unit length along with precedence constraints, and the goal is to find a schedule of these jobs on m identical machines that minimizes the makespan. This problem is well known to be NP-hard for an unbounded number of machines; in standard three-field notation, it is P|prec, p_j=1|C_max. We present an algorithm for this problem that runs in O(1.995^n) time. Before our work, even for m = 3 machines the best known algorithms ran in O^*(2^n) time, whereas our algorithm works even when the number of machines m is unbounded. A crucial ingredient of our approach is an algorithm whose runtime is only single-exponential in the vertex cover number of the comparability graph of the precedence constraints. It relies heavily on insights from a classical result by Dolev and Warmuth (Journal of Algorithms 1984) for precedence graphs without long chains.

Jesper Nederlof, Céline M. F. Swennenhuis, Karol Węgrzycki. Thu, 04 Aug 2022 00:00:00 GMT

There and Back Again: On Applying Data Reduction Rules by Undoing Others
https://scholar.archive.org/work/ogv4xj27yrcdncmsrmoiqkbq7q
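As a concrete example of the data reduction rules this entry studies: for Vertex Cover, the classical degree-one rule puts the neighbor of a pendant vertex into the cover. A minimal sketch of exhaustively applying the degree-zero and degree-one rules follows; the graph encoding and names are illustrative, not the paper's code.

```python
def reduce_degree_one(adj):
    """Exhaustively apply the degree-0 and degree-1 Vertex Cover rules.
    adj: dict vertex -> set of neighbours (modified in place).
    Returns the partial cover forced by the rules; whatever remains in
    adj is irreducible with respect to these two rules."""
    cover = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue                   # already deleted this pass
            if not adj[v]:                 # degree 0: drop isolated vertex
                del adj[v]
                changed = True
            elif len(adj[v]) == 1:         # degree 1: take the neighbour
                (u,) = adj[v]
                cover.add(u)
                for w in adj.pop(u):       # delete u and its incident edges
                    adj[w].discard(u)
                changed = True
    return cover

# The path 0-1-2-3-4 reduces completely; the rules force the cover {1, 3}.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(reduce_degree_one(adj))  # -> {1, 3}
```

On a triangle the rules do not fire at all, leaving an irreducible instance; the paper's backward rules aim to escape exactly such dead ends.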
Data reduction rules are an established method in the algorithmic toolbox for tackling computationally challenging problems. A data reduction rule is a polynomial-time algorithm that, given a problem instance as input, outputs an equivalent, typically smaller instance of the same problem. Applying data reduction rules during preprocessing often shrinks instances considerably, or even solves them outright. Commonly, these rules are applied exhaustively and in some fixed order to obtain irreducible instances, and it has often been observed that changing the order of the rules yields different irreducible instances. We propose to "undo" data reduction rules on irreducible instances, making them larger, and then to apply data reduction rules again to shrink them. We show that this somewhat counter-intuitive approach can lead to significantly smaller irreducible instances. The process of undoing data reduction rules is not limited to "rolling back" rules applied to the instance during preprocessing. Instead, we formulate so-called backward rules, which essentially undo a data reduction rule without using any information about which rules were applied previously. Based on the example of Vertex Cover, we propose two methods that apply backward rules to shrink instances further. Our experiments show that, in this way, smaller irreducible instances can be computed for real-world graphs from the SNAP and DIMACS datasets.

Aleksander Figiel, Vincent Froese, André Nichterlein, Rolf Niedermeier. Wed, 29 Jun 2022 00:00:00 GMT

Kernelization for Treewidth-2 Vertex Deletion
https://scholar.archive.org/work/quyqfvxnd5awljfugpnplvefm4
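Background for this entry: a graph has treewidth at most two iff it has no K_4 minor, and such graphs can be recognized by repeatedly deleting vertices of degree at most one and smoothing degree-two vertices (joining their two neighbors). A graph reduces to the empty graph this way iff its treewidth is at most two, since any graph with minimum degree at least three contains a K_4 minor. A minimal recognition sketch, with an illustrative encoding:

```python
def treewidth_at_most_two(adj):
    """Decide treewidth <= 2 (equivalently: no K_4 minor) by reduction.
    adj: dict vertex -> set of neighbours (simple graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    queue = [v for v in adj if len(adj[v]) <= 2]
    while queue:
        v = queue.pop()
        if v not in adj or len(adj[v]) > 2:
            continue
        ns = adj.pop(v)
        for w in ns:
            adj[w].discard(v)
        if len(ns) == 2:                          # smooth: join the two neighbours
            a, b = ns
            adj[a].add(b)
            adj[b].add(a)
        for w in ns:                              # degrees changed; re-examine
            if w in adj and len(adj[w]) <= 2:
                queue.append(w)
    return not adj    # reduced to the empty graph iff no K_4 minor

k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
c5 = {v: {(v - 1) % 5, (v + 1) % 5} for v in range(5)}
print(treewidth_at_most_two(k4), treewidth_at_most_two(c5))  # -> False True
```

K_4 itself has minimum degree three, so no rule applies and the test correctly rejects it.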
The Treewidth-2 Vertex Deletion problem asks whether a set of at most t vertices can be removed from a graph such that the resulting graph has treewidth at most two. A graph has treewidth at most two if and only if it does not contain a K_4 minor; hence, this problem corresponds to the NP-hard ℱ-Minor Cover problem with ℱ = {K_4}. For any variant of the ℱ-Minor Cover problem where ℱ contains a planar graph, a polynomial kernel is known to exist, i.e., a preprocessing routine that in polynomial time outputs an equivalent instance of size t^O(1). However, this proof is non-constructive: it does not yield an explicit bound on the kernel size. The {K_4}-Minor Cover problem is the simplest variant of the ℱ-Minor Cover problem with an unknown kernel size. To develop a constructive kernelization algorithm, we present a new method to decompose graphs into near-protrusions such that the near-protrusions in this new decomposition can be reduced using elementary reduction rules. Our method extends the 'approximation and tidying' framework by van Bevern et al. [Algorithmica 2012] to provide guarantees stronger than those of both this framework and a regular protrusion decomposition. Furthermore, we provide extensions of the elementary reduction rules used by the {K_4, K_{2,3}}-Minor Cover kernelization algorithm introduced by Donkers et al. [IPEC 2021]. Using the new decomposition method and reduction rules, we obtain a kernel consisting of O(t^41) vertices, the first constructive kernel for this problem. This kernel is a step towards more concrete kernelization bounds for the ℱ-Minor Cover problem where ℱ contains a planar graph, and our decomposition provides a potential direction for achieving them.

Jeroen L.G. Schols. Fri, 18 Mar 2022 00:00:00 GMT

Lossy Planarization: A Constant-Factor Approximate Kernelization for Planar Vertex Deletion
https://scholar.archive.org/work/377llva5ajasfplsbob5tng6gi
In the F-minor-free deletion problem we want to find a minimum vertex set in a given graph that intersects all minor models of graphs from the family F. The Vertex Planarization problem is a special case of F-minor-free deletion for the family F = {K_5, K_{3,3}}. Whenever the family F contains at least one planar graph, F-minor-free deletion is known to admit a constant-factor approximation algorithm and a polynomial kernelization [Fomin, Lokshtanov, Misra, and Saurabh, FOCS'12]. Vertex Planarization is arguably the simplest setting in which F contains no planar graph, and the existence of a constant-factor approximation or a polynomial kernelization remains a major open problem. In this work we show that Vertex Planarization admits an algorithm that combines both approaches: we present a polynomial A-approximate kernelization, for some constant A > 1, based on the framework of lossy kernelization [Lokshtanov, Panolan, Ramanujan, and Saurabh, STOC'17]. Simply put, given a graph G and an integer k, we show how to compute a graph G' on poly(k) vertices so that any B-approximate solution to G' can be lifted to an (A·B)-approximate solution to G, as long as A·B·OPT(G) ≤ k. To achieve this, we develop a framework for sparsification of planar graphs that approximately preserves all separators and near-separators between subsets of a given terminal set. Our result yields an improvement over the state-of-the-art approximation algorithms for Vertex Planarization. The problem admits a polynomial-time O(n^ε)-approximation algorithm, for any ε > 0, and a quasi-polynomial-time (log n)^O(1)-approximation algorithm, both randomized [Kawarabayashi and Sidiropoulos, FOCS'17]. By pipelining these algorithms with our approximate kernelization, we improve the approximation factors to O(OPT^ε) and (log OPT)^O(1), respectively.

Bart M. P. Jansen, Michał Włodarczyk. Fri, 04 Feb 2022 00:00:00 GMT
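The approximate-kernelization guarantee described in the abstract above can be stated compactly; the notation follows the abstract, with S denoting the solution lifted from S' back to G.

```latex
% A-approximate kernelization for Vertex Planarization (A > 1 a constant):
% from (G, k), compute G' with |V(G')| = \mathrm{poly}(k) such that, for
% every B-approximate solution S' to G', the lifted solution S satisfies
\[
  \mathrm{cost}(S) \;\le\; A \cdot B \cdot \mathrm{OPT}(G)
  \qquad \text{whenever } A \cdot B \cdot \mathrm{OPT}(G) \le k .
\]
```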