IA Scholar Query: Approximating Disjoint-Path Problems Using Greedy Algorithms and Packing Integer Programs.
https://scholar.archive.org/
Internet Archive Scholar query results feed. info@archive.org. fatcat-scholar. Tue, 06 Sep 2022 00:00:00 GMT. https://scholar.archive.org/help

OASIcs, Volume 106, ATMOS 2022, Complete Volume
https://scholar.archive.org/work/k3l2xowdkvfxdelwxhf2xrcp6y
Mattia D'Emidio, Niels Lindner. work_k3l2xowdkvfxdelwxhf2xrcp6y. Tue, 06 Sep 2022 00:00:00 GMT

Average Sensitivity of the Knapsack Problem
https://scholar.archive.org/work/jb3q2chp6zd2pgd3gay2ve7pne
In resource allocation, we often require that the output allocation of an algorithm is stable against input perturbation because frequent reallocation is costly and untrustworthy. Varma and Yoshida (SODA'21) formalized this requirement for algorithms as the notion of average sensitivity. Here, the average sensitivity of an algorithm on an input instance is, roughly speaking, the average size of the symmetric difference of the output for the instance and that for the instance with one item deleted, where the average is taken over the deleted item. In this work, we consider the average sensitivity of the knapsack problem, a representative example of a resource allocation problem. We first show a (1-ε)-approximation algorithm for the knapsack problem with average sensitivity O(ε^{-1}log ε^{-1}). Then, we complement this result by showing that any (1-ε)-approximation algorithm has average sensitivity Ω(ε^{-1}). As an application of our algorithm, we consider the incremental knapsack problem in the random-order setting, where the goal is to maintain a good solution while items arrive one by one in a random order. Specifically, we show that for any ε > 0, there exists a (1-ε)-approximation algorithm with amortized recourse O(ε^{-1}log ε^{-1}) and amortized update time O(log n+f_ε), where n is the total number of items and f_ε > 0 is a value depending on ε.
Soh Kumabe, Yuichi Yoshida, Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, Grzegorz Herman. work_jb3q2chp6zd2pgd3gay2ve7pne. Thu, 01 Sep 2022 00:00:00 GMT

Approximation Algorithms for Round-UFP and Round-SAP
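The average-sensitivity definition in the knapsack entry above can be checked empirically: run an algorithm on an instance and on every instance with one item deleted, then average the size of the symmetric difference of the outputs. A minimal sketch, assuming a brute-force solver and toy instance of my own (this is not the paper's O(ε^{-1}log ε^{-1})-sensitivity algorithm):

```python
import itertools

def knapsack_bruteforce(weights, values, cap):
    """Optimal 0/1 knapsack by brute force; returns the chosen index set."""
    best, best_set = 0, frozenset()
    n = len(weights)
    for r in range(n + 1):
        for subset in itertools.combinations(range(n), r):
            if sum(weights[i] for i in subset) <= cap:
                v = sum(values[i] for i in subset)
                if v > best:
                    best, best_set = v, frozenset(subset)
    return best_set

def average_sensitivity(alg, weights, values, cap):
    """Average |alg(X) symdiff alg(X minus item i)| over deleted items i."""
    full = alg(weights, values, cap)
    total, n = 0.0, len(weights)
    for i in range(n):
        w = weights[:i] + weights[i + 1:]
        v = values[:i] + values[i + 1:]
        out = alg(w, v, cap)
        # re-index the reduced solution back to the original item indices
        out_orig = {j if j < i else j + 1 for j in out}
        total += len(full.symmetric_difference(out_orig))
    return total / n
```

For example, on weights [2, 3, 4], values [3, 4, 5], capacity 5, the brute-force solver has average sensitivity 2.0 — deleting either of the two packed items flips the whole solution, which is exactly the instability the paper's algorithm is designed to bound.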
https://scholar.archive.org/work/vwqrs4k7v5en3lh2gqrveuirqy
We study Round-UFP and Round-SAP, two generalizations of the classical Bin Packing problem that correspond to the unsplittable flow problem on a path (UFP) and the storage allocation problem (SAP), respectively. We are given a path with capacities on its edges and a set of jobs where for each job we are given a demand and a subpath. In Round-UFP, the goal is to find a packing of all jobs into a minimum number of copies (rounds) of the given path such that for each copy, the total demand of jobs on any edge does not exceed the capacity of the respective edge. In Round-SAP, the jobs are considered to be rectangles and the goal is to find a non-overlapping packing of these rectangles into a minimum number of rounds such that all rectangles lie completely below the capacity profile of the edges. We show that in contrast to Bin Packing, both problems do not admit an asymptotic polynomial-time approximation scheme (APTAS), even when all edge capacities are equal. However, for this setting, we obtain asymptotic (2+ε)-approximations for both problems. For the general case, we obtain an O(log log n)-approximation algorithm and an O(log log 1/δ)-approximation under (1+δ)-resource augmentation for both problems. For the intermediate setting of the no bottleneck assumption (i.e., the maximum job demand is at most the minimum edge capacity), we obtain an absolute 12- and an asymptotic (16+ε)-approximation algorithm for Round-UFP and Round-SAP, respectively.
Debajyoti Kar, Arindam Khan, Andreas Wiese, Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, Grzegorz Herman. work_vwqrs4k7v5en3lh2gqrveuirqy. Thu, 01 Sep 2022 00:00:00 GMT

Configuration Balancing for Stochastic Requests
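The Round-UFP model from the entry above (jobs with demands on subpaths, rounds as copies of the capacitated path) can be exercised with a naive first-fit heuristic that opens a new round whenever a job fits nowhere; a toy baseline with no approximation guarantee, and all names here are my own, not the paper's algorithm:

```python
def round_ufp_first_fit(capacities, jobs):
    """First-fit for Round-UFP.
    capacities: list c[e] for edges e = 0..m-1 of the path.
    jobs: list of (start_edge, end_edge, demand); the subpath is
    the edge interval [start_edge, end_edge].
    Returns a list of rounds, each a per-edge load vector."""
    rounds = []
    # largest demand first, a common bin-packing-style ordering
    for start, end, demand in sorted(jobs, key=lambda j: -j[2]):
        if any(demand > capacities[e] for e in range(start, end + 1)):
            raise ValueError("job demand exceeds an edge capacity on its subpath")
        for load in rounds:
            if all(load[e] + demand <= capacities[e] for e in range(start, end + 1)):
                for e in range(start, end + 1):
                    load[e] += demand
                break
        else:
            load = [0] * len(capacities)
            for e in range(start, end + 1):
                load[e] = demand
            rounds.append(load)
    return rounds
```

On a 2-edge path with capacities [3, 3] and jobs (0,1,2), (0,0,1), (1,1,1), first-fit packs everything into a single round with loads [3, 3].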
https://scholar.archive.org/work/xpjfnercircrldan23uipgvmoq
The configuration balancing problem with stochastic requests generalizes many well-studied resource allocation problems such as load balancing and virtual circuit routing. In it, we have m resources and n requests. Each request has multiple possible configurations, each of which increases the load of each resource by some amount. The goal is to select one configuration for each request to minimize the makespan: the load of the most-loaded resource. In our work, we focus on a stochastic setting, where we only know the distribution for how each configuration increases the resource loads, learning the realized value only after a configuration is chosen. We develop both offline and online algorithms for configuration balancing with stochastic requests. When the requests are known offline, we give a non-adaptive policy for configuration balancing with stochastic requests that O(log m/loglog m)-approximates the optimal adaptive policy. In particular, this closes the adaptivity gap for this problem as there is an asymptotically matching lower bound even for the very special case of load balancing on identical machines. When requests arrive online in a list, we give a non-adaptive policy that is O(log m) competitive. Again, this result is asymptotically tight due to information-theoretic lower bounds for very special cases (e.g., for load balancing on unrelated machines). Finally, we show how to leverage adaptivity in the special case of load balancing on related machines to obtain a constant-factor approximation offline and an O(loglog m)-approximation online. A crucial technical ingredient in all of our results is a new structural characterization of the optimal adaptive policy that allows us to limit the correlations between its decisions.
Franziska Eberle, Anupam Gupta, Nicole Megow, Benjamin Moseley, Rudy Zhou. work_xpjfnercircrldan23uipgvmoq. Mon, 29 Aug 2022 00:00:00 GMT

Lifted edges as connectivity priors for multicut and disjoint paths
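The configuration-balancing setup described above can be made concrete with a simple non-adaptive greedy that, per request, picks the configuration minimizing the resulting expected makespan. This is only an illustrative heuristic built from the problem statement (working on expected load increments, since realized values are observed only after committing), not the paper's O(log m/loglog m) policy; the function name and instance are hypothetical:

```python
def greedy_configuration_balancing(m, requests):
    """m: number of resources.
    requests: list of configuration lists; each configuration is a
    length-m tuple of *expected* load increments (the non-adaptive view:
    decisions use distributions, realized loads come later).
    Returns (chosen config index per request, final expected makespan)."""
    loads = [0.0] * m
    choice = []
    for configs in requests:
        best_cfg, best_makespan = None, float("inf")
        for idx, cfg in enumerate(configs):
            # expected makespan if this configuration is selected
            makespan = max(loads[r] + cfg[r] for r in range(m))
            if makespan < best_makespan:
                best_makespan, best_cfg = makespan, idx
        choice.append(best_cfg)
        cfg = configs[best_cfg]
        loads = [loads[r] + cfg[r] for r in range(m)]
    return choice, max(loads)
```

With two resources and two identical requests, each offering "put unit load on resource 0" or "on resource 1", the greedy spreads the load and achieves expected makespan 1.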
https://scholar.archive.org/work/edizj43isvflhhihrsapdwjlhu
This work studies graph decompositions and their representation by 0/1 labeling of edges. We study two problems. The first is multicut (MC), which represents decompositions of undirected graphs (clustering of nodes into connected components). The second is disjoint paths (DP) in directed acyclic graphs, where the clusters correspond to node-disjoint paths. Unlike an alternative representation by node labeling, the number of clusters is not part of the input but is fully determined by the costs of edges.

I would like to thank all my co-authors for a pleasant and constructive cooperation. Besides my supervisor Paul Swoboda, I would like to name especially Roberto Henschel, Timo Kaiser, Bjoern Andres, and Jan-Hendrik Lange for their major contribution to the shared publications that are part of this thesis. The publications could not have been realized without their part of the work. I would like to thank Bjoern Andres for his supervision and help during the work on our common paper. I would also like to mention Michal Rolinek, who helped us with our latest publication. I would like to thank Jiles Vreeken, Marcel Schulz and Markus List, who cooperated with me on a research project that is not part of this thesis. I am very grateful to Bernt Schiele, the director of our department, who provided me with good working conditions, fully supported me in combining my working duties with family, and found a solution in a difficult stage of my PhD study by finding a new supervisor. Also, other people at MPI and Saarland University helped me to organize my work and family life and helped me with administrative issues.
Andrea Hornakova, Universität des Saarlandes. work_edizj43isvflhhihrsapdwjlhu. Mon, 29 Aug 2022 00:00:00 GMT

Graph-Based Tests for Multivariate Covariate Balance Under Multi-Valued Treatments
https://scholar.archive.org/work/juzdy3qymjak7n7euh5curs3w4
We propose the use of non-parametric, graph-based tests to assess the distributional balance of covariates in observational studies with multi-valued treatments. Our tests utilize graph structures ranging from Hamiltonian paths that connect all of the data to nearest neighbor graphs that maximally separate the data into pairs. We consider algorithms that form minimal distance graphs, such as optimal Hamiltonian paths or non-bipartite matching, or approximate alternatives, such as greedy Hamiltonian paths or greedy nearest neighbor graphs. Extensive simulation studies demonstrate that the proposed tests are able to detect the misspecification of matching models that other methods miss. Contrary to intuition, we also find that tests run on well-formed approximate graphs do better in most cases than tests run on optimally formed graphs, and that a properly formed test on an approximate nearest neighbor graph performs best, on average. In a multi-valued treatment setting with breast cancer data, these graph-based tests can also detect imbalances otherwise missed by common matching diagnostics. We provide a new R package graphTest to implement these methods and reproduce our results.
Eric A. Dunipace. work_juzdy3qymjak7n7euh5curs3w4. Tue, 09 Aug 2022 00:00:00 GMT

Cardinality Minimization, Constraints, and Regularization: A Survey
https://scholar.archive.org/work/qbruvpkhpbd23ldgdligzqbyiu
We survey optimization problems that involve the cardinality of variable vectors in constraints or the objective function. We provide a unified viewpoint on the general problem classes and models, and give concrete examples from diverse application fields such as signal and image processing, portfolio selection, or machine learning. The paper discusses general-purpose modeling techniques and broadly applicable as well as problem-specific exact and heuristic solution approaches. While our perspective is that of mathematical optimization, a main goal of this work is to reach out to and build bridges between the different communities in which cardinality optimization problems are frequently encountered. In particular, we highlight that modern mixed-integer programming, which is often regarded as impractical due to commonly unsatisfactory behavior of black-box solvers applied to generic problem formulations, can in fact produce provably high-quality or even optimal solutions for cardinality optimization problems, even in large-scale real-world settings. Achieving such performance typically draws on the merits of problem-specific knowledge that may stem from different fields of application and, e.g., shed light on structural properties of a model or its solutions, or lead to the development of efficient heuristics; we also provide some illustrative examples.
Andreas M. Tillmann, Daniel Bienstock, Andrea Lodi, Alexandra Schwartz. work_qbruvpkhpbd23ldgdligzqbyiu. Mon, 08 Aug 2022 00:00:00 GMT

On the Reinhardt Conjecture and Formal Foundations of Optimal Control
https://scholar.archive.org/work/rxdljdx5hnf7jmix4rh3hxse2u
We describe a reformulation (following Hales (2017)) of a 1934 conjecture of Reinhardt on pessimal packings of convex domains in the plane as a problem in optimal control theory. Several structural results of this problem, including its Hamiltonian structure and Lax pair formalism, are presented. General solutions of this problem for constant control are presented and are used to prove that the Pontryagin extremals of the control problem are constrained to lie in a compact domain of the state space. We further describe the structure of the control problem near its singular locus, and prove that we recover the Pontryagin system of the multi-dimensional Fuller optimal control problem (with two-dimensional control) in this case. We show how this system admits logarithmic spiral trajectories when the control set is the circumscribing disk of the 2-simplex, with the associated control performing an infinite number of rotations on the boundary of the disk in finite time. We also describe formalization projects in foundational optimal control, viz., model-based and model-free Reinforcement Learning theory. Key ingredients that make these formalizations novel, viz., the Giry monad and contraction coinduction, are considered and some applications are discussed.
Koundinya Vajjha. work_rxdljdx5hnf7jmix4rh3hxse2u. Mon, 08 Aug 2022 00:00:00 GMT

Adaptive solver behavior in mixed-integer programming
https://scholar.archive.org/work/esijys24zrg6vk7tfl54dbcsdy
This thesis addresses general-purpose solution techniques for mixed-integer programs (MIPs), a paradigm which captures formulations of countless real-world optimization problems. Most state-of-the-art MIP solvers employ a version of the branch-and-bound (B&B) algorithm to solve a MIP instance to proven optimality, supported by numerous auxiliary components that contribute new solutions or improve the dual convergence. One cannot expect that all such components are equally effective on all possible instances from the tremendous range of MIP applications. Ideally, a solver adapts to a given MIP instance by concentrating the available computational budget on those components that work best. In this thesis, we develop adaptive algorithmic behavior for several such MIP solver components as well as the B&B search itself. We develop new notions of pseudo-cost reliability, namely relative-error reliability and hypothesis reliability, by computing confidence intervals and pairwise t-tests on branching candidates to dynamically decide if strong branching is necessary. We develop two heuristic frameworks, adaptive large neighborhood search and adaptive diving, that learn the most effective primal heuristics, inspired by selection strategies for the multi-armed bandit problem. The presented ideas are transferred to adaptive LP pricing to maximize LP throughput by learning the pricing strategy for the dual simplex algorithm online during the search. Our proposed adaptive algorithmic behavior extends beyond individual solving components to the B&B search as a whole. To this end, we partition the B&B search into a feasibility phase, an improvement phase, and a heuristically detected proof phase. We improve solver performance by emphasizing different components and search strategies in each phase. We propose new estimation techniques for the progress of the B&B search based on forecasting and machine learning techniques. We turn this tree-size estimation into a novel restart strategy of the B&B algo [...]
Gregor Christian Hendel, Technische Universität Berlin, Thorsten Koch. work_esijys24zrg6vk7tfl54dbcsdy. Tue, 02 Aug 2022 00:00:00 GMT

Contact and friction simulation for computer graphics
https://scholar.archive.org/work/a46z76uy3bawzjnjqbvigs372u
Efficient simulation of contact is of interest for numerous physics-based animation applications. For instance, virtual reality training, video games, rapid digital prototyping, and robotics simulation are all examples of applications that involve contact modeling and simulation. However, despite its extensive use in modern computer graphics, contact simulation remains one of the most challenging problems in physics-based animation. This course covers fundamental topics on the nature of contact modeling and simulation for computer graphics. Specifically, we provide mathematical details about formulating contact as a complementarity problem in rigid body and soft body animations. We briefly cover several approaches for contact generation using discrete collision detection. Then, we present a range of numerical techniques for solving the associated LCPs and NCPs. The advantages and disadvantages of each technique are discussed in a practical manner, along with best practices for implementation. Finally, we conclude the course with several advanced topics such as methods for soft body contact problems, barrier functions, and anisotropic friction modeling. Programming examples are provided in our appendix as well as on the course website to accompany the course notes.
Sheldon Andrews, Kenny Erleben, Zachary Ferguson. work_a46z76uy3bawzjnjqbvigs372u. Tue, 02 Aug 2022 00:00:00 GMT

Optimization Framework for Splitting DNN Inference Jobs over Computing Networks
https://scholar.archive.org/work/6rd5k4fnrzfoldh3fyy5mahuua
Ubiquitous artificial intelligence (AI) is considered one of the key services in 6G systems. AI services typically rely on deep neural networks (DNNs) requiring heavy computation. Hence, in order to support ubiquitous AI, it is crucial to provide a solution for offloading or distributing the computational burden due to DNNs, especially at end devices with limited resources. We develop an optimization framework for assigning the computation tasks of DNN inference jobs to computing resources in the network, so as to reduce the inference latency. To this end, we propose a layered graph model with which simple conventional routing jointly solves the problem of selecting nodes for computation and paths for data transfer between nodes. We show that using our model, the existing approaches to splitting DNN inference jobs can be equivalently reformulated as a routing problem that possesses better numerical properties. We also apply the proposed framework to derive algorithms for minimizing the end-to-end inference latency. We show through numerical evaluations that our new formulation can find a solution for DNN inference job distribution much faster than the existing formulation, and that our algorithms can select computing nodes and data paths adaptively to the computational attributes of given DNN inference jobs, so as to reduce the end-to-end latency.
Sehun Jung, Hyang-Won Lee. work_6rd5k4fnrzfoldh3fyy5mahuua. Tue, 26 Jul 2022 00:00:00 GMT

Greedy Algorithm for Multiway Matching with Bounded Regret
https://scholar.archive.org/work/l646j5d5dvds5ew62lvprcvlxu
In this paper we prove the efficacy of a simple greedy algorithm for a finite horizon online resource allocation/matching problem, when the corresponding static planning linear program (SPP) exhibits a non-degeneracy condition called the general position gap (GPG). The key intuition that we formalize is that the solution of the reward maximizing SPP is the same as a feasibility Linear Program restricted to the optimal basic activities, and under GPG this solution can be tracked with bounded regret by a greedy algorithm, i.e., without the commonly used technique of periodically resolving the SPP. The goal of the decision maker is to combine resources (from a finite set of resource types) into configurations (from a finite set of feasible configurations) where each configuration is specified by the number of resources consumed of each type and a reward. The resources are further subdivided into three types - offline (whose quantity is known and available at time 0), online-queueable (which arrive online and can be stored in a buffer), and online-nonqueueable (which arrive online and must be matched on arrival or lost). Under GPG we prove that, (i) our greedy algorithm gets bounded any-time regret of 𝒪(1/ϵ_0) for matching reward (ϵ_0 is a measure of the GPG) when no configuration contains both an online-queueable and an online-nonqueueable resource, and (ii) 𝒪(log t) expected any-time regret otherwise (we also prove a matching lower bound). By considering the three types of resources, our matching framework encompasses several well-studied problems such as dynamic multi-sided matching, network revenue management, online stochastic packing, and multiclass queueing systems.
Varun Gupta. work_l646j5d5dvds5ew62lvprcvlxu. Mon, 25 Jul 2022 00:00:00 GMT

Temporal graph exploration: restrictions and relaxations
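The resource/configuration vocabulary of the matching entry above (offline stock, online-queueable arrivals buffered until used, configurations consuming typed resources for a reward) can be mimicked by a toy greedy that, after each arrival, repeatedly forms the highest-reward configuration it can afford. This is a hypothetical sketch of the model only, not the tracking policy whose regret the paper analyzes:

```python
def greedy_multiway(offline, arrivals, configs):
    """offline: dict type -> count available at time 0.
    arrivals: sequence of online-queueable resource types (buffered).
    configs: list of (reward, needs) with needs: dict type -> count.
    Greedily forms the highest-reward affordable configuration after
    each arrival; returns total collected reward."""
    stock = dict(offline)
    buffer = {}
    reward = 0.0
    ranked = sorted(configs, key=lambda c: -c[0])  # highest reward first
    for t in arrivals:
        buffer[t] = buffer.get(t, 0) + 1
        made = True
        while made:
            made = False
            for r, needs in ranked:
                if all(stock.get(ty, 0) + buffer.get(ty, 0) >= k
                       for ty, k in needs.items()):
                    # pay from the buffer first, then from offline stock
                    for ty, k in needs.items():
                        use_buf = min(buffer.get(ty, 0), k)
                        buffer[ty] = buffer.get(ty, 0) - use_buf
                        stock[ty] = stock.get(ty, 0) - (k - use_buf)
                    reward += r
                    made = True
                    break
    return reward
```

With one offline "a", two arriving "b"s, and configurations {a,b} worth 5 and {b} worth 1, the greedy collects 5 + 1 = 6.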
https://scholar.archive.org/work/kxeukfp2efctjgg3en7gcwhpk4
This thesis considers the problem of exploring temporal graphs. A temporal graph G = ⟨G₁, ..., G_L⟩ of order n is a sequence of L undirected graphs (or layers) indexed by the timesteps t ∈ {1, ..., L}, such that V(G_t) = V(G) and E(G_t) ⊆ E(G) for some underlying graph G with order n. To explore G is to visit each vertex at least once via a sequence of edge-traversals (called an exploration schedule), with each consecutive edge traversed during a timestep strictly greater than the last. The arrival time of an exploration schedule is the timestep during which the last unvisited vertex is reached for the first time. There exists an algorithm producing exploration schedules with arrival time O(n²) for any always-connected (i.e., G_t is connected for all t ∈ {1, ..., L}) temporal graph, and an infinite family F of always-connected temporal graphs for which any exploration schedule has arrival time Ω(n²) [38, 86]. We isolate a number of characteristics held by the members of F and prove lower/upper bounds on the arrival time of exploration schedules for temporal graphs that are restricted from possessing them. First, we consider structural restrictions in which an input temporal graph has (1) degree upper bounded in each layer by some fixed value; and (2) at most k edges 'missing' from the underlying graph in each layer; subquadratic upper bounds are proved in each case. We then consider 'relaxed' exploration schedules that can traverse a finite number of edges (≥ 1) in each timestep, focusing on the cases when 2 or n/k traversals are allowed. We also consider, from a complexity standpoint, a number of relaxed problem variants, in which (1) fewer than n vertices are required to be explored by a candidate exploration schedule, and (2) an unlimited but finite number of edge traversals can be made by a candidate exploration schedule, providing both FPT-membership results and hardness/NP-completeness results.
Jakob T. Spooner. work_kxeukfp2efctjgg3en7gcwhpk4. Thu, 14 Jul 2022 00:00:00 GMT

Continual Learning with Deep Learning Methods in an Application-Oriented Context
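The definitions in the temporal-graph exploration entry above (layers, exploration schedule, arrival time) can be made concrete with a small validator that replays a schedule and reports its arrival time; a sketch under those stated definitions, with the function name and encoding my own:

```python
def arrival_time(layers, schedule, start):
    """layers: list of edge sets; layers[t] holds frozenset({u, v}) edges
    present at timestep t+1.
    schedule: list of (timestep, edge) with strictly increasing timesteps,
    each edge traversed from the current vertex.
    Returns the timestep at which the last vertex is first visited, or
    None if the schedule is invalid or does not explore every vertex."""
    vertices = {v for layer in layers for e in layer for v in e}
    visited = {start}
    cur, last_t = start, 0
    for t, edge in schedule:
        # timesteps must strictly increase, the edge must exist in that
        # layer, and it must be incident to the current vertex
        if t <= last_t or t > len(layers) or edge not in layers[t - 1] or cur not in edge:
            return None
        last_t = t
        (cur,) = edge - {cur}  # move across the edge
        visited.add(cur)
        if visited == vertices:
            return t
    return None
```

On a 3-vertex path whose two edges are present in both layers, the schedule "traverse {0,1} at t=1, {1,2} at t=2" starting at vertex 0 explores everything with arrival time 2.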
https://scholar.archive.org/work/am3ktmlmpfeftczaaf32kagnb4
Knowledge is deeply grounded in many computer-based applications. An important research area of Artificial Intelligence (AI) deals with the automatic derivation of knowledge from data. Machine learning offers the according algorithms. One area of research focuses on the development of biologically inspired learning algorithms. The respective machine learning methods are based on neurological concepts so that they can systematically derive knowledge from data and store it. One type of machine learning algorithm that can be categorized as a "deep learning" model is the Deep Neural Network (DNN). DNNs consist of multiple artificial neurons arranged in layers that are trained by using the backpropagation algorithm. These deep learning methods exhibit amazing capabilities for inferring and storing complex knowledge from high-dimensional data. However, DNNs are affected by a problem that prevents new knowledge from being added to an existing base. The ability to continuously accumulate knowledge is an important factor that contributed to evolution and is therefore a prerequisite for the development of strong AIs. The so-called "catastrophic forgetting" (CF) effect causes DNNs to immediately lose already derived knowledge after a few training iterations on a new data distribution. Only an energetically expensive retraining with the joint data distribution of past and new data enables the abstraction of the entire new set of knowledge. In order to counteract the effect, various techniques have been and are still being developed with the goal to mitigate or even solve the CF problem. These published CF avoidance studies usually imply the effectiveness of their approaches for various continual learning tasks. This dissertation is set in the context of continual machine learning with deep learning methods. The first part deals with the development of an ...
Benedikt Pfülb. work_am3ktmlmpfeftczaaf32kagnb4. Tue, 12 Jul 2022 00:00:00 GMT

An Introduction to Lifelong Supervised Learning
https://scholar.archive.org/work/4vysklpyxnhn3nxubvc73s2cey
This primer is an attempt to provide a detailed summary of the different facets of lifelong learning. We start with Chapter 2, which provides a high-level overview of lifelong learning systems. In this chapter, we discuss prominent scenarios in lifelong learning (Section 2.4), provide a high-level organization of different lifelong learning approaches (Section 2.5), enumerate the desiderata for an ideal lifelong learning system (Section 2.6), discuss how lifelong learning is related to other learning paradigms (Section 2.7), and describe common metrics used to evaluate lifelong learning systems (Section 2.8). This chapter is more useful for readers who are new to lifelong learning and want to get introduced to the field without focusing on specific approaches or benchmarks. The remaining chapters focus on specific aspects (either learning algorithms or benchmarks) and are more useful for readers who are looking for specific approaches or benchmarks. Chapter 3 focuses on regularization-based approaches that do not assume access to any data from previous tasks. Chapter 4 discusses memory-based approaches that typically use a replay buffer or an episodic memory to save a subset of the data across different tasks. Chapter 5 focuses on different architecture families (and their instantiations) that have been proposed for training lifelong learning systems. Following these different classes of learning algorithms, we discuss the commonly used evaluation benchmarks and metrics for lifelong learning (Chapter 6) and wrap up with a discussion of future challenges and important research directions in Chapter 7.
Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed Abdelsalam, Janarthanan Janarthanan, Sarath Chandar. work_4vysklpyxnhn3nxubvc73s2cey. Tue, 12 Jul 2022 00:00:00 GMT

Distributed-Memory Parallel Contig Generation for De Novo Long-Read Genome Assembly
https://scholar.archive.org/work/t4uaptp4tfabncscqa2s7vzo7e
De novo genome assembly, i.e., rebuilding the sequence of an unknown genome from redundant and erroneous short sequences, is a key but computationally intensive step in many genomics pipelines. The exponential growth of genomic data is increasing the computational demand and requires scalable, high-performance approaches. In this work, we present a novel distributed-memory algorithm that, from a string graph representation of the genome and using sparse matrices, generates the contig set, i.e., overlapping sequences that form a map representing a region of a chromosome. Using matrix abstraction, we mask branches in the string graph and compute connected components to group genomic sequences that belong to the same linear chain (i.e., contig). Then, we perform multiway number partitioning to minimize the load imbalance in local assembly, i.e., concatenation of sequences from a given contig. Based on the assignment obtained by partitioning, we compute the induced-subgraph function to redistribute sequences between processes, resulting in a set of local sparse matrices. Finally, we traverse each matrix using depth-first search to concatenate sequences. Our algorithm shows good scaling with parallel efficiency up to 80% on 128 nodes, resulting in uniform genome coverage and showing promising results in terms of assembly quality. Our contig generation algorithm localizes the assembly process to significantly reduce the amount of computation spent on this step. Our work is a step forward for efficient de novo long-read assembly of large genomes in distributed memory.
Giulia Guidi, Gabriel Raulet, Daniel Rokhsar, Leonid Oliker, Katherine Yelick, Aydin Buluc. work_t4uaptp4tfabncscqa2s7vzo7e. Sun, 10 Jul 2022 00:00:00 GMT

Packing cycles in planar and bounded-genus graphs
https://scholar.archive.org/work/56y4mog2h5bt5b2eepw53ofbzi
We devise constant-factor approximation algorithms for finding as many disjoint cycles as possible from a certain family of cycles in a given planar or bounded-genus graph. Here disjoint can mean vertex-disjoint or edge-disjoint, and the graph can be undirected or directed. The family of cycles under consideration must satisfy two properties: it must be uncrossable and allow for an oracle access that finds a weight-minimal cycle in that family for given nonnegative edge weights or (in planar graphs) the union of all remaining cycles in that family after deleting a given subset of edges. Our setting generalizes many problems that were studied separately in the past. For example, three families that satisfy the above properties are (i) all cycles in a directed or undirected graph, (ii) all odd cycles in an undirected graph, and (iii) all cycles in an undirected graph that contain precisely one demand edge, where the demand edges form a subset of the edge set. The latter family (iii) corresponds to the classical disjoint paths problem in fully planar and bounded-genus instances. While constant-factor approximation algorithms were known for edge-disjoint paths in such instances, we improve the constant in the planar case and obtain the first such algorithms for vertex-disjoint paths. We also obtain approximate min-max theorems of the Erdős–Pósa type. For example, the minimum feedback vertex set in a planar digraph is at most 12 times the maximum number of vertex-disjoint cycles.
Niklas Schlomberg, Hanjo Thiele, Jens Vygen. work_56y4mog2h5bt5b2eepw53ofbzi. Fri, 01 Jul 2022 00:00:00 GMT

Streaming Algorithms for Geometric Steiner Forest
https://scholar.archive.org/work/kozwpzku2zddrl4acusbnpdjgy
We consider an important generalization of the Steiner tree problem, the Steiner forest problem, in the Euclidean plane: the input is a multiset X ⊆ ℝ², partitioned into k color classes C₁, C₂, ..., Cₖ ⊆ X. The goal is to find a minimum-cost Euclidean graph G such that every color class Cᵢ is connected in G. We study this Steiner forest problem in the streaming setting, where the stream consists of insertions and deletions of points to X. Each input point x ∈ X arrives with its color color(x) ∈ [k], and as usual for dynamic geometric streams, the input is restricted to the discrete grid {0, ..., Δ}². We design a single-pass streaming algorithm that uses poly(k ⋅ log Δ) space and time, and estimates the cost of an optimal Steiner forest solution within ratio arbitrarily close to the famous Euclidean Steiner ratio α₂ (currently 1.1547 ≤ α₂ ≤ 1.214). This approximation guarantee matches the state-of-the-art bound for streaming Steiner tree, i.e., when k = 1. Our approach relies on a novel combination of streaming techniques, like sampling and linear sketching, with the classical Arora-style dynamic-programming framework for geometric optimization problems, which usually requires large memory and has so far not been applied in the streaming setting. We complement our streaming algorithm for the Steiner forest problem with simple arguments showing that any finite approximation requires Ω(k) bits of space.
Artur Czumaj, Shaofeng H.-C. Jiang, Robert Krauthgamer, Pavel Veselý, Mikołaj Bojańczyk, Emanuela Merelli, David P. Woodruff. work_kozwpzku2zddrl4acusbnpdjgy. Tue, 28 Jun 2022 00:00:00 GMT

On the parameterized complexity of Compact Set Packing
https://scholar.archive.org/work/uap4bzxspvawbpfh6hpuh7y2mq
The Set Packing problem is, given a collection of sets 𝒮 over a ground set 𝒰, to find a maximum collection of sets that are pairwise disjoint. The problem is among the most fundamental NP-hard optimization problems that have been studied extensively in various computational regimes. The focus of this work is on parameterized complexity, Parameterized Set Packing (PSP): Given r ∈ ℕ, is there a collection 𝒮' ⊆ 𝒮 with |𝒮'| = r such that the sets in 𝒮' are pairwise disjoint? Unfortunately, the problem is not fixed-parameter tractable unless 𝖶[1] = 𝖥𝖯𝖳, and, in fact, an "enumeration" running time of |𝒮|^Ω(r) is required unless the exponential time hypothesis (ETH) fails. This paper is a quest for tractable instances of Set Packing from parameterized complexity perspectives. We say that the input (𝒰,𝒮) is "compact" if |𝒰| = f(r)·Θ(log |𝒮|), for some f(r) ≥ r. In the Compact Set Packing problem, we are given a compact instance of PSP. In this direction, we present a "dichotomy" result of PSP: When |𝒰| = f(r)·o(log |𝒮|), PSP is in 𝖥𝖯𝖳, while for |𝒰| = r·Θ(log |𝒮|), the problem is W[1]-hard; moreover, assuming ETH, Compact PSP does not even admit a |𝒮|^o(r/log r) time algorithm. Further, our framework improves the hardness of other compact combinatorial problems.
Ameet Gadekar. work_uap4bzxspvawbpfh6hpuh7y2mq. Mon, 13 Jun 2022 00:00:00 GMT

Online Paging with Heterogeneous Cache Slots
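For contrast with the parameterized view in the Set Packing entry above, the classical greedy baseline (scan sets smallest-first, keep any set disjoint from everything already chosen) takes only a few lines; this illustrates the problem itself, not the paper's dichotomy results:

```python
def greedy_set_packing(sets_):
    """Greedily pick pairwise-disjoint sets, smallest first.
    Returns the chosen sub-collection (not necessarily maximum)."""
    chosen, used = [], set()
    for s in sorted(sets_, key=len):  # Python's sort is stable, so ties keep input order
        if used.isdisjoint(s):
            chosen.append(s)
            used |= s
    return chosen
```

On {1,2}, {2,3}, {4}, {3,5} the greedy keeps {4}, {1,2}, and {3,5}, discarding {2,3} because element 2 is already used.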
https://scholar.archive.org/work/kjlfzdg6erdvroh56t7zy6ksh4
It is natural to generalize the k-Server problem by allowing each request to specify not only a point p, but also a subset S of servers that may serve it. To attack this generalization, we focus on uniform and star metrics. For uniform metrics, the problem is equivalent to a generalization of Paging in which each request specifies not only a page p, but also a subset S of cache slots, and is satisfied by having a copy of p in some slot in S. We call this problem Slot-Heterogeneous Paging. We parameterize the problem by specifying an arbitrary family 𝒮 ⊆ 2^[k] and restricting the request sets S to 𝒮. If all request sets are allowed (𝒮 = 2^[k]), the optimal deterministic and randomized competitive ratios are exponentially worse than for standard Paging (𝒮 = {[k]}). As a function of |𝒮| and the cache size k, the optimal deterministic ratio is polynomial: at most O(k^2 |𝒮|) and at least Ω(√(|𝒮|)). For any laminar family 𝒮 of height h, the optimal ratios are O(hk) (deterministic) and O(h^2 log k) (randomized). The special case that we call All-or-One Paging extends standard Paging by allowing each request to specify a specific slot to put the requested page in. For All-or-One Paging the optimal competitive ratios are Θ(k) (deterministic) and Θ(log k) (randomized), while the offline problem is NP-hard. We extend the deterministic upper bound to the weighted variant of All-Or-One Paging (a generalization of standard Weighted Paging), showing that it is also Θ(k).
Marek Chrobak, Samuel Haney, Mehraneh Liaee, Debmalya Panigrahi, Rajmohan Rajaraman, Ravi Sundaram, Neal E. Young. work_kjlfzdg6erdvroh56t7zy6ksh4. Sat, 11 Jun 2022 00:00:00 GMT
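The Slot-Heterogeneous Paging model in the entry above can be simulated directly: each request is a page plus a set of admissible slots, a request is a hit if the page already occupies an allowed slot, and a fault loads the page into some allowed slot. The eviction rule below (first empty allowed slot, else the lowest-numbered allowed slot) is a deliberately naive placeholder, not one of the paper's competitive algorithms:

```python
def slot_heterogeneous_paging(k, requests):
    """Simulate a k-slot cache under slot-heterogeneous requests.
    requests: iterable of (page, allowed_slots) pairs.
    Returns the number of page faults incurred."""
    slots = [None] * k
    faults = 0
    for page, allowed in requests:
        if any(slots[s] == page for s in allowed):
            continue  # hit: the page sits in an admissible slot
        faults += 1
        # prefer an empty allowed slot; otherwise evict naively
        target = next((s for s in allowed if slots[s] is None), None)
        if target is None:
            target = min(allowed)
        slots[target] = page
    return faults
```

With k = 2 and requests ("a", {0}), ("b", {0,1}), ("a", {0,1}), the first two requests fault and the third hits, so the simulator reports 2 faults. Requests of the form (p, {s}) recover the All-or-One special case, and (p, full slot set) recovers standard paging.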