
### Capacitated Max-Batching with interval graph compatibilities

Tim Nonner
2016 Theoretical Computer Science
We consider the problem of partitioning interval graphs into cliques of bounded size. Each interval has a weight, and the cost of a clique is the maximum weight of any interval in the clique. This natural graph problem can be interpreted as a batch scheduling problem. Solving an open question from [7, 4, 5], we show NP-hardness, even if the bound on the clique sizes is constant. Moreover, we give a PTAS based on a novel dynamic programming technique for this case. * Supported by DFG research program No 1103 Embedded Microsystems. Parts of this work were done while the author was visiting the IBM T.J. Watson Research Center.
… the interval graph structure. In this context, this problem can be interpreted as a capacitated batch scheduling problem, where the maximum weight of a job in a batch gives the time needed to process this batch [7, 4], and hence the objective function above is the total completion time. CB_k can be generalized to arbitrary graphs instead of interval graphs, as done in [8, 7, 5]. In this case, the problem is clearly NP-hard, since it contains graph coloring [10]. Previous work. Finke et al. [7] showed that CB_k can be solved via dynamic programming in polynomial time for k = ∞. A similar result was independently obtained by Becchetti et al. [2] in the context of data aggregation. Moreover, this result was extended by Gijswijt, Jost, and Queyranne [8] to value-polymatroidal cost functions, a subset of the well-known submodular cost functions. Using this result as a relaxation, Correa et al. [5] recently presented a 2-approximation algorithm for CB_k for arbitrary k. However, it was raised as an open problem in [7, 4, 5] whether this problem is NP-hard or not. Note that if the weights on the intervals are uniform, for example w_I = 1 for each interval I ∈ J, then CB_k simplifies to finding a clique partition of minimum cardinality where each clique has size at most k [3]. We can also think of this problem as a hitting set problem with uniform capacity k, where we want to hit or stab intervals with vertical lines which correspond to the cliques. Since the natural greedy algorithm solves this problem in polynomial time [7], Even et al. [6] addressed the more complicated case of non-uniform capacities. They presented a polynomial-time algorithm based on a general dynamic programming approach introduced by Baptiste [1] for the problem of scheduling jobs such that the number of gaps is minimized.
Contributions. We settle the complexity of CB_k by proving its NP-hardness in Section 5, even for k = 3, which solves an open problem from [7, 4, 5]. This is tight, since CB_k can be solved in polynomial time for k = 2 by using an algorithm for weighted matching. Moreover, we present a dynamic programming based PTAS for any constant k in Section 4. It is worth mentioning that this dynamic program differs significantly from the dynamic programs introduced before for the related problems discussed above. Using the natural geometric interpretation of CB_k used in [7], we briefly discuss these approaches in Subsection 2.1. As an initial building block for the PTAS, we first show in Section 3 that we only need to consider instances with a constant number of different weights. This result holds for general graphs as well, and therefore, it is explained in a separate section. Related work. The complementary problem, where we want to partition a graph into independent sets instead of cliques, has also attracted considerable attention [12, 11]. Note that, for k = ∞, finding such a partition is equivalent …
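The uniform-weight special case described above (stab intervals with vertical lines, each line serving at most k intervals) can be illustrated with a short sketch. This is one plausible reading of the natural greedy — stab at the smallest right endpoint among unassigned intervals and take up to k intervals containing that point, earliest-ending first — and is not necessarily the exact algorithm of [7]; the function name is illustrative.

```python
def greedy_capacitated_stabbing(intervals, k):
    """Greedy clique partition for uniform-weight intervals with capacity k.

    intervals: list of (left, right) pairs. Returns a list of cliques,
    each a list of at most k intervals sharing a common stabbing point.
    """
    remaining = sorted(intervals, key=lambda iv: iv[1])  # by right endpoint
    cliques = []
    while remaining:
        stab = remaining[0][1]  # stab at the smallest right endpoint
        # indices of intervals containing the stabbing point,
        # earliest-ending first (remaining is sorted by right endpoint)
        hit = [i for i, iv in enumerate(remaining) if iv[0] <= stab <= iv[1]]
        chosen = hit[:k]        # respect the capacity bound
        cliques.append([remaining[i] for i in chosen])
        remaining = [iv for i, iv in enumerate(remaining) if i not in chosen]
    return cliques
```

On five intervals with k = 2, the sketch needs three cliques, matching the trivial lower bound ⌈5/2⌉.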

### PTAS for Densest k-Subgraph in Interval Graphs

Tim Nonner
2014 Algorithmica
Given an interval graph and an integer k, we consider the problem of finding a subgraph of size k with a maximum number of induced edges, called the densest k-subgraph problem in interval graphs. It has been shown that this problem is NP-hard even for chordal graphs [17], and there is probably no PTAS for general graphs [12]. However, the exact complexity status for interval graphs is a long-standing open problem [17], and the best known approximation result is a 3-approximation algorithm [16]. We shed light on the approximation complexity of finding a densest k-subgraph in interval graphs by presenting a polynomial-time approximation scheme (PTAS), that is, we show that there is a (1 + ε)-approximation algorithm for any ε > 0, which is the first such approximation scheme for the densest k-subgraph problem in an important graph class without any further restrictions.

### Capacitated max-Batching with Interval Graph Compatibilities [chapter]

Tim Nonner
2010 Lecture Notes in Computer Science
We consider the problem of partitioning interval graphs into cliques of bounded size. Each interval has a weight, and the cost of a clique is the maximum weight of any interval in the clique. This natural graph problem can be interpreted as a batch scheduling problem. Solving an open question from [7, 4, 5], we show NP-hardness, even if the bound on the clique sizes is constant. Moreover, we give a PTAS based on a novel dynamic programming technique for this case. * Supported by DFG research program No 1103 Embedded Microsystems. Parts of this work were done while the author was visiting the IBM T.J. Watson Research Center.
… the interval graph structure. In this context, this problem can be interpreted as a capacitated batch scheduling problem, where the maximum weight of a job in a batch gives the time needed to process this batch [7, 4], and hence the objective function above is the total completion time. CB_k can be generalized to arbitrary graphs instead of interval graphs, as done in [8, 7, 5]. In this case, the problem is clearly NP-hard, since it contains graph coloring [10]. Previous work. Finke et al. [7] showed that CB_k can be solved via dynamic programming in polynomial time for k = ∞. A similar result was independently obtained by Becchetti et al. [2] in the context of data aggregation. Moreover, this result was extended by Gijswijt, Jost, and Queyranne [8] to value-polymatroidal cost functions, a subset of the well-known submodular cost functions. Using this result as a relaxation, Correa et al. [5] recently presented a 2-approximation algorithm for CB_k for arbitrary k. However, it was raised as an open problem in [7, 4, 5] whether this problem is NP-hard or not. Note that if the weights on the intervals are uniform, for example w_I = 1 for each interval I ∈ J, then CB_k simplifies to finding a clique partition of minimum cardinality where each clique has size at most k [3]. We can also think of this problem as a hitting set problem with uniform capacity k, where we want to hit or stab intervals with vertical lines which correspond to the cliques. Since the natural greedy algorithm solves this problem in polynomial time [7], Even et al. [6] addressed the more complicated case of non-uniform capacities. They presented a polynomial-time algorithm based on a general dynamic programming approach introduced by Baptiste [1] for the problem of scheduling jobs such that the number of gaps is minimized.
Contributions. We settle the complexity of CB_k by proving its NP-hardness in Section 5, even for k = 3, which solves an open problem from [7, 4, 5]. This is tight, since CB_k can be solved in polynomial time for k = 2 by using an algorithm for weighted matching. Moreover, we present a dynamic programming based PTAS for any constant k in Section 4. It is worth mentioning that this dynamic program differs significantly from the dynamic programs introduced before for the related problems discussed above. Using the natural geometric interpretation of CB_k used in [7], we briefly discuss these approaches in Subsection 2.1. As an initial building block for the PTAS, we first show in Section 3 that we only need to consider instances with a constant number of different weights. This result holds for general graphs as well, and therefore, it is explained in a separate section. Related work. The complementary problem, where we want to partition a graph into independent sets instead of cliques, has also attracted considerable attention [12, 11]. Note that, for k = ∞, finding such a partition is equivalent …

### PTAS for Densest k-Subgraph in Interval Graphs [chapter]

Tim Nonner
2011 Lecture Notes in Computer Science
Given an interval graph and an integer k, we consider the problem of finding a subgraph of size k with a maximum number of induced edges, called the densest k-subgraph problem in interval graphs. It has been shown that this problem is NP-hard even for chordal graphs [17], and there is probably no PTAS for general graphs [12]. However, the exact complexity status for interval graphs is a long-standing open problem [17], and the best known approximation result is a 3-approximation algorithm [16]. We shed light on the approximation complexity of finding a densest k-subgraph in interval graphs by presenting a polynomial-time approximation scheme (PTAS), that is, we show that there is a (1 + ε)-approximation algorithm for any ε > 0, which is the first such approximation scheme for the densest k-subgraph problem in an important graph class without any further restrictions.

### Clique Clustering Yields a PTAS for Max-Coloring Interval Graphs

Tim Nonner
2017 Algorithmica

### Latency Constrained Aggregation in Chain Networks Admits a PTAS [chapter]

Tim Nonner, Alexander Souza
2009 Lecture Notes in Computer Science
This paper studies the aggregation of messages in networks that consist of a chain of nodes, where each message is time-constrained such that it needs to be aggregated during a given time interval, called its due interval. The objective is to minimize the maximum sending cost of any node, which is for example a concern in wireless sensor networks, where it is crucial to distribute the energy consumption as equally as possible. First, we settle the complexity of this problem by proving its NP-hardness, even for the case of unit length due intervals. Second, we give a QPTAS, which we extend to a PTAS for the special case that the lengths of the due intervals are constants. This is in particular interesting, since we prove that this problem becomes APX-hard if we consider tree networks instead of chain networks, even for the case of unit length due intervals. Specifically, we show that it cannot be approximated within 4/3 − ε for any ε > 0, unless P = NP.

### Clique Clustering Yields a PTAS for max-Coloring Interval Graphs [chapter]

Tim Nonner
2011 Lecture Notes in Computer Science
We are given an interval graph G = (V, E) where each interval I ∈ V has a weight w_I ∈ R_+. The goal is to color the intervals V with an arbitrary number of color classes C_1, C_2, …, C_k such that ∑_{i=1}^{k} max_{I ∈ C_i} w_I is minimized. This problem, called max-coloring interval graphs, contains the classical problem of coloring interval graphs as a special case for uniform weights, and it arises in many practical scenarios such as memory management. Pemmaraju, Raman, and Varadarajan showed that max-coloring interval graphs is NP-hard (SODA'04) and presented a 2-approximation algorithm. Closing a gap which has been open for years, we settle the approximation complexity of this problem by giving a polynomial-time approximation scheme (PTAS), that is, we show that there is a (1 + ε)-approximation algorithm for any ε > 0. Besides using standard preprocessing techniques such as geometric rounding and shifting, our main building block is a general technique for trading the overlap structure of an interval graph for accuracy, which we call clique clustering.
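The objective ∑_{i} max_{I ∈ C_i} w_I can be made concrete with a small evaluator. This is only a checker for the objective on a given coloring, not the PTAS itself; the function name is illustrative.

```python
from collections import defaultdict

def max_coloring_cost(intervals, coloring):
    """Evaluate the max-coloring objective: the sum, over color classes,
    of the maximum weight in that class.

    intervals: list of (left, right, weight) triples.
    coloring:  list of class ids, one per interval.
    Raises ValueError if two overlapping intervals share a class,
    i.e., the coloring is not proper on the interval graph.
    """
    classes = defaultdict(list)
    for (l, r, w), c in zip(intervals, coloring):
        for (l2, r2, _) in classes[c]:
            if l <= r2 and l2 <= r:  # the two intervals overlap
                raise ValueError("overlapping intervals share a color class")
        classes[c].append((l, r, w))
    return sum(max(w for _, _, w in ivs) for ivs in classes.values())
```

For uniform weights w_I = 1, the returned value is just the number of color classes, recovering classical interval-graph coloring as the special case mentioned above.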

### APPROXIMATING THE JOINT REPLENISHMENT PROBLEM WITH DEADLINES

TIM NONNER, ALEXANDER SOUZA
2009 Discrete Mathematics, Algorithms and Applications (DMAA)
The objective of the classical Joint Replenishment Problem (JRP) is to minimize ordering costs by combining orders in two stages, first at some retailers, and then at a warehouse. These orders are needed to satisfy demands that appear over time at the retailers. We investigate the natural special case that each demand has a deadline until when it needs to be satisfied. For this case, we present a randomized 5/3-approximation algorithm. We moreover prove that JRP with deadlines is APX-hard. Finally, we extend the known hardness results by showing that JRP with linear delay cost functions is NP-hard, even if each retailer has to satisfy only three demands.

### The Bell Is Ringing in Speed-Scaled Multiprocessor Scheduling

Gero Greiner, Tim Nonner, Alexander Souza
2013 Theory of Computing Systems
This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have a constant α > 1 such that running a processor at speed s results in energy consumption s^α per time unit. We consider the general case where each job has a monotonically increasing cost function that penalizes delay. This includes the so far considered cases of deadlines and flow time. For any type of delay cost functions, we obtain the following results: Any β-approximation algorithm for a single processor yields a randomized βB_α-approximation algorithm for multiple processors without migration, where B_α is the α-th Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βB_α-competitive online algorithm for multiple processors without migration. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βB_α-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant factor online and offline approximation algorithm for multiple processors without migration for arbitrary release times, deadlines, and job sizes.
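The Bell numbers B_α appearing in these ratios (here for integer α; the abstract's α is a real constant) can be computed with the standard Bell-triangle recurrence, sketched below.

```python
def bell_numbers(n):
    """Return [B_1, ..., B_n] via the Bell triangle: each row starts
    with the last entry of the previous row, every further entry adds
    its left neighbor to the entry above it, and B_k is the last entry
    of row k."""
    row = [1]
    bells = [1]  # B_1 = 1
    for _ in range(n - 1):
        new_row = [row[-1]]          # row starts with previous row's last entry
        for above in row:
            new_row.append(new_row[-1] + above)
        row = new_row
        bells.append(row[-1])
    return bells
```

For example, B_3 = 5: a three-element set has exactly five partitions, so a 2-competitive single-processor algorithm would give a 2·B_3 = 10-competitive multiprocessor algorithm at α = 3 by the stated reduction.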

### An Efficient Polynomial-Time Approximation Scheme for the Joint Replenishment Problem [chapter]

Tim Nonner, Maxim Sviridenko
2013 Lecture Notes in Computer Science
Nonner and Souza [11] showed that this setting is APX-hard and therefore does not admit a polynomial-time approximation scheme, that is, an algorithm that has performance guarantee 1 + ε and polynomial  ...

### SRPT is 1.86-Competitive for Completion Time Scheduling [chapter]

Christine Chung, Tim Nonner, Alexander Souza
2010 Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms
We consider the classical problem of scheduling preemptible jobs, that arrive over time, on identical parallel machines. The goal is to minimize the total completion time of the jobs. In the standard scheduling notation of Graham et al. [5], this problem is denoted P | r_j, pmtn | ∑_j c_j. A popular algorithm called SRPT, which always schedules the unfinished jobs with shortest remaining processing time, is known to be 2-competitive, see Phillips et al. [12, 13]. This is also the best known competitive ratio for any online algorithm. However, it is conjectured that the competitive ratio of SRPT is significantly less than 2. Even breaking the barrier of 2 is considered a significant step towards the final answer of this classical online problem. We improve on this open problem by showing that SRPT is 1.86-competitive. This result is obtained using the following method, which might be of general interest: We define two dependent random variables that sum up to the difference between the cost of an SRPT schedule and the cost of an optimal schedule. Then we bound the sum of the expected values of these random variables with respect to the cost of the optimal schedule, yielding the claimed competitiveness. Furthermore, we show a lower bound of 21/19 for SRPT, improving on the previously best known 12/11 due to Lu et al.
In this paper, we study the classical problem of online scheduling preemptible jobs, arriving over time, on identical machines. The goal is to minimize the total completion time of the jobs. Our performance measure is the competitive ratio, i.e., the worst-case ratio of the objective value achieved by an online algorithm and the offline optimum. Specifically, we are given m identical machines and jobs J = {1, …, n}, which arrive over time, where each job j becomes known at its release time r_j ≥ 0. At time r_j we also learn the processing time p_j > 0 of job j. Preemption is allowed, i.e., at any time we may interrupt any job that is currently running and resume it later, possibly on a different machine. A schedule σ assigns (pieces of) jobs to time intervals on machines, and the time when job j completes is denoted c_j. We seek to minimize the total completion time ∑_j c_j. In the standard scheduling notation due to Graham et al. [5], this problem is denoted P | r_j, pmtn | ∑_j c_j.
For roughly 15 years, the best known competitive ratio for this fundamental scheduling problem was due to Phillips, Stein, and Wein [12, 13]. They proved that the algorithm SRPT, which always schedules the unfinished jobs with shortest remaining processing time, is 2-competitive. To achieve this, they showed that, at any time 2t, SRPT has completed as many jobs as any other schedule could complete by time t. It was an open problem to prove that the competitive ratio of SRPT is bounded by a constant strictly smaller than 2 for any number of processors m, as conjectured by Stein [18] and Lu, Sitters, and Stougie [10]. Contributions. We show in Section 3 that SRPT is 1.86-competitive, which also improves upon the best known competitive ratio for P | r_j, pmtn | ∑_j c_j. As the makespan argument of [12, 13] is tight, we need another approach. We make use of the following general method. Consider an arbitrary optimization problem, and let OPT be the cost of an optimal solution for a fixed but arbitrary instance. Moreover, let A also denote the cost of the solution returned by some deterministic algorithm A for the same instance. Hence, to obtain an approximation guarantee for A, we need to bound A/OPT. Let now X and Y be two dependent random variables with X + Y = A − OPT. In fact, they need to be dependent, since they sum up to a constant value depending on the given instance. Assume now that we have the bounds E[X] ≤ α·OPT and E[Y] ≤ β·OPT for two positive constants α, β. In this case, by linearity of …
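The SRPT rule itself is easy to state executably. Below is a minimal single-machine simulation (m = 1, illustrative only — the paper concerns m parallel machines) that returns ∑_j c_j; the function name is ours.

```python
import heapq

def srpt_total_completion(jobs):
    """Simulate single-machine SRPT and return the total completion time.

    jobs: list of (release_time, processing_time) pairs.
    """
    jobs = sorted(jobs)              # handle releases in time order
    ready = []                       # min-heap of remaining processing times
    t, i, done, total, n = 0.0, 0, 0, 0.0, len(jobs)
    while done < n:
        if not ready:                # machine idle: jump to the next release
            t = max(t, jobs[i][0])
        while i < n and jobs[i][0] <= t:
            heapq.heappush(ready, jobs[i][1])
            i += 1
        rem = heapq.heappop(ready)   # shortest remaining processing time
        next_release = jobs[i][0] if i < n else float("inf")
        if t + rem <= next_release:  # finishes before any new job arrives
            t += rem
            total += t
            done += 1
        else:                        # preempt when the next job is released
            heapq.heappush(ready, rem - (next_release - t))
            t = next_release
    return total
```

For jobs (r, p) = (0, 3) and (1, 1), SRPT preempts the long job at time 1, finishing the short job at time 2 and the long one at time 4, for a total completion time of 6.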

### Shortest Path with Alternatives for Uniform Arrival Times: Algorithms and Experiments

Tim Nonner, Marco Laumanns, Marc Herbstritt
2014 Algorithmic Approaches for Transportation Modeling, Optimization, and Systems
In contrast, Nonner showed that general arrival times result in an NP-hard problem [9], even for one-hop networks. …

### The bell is ringing in speed-scaled multiprocessor scheduling

Gero Greiner, Tim Nonner, Alexander Souza
2009 Proceedings of the twenty-first annual symposium on Parallelism in algorithms and architectures - SPAA '09
This paper investigates the problem of scheduling jobs on multiple speed-scaled processors without migration, i.e., we have a constant α > 1 such that running a processor at speed s results in energy consumption s^α per time unit. We consider the general case where each job has a monotonically increasing cost function that penalizes delay. This includes the so far considered cases of deadlines and flow time. For any type of delay cost functions, we obtain the following results: Any β-approximation algorithm for a single processor yields a randomized βB_α-approximation algorithm for multiple processors without migration, where B_α is the α-th Bell number, that is, the number of partitions of a set of size α. Analogously, we show that any β-competitive online algorithm for a single processor yields a βB_α-competitive online algorithm for multiple processors without migration. Finally, we show that any β-approximation algorithm for multiple processors with migration yields a deterministic βB_α-approximation algorithm for multiple processors without migration. These facts improve several approximation ratios and lead to new results. For instance, we obtain the first constant factor online and offline approximation algorithm for multiple processors without migration for arbitrary release times, deadlines, and job sizes.

### Optimal Algorithms for Train Shunting and Relaxed List Update Problems

Tim Nonner, Alexander Souza
unpublished
This paper considers a Train Shunting problem which occurs in cargo train organizations: We have a locomotive travelling along a track segment and a collection of n cars, where each car has a source and a target. Whenever the train passes the source of a car, it needs to be added to the train, and at the target, the respective car needs to be removed. Any such operation at the end of the train incurs low shunting cost, but adding or removing a car truly in the interior requires a more complex shunting operation and thus yields high cost. The objective is to schedule the adding and removal of cars so as to minimize the total cost. This problem can also be seen as a relaxed version of the well-known List Update problem, which may be of independent interest. We derive polynomial time algorithms for Train Shunting by reducing this problem to finding independent sets in bipartite graphs. This allows us to treat several variants of the problem in a generic way. Specifically, we obtain an algorithm with running time O(n^{5/2}) for the uniform case, where all low costs and all high costs are identical, respectively. Furthermore, for the non-uniform case we have a running time of O(n^3). Both versions translate to a symmetric variant, where it is also allowed to add and remove cars at the front of the train at low cost. In addition, we formulate a dynamic program with running time O(n^4), which exploits the special structure of the graph. Although the running time is worse, it allows us to solve many extensions, e.g., prize-collecting, economies of scale, and dependencies between consecutive stations.
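The bipartite reduction above rests on a classical fact: by König's theorem, a maximum independent set in a bipartite graph has size |V| minus the size of a maximum matching. A small sketch of that generic subroutine follows (the paper's actual construction of the bipartite graph from a shunting instance is not reproduced here; the function name is ours).

```python
def max_independent_set_size(n_left, n_right, edges):
    """Size of a maximum independent set in a bipartite graph.

    By König's theorem this equals |V| minus a maximum matching,
    computed here with simple augmenting paths (Kuhn's algorithm).
    edges: list of (u, v), u on the left side, v on the right side.
    """
    adj = [[] for _ in range(n_left)]
    for u, v in edges:
        adj[u].append(v)
    match_r = [-1] * n_right         # match_r[v] = left partner of v, or -1

    def augment(u, seen):
        # try to match u, rerouting already-matched right vertices if needed
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    matching = sum(augment(u, set()) for u in range(n_left))
    return n_left + n_right - matching
```

Kuhn's algorithm runs in O(V·E) time; the stated O(n^{5/2}) bound for the uniform case suggests a faster matching routine such as Hopcroft–Karp on the specially structured graph.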

### Distributed Approximation Algorithms for Finding 2-Edge-Connected Subgraphs [chapter]

Sven O. Krumke, Peter Merz, Tim Nonner, Katharina Rupp
Lecture Notes in Computer Science
We consider the distributed construction of a minimum weight 2-edge-connected spanning subgraph (2-ECSS) of a given weighted or unweighted graph. A 2-ECSS of a graph is a subgraph that, for each pair of vertices, contains at least two edge-disjoint paths connecting these vertices. The problem of finding a minimum weight 2-ECSS is NP-hard and a natural extension of the distributed MST construction problem, one of the most fundamental problems in the area of distributed computation. We present a distributed 3/2-approximation algorithm for the unweighted 2-ECSS construction problem that requires O(n) communication rounds and O(m) messages. Moreover, we present a distributed 3-approximation algorithm for the weighted 2-ECSS construction problem that requires O(n log n) communication rounds and O(n log^2 n + m) messages.