A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is `application/pdf`



### Randomised Rounding with Applications
[article]

2015 *arXiv* pre-print

We develop new techniques for rounding packing integer programs using iterative randomized rounding, based on a novel application of multidimensional Brownian motion in R^n. Let x̃ ∈ [0,1]^n be a fractional feasible solution of a packing constraint Ax ≤ 1, A ∈ {0,1}^{m×n}, that maximizes a linear objective function. The independent randomized rounding method of Raghavan-Thompson rounds each variable x_i to 1 with probability x̃_i and 0 otherwise. The expected value of the rounded objective function matches the fractional optimum and no constraint is violated by more than O(log m / log log m). In contrast, our algorithm iteratively transforms x̃ to x̂ ∈ {0,1}^n using a random walk, such that the expected values of the x̂_i's are consistent with the Raghavan-Thompson rounding. In addition, it gives us intermediate values x' which can then be used to bias the rounding towards a superior solution. The reduced dependencies between the constraints of the sparser system can be exploited using the Lovász Local Lemma. For m randomly chosen packing constraints in n variables, with k variables in each inequality, the constraints are satisfied within O(log(mkp log m/n) / log log(mkp log m/n)) with high probability, where p is the ratio between the maximum and minimum coefficients of the linear objective function. Further, we explore trade-offs between approximation factors and error, and present applications to well-known problems like circuit-switching, maximum independent set of rectangles and hypergraph b-matching. Our methods apply to the weighted instances of the problems and are likely to lead to better insights for even dependent rounding.

arXiv:1507.08501v1
fatcat:c66mcb7fmzastfw62nufgpfkzq
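The independent Raghavan-Thompson rounding that this abstract contrasts against can be sketched in a few lines. The fractional solution below is an illustrative toy instance, not one from the paper.

```python
import random

def raghavan_thompson_round(x_frac, seed=None):
    """Round each fractional coordinate to 1 with probability x_i.

    Independent randomized rounding: the expected value of any linear
    objective over the rounded vector equals its value at x_frac.
    """
    rng = random.Random(seed)
    return [1 if rng.random() < xi else 0 for xi in x_frac]

# Illustrative fractional solution for a toy packing instance.
x_frac = [0.5, 0.25, 0.75, 1.0, 0.0]
x_hat = raghavan_thompson_round(x_frac, seed=42)
assert all(v in (0, 1) for v in x_hat)
# Coordinates already at 0 or 1 stay fixed.
assert x_hat[3] == 1 and x_hat[4] == 0
```

Averaging the rounded vectors over many independent runs recovers the fractional solution coordinate-wise, which is the expectation guarantee stated above.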
### On tail estimates for Randomized Incremental Construction
[article]

2018 *arXiv* pre-print

By combining several interesting applications of random sampling in geometric algorithms like point location, linear programming, segment intersections and binary space partitioning, Clarkson and Shor [CS89] developed a general framework of randomized incremental construction (RIC). The basic idea is to add objects in a random order and show that this approach yields efficient/optimal bounds on expected running time. Even quicksort can be viewed as a special case of this paradigm. However, unlike quicksort, for most of these problems, attempts to obtain sharper tail estimates on the running time had proved inconclusive. Barring some results by [MSW93, CMS92, Seidel91a], the general question remains unresolved. In this paper we present some general techniques to obtain tail estimates for RIC and provide applications to some fundamental problems like Delaunay triangulations and construction of visibility maps of intersecting line segments. The main result of the paper centers around a new and careful application of Freedman's [Fre75] inequality for martingale concentration that overcomes the bottleneck of the better-known Azuma-Hoeffding inequality. Further, we show instances where an RIC-based algorithm may not have inverse polynomial tail estimates. In particular, we show that the RIC time bound for the trapezoidal map can encounter a running time of Ω(n log n log log n) with probability exceeding 1/√(n). This rules out inverse polynomial concentration bounds around the expected running time.

arXiv:1808.02356v1
fatcat:rhz7im4bsvg5vkvokyzpjmqzae
### Improvable Knapsack Problems
[article]

2016 *arXiv* pre-print

We consider a variant of the knapsack problem, where items are available with different possible weights. Using a separate budget for these item improvements, the question is: Which items should be improved to which degree such that the resulting classic knapsack problem yields maximum profit? We present a detailed analysis for several cases of improvable knapsack problems, presenting constant-factor approximation algorithms and two PTASs.

arXiv:1607.08338v1
fatcat:2xpfvug76zbkbmsnfqofucc7wu
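Once the item improvements are fixed, the abstract's problem reduces to a classic 0/1 knapsack instance. A minimal sketch of that inner subproblem, with a purely illustrative instance (not from the paper):

```python
def knapsack(profits, weights, capacity):
    """Classic 0/1 knapsack by dynamic programming over capacities.

    best[c] holds the maximum profit achievable with total weight <= c
    using the items processed so far; iterating capacities downward
    ensures each item is used at most once.
    """
    best = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

# "Improving" an item here means paying budget to shrink its weight
# before solving, e.g. item 0 improved from weight 4 to weight 2:
assert knapsack([10, 7, 5], [4, 3, 2], 5) == 12   # take items 1 and 2
assert knapsack([10, 7, 5], [2, 3, 2], 5) == 17   # take items 0 and 1
```

The improvable variant asks which such weight reductions to buy, under a separate improvement budget, to maximize the resulting optimum.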
### Towards a Theory of Cache-Efficient Algorithms
[article]

2000 *arXiv* pre-print

We describe a model that enables us to analyze the running time of an algorithm in a computer with a memory hierarchy with limited associativity, in terms of various cache parameters. Our model, an extension of Aggarwal and Vitter's I/O model, enables us to establish useful relationships between the cache complexity and the I/O complexity of computations. As a corollary, we obtain cache-optimal algorithms for some fundamental problems like sorting, FFT, and an important subclass of permutations in the single-level cache model. We also show that ignoring associativity concerns could lead to inferior performance, by analyzing the average-case cache behavior of mergesort. We further extend our model to multiple levels of cache with limited associativity and present optimal algorithms for matrix transpose and sorting. Our techniques may be used for systematic exploitation of the memory hierarchy starting from the algorithm design stage, and for dealing with the hitherto unresolved problem of limited associativity.

arXiv:cs/0010007v1
fatcat:zb2jij6j6rc3bk4o7z7pentngi
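Matrix transpose is a standard example of the cache-aware design the abstract advocates. A minimal tiled sketch (the tile size is an illustrative tuning parameter, not one prescribed by the paper's model):

```python
def transpose_blocked(a, n, block=4):
    """Cache-aware transpose of an n x n matrix stored as a flat
    row-major list.

    Processing fixed-size tiles keeps both the read footprint and the
    write footprint resident in cache, unlike a naive row-by-row
    transpose whose writes stride through memory.
    """
    out = [0] * (n * n)
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for i in range(i0, min(i0 + block, n)):
                for j in range(j0, min(j0 + block, n)):
                    out[j * n + i] = a[i * n + j]
    return out

n = 6
a = list(range(n * n))
t = transpose_blocked(a, n)
assert all(t[j * n + i] == a[i * n + j] for i in range(n) for j in range(n))
```

With limited associativity, tile dimensions also interact with set-mapping conflicts, which is precisely the effect the paper's model makes analyzable.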
### Approximation Algorithms for Budget Constrained Network Upgradeable Problems
[article]

2014 *arXiv* pre-print

Goerigk, Sabharwal, Schöbel and *Sen* [GSSS14] considered the weight-reducible knapsack problem, for which they gave a polynomial-time 3-approximation and an FPTAS for the special case of uniform improvement ...

arXiv:1412.3721v1
fatcat:q6je7w2uhna53pza2rirvmhhfi
### Matching in Dynamic Graphs
[chapter]

2016 *Encyclopedia of Algorithms*
### Distribution-sensitive algorithms
[chapter]

1998 *Lecture Notes in Computer Science*

... and Seidel's discovery of the output-sensitive algorithm, there have been some recent simplifications by Chan et al. [5] and Wenger [26] and Bhattacharya and *Sen* [3]. ...

doi:10.1007/bfb0054380
fatcat:spcbegiuebfjvki32pay3xn54y
### On parallel integer sorting

1992 *Acta Informatica*

We present an optimal algorithm for sorting n integers in the range [1, n^c] (for any constant c) for the EREW PRAM model where the word length is n^ε, for any ε > 0. Using this algorithm, the best known upper bound for integer sorting on the (O(log n) word length) EREW PRAM model is improved. In addition, a novel parallel range reduction algorithm which results in a near optimal randomized integer sorting algorithm is presented. For the case when the keys are uniformly distributed integers in an arbitrary range, we give an algorithm whose expected running time is optimal. ... keys in O(n) sequential steps. Notice that the run time of BUCKET SORT matches the trivial Ω(n) lower bound for this problem. In this paper we are concerned with randomized parallel algorithms for sorting integer keys. Known parallel sorting algorithms: The performance of a parallel algorithm can be specified by bounds on its principal resources, namely processors and time. If we let P denote the processor bound, and T denote the time bound of a parallel algorithm for a given problem, the product P·T is, clearly, bounded from below by the minimum sequential time, T_s, required to solve this problem. We say a parallel algorithm is optimal if P·T = O(T_s). Optimal parallel sorting for both general and integer keys remained an open problem for a long time. Many optimal algorithms (both deterministic and randomized) for sorting general keys in O(log n) time can be found in the literature (see [20], [19], [2], and [7]). As in the sequential case, many parallel applications of interest need only sort integer keys. Until recently, no optimal parallel algorithm existed for sorting n integer keys with a run time of O(log n) or less. Rajasekaran and Reif [18] have given a randomized optimal algorithm for sorting n integers in the range [1, n(log n)^{O(1)}]. It remains an open problem to find an optimal algorithm for sorting keys in the range [1, n^c], for any constant c (using small word length). Hagerup [10] has published an algorithm that sorts n integers in the range [1, n^c] in time O(log n) using n log log n / log n processors. The algorithm uses a stronger model, namely the Priority CRCW PRAM model, and O(n^{1+ε}) space, for any ε > 0.

doi:10.1007/bf01178563
fatcat:bsxj243apffztnveqbsqpkt4d4
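The sequential BUCKET SORT the snippet refers to, running in O(n) steps for keys in a linear range, can be sketched as a counting sort; the parallel algorithms in the paper distribute this work across PRAM processors. The instance below is illustrative.

```python
def bucket_sort(keys, max_key):
    """Sequential bucket/counting sort for integer keys in [0, max_key].

    Runs in O(n + max_key) time, which is O(n) when max_key = O(n),
    matching the trivial Omega(n) lower bound mentioned in the text.
    """
    counts = [0] * (max_key + 1)
    for k in keys:                 # one pass to tally each key
        counts[k] += 1
    out = []
    for k, c in enumerate(counts): # emit keys in increasing order
        out.extend([k] * c)
    return out

assert bucket_sort([3, 1, 4, 1, 5, 9, 2, 6], 9) == [1, 1, 2, 3, 4, 5, 6, 9]
```

For the larger range [1, n^c] treated in the paper, range reduction (e.g. radix passes over n-sized digits) brings keys down to a range where a pass like this applies.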
### Improved Randomized Rounding using Random Walks
[article]

2014 *arXiv* pre-print

We describe a novel algorithm for rounding packing integer programs based on multidimensional Brownian motion in R^n. Starting from an optimal fractional feasible solution x̅, the procedure converges in polynomial time to a distribution over a (possibly infeasible) point set P ⊂ {0,1}^n such that the expected value of any linear objective function over P equals the value at x̅. This is an alternate approach to the classical randomized rounding method of Raghavan and Thompson [RT:87]. Our procedure is very general and, in conjunction with discrepancy-based arguments, yields efficient alternate methods for rounding other optimization problems that can be expressed as packing ILPs, including disjoint path problems and MISR.

arXiv:1408.0488v2
fatcat:4luqlcfv4zdudpij6gkgnsudca
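The expectation-preserving walk idea can be illustrated in one dimension: run an unbiased random walk from each fractional coordinate until it is absorbed at 0 or 1. This per-coordinate sketch and its step size are simplifying assumptions for illustration, not the paper's multidimensional Brownian procedure.

```python
import random

def walk_round(x_frac, step=0.1, seed=None):
    """Round coordinates via an unbiased random walk absorbed at 0 or 1.

    Each coordinate's trajectory is a martingale, so E[final_i] equals
    x_frac[i] (up to an O(step) bias from boundary clipping when the
    start value is not a multiple of step), mirroring the expectation
    guarantee of classical randomized rounding.
    """
    rng = random.Random(seed)
    out = []
    for x in x_frac:
        while 0.0 < x < 1.0:
            d = step if rng.random() < 0.5 else -step
            x = min(1.0, max(0.0, x + d))
        out.append(round(x))
    return out

x_hat = walk_round([0.3, 0.7, 1.0], seed=7)
assert all(v in (0, 1) for v in x_hat)
assert x_hat[2] == 1   # integral coordinates are already absorbed
```

The intermediate positions of such a walk are the x' values the companion abstract above mentions, which can be inspected to bias later steps toward a better solution.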
### The update complexity of selection and related problems
[article]

2011 *arXiv* pre-print

We present a framework for computing with input data specified by intervals, representing uncertainty in the values of the input parameters. To compute a solution, the algorithm can query the input parameters to obtain more refined estimates in the form of sub-intervals, and the objective is to minimize the number of queries. The previous approaches address the scenario where every query returns an exact value. Our framework is more general as it can deal with a wider variety of inputs and query responses, and we establish interesting relationships between them that have not been investigated previously. Although some of the approaches of the previous restricted models can be adapted to the more general model, we require more sophisticated techniques for the analysis, and we also obtain improved algorithms for the previous model. We address selection problems in the generalized model and show that there exist 2-update competitive algorithms that do not depend on the lengths or distribution of the sub-intervals and hold against the worst-case adversary. We also obtain similar bounds on the competitive ratio for the MST problem in graphs.

arXiv:1108.5525v1
fatcat:62fzytwyq5er7ea37posmkpsni
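A minimal example of computing with interval-uncertain inputs is finding the minimum: query inputs until one exact value is certainly no larger than every other interval's lower bound. This sketch assumes the exact-answer special case of the framework (each query returns a point value); the function names and greedy rule are illustrative, not the paper's algorithm.

```python
def find_minimum(intervals, query):
    """Return (index of minimum value, number of queries used).

    Input i is known only as an interval (lo_i, hi_i); query(i) reveals
    its exact value. Repeatedly query the input with the smallest lower
    bound: once that input's exact value is known, it is <= every other
    lower bound, hence certainly the minimum.
    """
    lo = {i: l for i, (l, h) in enumerate(intervals)}
    known = {}
    while True:
        best = min(lo, key=lambda i: lo[i])
        if best in known:              # value <= all other lower bounds
            return best, len(known)
        known[best] = query(best)      # refine interval to a point
        lo[best] = known[best]

values = [5, 2, 8]                     # hidden exact values
intervals = [(4, 6), (1, 3), (7, 9)]
idx, queries = find_minimum(intervals, lambda i: values[i])
assert idx == 1 and queries == 1
```

Competitive analysis in the paper compares such query counts against the minimum number of queries that suffice in hindsight, which is what "2-update competitive" quantifies.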
### Approximating Shortest Paths in Graphs
[chapter]

2009 *Lecture Notes in Computer Science*

Computing all-pairs distances in a graph is a fundamental problem of computer science, but there has been a status quo with respect to the general problem of weighted directed graphs. In contrast, there has been growing interest in the area of algorithms for approximate shortest paths, leading to many interesting variations of the original problem. In this article, we trace some of the fundamental developments like spanners and distance oracles, their underlying constructions, as well as their applications to approximate all-pairs shortest paths.

doi:10.1007/978-3-642-00202-1_3
fatcat:u22g6ldmgjgpvelv4jnob5r6ma
### Planar Graph Blocking for External Searching

2002 *Algorithmica*

We present a new scheme for storing a planar graph in external memory so that any online path can be traversed in an I/O-efficient way. Our storage scheme significantly improves the previous results for planar graphs with bounded face size. We also prove an upper bound on the I/O efficiency of any storage scheme for well-shaped triangulated meshes. For these meshes, our storage scheme achieves optimal performance.

doi:10.1007/s00453-002-0969-2
fatcat:s46pjwlrnjcedhlzgdaznuweym
### The Hardness of Speeding-up Knapsack

1998 *BRICS Report Series*

For completeness, we rederive some of the results from *Sen* [8]. The arity of a tree is the maximum number of children at any node. ... On the other hand, for problems like convex hulls, a matching upper bound is shown to exist (*Sen* [8]) in our model, which is similar to the CRCW model. ...

doi:10.7146/brics.v5i14.19286
fatcat:3jccl37advfnrm6py4dp5igqpa
### On the streaming complexity of fundamental geometric problems
[article]

2018 *arXiv* pre-print

In this paper, we focus on lower bounds and algorithms for some basic geometric problems in the one-pass (insertion only) streaming model. The problems considered are grouped into three categories: (i) Klee's measure, (ii) convex body approximation and geometric queries, and (iii) discrepancy. Klee's measure is the problem of finding the area of the union of hyperrectangles. Under convex body approximation, we consider the problems of convex hull, convex body approximation, and linear programming in fixed dimensions. The results for convex body approximation imply a property-testing type result to find if a query point lies inside a convex polyhedron. Under discrepancy, we consider both geometric and combinatorial discrepancy. For all the problems considered, we present (randomized) lower bounds on space. Most of our lower bounds are in terms of approximating the solution with respect to an error parameter ϵ. We provide approximation algorithms that closely match the lower bound on space for most of the problems.

arXiv:1803.06875v1
fatcat:lpqcqa3dhjgaplewceun44bfu4
### The covert set-cover problem with application to Network Discovery
[article]

2012 *arXiv* pre-print

We address a version of the set-cover problem where we do not know the sets initially (hence referred to as covert), but we can query an element to find out which sets contain this element, as well as query a set to learn its elements. We want to find a small set-cover using a minimal number of such queries. We present a Monte Carlo randomized algorithm that approximates an optimal set-cover of size OPT within an O(log N) factor with high probability using O(OPT · log^2 N) queries, where N is the input size. We apply this technique to the network discovery problem that involves certifying all the edges and non-edges of an unknown n-vertex graph based on layered-graph queries from a minimal number of vertices. By reducing it to the covert set-cover problem we present an O(log^2 n)-competitive Monte Carlo randomized algorithm for the covert version of the network discovery problem. The previously best known algorithm has a competitive ratio of Ω(√(n log n)) and therefore our result achieves an exponential improvement.

arXiv:1202.1090v1
fatcat:526es5tdpvbgzb7cgdd6yqspaq
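The covert query model can be illustrated with a plain greedy cover built only through the two query types the abstract names. The query interface (`sets_of`, `members_of`) and the greedy rule are illustrative stand-ins, not the paper's Monte Carlo algorithm or its query bound.

```python
def covert_greedy_cover(universe, sets_of, members_of):
    """Greedy set cover where sets are learned only through queries.

    sets_of(e): ids of sets containing element e (an element query).
    members_of(s): the elements of set s (a set query).
    Discovers sets lazily, then greedily picks the discovered set
    covering the most still-uncovered elements.
    """
    uncovered = set(universe)
    discovered = {}                      # set id -> elements learned so far
    cover = []
    while uncovered:
        e = next(iter(uncovered))
        for s in sets_of(e):             # element query on e
            if s not in discovered:
                discovered[s] = set(members_of(s))   # set query on s
        best = max(discovered, key=lambda s: len(discovered[s] & uncovered))
        cover.append(best)
        uncovered -= discovered[best]
    return cover

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
inv = {e: [s for s, m in sets.items() if e in m] for e in range(1, 7)}
cover = covert_greedy_cover(set(range(1, 7)),
                            lambda e: inv[e], lambda s: sets[s])
assert set().union(*(sets[s] for s in cover)) == set(range(1, 7))
```

Every iteration covers at least one new element (some discovered set contains e), so the loop terminates; the paper's contribution is achieving a comparable cover quality with far fewer queries than exhaustive discovery.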

*Showing results 1 — 15 out of 1,909 results*