A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is `application/pdf`.

### Dynamic Parameterized Problems and Algorithms [article]

2017 · *arXiv* · pre-print

Fixed-parameter algorithms and kernelization are two powerful methods to solve NP-hard problems. Yet, so far those algorithms have been largely restricted to static inputs. In this paper we provide fixed-parameter algorithms and kernelizations for fundamental NP-hard problems with dynamic inputs. We consider a variety of parameterized graph and hitting set problems which are known to have f(k)n^1+o(1) time algorithms on inputs of size n, and we consider the question of whether there is a data structure that supports small updates (such as edge/vertex/set/element insertions and deletions) with an update time of g(k)n^o(1); such an update time would be essentially optimal. Update and query times independent of n are particularly desirable. Among many other results, we show that Feedback Vertex Set and k-Path admit dynamic algorithms with f(k) log^O(1) n update and query times for some function f depending on the solution size k only. We complement our positive results by several conditional and unconditional lower bounds. For example, we show that unlike their undirected counterparts, Directed Feedback Vertex Set and Directed k-Path do not admit dynamic algorithms with n^o(1) update and query times even for constant solution sizes k ≤ 3, assuming popular hardness hypotheses. We also show that unconditionally, in the cell probe model, Directed Feedback Vertex Set cannot be solved with update time that is purely a function of k.

arXiv:1707.00362v1
fatcat:ficuqxkgn5atzgfb3mgnyfrnx4
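The static f(k)·n^1+o(1) algorithms this abstract starts from include the classic color-coding method for k-Path. As an illustrative baseline (this is the standard static technique, not the paper's dynamic data structure; the graph representation and trial count are choices of this sketch):

```python
import random

def has_k_path(adj, n, k, trials=300):
    """Randomized color-coding test for a simple path on k vertices.

    adj: dict mapping vertex -> iterable of neighbours (vertices 0..n-1).
    Each trial colors vertices with k colors; a colorful path (all colors
    distinct) is found by dynamic programming over color subsets.  A fixed
    k-path is colorful with probability >= k!/k^k per trial, so `trials`
    independent repetitions make false negatives very unlikely.
    """
    if k == 1:
        return n > 0
    for _ in range(trials):
        color = {v: random.randrange(k) for v in range(n)}
        # dp[v]: color sets realizable by a colorful path ending at v
        dp = {v: {frozenset([color[v]])} for v in range(n)}
        for _ in range(k - 1):
            ndp = {v: set() for v in range(n)}
            for u in range(n):
                for S in dp[u]:
                    for w in adj.get(u, ()):
                        if color[w] not in S:
                            ndp[w].add(S | {color[w]})
            dp = ndp
        if any(len(S) == k for sets in dp.values() for S in sets):
            return True
    return False
```

A false positive is impossible (any colorful path is a simple path, since repeated vertices repeat a color); only false negatives carry the small per-trial failure probability.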
### Faster Update Time for Turnstile Streaming Algorithms [article]

2019 · *arXiv* · pre-print

In this paper, we present a new algorithm for maintaining linear sketches in turnstile streams with faster update time. As an application, we show that log n Count sketches or CountMin sketches with a constant number of columns (i.e., buckets) can be implicitly maintained in worst-case O(log^0.582 n) update time using O(log n) words of space, on a standard word RAM with word size w = Θ(log n). The exponent 0.582 ≈ 2ω/3 - 1, where ω is the current matrix multiplication exponent. Due to the numerous applications of linear sketches, our algorithm improves the update time for many streaming problems in turnstile streams, in the high success probability setting, without using more space, including ℓ_2 norm estimation, ℓ_2 heavy hitters, point query with ℓ_1 or ℓ_2 error, etc. Our algorithm generalizes, with the same update time and space, to maintaining log n linear sketches, where each sketch partitions the coordinates into k ...

arXiv:1911.01351v1
fatcat:wcfsuerlfbdobozero7xubjmau
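For background on the object being maintained, a minimal CountMin-style sketch supporting turnstile (signed) updates can be written as follows. This is an illustrative simplification, not the paper's fast-update construction: the salted Python hashes and the median point query (rather than min, which is only valid for non-negative streams) are choices of this sketch.

```python
import random

class CountMin:
    """CountMin-style sketch for turnstile streams.

    rows x cols counter table; update() adds a signed delta to one
    counter per row, point_query() returns the median row estimate
    (robust to signed updates, unlike the classic min estimator).
    """
    def __init__(self, rows=5, cols=16, seed=0):
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(rows)]
        self.cols = cols
        self.table = [[0] * cols for _ in range(rows)]

    def _bucket(self, r, key):
        # Illustrative salted hash; a real sketch uses pairwise-
        # independent hash families for provable guarantees.
        return hash((self.salts[r], key)) % self.cols

    def update(self, key, delta):          # turnstile: delta may be negative
        for r in range(len(self.table)):
            self.table[r][self._bucket(r, key)] += delta

    def point_query(self, key):
        ests = sorted(self.table[r][self._bucket(r, key)]
                      for r in range(len(self.table)))
        return ests[len(ests) // 2]
```

Each update touches one counter per row, so the naive update time is proportional to the number of rows; the paper's contribution is maintaining many such sketches implicitly with sublinear worst-case update cost.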
### An Illuminating Algorithm for the Light Bulb Problem [article]

2018 · *arXiv* · pre-print

The Light Bulb Problem is one of the most basic problems in data analysis. One is given as input n vectors in {-1,1}^d, which are all independently and uniformly random, except for a planted pair of vectors with inner product at least ρ·d for some constant ρ > 0. The task is to find the planted pair. The most straightforward algorithm leads to a runtime of Ω(n^2). Algorithms based on techniques like Locality-Sensitive Hashing achieve runtimes of n^2 - O(ρ); as ρ gets small, these approach quadratic. Building on prior work, we give a new algorithm for this problem which runs in time O(n^1.582 + nd), regardless of how small ρ is. This matches the best known runtime due to Karppa et al. Our algorithm combines techniques from previous work on the Light Bulb Problem with the so-called 'polynomial method in algorithm design,' and has a simpler analysis than previous work. Our algorithm is also easily derandomized, leading to a deterministic algorithm for the Light Bulb Problem with the same runtime of O(n^1.582 + nd), improving previous results.

arXiv:1810.06740v1
fatcat:q3ef37n3knb6np2omjfjr7tssa
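The Ω(n^2) "most straightforward algorithm" mentioned above is easy to make concrete. The following sketch (helper names are this illustration's, not the paper's) plants a ρ-correlated pair among random ±1 vectors and recovers it by exhaustive inner products:

```python
import random

def planted_instance(n, d, rho, seed=1):
    """n random +-1 vectors; vector 1 is a noisy copy of vector 0 with
    expected inner product rho * d (each coordinate of the copy is
    flipped independently with probability (1 - rho) / 2)."""
    rng = random.Random(seed)
    vecs = [[rng.choice((-1, 1)) for _ in range(d)] for _ in range(n)]
    vecs[1] = [-x if rng.random() < (1 - rho) / 2 else x for x in vecs[0]]
    return vecs

def find_planted_pair(vecs):
    """Quadratic baseline: return the index pair with maximum inner
    product, which is the planted pair with overwhelming probability
    once rho * d dominates the sqrt(d) fluctuations of random pairs."""
    best, pair = float("-inf"), None
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            ip = sum(a * b for a, b in zip(vecs[i], vecs[j]))
            if ip > best:
                best, pair = ip, (i, j)
    return pair
```

With n = 30, d = 400, ρ = 0.5, the planted inner product concentrates around 200 while random pairs fluctuate around 0 with standard deviation 20, so the maximum-inner-product pair is the planted one.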
### Probabilistic Rank and Matrix Rigidity [article]

2017 · *arXiv* · pre-print

We consider a notion of probabilistic rank and probabilistic sign-rank of a matrix, which measures the extent to which a matrix can be probabilistically represented by low-rank matrices. We demonstrate several connections with matrix rigidity, communication complexity, and circuit lower bounds, including: The Walsh-Hadamard Transform is Not Very Rigid. We give surprising upper bounds on the rigidity of a family of matrices whose rigidity has been extensively studied, and was conjectured to be highly rigid. For the 2^n × 2^n Walsh-Hadamard transform H_n (a.k.a. Sylvester matrices, or the communication matrix of Inner Product mod 2), we show how to modify only 2^ϵn entries in each row and make the rank drop below 2^n(1-Ω(ϵ^2/log(1/ϵ))), for all ϵ > 0, over any field. That is, it is not possible to prove arithmetic circuit lower bounds on Hadamard matrices via L. Valiant's matrix rigidity approach. We also show non-trivial rigidity upper bounds for H_n with smaller target rank. Matrix Rigidity and Threshold Circuit Lower Bounds. We give new consequences of rigid matrices for Boolean circuit complexity. We show that explicit n × n Boolean matrices which maintain rank at least 2^(log n)^1-δ after n^2/2^(log n)^δ/2 modified entries would yield a function lacking sub-quadratic-size AC^0 circuits with two layers of arbitrary linear threshold gates. We also prove that explicit 0/1 matrices over R which are modestly more rigid than the best known rigidity lower bounds for sign-rank would imply strong lower bounds for the infamously difficult class THR∘THR.

arXiv:1611.05558v2
fatcat:lqij2wpjzrduxoa3g5iswj6q2a
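The Sylvester construction of H_n referenced above is simple to write down. This sketch only illustrates the matrix family itself (H_n has full rank 2^n, since H_n·H_n^T = 2^n·I, which is exactly why the rigidity question asks how few entry changes suffice to drop the rank); it says nothing about the paper's bounds.

```python
def hadamard(n):
    """Sylvester construction of the 2^n x 2^n Walsh-Hadamard matrix:
    H_0 = [1], and H_{m+1} = [[H_m, H_m], [H_m, -H_m]].  Entry (i, j)
    equals (-1)^<i, j>, the inner product mod 2 of the binary
    expansions of i and j (the communication matrix of IP mod 2)."""
    H = [[1]]
    for _ in range(n):
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H
```

A quick sanity check: distinct rows of H_3 are orthogonal and every row has squared norm 2^3 = 8, confirming full rank.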
### A Refined Laser Method and Faster Matrix Multiplication [article]

2020 · *arXiv* · pre-print

The complexity of matrix multiplication is measured in terms of ω, the smallest real number such that two n×n matrices can be multiplied using O(n^ω+ϵ) field operations for all ϵ > 0; the best bound until now is ω < 2.37287 [Le Gall'14]. All bounds on ω since 1986 have been obtained using the so-called laser method, a way to lower-bound the 'value' of a tensor in designing matrix multiplication algorithms. The main result of this paper is a refinement of the laser method that improves the value bound for most sufficiently large tensors. Thus, even before computing any specific values, it is clear that we achieve an improved bound on ω, and we indeed obtain the best bound on ω to date: ω < 2.37286. The improvement is of the same magnitude as the improvement that [Le Gall'14] obtained over the previous bound [Vassilevska W.'12]. Our improvement to the laser method is quite general, and we believe it will have further applications in arithmetic complexity.

arXiv:2010.05846v1
fatcat:wrcl75w44jdwfgjpgrsxv4nw4y
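For a concrete, elementary instance of a subcubic bound on ω (classical Strassen recursion, not the paper's refined laser method), here is the 7-multiplication scheme whose recursive application gives ω ≤ log_2 7 ≈ 2.81, the starting point of the line of work culminating in ω < 2.37286:

```python
def strassen_2x2(A, B):
    """Strassen's 7-multiplication scheme for 2x2 matrices (or 2x2
    block matrices over any ring).  Recursing on n x n matrices yields
    O(n^{log2 7}) ~ O(n^2.81) total multiplications, versus 8 per step
    (hence O(n^3)) for the naive algorithm."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p1 + p5 - p3 - p7))
```

The scheme trades one multiplication for extra additions, which is exactly the kind of tensor-rank saving that the laser method amplifies on much larger tensors.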
### Polynomial Representations of Threshold Functions and Algorithmic Applications [article]

2016 · *arXiv* · pre-print

The approach this time requires only the previous probabilistic polynomials by *Alman* and Williams [AW15]. ...

arXiv:1608.04355v1
fatcat:ystinrm3xfekxe6blrjiobwpay

... *Alman* and Williams [AW15] designed a probabilistic polynomial for TH_θ which already achieves a tight degree bound of Θ(√(n log s)). ...
### Further limitations of the known approaches for matrix multiplication [article]

2017 · *arXiv* · pre-print

We consider the techniques behind the current best algorithms for matrix multiplication. Our results are threefold. (1) We provide a unifying framework, showing that all known matrix multiplication running times since 1986 can be achieved from a single very natural tensor: the structural tensor T_q of addition modulo an integer q. (2) We show that if one applies a generalization of the known techniques (arbitrary zeroing out of tensor powers to obtain independent matrix products in order to use the asymptotic sum inequality of Schönhage) to an arbitrary monomial degeneration of T_q, then there is an explicit lower bound, depending on q, on the bound on the matrix multiplication exponent ω that one can achieve. We also show upper bounds on the value α that one can achieve, where α is such that n × n^α × n matrix multiplication can be computed in n^2+o(1) time. (3) We show that our lower bound on ω approaches 2 as q goes to infinity. This suggests a promising approach to improving the bound on ω: for variable q, find a monomial degeneration of T_q which, using the known techniques, produces an upper bound on ω as a function of q. Then, take q to infinity. It is not ruled out, and hence possible, that one can obtain ω = 2 in this way.

arXiv:1712.07246v1
fatcat:4usynmvqdjerlkm23oczgerc54
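The structural tensor T_q is concrete enough to write out directly. In this sketch it is taken to be the q×q×q 0/1 tensor with T_q[i][j][k] = 1 iff i + j ≡ k (mod q); the exact index convention is an assumption of this illustration, not quoted from the paper.

```python
def structural_tensor(q):
    """The q x q x q 0/1 tensor of addition mod q under the convention
    T_q[i][j][k] = 1 iff (i + j) % q == k.  Each (i, j) pair selects
    exactly one k, so the tensor has exactly q^2 nonzero entries."""
    return [[[1 if (i + j) % q == k else 0
              for k in range(q)]
             for j in range(q)]
            for i in range(q)]
```

Despite its tiny description, the paper shows this single tensor family already generates every matrix multiplication running time proved since 1986.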
### Algorithms and Hardness for Linear Algebra on Geometric Graphs [article]

2020 · *arXiv* · pre-print

For a function 𝖪 : ℝ^d×ℝ^d→ℝ_≥0, and a set P = {x_1, ..., x_n} ⊂ ℝ^d of n points, the 𝖪 graph G_P of P is the complete graph on n nodes where the weight between nodes i and j is given by 𝖪(x_i, x_j). In this paper, we initiate the study of when efficient spectral graph theory is possible on these graphs. We investigate whether or not it is possible to solve the following problems in n^1+o(1) time for a 𝖪-graph G_P when d < n^o(1):

- Multiply a given vector by the adjacency matrix or Laplacian matrix of G_P
- Find a spectral sparsifier of G_P
- Solve a Laplacian system in G_P's Laplacian matrix

For each of these problems, we consider all functions of the form 𝖪(u,v) = f(‖u-v‖_2^2) for a function f : ℝ→ℝ. We provide algorithms and comparable hardness results for many such 𝖪, including the Gaussian kernel, Neural tangent kernels, and more. For example, in dimension d = Ω(log n), we show that there is a parameter associated with the function f for which low parameter values imply n^1+o(1)-time algorithms for all three of these problems and high parameter values imply the nonexistence of subquadratic-time algorithms assuming the Strong Exponential Time Hypothesis (𝖲𝖤𝖳𝖧), given natural assumptions on f. As part of our results, we also show that the exponential dependence on the dimension d in the celebrated fast multipole method of Greengard and Rokhlin cannot be improved, assuming 𝖲𝖤𝖳𝖧, for a broad class of functions f. To the best of our knowledge, this is the first formal limitation proven about fast multipole methods.

arXiv:2011.02466v1
fatcat:zkii7x5i75copjfwrql6vsdhdi
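A naive baseline for the first problem above (multiplying a vector by the 𝖪-graph's adjacency matrix) runs in O(n^2 d) time; the paper asks when n^1+o(1) is possible instead. This sketch uses f(t) = e^-t (a Gaussian-type kernel) as its default and keeps the diagonal at zero, both choices of the illustration:

```python
import math

def kernel_matvec(points, x, f=lambda t: math.exp(-t)):
    """Multiply the adjacency matrix of the K-graph on `points` by the
    vector x, where K(u, v) = f(||u - v||_2^2).  The double loop costs
    O(n^2 d); self-loops are excluded (zero diagonal)."""
    n = len(points)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                sq = sum((a - b) ** 2
                         for a, b in zip(points[i], points[j]))
                out[i] += f(sq) * x[j]
    return out
```

The hardness side of the paper says that for "hard" choices of f, beating this quadratic loop substantially is impossible under 𝖲𝖤𝖳𝖧.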
### Optimal-Degree Polynomial Approximations for Exponentials and Gaussian Kernel Density Estimation [article]

2022 · *arXiv* · pre-print

For any real numbers B ≥ 1 and δ ∈ (0, 1) and function f : [0, B] → ℝ, let d_B;δ(f) ∈ ℤ_>0 denote the minimum degree of a polynomial p(x) satisfying sup_{x ∈ [0, B]} |p(x) - f(x)| < δ. In this paper, we provide precise asymptotics for d_B;δ(e^-x) and d_B;δ(e^x) in terms of both B and δ, improving both the previously known upper bounds and lower bounds. In particular, we show d_B;δ(e^-x) = Θ(max{√(B log(δ^-1)), log(δ^-1)/log(B^-1 log(δ^-1))}) and d_B;δ(e^x) = Θ(max{B, log(δ^-1)/log(B^-1 log(δ^-1))}). Polynomial approximations for e^-x and e^x have applications to the design of algorithms for many problems, and our degree bounds show both the power and limitations of these algorithms. We focus in particular on the Batch Gaussian Kernel Density Estimation problem for n sample points in Θ(log n) dimensions with error δ = n^-Θ(1). We show that the running time one can achieve depends on the square of the diameter of the point set, B, with a transition at B = Θ(log n) mirroring the corresponding transition in d_B;δ(e^-x):

- When B = o(log n), we give the first algorithm running in time n^1+o(1).
- When B = κ log n for a small constant κ > 0, we give an algorithm running in time n^1+O(log log κ^-1 / log κ^-1). The log log κ^-1 / log κ^-1 term in the exponent comes from analyzing the behavior of the leading constant in our computation of d_B;δ(e^-x).
- When B = ω(log n), we show that time n^2-o(1) is necessary assuming SETH.

arXiv:2205.06249v1
fatcat:wlypzbtmczcuhocz75cpgtrhim
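The quantity d_{B;δ}(f) can be estimated numerically. This sketch upper-bounds it by Chebyshev interpolation, which is near-minimax, so the reported degree may exceed the true minimum by a small additive amount; it is an empirical probe, not the paper's analytic method:

```python
import numpy as np

def min_degree(f, B, delta, max_deg=60):
    """Smallest degree d (up to max_deg) at which the degree-d Chebyshev
    interpolant of f on [0, B] has sup-norm error below delta on a fine
    grid.  Since Chebyshev interpolation is within a small factor of the
    best polynomial, this upper-bounds d_{B;delta}(f)."""
    grid = np.linspace(0.0, B, 2001)
    target = f(grid)
    for d in range(max_deg + 1):
        p = np.polynomial.chebyshev.Chebyshev.interpolate(f, d, domain=[0, B])
        if np.max(np.abs(p(grid) - target)) < delta:
            return d
    return None
```

Plotting the returned degree for e^-x against growing B reproduces the √(B log(1/δ)) growth in the abstract's first regime.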
### Limits on All Known (and Some Unknown) Approaches to Matrix Multiplication [article]

2018 · *arXiv* · pre-print

*Alman* and Vassilevska W. ...
### Limits on the Universal Method for Matrix Multiplication [article]

2019 · *arXiv* · pre-print

*Alman* and Vassilevska Williams recently defined the Universal Method, which substantially generalizes all the known approaches, including Strassen's Laser Method and Cohn and Umans' Group Theoretic Method ...
### Limits on the Universal Method for Matrix Multiplication

2019 · *Computational Complexity Conference*

*Alman* and Vassilevska Williams [2] recently defined the Universal Method, which substantially generalizes all the known approaches including Strassen's Laser Method [20] and Cohn and Umans' Group Theoretic ...

*Alman* 12:3 a number of important questions: How close to 2 can we get using monomial degenerations; could it be that ω_g(CW_q) ≤ 2.1? ...

*Alman* 12:15 For any probability distribution p : L → [0, 1], let us count the number of x-variables used in ⊕_{(P_1,...,P_n)∈L^{n,p}} P_1 ⊗ ⋯ ⊗ P_n. ...
### Parameterized Sensitivity Oracles and Dynamic Algorithms using Exterior Algebras [article]

2022 · *arXiv* · pre-print

However, ...

arXiv:2204.10819v2
fatcat:utt5ay6ygbawjikdcl7xxngawa

*Alman*, Mnich and Vassilevska [AMW20] showed that under the aforementioned fine-grained conjectures, there is no dynamic algorithm for directed graphs. ... Moreover, *Alman*, Mnich and Vassilevska [AMW20] proved a conditional lower bound: that it does not have such an efficient dynamic parameterized algorithm assuming any one of three popular conjectures ...
### Cell-Probe Lower Bounds from Online Communication Complexity [article]

2017 · *arXiv* · pre-print

In this work, we introduce an online model for communication complexity. Analogous to how online algorithms receive their input piece-by-piece, our model presents one of the players, Bob, his input piece-by-piece, and has the players Alice and Bob cooperate to compute a result each time before the next piece is revealed to Bob. This model has a closer and more natural correspondence to dynamic data structures than classic communication models do, and hence presents a new perspective on data structures. We first present a tight lower bound for the online set intersection problem in the online communication model, demonstrating a general approach for proving online communication lower bounds. The online communication model prevents a batching trick that classic communication complexity allows, and yields a stronger lower bound. We then apply the online communication model to prove data structure lower bounds for two dynamic data structure problems: the Group Range problem and the Dynamic Connectivity problem for forests. Both of the problems admit a worst-case O(log n)-time data structure. Using online communication complexity, we prove a tight cell-probe lower bound for each: spending o(log n) (even amortized) time per operation results in at best an exp(-δ^2 n) probability of correctly answering a (1/2+δ)-fraction of the n queries.

arXiv:1704.06185v2
fatcat:fexrkwzaqjaxhiotuhxdkszw3y
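The worst-case O(log n)-time upper bound for the Group Range problem that this lower bound matches can be realized by a standard segment tree over group elements; here is a minimal sketch (padded to a power-of-two size so that non-commutative groups are handled correctly):

```python
class GroupRange:
    """Segment tree over a sequence of group elements.

    update() replaces one element and query(l, r) returns the ordered
    product of elements l..r-1, each in O(log n) group operations --
    the per-operation cost that the paper proves cannot be beaten in
    the cell-probe model.  `op` must be associative; `e` is its identity.
    """
    def __init__(self, values, op, e):
        self.op, self.e = op, e
        n = 1
        while n < len(values):
            n *= 2
        self.n = n
        # leaves padded with the identity so blocks never wrap around
        self.tree = [e] * n + list(values) + [e] * (n - len(values))
        for i in range(n - 1, 0, -1):
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, value):
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

    def query(self, l, r):
        """Ordered product of values[l:r]."""
        left, right = self.e, self.e
        l += self.n
        r += self.n
        while l < r:
            if l & 1:
                left = self.op(left, self.tree[l])
                l += 1
            if r & 1:
                r -= 1
                right = self.op(self.tree[r], right)
            l //= 2
            r //= 2
        return self.op(left, right)
```

Keeping separate left and right accumulators preserves operand order, which matters precisely for the non-abelian groups the Group Range problem allows.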
### OV Graphs Are (Probably) Hard Instances

2020 · *Innovations in Theoretical Computer Science*

A graph G on n nodes is an Orthogonal Vectors (OV) graph of dimension d if there are vectors v_1, ..., v_n ∈ {0,1}^d such that nodes i and j are adjacent in G if and only if ⟨v_i, v_j⟩ = 0 over Z. In this paper, we study a number of basic graph algorithm problems, except where one is given as input the vectors defining an OV graph instead of a general graph. We show that for each of the following problems, an algorithm solving it faster on such OV graphs G of dimension only d = O(log n) than in the general case would refute a plausible conjecture about the time required to solve sparse MAX-k-SAT instances:

- Determining whether G contains a triangle. More generally, determining whether G contains a directed k-cycle for any k ≥ 3.
- Computing the square of the adjacency matrix of G over Z or F_2.
- Maintaining the shortest distance between two fixed nodes of G, or whether G has a perfect matching, when G is a dynamically updating OV graph.

We also prove some complementary results about OV graphs. We show that any problem which is NP-hard on constant-degree graphs is also NP-hard on OV graphs of dimension O(log n), and we give two problems which can be solved faster on OV graphs than in general: Maximum Clique, and Online Matrix-Vector Multiplication.

Acknowledgements. The authors would like to thank the anonymous reviewers for their comments on an earlier version.

Hypothesis 1 (Strong Exponential Time Hypothesis). For every ε > 0, there is an integer k ≥ 3 such that k-SAT on n variables cannot be solved in O(2^(1−ε)n) (randomized) time.

OVC concerns the Orthogonal Vectors (OV) problem: Given as input a set A ⊆ {0,1}^d of |A| = n vectors, determine whether there are a, b ∈ A such that ⟨a, b⟩ = 0 (all inner products in this paper, including this one, are taken over Z unless stated otherwise).

doi:10.4230/lipics.itcs.2020.83
dblp:conf/innovations/AlmanW20
fatcat:343pkhkgfncklhwq4eqrlibq7m
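The OV problem and OV graphs defined above are easy to state in code. A purely illustrative naive O(n^2 d) sketch of both:

```python
def has_orthogonal_pair(vectors):
    """Naive O(n^2 d) check for the Orthogonal Vectors problem: are
    there a, b in the set with <a, b> = 0 over Z?  OVC asserts that
    for d = omega(log n) no n^{2-eps}-time algorithm exists."""
    n = len(vectors)
    return any(all(x * y == 0 for x, y in zip(vectors[i], vectors[j]))
               for i in range(n) for j in range(i + 1, n))

def ov_graph_edges(vectors):
    """Edge list of the OV graph: nodes i, j adjacent iff <v_i, v_j> = 0."""
    n = len(vectors)
    return [(i, j)
            for i in range(n) for j in range(i + 1, n)
            if all(x * y == 0 for x, y in zip(vectors[i], vectors[j]))]
```

The paper's point is that even with this succinct O(n log n)-bit representation in hand, problems like triangle detection on the resulting graph plausibly still require near-quadratic time.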
*Showing results 1 — 15 out of 96 results*