A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is `application/pdf`

### Bridging between 0/1 and Linear Programming via Random Walks
[article] · 2019 · *arXiv* pre-print

Under the Strong Exponential Time Hypothesis, an integer linear program with n Boolean-valued variables and m equations cannot be solved in c^n time for any constant c < 2. If the domain of the variables is relaxed to [0,1], the associated linear program can of course be solved in polynomial time. In this work, we give a natural algorithmic bridging between these extremes of 0-1 and linear programming. Specifically, for any subset (finite union of intervals) E ⊂ [0,1] containing {0,1}, we give a random-walk based algorithm with runtime O_E((2-measure(E))^n poly(n,m)) that finds a solution in E^n to any n-variable linear program with m constraints that is feasible over {0,1}^n. Note that as E expands from {0,1} to [0,1], the runtime improves smoothly from 2^n to polynomial. Taking E = [0,1/k) ∪ (1-1/k,1] in our result yields as a corollary a randomized (2-2/k)^n poly(n) time algorithm for k-SAT. While our approach has some high-level resemblance to Schöning's beautiful algorithm, our general algorithm is based on a more sophisticated random walk that incorporates several new ingredients, such as a multiplicative potential to measure progress, a judicious choice of starting distribution, and a time-varying distribution for the evolution of the random walk that is itself computed via an LP at each step (a solution to which is guaranteed based on the minimax theorem). Plugging the LP algorithm into our earlier polymorphic framework yields fast exponential algorithms for any CSP (such as k-SAT, 1-in-3-SAT, and NAE k-SAT) that admits so-called 'threshold partial polymorphisms.'

arXiv:1904.04860v1
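For context, the (2-2/k)^n poly(n) corollary above matches the guarantee of Schöning's classic k-SAT random walk, which this paper generalizes. A minimal sketch of that classic walk (the standard algorithm, not the paper's LP-based one; the clause encoding and restart count are illustrative choices):

```python
import random

def schoening_ksat(clauses, n, tries=None):
    """Schoening's random walk for k-SAT: restart from a uniformly random
    assignment, then repeatedly flip a random variable of some violated
    clause.  `clauses` is a list of tuples of nonzero ints; literal v
    means variable |v| with positive/negative polarity."""
    k = max(len(c) for c in clauses)
    # Expected O((2 - 2/k)^n) restarts suffice; the constant here is ad hoc.
    tries = tries if tries is not None else int(100 * (2 - 2 / k) ** n) + 1
    for _ in range(tries):
        assign = [random.choice([False, True]) for _ in range(n + 1)]
        for _ in range(3 * n):  # local random-walk phase
            unsat = [c for c in clauses
                     if not any(assign[abs(l)] == (l > 0) for l in c)]
            if not unsat:
                return assign[1:]  # satisfying assignment, 0-indexed
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None  # no solution found (likely unsatisfiable)
```

On a satisfiable instance such as [(1, 2, 3), (-1, -2, 3), (1, -2, -3)], the walk returns a satisfying assignment with overwhelming probability.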
### A Simple Sublinear Algorithm for Gap Edit Distance
[article] · 2020 · *arXiv* pre-print

We study the problem of estimating the edit distance between two n-character strings. While exact computation in the worst case is believed to require near-quadratic time, previous work showed that in certain regimes it is possible to solve the following gap edit distance problem in sub-linear time: distinguish between inputs of distance < k and > k^2. Our main result is a very simple algorithm for this benchmark that runs in time Õ(n/√(k)), and in particular settles the open problem of achieving a truly sublinear time for the entire range of relevant k. Building on the same framework, we also obtain a k-vs-k^2 algorithm for the one-sided preprocessing model with Õ(n) preprocessing time and Õ(n/k) query time (improving over a recent Õ(n/k+k^2)-query time algorithm for the same problem [GRS'20]).

arXiv:2007.14368v1
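For reference, the exact worst-case baseline that these gap algorithms sidestep is the textbook quadratic-time dynamic program (a standard sketch, not from the paper):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program for edit distance
    (insertions, deletions, substitutions), using two rolling rows."""
    m = len(b)
    prev = list(range(m + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i] + [0] * m
        for j, cb in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,               # delete ca
                         cur[j - 1] + 1,            # insert cb
                         prev[j - 1] + (ca != cb))  # substitute or match
        prev = cur
    return prev[m]
```

For example, `edit_distance("kitten", "sitting")` evaluates to 3; the sublinear algorithms above answer the gap question while reading only a small fraction of the input.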
### Vertex isoperimetry and independent set stability for tensor powers of cliques
[article] · 2017 · *arXiv* pre-print

The tensor power of the clique on t vertices (denoted by K_t^n) is the graph on vertex set {1, ..., t}^n such that two vertices x, y ∈ {1, ..., t}^n are connected if and only if x_i ≠ y_i for all i ∈ {1, ..., n}. Define the density of a subset S of K_t^n to be μ(S) := |S|/t^n, and the vertex boundary of a set S to be the set of vertices incident to some vertex of S, perhaps including points of S. We investigate two similar problems on such graphs. First, we study the vertex isoperimetry problem: given a density ν ∈ [0, 1], what is the smallest possible density of the vertex boundary of a subset of K_t^n of density ν? Let Φ_t(ν) be the infimum of these minimum densities as n → ∞. We find a recursive relation that allows one to compute Φ_t(ν) in time polynomial in the number of desired bits of precision. Second, we study, given an independent set I ⊆ K_t^n of density μ(I) = (1/t)(1-ϵ), how close it is to a maximum-sized independent set J of density 1/t. We show that this deviation (measured by μ(I ∖ J)) is at most 4ϵ^(log t/(log t - log(t-1))) as long as ϵ < 1 - 3/t + 2/t^2. This substantially improves on results of Alon, Dinur, Friedgut, and Sudakov (2004) and Ghandehari and Hatami (2008), which had an O(ϵ) upper bound. We also show the exponent log t/(log t - log(t-1)) is optimal as n tends to infinity and ϵ tends to 0. The methods are similar to recent work by Ellis, Keller, and Lifshitz (2016) in the context of Kneser graphs and other settings. The author hopes that these results have potential applications in hardness of approximation, particularly in approximate graph coloring and independent set problems.

arXiv:1702.04432v1
### An Algorithmic Blend of LPs and Ring Equations for Promise CSPs
[article] · 2018 · *arXiv* pre-print

Promise CSPs are a relaxation of constraint satisfaction problems where the goal is to find an assignment satisfying a relaxed version of the constraints. Several well-known problems can be cast as promise CSPs, including approximate graph coloring, discrepancy minimization, and interesting variants of satisfiability. Similar to CSPs, the tractability of promise CSPs can be tied to the structure of operations on the solution space called polymorphisms, though in the promise world these are much less constrained. Under the thesis that non-trivial polymorphisms govern tractability, promise CSPs therefore provide a fertile ground for the discovery of novel algorithms. In previous work, we classified Boolean promise CSPs when the constraint predicates are symmetric. In this work, we vastly generalize these algorithmic results. Specifically, we show that promise CSPs that admit a family of "regional-periodic" polymorphisms are in P, assuming that determining which region a point is in can be computed in polynomial time. Such polymorphisms are quite general and are obtained by gluing together several functions that are periodic in the Hamming weights in different blocks of the input. Our algorithm is based on a novel combination of linear programming and solving linear systems over rings. We also abstract a framework based on reducing a promise CSP to a CSP over an infinite domain, solving it there, and then rounding the solution to an assignment for the promise CSP instance. The rounding step is intimately tied to the family of polymorphisms and clarifies the connection between polymorphisms and algorithms in this context. As a key ingredient, we introduce the technique of finding a solution to a linear program with integer coefficients that lies in a different ring (such as Z[√2]) to bypass ad-hoc adjustments for lying on a rounding boundary.

arXiv:1807.05194v1
### Smoothed Complexity of 2-player Nash Equilibria
[article] · 2020 · *arXiv* pre-print

We prove that computing a Nash equilibrium of a two-player (n × n) game with payoffs in [-1,1] is PPAD-hard (under randomized reductions) even in the smoothed analysis setting, smoothing with noise of constant magnitude. This gives a strong negative answer to conjectures of Spielman and Teng [ST06] and Cheng, Deng, and Teng [CDT09]. In contrast to prior work proving PPAD-hardness after smoothing by noise of magnitude 1/poly(n) [CDT09], our smoothed complexity result is not proved via hardness of approximation for Nash equilibria. This is by necessity, since Nash equilibria can be approximated to constant error in quasi-polynomial time [LMM03]. Our results therefore separate smoothed complexity and hardness of approximation for Nash equilibria in two-player games. The key ingredient in our reduction is the use of a random zero-sum game as a gadget to produce two-player games which remain hard even after smoothing. Our analysis crucially shows that all Nash equilibria of random zero-sum games are far from pure (with high probability), and that this remains true even after smoothing.

arXiv:2007.10857v1
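To make the zero-sum gadget concrete: equilibria of zero-sum games (unlike general two-player games) are easy to approximate, e.g. by fictitious play, whose empirical strategies converge to a Nash equilibrium in the zero-sum case. A stdlib sketch (a standard illustration, not the paper's reduction):

```python
def fictitious_play(A, rounds=50000):
    """Fictitious play on a zero-sum game with row-player payoff matrix A
    (row maximizes, column minimizes).  Returns the empirical mixed
    strategies, which converge to a Nash equilibrium for zero-sum games."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] = col_counts[0] = 1  # arbitrary initial plays
    for _ in range(rounds):
        # Each side best-responds to the opponent's empirical mixture.
        row_payoff = [sum(A[i][j] * col_counts[j] for j in range(n))
                      for i in range(m)]
        col_payoff = [sum(A[i][j] * row_counts[i] for i in range(m))
                      for j in range(n)]
        row_counts[row_payoff.index(max(row_payoff))] += 1
        col_counts[col_payoff.index(min(col_payoff))] += 1
    t = rounds + 1
    return [c / t for c in row_counts], [c / t for c in col_counts]
```

On matching pennies ([[1, -1], [-1, 1]]) both empirical strategies drift toward the fully mixed equilibrium (1/2, 1/2), echoing the abstract's point that equilibria of such zero-sum games are far from pure.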
### The Resolution of Keller's Conjecture
[chapter] · 2020 · *Lecture Notes in Computer Science*

*Joshua* is supported by an NSF graduate research fellowship. Marijn and David are supported by NSF grant CCF-1813993. ...

### CSPs with Global Modular Constraints: Algorithms and Hardness via Polynomial Representations
[article] · 2019 · *arXiv* pre-print

We study the complexity of Boolean constraint satisfaction problems (CSPs) when the assignment must have Hamming weight in some congruence class modulo M, for various choices of the modulus M. Due to the known classification of tractable Boolean CSPs, this mainly reduces to the study of three cases: 2-SAT, HORN-SAT, and LIN-2 (linear equations mod 2). We classify the moduli M for which these respective problems are polynomial time solvable, and when they are not (assuming the ETH). Our study reveals that this modular constraint lends a surprising richness to these classic, well-studied problems, with interesting broader connections to complexity theory and coding theory. The HORN-SAT case is connected to the covering complexity of polynomials representing the NAND function mod M. The LIN-2 case is tied to the sparsity of polynomials representing the OR function mod M, which in turn has connections to modular weight distribution properties of linear codes and locally decodable codes. In both cases, the analysis of our algorithm as well as the hardness reduction rely on these polynomial representations, highlighting an interesting algebraic common ground between hard cases for our algorithms and the gadgets which show hardness. These new complexity measures of polynomial representations merit further study. The inspiration for our study comes from a recent work by Nägele, Sudakov, and Zenklusen on submodular minimization with a global congruence constraint. Our algorithm for HORN-SAT has strong similarities to their algorithm, and in particular identical kinds of set systems arise in both cases. Our connection to polynomial representations leads to a simpler analysis of such set systems, and also sheds light on (but does not resolve) the complexity of submodular minimization with a congruency requirement modulo a composite M.

arXiv:1902.04740v1
### Lower Bounds for Maximally Recoverable Tensor Code and Higher Order MDS Codes
[article] · 2021 · *arXiv* pre-print

An (m,n,a,b)-tensor code consists of m × n matrices whose columns satisfy 'a' parity checks and rows satisfy 'b' parity checks (i.e., a tensor code is the tensor product of a column code and a row code). Tensor codes are useful in distributed storage because a single erasure can be corrected quickly either by reading its row or its column. Maximally Recoverable (MR) Tensor Codes, introduced by Gopalan et al., are tensor codes which can correct every erasure pattern that is information theoretically possible to correct. The main questions about MR Tensor Codes are characterizing which erasure patterns are correctable and obtaining explicit constructions over small fields. In this paper, we study the important special case when a=1, i.e., the columns satisfy a single parity check equation. We introduce the notion of higher order MDS codes (MDS(ℓ) codes), an interesting generalization of the well-known MDS codes, where ℓ captures the order of genericity of points in a low-dimensional space. We then prove that a tensor code with a=1 is MR iff the row code is an MDS(m) code. We then show that MDS(m) codes satisfy a weak duality. Using this characterization and duality, we prove that (m,n,a=1,b)-MR tensor codes require fields of size q = Ω_{m,b}(n^{min{b,m}-1}). Our lower bound also extends to the setting of a>1. We also give a deterministic polynomial time algorithm to check if a given erasure pattern is correctable by the MR tensor code (when a=1).

arXiv:2107.10822v1
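Recall the classical criterion that MDS(ℓ) codes generalize: a k × n generator matrix defines an MDS code iff every k × k minor is nonsingular. A small exact-arithmetic check of that classical condition (illustration only; the paper's MDS(m) notion is a stronger, higher-order genericity property):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant over the rationals via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]  # row swap flips the sign
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

def is_mds(G):
    """MDS criterion: every maximal (k x k) minor of the k x n generator
    matrix G is nonzero."""
    k, n = len(G), len(G[0])
    return all(det([[G[i][j] for j in cols] for i in range(k)]) != 0
               for cols in combinations(range(n), k))
```

A Vandermonde generator such as [[1, 1, 1, 1], [1, 2, 3, 4]] passes (Reed-Solomon codes are MDS over large enough fields), while [[1, 0, 0], [0, 1, 0]] fails.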
### Efficient Low-Redundancy Codes for Correcting Multiple Deletions
[article] · 2019 · *arXiv* pre-print

We consider the problem of constructing binary codes to recover from k-bit deletions with efficient encoding/decoding, for a fixed k. The single deletion case is well understood, with the Varshamov-Tenengolts-Levenshtein code from 1965 giving an asymptotically optimal construction with ≈ 2^n/n codewords of length n, i.e., at most log n bits of redundancy. However, even for the case of two deletions, there was no known explicit construction with redundancy less than n^Ω(1). For any fixed k, we construct a binary code with c_k log n redundancy that can be decoded from k deletions in O_k(n log^4 n) time. The coefficient c_k can be taken to be O(k^2 log k), which is only quadratically worse than the optimal, non-constructive bound of O(k). We also indicate how to modify this code to allow for a combination of up to k insertions and deletions.

arXiv:1507.06175v2
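The Varshamov-Tenengolts-Levenshtein single-deletion code mentioned above is simple to state: all length-n binary words whose weighted checksum vanishes mod n+1. A brute-force sketch of the code and a naive re-insertion decoder (for illustration; the real VT decoder runs in linear time):

```python
from itertools import product

def vt_syndrome(x):
    """VT checksum: sum of i * x_i over 1-indexed positions, mod n+1."""
    return sum(i * b for i, b in enumerate(x, 1)) % (len(x) + 1)

def vt_codewords(n, a=0):
    """All length-n binary words with syndrome a; VT_a(n) has roughly
    2^n/n codewords and corrects any single deletion."""
    return [w for w in product((0, 1), repeat=n) if vt_syndrome(w) == a]

def decode_one_deletion(received, n, a=0):
    """Recover the codeword from a word with one deleted bit by trying
    every re-insertion (naive O(n^2) search over candidates)."""
    for pos in range(n):
        for bit in (0, 1):
            cand = received[:pos] + (bit,) + received[pos:]
            if vt_syndrome(cand) == a:
                return cand
    return None
```

Because distinct VT codewords have disjoint single-deletion balls, the first candidate with the right syndrome is always the transmitted word.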
### Promise Constraint Satisfaction: Algebraic Structure and a Symmetric Boolean Dichotomy
[article] · 2021 · *arXiv* pre-print

A classic result due to Schaefer (1978) classifies all constraint satisfaction problems (CSPs) over the Boolean domain as being either in 𝖯 or 𝖭𝖯-hard. This paper considers a promise-problem variant of CSPs called PCSPs. A PCSP over a finite set of pairs of constraints Γ consists of a pair (Ψ_P, Ψ_Q) of CSPs with the same set of variables such that for every (P, Q) ∈ Γ, P(x_i_1, ..., x_i_k) is a clause of Ψ_P if and only if Q(x_i_1, ..., x_i_k) is a clause of Ψ_Q. The promise problem PCSP(Γ) is to distinguish, given (Ψ_P, Ψ_Q), between the cases that Ψ_P is satisfiable and that Ψ_Q is unsatisfiable. Many natural problems, including approximate graph and hypergraph coloring, can be placed in this framework. This paper is motivated by the pursuit of understanding the computational complexity of Boolean promise CSPs. As our main result, we show that PCSP(Γ) exhibits a dichotomy (it is either polynomial time solvable or 𝖭𝖯-hard) when the relations in Γ are symmetric and allow for negations of variables. We achieve our dichotomy theorem by extending the weak polymorphism framework of Austrin, Guruswami, and Håstad [FOCS '14], which is itself a generalization of the algebraic approach to studying CSPs. In both the algorithm and hardness portions of our proof, we incorporate new ideas and techniques not utilized in the CSP case. Furthermore, we show that the computational complexity of any promise CSP (over arbitrary finite domains) is captured entirely by its weak polymorphisms, a feature known as a Galois correspondence, and we give necessary and sufficient conditions for the structure of this set of weak polymorphisms. Such insights lead us to question the existence of a general dichotomy for Boolean PCSPs.

arXiv:1704.01937v2
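The polymorphism notion at the heart of this framework is easy to test by brute force: an operation f is a polymorphism of a relation R if applying f coordinatewise to tuples of R always lands back in R. A tiny checker for ordinary (non-weak, non-promise) Boolean polymorphisms, for illustration:

```python
from itertools import product

def is_polymorphism(f, R, arity):
    """Check that applying f: {0,1}^arity -> {0,1} coordinatewise to any
    `arity` tuples of the relation R yields a tuple of R."""
    return all(tuple(f(*col) for col in zip(*rows)) in R
               for rows in product(R, repeat=arity))

def maj(a, b, c):
    """Ternary majority operation on {0, 1}."""
    return int(a + b + c >= 2)

OR2 = {(0, 1), (1, 0), (1, 1)}               # satisfying tuples of (x or y)
ONE_IN_3 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}  # 1-in-3-SAT constraint
```

Majority is a polymorphism of every 2-SAT clause (`is_polymorphism(maj, OR2, 3)` holds) but not of the 1-in-3 relation, mirroring the tractable/hard divide the algebraic approach predicts.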
### Bounds on the Size of Sound Monotone Switching Networks Accepting Permutation Sets of Directed Trees
[article] · 2013 · *arXiv* pre-print

*Joshua* *Brakensiek* would also like to thank his parents, Warren and Kathleen, for their support. ... We would also like to acknowledge the Massachusetts Institute of Technology for hosting the Research Science Institute; *Joshua* *Brakensiek*'s Research Science Institute sponsors Dr. and Mrs. ...

### The Resolution of Keller's Conjecture
[article] · 2020 · *arXiv* pre-print

*Joshua* is supported by an NSF graduate research fellowship. Marijn and David are supported by NSF grant CCF-1813993. ...

*Joshua* *Brakensiek*, Marijn Heule, John Mackey, and David Narváez ...

### Constant-factor approximation of near-linear edit distance in near-linear time
[article] · 2020 · *arXiv* pre-print

We show that the edit distance between two strings of length n can be computed within a factor of f(ϵ) in n^{1+ϵ} time, as long as the edit distance is at least n^{1-δ} for some δ(ϵ) > 0.

arXiv:1904.05390v2
### EFFICIENT GEOMETRIC PROBABILITIES OF MULTI-TRANSITING EXOPLANETARY SYSTEMS FROM CORBITS

2016 · *Astrophysical Journal*

JB thanks his parents Warren and Kathleen *Brakensiek* for their support. DR thanks the Institute for Theory and Computation. We thank the Division of Planetary Sciences for a Hartmann Travel Grant. ...

doi:10.3847/0004-637x/821/1/47
### Coded trace reconstruction in a constant number of traces
[article] · 2020 · *arXiv* pre-print

The coded trace reconstruction problem asks to construct a code C ⊂ {0,1}^n such that any x ∈ C is recoverable from independent outputs ("traces") of x from a binary deletion channel (BDC). We present binary codes of rate 1-ε that are efficiently recoverable from exp(O_q(log^{1/3}(1/ε))) (a constant independent of n) traces of a BDC_q for any constant deletion probability q ∈ (0,1). We also show that, for rate 1-ε binary codes, Ω̃(log^{5/2}(1/ε)) traces are required. The results follow from a pair of black-box reductions that show that average-case trace reconstruction is essentially equivalent to coded trace reconstruction. We also show that there exist codes of rate 1-ε over an O_ε(1)-sized alphabet that are recoverable from O(log(1/ε)) traces, and that this is tight.

arXiv:1908.03996v3

*Showing results 1 — 15 out of 37 results*