IA Scholar Query: Randomly Rounding Rationals with Cardinality Constraints and Derandomizations.
https://scholar.archive.org/
Internet Archive Scholar query results feed (info@archive.org)
Thu, 19 May 2022 00:00:00 GMT

Simplification strategies and extremal examples of simplicial complexes
https://scholar.archive.org/work/udys4b4aezazhhf662ahdf2k5q
Since the beginning of Topology, one of the most widely used approaches to studying a geometric object has been to triangulate it. Many invariants to distinguish between different objects have been introduced over the years, the two most important surely being homology and the fundamental group. However, direct computation of the fundamental group is infeasible in general, and even homology computations can become very expensive for triangulations with a large number of faces without proper preprocessing. This is why methods to reduce the number of faces of a complex, without changing its homology and homotopy type, are of particular interest. In this thesis, we will focus on these simplification strategies and on explicit extremal examples. The first problem tackled is that of sphere recognition. It is known that 3-sphere recognition lies in NP and in co-NP, and that d-sphere recognition is undecidable for d > 4. However, the sphere recognition problem does not go away simply because it is algorithmically intractable. On the contrary, it appears naturally in the context of manifold recognition, so there is a clear need for good heuristics to process the examples. Here, we describe a heuristic procedure and its implementation in polymake that is able to recognize sphericity of even fairly large simplicial complexes quite easily. At the same time we show experimentally where the horizon for our heuristic lies, in particular for discrete Morse computations, which has implications for homology computations. Discrete Morse theory generalizes the concept of collapsibility, but even for a simple object like a single simplex one can get stuck during a random collapsing process before reaching a vertex. We show that for a simplex on n vertices, n > 7, there is a collapsing sequence that gets stuck on a d-dimensional simplicial complex on n vertices, for all d not in {1, n - 3, n - 2, n - 1}.
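The random collapsing process just described can be sketched in a few lines; this is an illustrative implementation of elementary collapses using the standard definition of a free face (a face strictly contained in exactly one other face), not the polymake heuristic from the thesis, and the function names are our own:

```python
import random
from itertools import combinations

def full_simplex(n):
    """All nonempty faces of the (n-1)-dimensional simplex on n vertices."""
    verts = range(n)
    return {frozenset(c) for d in range(1, n + 1)
            for c in combinations(verts, d)}

def free_pairs(cx):
    """All pairs (sigma, tau) where sigma is a free face of cx:
    sigma is strictly contained in exactly one other face, tau."""
    pairs = []
    for sigma in cx:
        cofaces = [tau for tau in cx if sigma < tau]
        if len(cofaces) == 1:
            pairs.append((sigma, cofaces[0]))
    return pairs

def random_collapse(cx, seed=None):
    """Perform random elementary collapses (remove a free pair) until
    none remain, and return what is left.  A single remaining vertex
    means the sequence fully collapsed; anything larger means it got
    stuck."""
    rng = random.Random(seed)
    cx = set(cx)
    while pairs := free_pairs(cx):
        sigma, tau = rng.choice(pairs)
        cx -= {sigma, tau}
    return cx
```

On a small simplex, e.g. `random_collapse(full_simplex(3))`, every run ends at a single vertex; by the result above, for n > 7 some collapsing sequences of the simplex get stuck on a larger subcomplex.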
Equivalently, and in the language of high-dimensional generalizations of trees, we construct hypertrees that are ant [...]

Davide Lofano, Technische Universität Berlin, Frank H. Lutz
Thu, 19 May 2022 00:00:00 GMT

Optimizing Strongly Interacting Fermionic Hamiltonians
https://scholar.archive.org/work/6qukxghgeffi5lxujh46cptu34
The fundamental problem in much of physics and quantum chemistry is to optimize a low-degree polynomial in certain anticommuting variables. Because this is a quantum mechanical problem, in many cases we do not know an efficient classical witness to the optimum, or even to an approximation of the optimum. One prominent exception is when the optimum is described by a so-called "Gaussian state", also called a free fermion state. In this work we are interested in the complexity of this optimization problem when no good Gaussian state exists. Our primary testbed is the Sachdev–Ye–Kitaev (SYK) model of random degree-q polynomials, a model of great current interest in condensed matter physics and string theory, and one which has remarkable properties from a computational complexity standpoint. Among other results, we give an efficient classical certification algorithm for upper-bounding the largest eigenvalue in the q = 4 SYK model, and an efficient quantum certification algorithm for lower-bounding this largest eigenvalue; both algorithms achieve constant-factor approximations with high probability.

Matthew B. Hastings, Ryan O'Donnell
Mon, 15 Nov 2021 00:00:00 GMT

Explaining generalization in deep learning: progress and fundamental limits
https://scholar.archive.org/work/qfyzrtetinfmdmohy5p4k3ltki
This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized, and fitting the training data to zero error? In the first part of the thesis, we will empirically study how training deep networks via stochastic gradient descent implicitly controls the networks' capacity. Subsequently, to show how this leads to better generalization, we will derive data-dependent uniform-convergence-based generalization bounds with improved dependencies on the parameter count. Uniform convergence has in fact been the most widely used tool in the deep learning literature, thanks to its simplicity and generality. Given its popularity, in this thesis, we will also take a step back to identify the fundamental limits of uniform convergence as a tool to explain generalization. In particular, we will show that in some example overparameterized settings, any uniform convergence bound will provide only a vacuous generalization bound. With this realization in mind, in the last part of the thesis, we will change course and introduce an empirical technique to estimate generalization using unlabeled data. Our technique does not rely on any notion of uniform-convergence-based complexity and is remarkably precise. We will theoretically show why our technique enjoys such precision. We will conclude by discussing how future work could explore novel ways to incorporate distributional assumptions in generalization bounds (such as in the form of unlabeled data) and explore other tools to derive bounds, perhaps by modifying uniform convergence or by developing completely new tools altogether.

Vaishnavh Nagarajan
Sun, 17 Oct 2021 00:00:00 GMT
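The abstract above does not spell out its unlabeled-data technique, so the sketch below is only a loosely related illustration of the general idea: estimating a model's test error from the disagreement rate of two independently trained models on unlabeled data. The toy data, the bootstrap-trained logistic regressions, and all names here are our own hypothetical choices, not the method from the dissertation:

```python
import numpy as np

def make_data(n, rng):
    # Hypothetical toy data: two Gaussian blobs centered at (+1.5, +1.5)
    # and (-1.5, -1.5), labeled 1 and 0 respectively.
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2)) + np.where(y[:, None] == 1, 1.5, -1.5)
    return X, y

def train_logreg(X, y, seed, steps=500, lr=0.1):
    """Logistic regression by gradient descent on a bootstrap resample;
    the resample and the random init make the two runs independent."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(X), len(X))          # bootstrap resample
    Xb = np.hstack([X[idx], np.ones((len(X), 1))])  # add bias column
    yb = y[idx]
    w = r.normal(0.0, 0.01, Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # sigmoid predictions
        w -= lr * Xb.T @ (p - yb) / len(yb)      # logistic-loss gradient
    return w

def predict(w, X):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return (X1 @ w > 0).astype(int)

rng = np.random.default_rng(0)
X_train, y_train = make_data(200, rng)
X_unlab, _ = make_data(1000, rng)     # labels discarded: unlabeled pool
X_test, y_test = make_data(1000, rng)

w1 = train_logreg(X_train, y_train, seed=1)
w2 = train_logreg(X_train, y_train, seed=2)

# Disagreement of the two models on unlabeled data, computed without
# any labels, versus the actual test error of the first model.
disagreement = np.mean(predict(w1, X_unlab) != predict(w2, X_unlab))
test_error = np.mean(predict(w1, X_test) != y_test)
```

On this easy, well-separated toy problem both quantities come out small and close to each other; whether and why such a proxy tracks the true error in general is exactly the kind of question the thesis analyzes.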