IA Scholar Query: Succinct hitting sets and barriers to proving algebraic circuits lower bounds.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en)
info@archive.org
Wed, 02 Nov 2022 00:00:00 GMT
fatcat-scholar (https://scholar.archive.org/help)

Polynomial Identity Testing via Evaluation of Rational Functions
https://scholar.archive.org/work/vf2d3e7ci5eenbljjhmo7z3ice
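The abstract below concerns hitting set generators for Polynomial Identity Testing. As background only (this is not the paper's construction), here is a minimal sketch of the classical randomized evaluation test, via the Schwartz-Zippel lemma, that such generators aim to derandomize; all names are illustrative:

```python
import random

def probably_zero(poly, num_vars, trials=25, prime=2**31 - 1):
    # Schwartz-Zippel: a nonzero polynomial of total degree d vanishes at
    # a uniformly random point of F_p^n with probability at most d/p, so
    # surviving every trial is strong evidence the polynomial is zero.
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if poly(point) % prime != 0:
            return False  # witness found: certainly nonzero
    return True  # identically zero with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is the zero polynomial in disguise.
identically_zero = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
```

A hitting set generator, such as the one the paper studies, replaces the random points with a small deterministic set of evaluation points on which every nonzero polynomial in the target class is guaranteed not to vanish everywhere.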
We introduce a hitting set generator for Polynomial Identity Testing based on evaluations of low-degree univariate rational functions at abscissas associated with the variables. Despite the univariate nature, we establish an equivalence up to rescaling with a generator introduced by Shpilka and Volkovich, which has a similar structure but uses multivariate polynomials in the abscissas. We study the power of the generator by characterizing its vanishing ideal, i.e., the set of polynomials that it fails to hit. Capitalizing on the univariate nature, we develop a small collection of polynomials that jointly produce the vanishing ideal. As corollaries, we obtain tight bounds on the minimum degree, sparseness, and partition class size of set-multilinearity in the vanishing ideal. Inspired by an alternating algebra representation, we develop a structured deterministic membership test for the vanishing ideal. As a proof of concept, we rederive known derandomization results based on the generator by Shpilka and Volkovich and present a new application for read-once oblivious algebraic branching programs.
Dieter van Melkebeek, Andrew Morgan
work_vf2d3e7ci5eenbljjhmo7z3ice
Wed, 02 Nov 2022 00:00:00 GMT

Polynomial formulations as a barrier for reduction-based hardness proofs
https://scholar.archive.org/work/o2rzjd3tuvgn5oooqwjhd3eclu
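The abstract below lists Orthogonal Vectors among the problems with tight SETH-based lower bounds. For reference (not from the paper), a brute-force solver showing the quadratic baseline that those lower bounds assert cannot be substantially beaten; the function name is illustrative:

```python
from itertools import product

def has_orthogonal_pair(A, B):
    # Brute-force Orthogonal Vectors: given two lists of 0/1 vectors of
    # dimension d, decide whether some u in A and v in B have <u, v> = 0.
    # Runs in O(|A| * |B| * d) time; under SETH there is no O(n^(2-eps))
    # algorithm when d grows faster than log n.
    return any(
        all(x * y == 0 for x, y in zip(u, v))
        for u, v in product(A, B)
    )
```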
The Strong Exponential Time Hypothesis (SETH) asserts that for every ε>0 there exists k such that k-SAT cannot be solved in time (2-ε)^n. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-k-SAT, and Set Cover. In this paper, we show that fine-grained reductions implying even λ^n-hardness of these problems from SETH, for any λ>1, would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade open question). We also extend this barrier result to the class of parameterized problems. Namely, for every λ>1 we conditionally rule out fine-grained reductions implying SETH-based lower bounds of λ^k for a number of problems parameterized by the solution size k. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).
Tatiana Belova, Alexander Golovnev, Alexander S. Kulikov, Ivan Mihajlin, Denil Sharipov
work_o2rzjd3tuvgn5oooqwjhd3eclu
Sun, 18 Sep 2022 00:00:00 GMT

Multivariable quantum signal processing (M-QSP): prophecies of the two-headed oracle
https://scholar.archive.org/work/fuy2jmm6bndwrcdcnle7davqju
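For intuition about the single-variable QSP that the abstract below generalizes: with all processing phases set to zero, a QSP sequence collapses to powers of the signal unitary W(x), whose top-left entry is the Chebyshev polynomial T_d(x) = cos(d arccos x). A dependency-free sketch of this simplification (not the paper's M-QSP protocol):

```python
import math

def signal_unitary(x):
    # W(x) = [[x, i*sqrt(1-x^2)], [i*sqrt(1-x^2), x]], the single-qubit
    # rotation that encodes the signal x.
    s = math.sqrt(1.0 - x * x)
    return [[complex(x, 0.0), complex(0.0, s)],
            [complex(0.0, s), complex(x, 0.0)]]

def matmul2(A, B):
    # 2x2 complex matrix product, kept dependency-free.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def qsp_top_left(x, d):
    # With trivial processing phases the QSP sequence is W(x)^d, and its
    # (0,0) entry equals cos(d * arccos(x)) = T_d(x), the degree-d
    # Chebyshev polynomial of the first kind.
    U = [[complex(1.0), complex(0.0)], [complex(0.0), complex(1.0)]]
    for _ in range(d):
        U = matmul2(U, signal_unitary(x))
    return U[0][0].real
```

Nontrivial phase sequences interleave z-rotations between the W(x) factors, which is what lets QSP realize a much richer family of polynomial transforms.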
Recent work shows that quantum signal processing (QSP) and its multi-qubit lifted version, quantum singular value transformation (QSVT), unify and improve the presentation of most quantum algorithms. QSP/QSVT characterize the ability, by alternating ansätze, to obliviously transform the singular values of subsystems of unitary matrices by polynomial functions; these algorithms are numerically stable and analytically well-understood. That said, QSP/QSVT require consistent access to a single oracle, saying nothing about computing joint properties of two or more oracles; these can be far cheaper to determine given an ability to pit oracles against one another coherently. This work introduces a corresponding theory of QSP over multiple variables: M-QSP. Surprisingly, despite the non-existence of the fundamental theorem of algebra for multivariable polynomials, there exist necessary and sufficient conditions under which a desired stable multivariable polynomial transformation is possible. Moreover, the classical subroutines used by QSP protocols survive in the multivariable setting for non-obvious reasons, and remain numerically stable and efficient. Up to a well-defined conjecture, we prove that the family of achievable multivariable transforms is as loosely constrained as could be expected. The unique ability of M-QSP to obliviously approximate joint functions of multiple variables coherently leads to novel speedups incommensurate with those of other quantum algorithms, and provides a bridge from quantum algorithms to algebraic geometry.
Zane M. Rossi, Isaac L. Chuang
work_fuy2jmm6bndwrcdcnle7davqju
Wed, 14 Sep 2022 00:00:00 GMT

On the parallel complexity of Group Isomorphism via Weisfeiler-Leman
https://scholar.archive.org/work/mg7roztjx5fdfmx4ryg4r6zbme
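The abstract below applies Weisfeiler-Leman to groups; the classic graph version gives the underlying intuition. A sketch of 1-dimensional WL (color refinement), shown here as background rather than the paper's algorithm:

```python
def color_refinement(adj):
    # 1-dimensional Weisfeiler-Leman (color refinement): repeatedly
    # recolor each vertex by its current color together with the multiset
    # of its neighbors' colors, until the partition stabilizes. Graphs
    # whose stable color histograms differ cannot be isomorphic (the
    # converse fails in general, hence higher-dimensional WL).
    n = len(adj)
    colors = [0] * n
    for _ in range(n):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(n)
        ]
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        refined = [palette[sig] for sig in signatures]
        if refined == colors:
            break  # partition is stable
        colors = refined
    return sorted(colors)
```

For example, the histogram distinguishes a triangle (all vertices alike) from a path on three vertices (endpoints vs. midpoint), so the two are certified non-isomorphic after one round.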
In this paper, we show that the constant-dimensional Weisfeiler-Leman algorithm for groups (Brachter & Schweitzer, LICS 2020) can be fruitfully used to improve parallel complexity upper bounds on isomorphism testing for several families of groups. In particular, we show:
- Groups with an Abelian normal Hall subgroup whose complement is O(1)-generated are identified by constant-dimensional Weisfeiler-Leman using only a constant number of rounds. This places isomorphism testing for this family of groups into ; the previous upper bound for isomorphism testing was (Qiao, Sarma, Tang, STACS 2011).
- We use the individualize-and-refine paradigm to obtain a ^1 isomorphism test for groups without Abelian normal subgroups, previously only known to be in (Babai, Codenotti, Qiao, ICALP 2012).
- We extend a result of Brachter & Schweitzer (arXiv, 2021) on direct products of groups to the parallel setting. Namely, we also show that Weisfeiler-Leman can identify direct products in parallel, provided it can identify each of the indecomposable direct factors in parallel. They previously showed the analogous result for .
We finally consider the count-free Weisfeiler-Leman algorithm, where we show that count-free WL is unable to even distinguish Abelian groups in polynomial time. Nonetheless, we use count-free WL in tandem with bounded non-determinism and limited counting to obtain a new upper bound of β_1^0() for isomorphism testing of Abelian groups. This improves upon the previous ^0() upper bound due to Chattopadhyay, Torán, and Wagner (ACM Trans. Comput. Theory, 2013).
Joshua A. Grochow, Michael Levet
work_mg7roztjx5fdfmx4ryg4r6zbme
Tue, 23 Aug 2022 00:00:00 GMT

Beyond Natural Proofs: Hardness Magnification and Locality
https://scholar.archive.org/work/pxzw5rfppzbopiwrw7cjwxevky
Hardness magnification reduces major complexity separations (such as \(\mathsf{EXP} \nsubseteq \mathsf{NC}^1\)) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [11, 13, 14, 40, 42, 43, 46] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon, and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:
– Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [51]?
– Can we adapt known lower bound techniques to establish the desired lower bound for Q?
We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem \(\mathsf{MCSP}\) imply the non-existence of natural proofs. As the non-existence of natural proofs implies the non-existence of efficient learning algorithms, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.
Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, Rahul Santhanam
work_pxzw5rfppzbopiwrw7cjwxevky
Fri, 12 Aug 2022 00:00:00 GMT

The Framework of the Universe: Protocols of Nature and the Unifying Theory of Matter + Perspectives Full
https://scholar.archive.org/work/w3fvrba2lnetzaqyixrydgju4i
The file with animations and full resolution (as the PDF is nearly unreadable for images) is FrameworkOfTheUniverseUpload--.docx md5:a863397ab90bf877b39f583961860779 You can also use wget https://zenodo.org/record/6529722/files/FrameworkOfTheUniverseUpload--.docx?download=1 To say that light has a speed when you yourself can clearly see that it does not is to mark the rest of our species as a fool. You are the one holding us back. If I show you a circle, and you say it is a square, and you prevent all from seeing that it is a circle; then what exactly is it that you're doing? This book explains light inside and out. It was not difficult to understand once I admitted that humans are a very inept species. Inept, only if they cannot admit that we are inept. The Framework of the Universe is an assortment of lecture-styled guides to help you understand the perspectives needed to understand the Framework. People don't understand how to do simple arithmetic anymore. The only bad question is, "out of millions before you, you're the one to figure out?" What bad question that is. Abstract No one can unlearn what has been taught to them. The scientific method is moving at a glacial pace, and nobody is acting in a logical manner. My previous letter was rejected for failing to meet scientific requirements. What requirements? This group argues that planets are attracted by gravity. I also oppose the system's restriction of a person's voice. If you are so gullible to believe that restriction of information is of good choice, then you have been willingly sold spoiled milk. If you believe that misinformation will cause people to trust the wrong thing, then you have not done a good job at educating them. If it is impossible to unlearn what you have been taught by 12 to 16 years of negative reinforcement, then how can journal editors remain impartial? Indeed, they are not. Negative reinforcement makes it impossible to unlearn what you've been taught if you never become aware of it. 
The Test: The prime example are negatives; you [...]
Andrew A. Lehti
work_w3fvrba2lnetzaqyixrydgju4i
Sun, 08 May 2022 00:00:00 GMT

Foundations for programming and implementing effect handlers
https://scholar.archive.org/work/b2vymvtluzesrpbgsv22n4f3nm
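A toy rendering (not from the thesis below) of the deep-handler idea using Python generators: the computation yields effect operations, and a handler folds over the whole computation, interpreting each operation and threading a state value through, in the spirit of the parameterised handlers the abstract describes. All names are hypothetical:

```python
def stateful():
    # A computation requesting 'get'/'put' effects by yielding them;
    # the values sent back in are the operations' results.
    x = yield ("get",)
    yield ("put", x + 1)
    y = yield ("get",)
    return y

def run_state(gen, state):
    # A deep-handler-style interpreter for a State effect: it handles
    # every operation in the rest of the computation (a fold over the
    # computation tree), threading the state through.
    try:
        op = next(gen)
        while True:
            if op[0] == "get":
                op = gen.send(state)
            elif op[0] == "put":
                state = op[1]
                op = gen.send(None)
            else:
                raise ValueError(f"unhandled effect: {op[0]}")
    except StopIteration as done:
        return done.value, state
```

A shallow handler would instead handle only the first operation and hand the remaining computation back to the caller; Python generators make the deep, fold-like style the more natural fit.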
First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling. This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers as well as exploring the expressive power of effect handlers. The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. They each offer a different approach to manipulate the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O.
The second strand studies continuation passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow [...]
Daniel Hillerström, University of Edinburgh, Sam Lindley, John Longley
work_b2vymvtluzesrpbgsv22n4f3nm
Tue, 12 Apr 2022 00:00:00 GMT

The sensual and the moral: "Kongzi shilun" 孔子詩論 as an exegesis of the Shijing 詩經
https://scholar.archive.org/work/suncvjbgevejbfmflxvy66psx4
The "sensual" and the "moral" are two different interpretive strategies for reading the Shijing 詩經 (Book of Odes), exemplified on the one hand by the bamboo manuscript "Kongzi shilun" 孔子詩論 (Confucian poetics) adopting the "sensual" and on the other, the Han 漢 (202 BCE-220 CE) commentarial tradition engaging in the "moral". This thesis is a textual study and critical review of the manuscript alleged to be Confucius's commentaries on the poems. As a monographic exegesis on the Shijing, "Kongzi shilun" antedates all extant commentaries and has, so far, no parallel transmitted text. By rendering a comprehensively annotated translation and review of the manuscript, this project contributes to the current research on the topic. The commentarial tradition of the Shijing since the Han, particularly the Maoshi 毛詩, has had profound influence over later scholarship. Although the Han erudition recognizes qing 情 (emotions, passions) as the motivation behind poetic creativity, it shies away from the concept by shifting to a prudish reading of the poems. In this thesis "the moral" is meant to be the paradigmatic interpretation of poetry through li 禮 (rules of propriety) as a means used by the sage kings to instruct the people, and "the sensual" is meant to be qing, which embraces the rich sentiments of human emotions, passions and feelings that "Kongzi shilun" reads from the odes. Between the poiesis of qing and the bounds of li, "Kongzi shilun" has now bridged the gap left by the Han scholarship regarding the notion that germinates poetry. This thesis does not seek to subvert the concept of li in the hermeneutics of the Shijing, but to claim that the Confucian precept represented by the manuscript author does not censure qing or the poems that celebrate it, and espouses the use of li as a means of transcending human desires and regulating social and spiritual relations.
"Kongzi shilun" has certainly enhanced current understanding of Confucius's didactics represented by the manuscript and inspired our appreciation of the Shijing em [...]
Daniel Sai Keung Lee
work_suncvjbgevejbfmflxvy66psx4
Mon, 28 Mar 2022 00:00:00 GMT

If VNP Is Hard, Then so Are Equations for It
https://scholar.archive.org/work/luuuf3djhjhklibpctpi2x353i
Assuming that the Permanent polynomial requires algebraic circuits of exponential size, we show that the class VNP does not have efficiently computable equations. In other words, any nonzero polynomial that vanishes on the coefficient vectors of all polynomials in the class VNP requires algebraic circuits of super-polynomial size. In a recent work of Chatterjee, Kumar, Ramya, Saptharishi and Tengse (FOCS 2020), it was shown that the subclasses of VP and VNP consisting of polynomials with bounded integer coefficients do have equations with small algebraic circuits. Their work left open the possibility that these results could perhaps be extended to all of VP or VNP. The results in this paper show that assuming the hardness of Permanent, at least for VNP, allowing polynomials with large coefficients does indeed incur a significant blow up in the circuit complexity of equations.
Mrinal Kumar, C. Ramya, Ramprasad Saptharishi, Anamay Tengse, Petra Berenbrink, Benjamin Monmege
work_luuuf3djhjhklibpctpi2x353i
Wed, 09 Mar 2022 00:00:00 GMT

Quantum Computing for Optimization and Machine Learning
https://scholar.archive.org/work/dzl35jwmfre5xj7qndldnvx75q
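The dissertation abstract below develops a quantum framework for simulated annealing algorithms. For reference, the classical skeleton being quantized looks roughly like this (a generic sketch with an arbitrary cooling schedule, not the dissertation's algorithm):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, steps=5000, t0=1.0):
    # Classical simulated annealing: always accept downhill moves, and
    # accept uphill moves with probability exp(-dE / T) under a cooling
    # schedule, so the walk can escape local minima early and settles
    # into low-energy states as T shrinks.
    x, e = x0, energy(x0)
    best, best_e = x, e
    for k in range(1, steps + 1):
        t = t0 / k  # simple 1/k cooling schedule (an arbitrary choice)
        y = neighbor(x)
        de = energy(y) - e
        if de <= 0 or random.random() < math.exp(-de / t):
            x, e = y, e + de
            if e < best_e:
                best, best_e = x, e
    return best, best_e

# Toy run: minimize (x - 3)^2 over the reals with Gaussian proposals.
random.seed(1)
x_min, e_min = simulated_annealing(
    lambda x: (x - 3.0) ** 2,
    lambda x: x + random.gauss(0.0, 0.5),
    x0=0.0,
)
```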
Quantum Computing leverages the quantum properties of subatomic matter to enable computations faster than those possible on a regular computer. Quantum Computers have become increasingly practical in recent years, with some small-scale machines becoming available for public use. The rising importance of machine learning has highlighted a large class of computing and optimization problems that process massive amounts of data and incur correspondingly large computational costs. This raises the natural question of how quantum computers may be leveraged to solve these problems more efficiently. This dissertation presents some encouraging results on the design of quantum algorithms for machine learning and optimization. We first focus on tasks with provably more efficient quantum algorithms. We show a quantum speedup for convex optimization by extending quantum gradient estimation algorithms to efficiently compute subgradients of non-differentiable functions. We also develop a quantum framework for simulated annealing algorithms which is used to show a quantum speedup in estimating the volumes of convex bodies. Finally, we demonstrate a quantum algorithm for solving matrix games, which can be applied to a variety of learning problems such as linear classification, minimum enclosing ball, and $\ell_2$-margin SVMs. We then shift our focus to variational quantum algorithms, which describe a family of heuristic algorithms that use parameterized quantum circuits as function models that can be fit for various learning and optimization tasks. We seek to analyze the properties of these algorithms including their efficient formulation and training, expressivity, and the convergence of the associated optimization problems. We formulate a model of quantum Wasserstein GANs in order to facilitate the robust and scalable generative learning of quantum states.
We also investigate the expressivity of so-called \emph{Quantum Neural Networks} compared to classical ReLU networks and explore both theoretical and empirical separations [...]
Shouvanik Chakrabarti
work_dzl35jwmfre5xj7qndldnvx75q

Dagstuhl Reports, Volume 11, Issue 5, May 2021, Complete Issue
https://scholar.archive.org/work/hxruv3f5fngwlpxj5vthmczr7a
Dagstuhl Reports, Volume 11, Issue 5, May 2021, Complete Issue
work_hxruv3f5fngwlpxj5vthmczr7a
Wed, 01 Dec 2021 00:00:00 GMT