IA Scholar Query: Buchberger's Algorithm: A Constraint-Based Completion Procedure.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en). info@archive.org. Thu, 21 Jul 2022 00:00:00 GMT. fatcat-scholar. https://scholar.archive.org/help. 1440.

A Node Elimination Algorithm for Cubature of High-Dimensional Polytopes
https://scholar.archive.org/work/vuoqoemzzzhbpgqvqfdpt25xhi
Node elimination is a numerical approach to obtaining cubature rules for the approximation of multivariate integrals. Beginning with a known cubature rule, nodes are selected for elimination, and a new, more efficient rule is constructed by iteratively solving the moment equations. This paper introduces a new criterion, based on a linearization of the moment equations, for selecting which nodes to eliminate. In addition, a penalized iterative solver is introduced that ensures the weights are positive and the nodes lie inside the integration domain. A strategy for constructing an initial quadrature rule for various polytopes in several space dimensions is described. High-efficiency rules are presented for two-, three-, and four-dimensional polytopes. The new rules are compared with rules obtained by combining tensor products of one-dimensional quadrature rules with domain transformations, as well as with known analytically constructed cubature rules.
Arkadijs Slobodkins, Johannes Tausch. work_vuoqoemzzzhbpgqvqfdpt25xhi. Thu, 21 Jul 2022 00:00:00 GMT.

Investigation of sketch interpretation techniques into 2D and 3D conceptual design geometry
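As background to the node-elimination abstract above: a cubature rule with nodes x_i and weights w_i satisfies the moment equations when the weighted sum of every basis polynomial matches its exact integral. A minimal one-dimensional sketch of that residual system (our own illustration, not the paper's multivariate method):

```python
# Sketch: moment equations for a quadrature rule on [-1, 1].
# A rule (nodes x_i, weights w_i) solves the moment equations when
#   sum_i w_i * x_i^k == integral of x^k over [-1, 1]  for k = 0..d.
# Illustrative only; the paper works with multivariate polytopes.

def moment_residuals(nodes, weights, degree):
    """Residuals of the moment equations up to the given degree."""
    res = []
    for k in range(degree + 1):
        exact = (1 - (-1) ** (k + 1)) / (k + 1)  # integral of x^k on [-1, 1]
        approx = sum(w * x**k for x, w in zip(nodes, weights))
        res.append(approx - exact)
    return res

# The 2-point Gauss-Legendre rule satisfies the moments up to degree 3.
nodes = [-3**-0.5, 3**-0.5]
weights = [1.0, 1.0]
print(max(abs(r) for r in moment_residuals(nodes, weights, 3)))  # ~0
```

Node elimination removes a node and re-solves this (then nonlinear, since nodes also vary) system with one fewer unknown.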
https://scholar.archive.org/work/5ox5pthx7zdbzk5imsyjljbq7u
This thesis presents the results of new techniques investigated for applying on-line sketching to 2D and 3D conceptual design geometry throughout a whole development process: data collection, concrete curve segmentation and fitting, 2D geometric constraint extraction and solving, and 3D feature recognition and modelling. This is a new approach. A real-time sketch and fuzzy knowledge-based prototype system has been developed in four phases. In the first phase, the segmentation approach investigated accepts on-line free-hand sketches as input and segments them into meaningful parts, using fuzzy knowledge of sketching position, direction, speed and acceleration. During the second phase, a parallel curve classification and identification method is studied, employing fuzzy heuristic knowledge of curve linearity and convexity in order to quickly classify and identify a variety of 2D shapes, including straight lines, circles, arcs, ellipses, elliptical arcs, and free-form curves. Afterwards, a geometric constraint inference engine and a constraint solver are utilised, based on degrees-of-freedom analysis, to capture a designer's intention, to infer geometric constraints simply and automatically, and to generate a possible solution without iterative computing. The solver also supports variational geometry in 2D and 3D. In the last phase, rule-based feature interpretation and manipulation techniques are investigated. While drawing, the 2D geometry is accumulated until it can be interpreted as a 3D feature. The feature is then placed in 3D space, and a new feature can be built incrementally upon previous ones. The given examples and case studies show that the system can interpret users' intentions for 2D and 3D geometry satisfactorily and effectively.
It accepts not only sketched input but also users' menu-based interactive input of 2D primitives and 3D projections. This mixed automatic feature interpretation and interactive design environment can encourage designers with poo [...]
Sheng-Feng Qin. work_5ox5pthx7zdbzk5imsyjljbq7u. Tue, 19 Jul 2022 00:00:00 GMT.

The Plane Test Is a Local Tester for Multiplicity Codes
https://scholar.archive.org/work/ru3dn645czhktmw3vv4ww6yrqi
Multiplicity codes are a generalization of Reed-Solomon (RS) and Reed-Muller (RM) codes in which, for each evaluation point, we output the evaluation of a low-degree polynomial together with all of its directional derivatives up to order s. Multivariate multiplicity codes are locally decodable via the natural local decoding algorithm that reads values on a random line and corrects to the closest univariate multiplicity code. However, it was not known whether multiplicity codes are locally testable; this question has been open since the introduction of these codes, with no progress to date. In fact, it has also been open whether multiplicity codes can be characterized by local constraints, i.e., whether there exists a probabilistic algorithm that queries few symbols of a word c, accepts every c in the code with probability 1, and rejects every c not in the code with nonzero probability. We begin by giving a simple example showing that the line test does not give a local characterization when d > q. Surprisingly, we then show that the plane test is a local characterization when s < q and d < qs-1 for prime q. In addition, we show that the s-dimensional test is a local tester for multiplicity codes when s < q. Combining the two results, we obtain our main result: the plane test is a local tester for multiplicity codes of degree d < qs-1, with constant rejection probability for constant q and s. Our technique is new. We represent the given input as a possibly very high-degree polynomial, and we show that, for some choice of plane, the restriction of the polynomial to the plane is a high-degree bivariate polynomial. The argument has to work modulo the appropriate kernels, and for that we use Gröbner theory, the Combinatorial Nullstellensatz, and its generalization to multiplicities.
Even given that, the argument is delicate and requires choosing a non-standard monomial order for it to work.
Dan Karliner, Roie Salama, Amnon Ta-Shma, Shachar Lovett. work_ru3dn645czhktmw3vv4ww6yrqi. Mon, 11 Jul 2022 00:00:00 GMT.

Divisibility of Spheres with Measurable Pieces
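To make the multiplicity-code abstract above concrete, here is a small sketch of encoding a *univariate* multiplicity code over a prime field, using Hasse derivatives (commonly used in this setting). The field size and polynomial are our own illustrative assumptions:

```python
# Sketch of encoding a univariate multiplicity code over F_P (assumed
# parameters; the abstract above concerns the multivariate case).
# A codeword lists, at every field element a, the evaluations of f and
# of its Hasse derivatives up to order s-1.
from math import comb

P = 7  # a small prime field F_7 (illustrative choice)

def hasse_derivative(coeffs, k):
    """k-th Hasse derivative of sum_j coeffs[j] * x^j over F_P."""
    return [comb(j, k) * c % P for j, c in enumerate(coeffs)][k:]

def evaluate(coeffs, a):
    return sum(c * pow(a, j, P) for j, c in enumerate(coeffs)) % P

def encode(coeffs, s):
    """Order-s multiplicity encoding: s derivative values at each point."""
    return [tuple(evaluate(hasse_derivative(coeffs, k), a) for k in range(s))
            for a in range(P)]

word = encode([1, 3, 1], s=2)  # f(x) = 1 + 3x + x^2
print(word[0], word[1])  # (1, 3) (5, 5)
```

The local tests in the paper read such tuples along a line or a plane and check consistency with a low-degree polynomial.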
https://scholar.archive.org/work/cjplq5bhmncudcpfdcff55reze
For an r-tuple (γ_1, ..., γ_r) of special orthogonal d × d matrices, we say that the Euclidean (d-1)-dimensional sphere S^(d-1) is (γ_1, ..., γ_r)-divisible if there is a subset A ⊆ S^(d-1) such that its translations by the rotations γ_1, ..., γ_r partition the sphere. Motivated by some old open questions of Mycielski and Wagon, we investigate the version of this notion where the set A has to be measurable with respect to the spherical measure. Our main result shows that measurable divisibility is impossible for a "generic" (in various meanings) r-tuple of rotations. This is in stark contrast to the recent result of Conley, Marks and Unger, which implies that, for every "generic" r-tuple, divisibility is possible with parts that have the property of Baire.
Clinton T. Conley, Jan Grebík, Oleg Pikhurko. work_cjplq5bhmncudcpfdcff55reze. Sat, 09 Jul 2022 00:00:00 GMT.

Modularity and Combination of Associative Commutative Congruence Closure Algorithms enriched with Semantic Properties
https://scholar.archive.org/work/etrrijimj5d5dcls5hzdevz2ba
Algorithms are proposed for computing the congruence closure of ground equations over uninterpreted symbols and over interpreted symbols satisfying associativity and commutativity (AC). The algorithms are based on a framework for computing a congruence closure by abstracting nonflat terms with constants, as first proposed in Kapur's congruence closure algorithm (RTA 1997). The framework is general and flexible, and it has also been extended to congruence closure algorithms for the cases in which associative-commutative function symbols have additional properties, including idempotency, nilpotency, identities, cancellativity, and group properties, as well as their various combinations. The algorithms are modular; their correctness and termination proofs are simple, exploiting modularity. Unlike earlier algorithms, the proposed algorithms neither rely on complex AC-compatible well-founded orderings on nonvariable terms nor need the associative-commutative unification and extension rules used in completion for generating canonical rewrite systems for congruence closures. They are particularly suited for integration into Satisfiability Modulo Theories (SMT) solvers. A new way is outlined to view the Gröbner basis algorithm for polynomial ideals with integer coefficients as a combination of the congruence closure over the AC symbol * with identity 1 and the congruence closure over an Abelian group with +.
Deepak Kapur. work_etrrijimj5d5dcls5hzdevz2ba. Wed, 29 Jun 2022 00:00:00 GMT.

Generic root counts and flatness in tropical geometry
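The abstract above builds on congruence closure with nonflat terms abstracted by constants. As a minimal illustration of the plain (non-AC) case only, here is a union-find-based sketch; it is our own toy, not Kapur's published algorithm:

```python
# Sketch of ground congruence closure via flattening to constants
# (uninterpreted symbols only; the AC extensions are not shown).

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def congruence_closure(defs, equations):
    """defs: constant -> (f, arg constants); equations: pairs of constants."""
    uf = UnionFind()
    for a, b in equations:
        uf.union(a, b)
    changed = True
    while changed:  # propagate f(a..) = f(b..) when the args are congruent
        changed = False
        sig = {}
        for c, (f, args) in defs.items():
            key = (f, tuple(uf.find(a) for a in args))
            if key in sig:
                if uf.find(c) != uf.find(sig[key]):
                    uf.union(c, sig[key])
                    changed = True
            else:
                sig[key] = c
    return uf

# a = b entails f(a) = f(b):  c1 abbreviates f(a), c2 abbreviates f(b)
uf = congruence_closure({"c1": ("f", ["a"]), "c2": ("f", ["b"])}, [("a", "b")])
print(uf.find("c1") == uf.find("c2"))  # True
```

The modular AC algorithms in the paper replace the syntactic signature `key` with one that is invariant under argument permutation and flattening.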
https://scholar.archive.org/work/f2gn57yuxnbanjwkq4vjqwmq4q
We use tropical and non-archimedean geometry to give generic root counts for families of polynomial equations. These families are given as morphisms of schemes X → Y that factor through a closed embedding into a relative torus over a parameter space Y. We prove a generalization of Bernstein's theorem for these morphisms, showing that the root count of a single well-behaved tropical fiber spreads to an open dense subset of Y. By applying this to modifications of universal polynomial systems, we obtain new generic root counts for determinantal subvarieties of the universal parameter space. An important role in these theorems is played by the notion of tropical flatness, which allows us to infer generic properties of X → Y from a single tropical fiber. We show that the tropical analogue of the generic flatness theorem holds, in the sense that X → Y is tropically flat over an open dense subset of the Berkovich analytification of Y.
Paul Alexander Helminck, Yue Ren. work_f2gn57yuxnbanjwkq4vjqwmq4q. Wed, 15 Jun 2022 00:00:00 GMT.

A Generalization of Self-Improving Algorithms
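Bernstein's theorem, which the abstract above generalizes, counts the roots of a generic system by the mixed volume of its Newton polytopes. A small two-dimensional sketch of that classical count (our own illustration, not the paper's tropical machinery):

```python
# Sketch: Bernstein/BKK root count for two bivariate polynomials via
# the 2D mixed volume  MV(P, Q) = area(P + Q) - area(P) - area(Q).

def hull(points):
    """Andrew's monotone chain convex hull (counterclockwise)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return half(pts)[:-1] + half(pts[::-1])[:-1]

def area2(poly):
    """Twice the polygon area (shoelace formula)."""
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly))))

def mixed_volume(P, Q):
    """2D mixed volume from areas of P, Q, and the Minkowski sum P + Q."""
    S = [(p[0] + q[0], p[1] + q[1]) for p in P for q in Q]
    return (area2(hull(S)) - area2(hull(P)) - area2(hull(Q))) // 2

# Two generic quadrics: Newton polytope = triangle (0,0), (2,0), (0,2).
quad = [(0, 0), (2, 0), (0, 2), (1, 0), (0, 1), (1, 1)]
print(mixed_volume(quad, quad))  # 4, matching the Bezout bound
```

The paper's generic root counts play the same role as this mixed volume, but for families over a base Y rather than a single system.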
https://scholar.archive.org/work/ntqmqw2ujfffjlzoopxihcdewa
Ailon et al. [SICOMP'11] proposed self-improving algorithms for sorting and Delaunay triangulation (DT) when the input instances x_1, ..., x_n follow some unknown product distribution; that is, x_i is drawn independently from a fixed unknown distribution D_i. After spending O(n^{1+ε}) time in a learning phase, the subsequent expected running time is O((n + H)/ε), where H ∈ {H_S, H_DT}, and H_S and H_DT are the entropies of the distributions of the sorting and DT outputs, respectively. In this paper, we allow dependence among the x_i's under the group product distribution. There is a hidden partition of [1, n] into groups; the x_i's in the k-th group are fixed unknown functions of the same hidden variable u_k; and the u_k's are drawn from an unknown product distribution. We describe self-improving algorithms for sorting and DT under this model when the functions that map u_k to the x_i's are well-behaved. After an O(poly(n))-time training phase, we achieve O(n + H_S) and O(nα(n) + H_DT) expected running times for sorting and DT, respectively, where α(·) is the inverse Ackermann function.
Kai Jin, Siu-Wing Cheng, Man-Kwun Chiu, Man Ting Wong. work_ntqmqw2ujfffjlzoopxihcdewa. Wed, 27 Apr 2022 00:00:00 GMT.

Hardware-Tailored Diagonalization Circuits
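The learn-then-operate structure described above can be caricatured for sorting: the training phase estimates quantiles of the pooled input distribution, and the operation phase buckets each instance by those quantiles before sorting the (small) buckets. A toy sketch, ours only; the actual algorithms use entropy-optimal search structures, not this simplification:

```python
# Toy self-improving sorter: learn bucket boundaries from sample
# instances, then sort new instances by bucketing on the boundaries.
import bisect
import random

def learn_boundaries(samples, n_buckets):
    """Learning phase: pooled empirical quantiles become boundaries."""
    flat = sorted(x for inst in samples for x in inst)
    step = max(1, len(flat) // n_buckets)
    return flat[step::step][: n_buckets - 1]

def self_improving_sort(xs, boundaries):
    """Operation phase: bucket by learned boundaries, then sort buckets."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for x in xs:
        buckets[bisect.bisect_left(boundaries, x)].append(x)
    out = []
    for b in buckets:  # each bucket has O(1) expected size after learning
        out.extend(sorted(b))
    return out

rng = random.Random(0)
samples = [[rng.gauss(3 * i, 1) for i in range(20)] for _ in range(200)]
bounds = learn_boundaries(samples, n_buckets=20)
xs = [rng.gauss(3 * i, 1) for i in range(20)]
print(self_improving_sort(xs, bounds) == sorted(xs))  # True
```

The output is always correctly sorted; what the learned boundaries buy is expected running time close to the output entropy H_S.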
https://scholar.archive.org/work/vo2sxci7qbhnrhv6c3ldx2fatm
A central building block of many quantum algorithms is the diagonalization of Pauli operators. Although it is always possible to construct a quantum circuit that simultaneously diagonalizes a given set of commuting Pauli operators, only resource-efficient circuits are reliably executable on near-term quantum computers. Generic diagonalization circuits can lead to an unaffordable SWAP-gate overhead on quantum devices with limited hardware connectivity. A common alternative is to exclude two-qubit gates, but this comes at the cost of restricting the class of diagonalizable sets of Pauli operators to tensor product bases (TPBs). In this letter, we introduce a theoretical framework for constructing hardware-tailored (HT) diagonalization circuits. We apply our framework to group the Pauli operators occurring in the decomposition of a given Hamiltonian into jointly-HT-diagonalizable sets. We investigate several classes of popular Hamiltonians and observe that our approach requires fewer measurements than conventional TPB approaches. Finally, we experimentally demonstrate the practical applicability of our technique, showcasing the great potential of our circuits for near-term quantum computing.
Daniel Miller, Laurin E. Fischer, Igor O. Sokolov, Panagiotis Kl. Barkoutsos, Ivano Tavernelli. work_vo2sxci7qbhnrhv6c3ldx2fatm. Mon, 07 Mar 2022 00:00:00 GMT.

Geometric Algebra and Algebraic Geometry of Loop and Potts Models
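Grouping Pauli operators into jointly diagonalizable sets, as in the abstract above, rests on a commutation check: in the binary symplectic representation, two Pauli strings commute exactly when their symplectic product vanishes mod 2. A minimal sketch of that check (our own, not the paper's HT framework):

```python
# Sketch: commutation of Pauli strings via the symplectic representation.
# Each Pauli string maps to bit vectors (x, z); P and Q commute iff
#   x_P . z_Q + z_P . x_Q == 0 (mod 2).

def to_symplectic(pauli):
    """'XZIY' -> (x bits, z bits); X and Y set x, Z and Y set z."""
    x = [int(c in "XY") for c in pauli]
    z = [int(c in "ZY") for c in pauli]
    return x, z

def commute(p, q):
    (x1, z1), (x2, z2) = to_symplectic(p), to_symplectic(q)
    s = sum(a * b for a, b in zip(x1, z2)) + sum(a * b for a, b in zip(z1, x2))
    return s % 2 == 0

print(commute("XX", "ZZ"), commute("XI", "ZI"))  # True False
```

A set of Paulis is a candidate for joint diagonalization when this check passes for every pair; the hard part the paper addresses is finding a diagonalizing circuit that fits the hardware connectivity.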
https://scholar.archive.org/work/dui2tde2o5adlmw2arz4z3evpy
We uncover a connection between two seemingly separate subjects in integrable models: the representation theory of the affine Temperley-Lieb algebra, and the algebraic structure of solutions to the Bethe equations of the XXZ spin chain. We study the solutions of the Bethe equations analytically by computational algebraic geometry, and find that the solution space encodes rich information about the representation theory of the Temperley-Lieb algebra. Using these connections, we compute the partition function of the completely-packed loop model and of the closely related random-cluster Potts model, on medium-size lattices with toroidal boundary conditions, by two quite different methods. We consider the partial thermodynamic limit of infinitely long tori and analyze the corresponding condensation curves of the zeros of the partition functions. Two components of these curves are obtained analytically in the full thermodynamic limit.
Janko Böhm, Jesper Lykke Jacobsen, Yunfeng Jiang, Yang Zhang. work_dui2tde2o5adlmw2arz4z3evpy. Mon, 07 Feb 2022 00:00:00 GMT.

Half-Trek Criterion for Identifiability of Latent Variable Models
https://scholar.archive.org/work/seupmmc2tzg27jy5ytx26zqnyq
We consider linear structural equation models with latent variables and develop a criterion to certify whether the direct causal effects between the observable variables are identifiable from the observed covariance matrix. Linear structural equation models assume that both observed and latent variables solve a linear equation system featuring stochastic noise terms. Each model corresponds to a directed graph whose edges represent the direct effects that appear as coefficients in the equation system. Prior research has developed a variety of methods to decide identifiability of direct effects in a latent projection framework, in which the confounding effects of the latent variables are represented by correlation among noise terms. This approach is effective when the confounding is sparse and affects only small subsets of the observed variables. In contrast, the new latent-factor half-trek criterion (LF-HTC) we develop in this paper operates on the original unprojected latent variable model and is able to certify identifiability in settings where some latent variables may have dense effects on many or even all of the observables. Our LF-HTC is an effective sufficient criterion for rational identifiability, under which the direct effects can be uniquely recovered as rational functions of the joint covariance matrix of the observed random variables. When the search steps in the LF-HTC are restricted to subsets of latent variables of bounded size, the criterion can be verified in time that is polynomial in the size of the graph.
Rina Foygel Barber, Mathias Drton, Nils Sturma, Luca Weihs. work_seupmmc2tzg27jy5ytx26zqnyq. Sat, 01 Jan 2022 00:00:00 GMT.

Polynomial XL: A Variant of the XL Algorithm Using Macaulay Matrices over Polynomial Rings
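A toy example of the rational identifiability discussed in the abstract above (our own, far simpler than the paper's latent-factor setting): in the one-edge model x1 → x2, the direct effect is a rational function of the observed covariances.

```python
# Toy linear SEM: x1 = e1, x2 = lam * x1 + e2, with independent noise
# variances w1, w2. The implied covariances are
#   S11 = w1, S12 = lam * w1, S22 = lam^2 * w1 + w2,
# so the direct effect is recovered as the rational function S12 / S11.

def implied_covariance(lam, w1, w2):
    s11 = w1
    s12 = lam * w1
    s22 = lam**2 * w1 + w2
    return s11, s12, s22

s11, s12, s22 = implied_covariance(lam=0.7, w1=2.0, w2=1.0)
print(abs(s12 / s11 - 0.7) < 1e-12)  # True: lam is identifiable
```

Criteria such as the LF-HTC certify, from the graph alone, when such rational recovery formulas exist in far more entangled models with latent confounders.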
https://scholar.archive.org/work/ywojwpow5bdnnb5ajcyy26yace
Solving a system of m multivariate quadratic equations in n variables (the ℳQ problem) is one of the main challenges of algebraic cryptanalysis. The XL algorithm (XL for short) is a major approach to solving the ℳQ problem by linearization over a coefficient field. Furthermore, the hybrid approach with XL (h-XL) is a variant of XL that guesses some variables beforehand. In this paper, we present a variant of h-XL, which we call polynomial XL (PXL). In PXL, the n variables are divided into k variables to be fixed and the remaining n-k "main variables", and we generate the Macaulay matrix with respect to the n-k main variables over a polynomial ring in the k fixed variables. By eliminating some columns of the Macaulay matrix over the polynomial ring before guessing the k variables, the amount of computation required for each guessed value can be reduced. Our complexity analysis indicates that PXL is efficient on systems with n ≈ m. For example, on systems over 𝔽_2^8 with n = m = 80, the numbers of manipulations required by the hybrid approaches with XL and with Wiedemann XL, and by PXL, are estimated as 2^252, 2^234, and 2^220, respectively.
Hiroki Furue, Momonari Kudo. work_ywojwpow5bdnnb5ajcyy26yace. Thu, 09 Dec 2021 00:00:00 GMT.
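For readers unfamiliar with the Macaulay matrix that XL and PXL linearize, here is a small sketch over a prime field. It is our own toy: the parameters and field are illustrative, and PXL's column elimination over a polynomial ring is not shown.

```python
# Sketch: a Macaulay matrix of degree D for quadratic polynomials over
# a small prime field F_P. Rows are the monomial multiples m * f with
# deg(m) <= D - 2; columns are indexed by all monomials of degree <= D.
from itertools import product

P = 3  # F_3
N = 2  # variables x0, x1

def monomials(max_deg):
    return [e for e in product(range(max_deg + 1), repeat=N)
            if sum(e) <= max_deg]

def mul(mono, poly):
    """Multiply a polynomial {exponent tuple: coeff} by a monomial."""
    return {tuple(a + b for a, b in zip(mono, e)): c for e, c in poly.items()}

def macaulay_matrix(polys, D):
    cols = sorted(monomials(D), key=lambda e: (-sum(e), e))
    idx = {e: i for i, e in enumerate(cols)}
    rows = []
    for f in polys:
        for m in monomials(D - 2):  # shift each quadratic by every monomial
            row = [0] * len(cols)
            for e, c in mul(m, f).items():
                row[idx[e]] = c % P
            rows.append(row)
    return rows

# f1 = x0^2 + x1,  f2 = x0*x1 + 1
f1 = {(2, 0): 1, (0, 1): 1}
f2 = {(1, 1): 1, (0, 0): 1}
M = macaulay_matrix([f1, f2], D=3)
print(len(M), len(M[0]))  # 6 10
```

XL row-reduces such a matrix over the field; PXL instead keeps the k guessed variables symbolic, so the entries become polynomials and part of the elimination is shared across all guesses.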