IA Scholar Query: Polynomial Multiplication over Binary Fields Using Charlier Polynomial Representation with Low Space Complexity.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en)
info@archive.org
Mon, 24 Oct 2022 00:00:00 GMT
fatcat-scholar
https://scholar.archive.org/help
TTL: 1440

Learning and Covering Sums of Independent Random Variables with Unbounded Support
https://scholar.archive.org/work/nhi6f2uwgrb7zppfgrozhtutfi
We study the problem of covering and learning sums X = X_1 + ⋯ + X_n of independent integer-valued random variables X_i (SIIRVs) with unbounded, or even infinite, support. De et al. (FOCS 2018) showed that the maximum value of the collective support of the X_i's necessarily appears in the sample complexity of learning X. In this work, we address two questions: (i) Are there general families of SIIRVs with unbounded support that can be learned with sample complexity independent of both n and the maximal element of the support? (ii) Are there general families of SIIRVs with unbounded support that admit proper sparse covers in total variation distance? As for question (i), we provide a set of simple conditions that allow the unbounded SIIRV to be learned with sample complexity poly(1/ϵ), bypassing the aforementioned lower bound. We further address question (ii) in the general setting where each variable X_i has a unimodal probability mass function and is a different member of some, possibly multi-parameter, exponential family ℰ that satisfies certain structural properties. These properties allow ℰ to contain heavy-tailed and non-log-concave distributions. Moreover, we show that for every ϵ > 0 and every k-parameter family ℰ that satisfies certain structural assumptions, there exists an algorithm that uses Õ(k) · poly(1/ϵ) samples to learn a sum of n arbitrary members of ℰ to within ϵ in TV distance. The output of the learning algorithm is also a sum of random variables whose distribution lies in the family ℰ. En route, we prove that any discrete unimodal exponential family with bounded constant-degree central moments can be approximated by the family corresponding to a bounded subset of the initial (unbounded) parameter space.
Alkis Kalavasis, Konstantinos Stavropoulos, Manolis Zampetakis
work_nhi6f2uwgrb7zppfgrozhtutfi
Mon, 24 Oct 2022 00:00:00 GMT

Probabilistic design of optimal sequential decision-making algorithms in learning and control
https://scholar.archive.org/work/5k4vbu5tvnhtxfi3cnohkaive4
This survey focuses on certain sequential decision-making problems that involve optimizing over probability functions. We discuss the relevance of these problems for learning and control. The survey is organized around a framework that combines a problem formulation with a set of resolution methods. The formulation consists of an infinite-dimensional optimization problem, and the methods come from approaches that search for optimal solutions in the space of probability functions. Through the lens of this overarching framework, we revisit popular learning and control algorithms, showing that they arise naturally from suitable variations of the formulation combined with different resolution methods. A running example, for which we make the code available, complements the survey. Finally, a number of challenges arising from the survey are outlined.
Emiland Garrabe, Giovanni Russo
work_5k4vbu5tvnhtxfi3cnohkaive4
Thu, 30 Jun 2022 00:00:00 GMT

Multiscale derivation, analysis and simulation of collective dynamics models: geometrical aspects and applications
https://scholar.archive.org/work/xmg4xzvixfbhdooukuq2wv5bki
This thesis contributes to the study of swarming phenomena from the point of view of mathematical kinetic theory. This multiscale approach starts from stochastic individual-based (or particle) models and aims at deriving partial differential equation models for statistical quantities as the number of particles tends to infinity. The latter class of models is better suited to mathematical analysis, in order to reveal and explain large-scale emergent phenomena observed in various biological systems, such as flocks of birds or swarms of bacteria. Within this objective, a large part of the thesis is dedicated to the study of a body-attitude coordination model and, through this example, to the influence of geometry on self-organisation. The first part of the thesis deals with the rigorous derivation of partial differential equation models from particle systems with mean-field interactions. After a review of the literature, in particular on the notion of propagation of chaos, a rigorous convergence result is proved for a large class of geometrically enriched piecewise deterministic particle models towards local BGK-type equations. In addition, the method developed is applied to the design and analysis of a new particle-based sampling algorithm. This first part also addresses the question of the efficient simulation of particle systems using recent GPU routines. The second part of the thesis is devoted to kinetic and fluid models for body-oriented particles. The kinetic model is rigorously derived as the mean-field limit of a particle system. In the spatially homogeneous case, a phase transition phenomenon is investigated which discriminates, depending on the parameters of the model, between a "disordered" dynamics and a self-organised "ordered" dynamics. The fluid (or macroscopic) model was derived as the hydrodynamic limit of the kinetic model a few years ago by Degond et al. The analytical and numerical study of this model reveals the existence of new self-organised phenomena, which are confirmed a [...]
Antoine Diez, Pierre Degond, Sara Merino-Aceituno, EPSRC
work_xmg4xzvixfbhdooukuq2wv5bki
Wed, 25 May 2022 00:00:00 GMT

Decomposing neural networks as mappings of correlation functions
https://scholar.archive.org/work/xdrk5bbonzgytllfznsnh5w25y
Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the non-linearity in each layer transfers information between different orders of correlation functions. This allows us to identify essential statistics in the data, as well as different information representations that can be used by neural networks. Applied to an XOR task and to MNIST, we show that correlations up to second order predominantly capture the information processing in the internal layers, while the input layer also extracts higher-order correlations from the data. This analysis provides a quantitative and explainable perspective on classification.
Kirsten Fischer, Alexandre René, Christian Keup, Moritz Layer, David Dahmen, Moritz Helias
work_xdrk5bbonzgytllfznsnh5w25y
Tue, 01 Feb 2022 00:00:00 GMT
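The last abstract's central point, that a layer's nonlinearity moves label-relevant information between orders of correlation functions, can be made concrete on its XOR example. The sketch below is not from the paper; the single tanh layer and its hand-picked (untrained) weights are assumptions chosen purely for illustration. For XOR inputs on {-1, +1}^2, the label is invisible to first- and second-order statistics of the raw inputs, but after one nonlinear layer it becomes an exactly second-order (quadratic) function of the activations:

```python
import numpy as np

# XOR data on {-1, +1}^2: the label is the product of the two inputs,
# i.e. a higher-order (pairwise-product) statistic of the input.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = X[:, 0] * X[:, 1]

# First- and second-order input statistics carry no label information:
# the label is uncorrelated with each input coordinate.
assert np.allclose(X.T @ y, 0.0)

# Hypothetical fixed tanh layer (weights chosen by hand for illustration,
# not trained). The four inputs map to activations whose squared values
# differ between the two XOR classes.
W = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.tanh(X @ W.T)

# A quadratic (second-order) readout on the hidden layer recovers XOR exactly.
readout = (H[:, 0] ** 2 - H[:, 1] ** 2) / np.tanh(2.0) ** 2
```

Here `readout` reproduces the label exactly from second moments of the hidden activations, which mirrors the abstract's observation that the input layer must extract higher-order correlations while internal layers carry the information predominantly in correlations up to second order.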