IA Scholar Query: A Calculus of Bounded Capacities.
https://scholar.archive.org/
Internet Archive Scholar query results feed · en · info@archive.org · Sat, 31 Dec 2022 00:00:00 GMT · fatcat-scholar · https://scholar.archive.org/help · 1440

4. Das transhumanistische Menschen- und Körperverständnis in fünf Diskursen [The Transhumanist Understanding of the Human Being and the Body in Five Discourses]
https://scholar.archive.org/work/3twnyfuqyffovbkf2nqpvbe2d4
4. The Transhumanist Understanding of the Human Being and the Body in Five Discourses. 4.1 »Dear Mother Nature«: the »nature of the human being« in transhumanism. 4.1.1 The concept of the »nature of the human being« in the debate on transhumanism. »Dear Mother Nature«: thinking about nature in transhumanism. »Dear Mother Nature«, writes More in his letter to »Mother Nature« (»A Letter to Mother Nature«). Inspired by this, Young likewise composes an e-mail to »Nature«.
work_3twnyfuqyffovbkf2nqpvbe2d4 · Sat, 31 Dec 2022 00:00:00 GMT

"I will sample until things get better – or until I die." Potential and limits of citizen science to promote social accountability for environmental pollution
https://scholar.archive.org/work/nigug6gbijf4ffo3jzgeryynfu
Mining can cause harm to both human health and ecosystems. Regulators in low-income countries often struggle to enforce decent environmental standards due to financial, technical, and personal capacity constraints and political capture. In such settings, social accountability strategies are often promoted through which citizens attempt to hold governmental and private actors directly to account and demand better governance. However, social accountability initiatives are rarely effective. We demonstrate how political ecology analysis can inform social accountability theory and practice by identifying the power structures that define the potentials and limits of a social accountability strategy. We study the coal mining area of Hwange in Western Zimbabwe, where mining not only supplies coal to power plants and factories of multinational companies but also pollutes the Deka River. Together with local community monitors, we implemented the first citizen science project conducted in Zimbabwe and identified the sources and extent of the pollution. The scientific data strengthened the community monitors' advocacy for a cleaner environment and empowered them in their exchanges with the companies and the environmental regulator. However, only some of their demands have been met. The political ecology analysis, spanning from the local to transnational levels, reveals why local social accountability initiatives are insufficient to spring the low-accountability trap in a state captured by a politico-military elite, and why corporate governance regimes have not been successful either. 
We argue that pro-accountability networks are more effective when they include complementary players such as multinational enterprises, provided their responsible procurement approach moves from a corporate risk-management to a developmental logic.
Désirée Ruppen, Fritz Brugger · work_nigug6gbijf4ffo3jzgeryynfu · Thu, 01 Sep 2022 00:00:00 GMT

Joint optimal beamforming and power control in cell-free massive MIMO
https://scholar.archive.org/work/5r5mnwnepna43khtq3c2zs5psa
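The abstract below combines long-term power control with distributed beamforming under imperfect cooperation. As a much simpler illustration of the interference-calculus side alone, here is a sketch of the classical normalized fixed-point iteration for max-min SINR power control over single-antenna links with a sum-power budget; the gain matrix, noise level, and iteration count are illustrative assumptions, not the paper's joint algorithm.

```python
import numpy as np

def max_min_power_control(G, sigma2, P_total, iters=200):
    """Normalized fixed-point iteration for max-min SINR power control.

    G[k, j] is the channel gain from transmitter j to receiver k; at the
    fixed point all users attain the same (maximal) SINR under the
    sum-power budget P_total."""
    K = G.shape[0]
    g_direct = np.diag(G)
    p = np.full(K, P_total / K)
    for _ in range(iters):
        interference = sigma2 + G @ p - g_direct * p  # noise + cross-talk per user
        q = interference / g_direct                   # power needed per unit of SINR
        p = P_total * q / q.sum()                     # rescale to the power budget
    sinr = g_direct * p / (sigma2 + G @ p - g_direct * p)
    return p, sinr

rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, (4, 4)) + np.eye(4)  # strong direct links, weaker cross-links
p, sinr = max_min_power_control(G, sigma2=0.1, P_total=1.0)
```

After convergence, the per-user SINRs are equalized, which is the defining property of the max-min fair operating point; interference functions of this monotone, scalable form are exactly what interference calculus guarantees convergence for.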
We derive a fast and optimal algorithm for solving practical weighted max-min SINR problems in cell-free massive MIMO networks. For the first time, the optimization problem jointly covers long-term power control and distributed beamforming design under imperfect cooperation. In particular, we consider user-centric clusters of access points cooperating on the basis of possibly limited channel state information sharing. Our optimal algorithm merges powerful power control tools based on interference calculus with the recently developed team theoretic framework for distributed beamforming design. In addition, we propose a variation that shows faster convergence in practice.
Lorenzo Miretti, Renato Luis Garrido Cavalcante, Slawomir Stanczak · work_5r5mnwnepna43khtq3c2zs5psa · Thu, 11 Aug 2022 00:00:00 GMT

Theory of Deep Learning: Neural Tangent Kernel and Beyond
https://scholar.archive.org/work/tebuyad2ijgbde2twqk5tbtl3u
In recent years, Deep Neural Networks (DNNs) have managed to succeed at tasks that previously appeared impossible, such as human-level object recognition, text synthesis, translation, playing games, and many more. In spite of these major achievements, our understanding of these models, in particular of what happens during their training, remains very limited. This PhD started with the introduction of the Neural Tangent Kernel (NTK) to describe the evolution of the function represented by the network during training. In the infinite-width limit, i.e. when the number of neurons in the layers of the network grows to infinity, the NTK converges to a deterministic and time-independent limit, leading to a simple yet complete description of the dynamics of infinitely-wide DNNs. This made possible the first general proof of convergence of DNNs to a global minimum, and yielded the first description of the limiting spectrum of the Hessian of the loss surface of DNNs throughout training. More importantly, the NTK plays a crucial role in describing the generalization abilities of DNNs, i.e. the performance of the trained network on unseen data. The NTK analysis uncovered a direct link between the function learned by infinitely wide DNNs and Kernel Ridge Regression (KRR) predictors, whose generalization properties are studied in this thesis using tools of random matrix theory. Our analysis of KRR reveals the importance of the eigendecomposition of the NTK, which is affected by a number of architectural choices. In very deep networks, an ordered regime and a chaotic regime appear, determined by the choice of non-linearity and the balance between the weights and bias parameters; these two phases are characterized by different speeds of decay of the eigenvalues of the NTK, leading to a tradeoff between convergence speed and generalization.
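The link described above between infinitely wide DNNs and Kernel Ridge Regression can be made concrete in a few lines. The sketch below uses an RBF kernel purely as a stand-in for the fixed limiting NTK, and inspects the eigenvalue spectrum of the kernel Gram matrix, the quantity whose decay rate the thesis ties to the convergence-speed/generalization tradeoff; all sizes, length scales, and the target function are illustrative choices.

```python
import numpy as np

def rbf(X, Z, ell=0.5):
    """RBF kernel matrix; used here as a stand-in for the limiting NTK."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def krr_predict(K_train, K_test_train, y, ridge=1e-3):
    """Kernel ridge regression: f(x) = k(x, X) (K + ridge * I)^{-1} y."""
    alpha = np.linalg.solve(K_train + ridge * np.eye(len(y)), y)
    return K_test_train @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (50, 1))
y = np.sin(3 * X[:, 0])
X_test = np.linspace(-1, 1, 200)[:, None]
f_test = krr_predict(rbf(X, X), rbf(X_test, X), y)

# Spectrum of the kernel Gram matrix, in descending order: its decay rate
# is what governs how fast different components of the target are learned.
eigvals = np.linalg.eigvalsh(rbf(X, X))[::-1]
```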
In practical contexts such as Generative Adversarial Networks or Topology Optimization, the network architecture can be chosen to guarantee certain properties of the NTK and its spectrum. These results give an almost complete description of infinitely-wide DNNs in the NTK regime. It is then natural to wonder how this extends to the finite-width networks used in practice. In the NTK regime, the discrepancy between finite- and infinite-width DNNs is mainly a result of the variance with respect to the sampling of the parameters, as shown empirically and mathematically, relying on the similarity between DNNs and random feature models. In contrast to the NTK regime, where the NTK remains constant during training, there exist so-called active regimes, where the evolution of the NTK is significant, and which appear in a number of settings. We describe one such regime in Deep Linear Networks with a very small initialization, where the training dynamics approaches a sequence of saddle points representing linear maps of increasing rank, leading to a low-rank bias which is absent in the NTK regime.
Arthur Ulysse Jacot-Guillarmod · work_tebuyad2ijgbde2twqk5tbtl3u · Thu, 11 Aug 2022 00:00:00 GMT

Incompatible measurements in quantum information science
https://scholar.archive.org/work/yqg363eakzhd5ccxmhj453za3e
Some measurements in quantum mechanics disturb each other. This has puzzled physicists since the formulation of the theory, but only in the last few decades has the incompatibility of measurements been analyzed in depth and detail, using the notion of joint measurability of generalized measurements. In this paper we review joint measurability and incompatibility from the perspective of quantum information science. We start by motivating the basic definitions and concepts. Then, we present an overview of applications of incompatibility, e.g., in measurement uncertainty relations, the characterization of quantum correlations, or information processing tasks like quantum state discrimination. Finally, we discuss emerging directions of research, such as a resource theory of incompatibility, as well as other concepts to grasp the nature of measurements in quantum mechanics.
Otfried Gühne, Erkka Haapasalo, Tristan Kraft, Juha-Pekka Pellonpää, Roope Uola · work_yqg363eakzhd5ccxmhj453za3e · Thu, 11 Aug 2022 00:00:00 GMT

Diagnosing and Fixing Manifold Overfitting in Deep Generative Models
https://scholar.archive.org/work/j72pv6vph5hclotfvbqricoxgy
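The dimensionality mismatch and the two-step remedy described in the abstract below can be seen in the simplest possible case. In this synthetic sketch, data supported on a one-dimensional subspace of R^3 makes a full-dimensional Gaussian fit degenerate (singular sample covariance, likelihood unbounded by collapsing onto the line), while reducing dimension first and then doing maximum-likelihood estimation is well posed; the linear "manifold" and Gaussian model are illustrative stand-ins for the paper's general procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data on a 1-d manifold (a line) embedded in R^3: no ambient density exists.
t = rng.normal(0.0, 1.0, 1000)
X = np.outer(t, np.array([1.0, 2.0, -1.0]))  # exactly rank 1

# A full-dimensional Gaussian fit would be degenerate here: the sample
# covariance of X is singular, and the likelihood blows up on the line.

# Step 1: dimensionality reduction (PCA onto the top principal component, via SVD).
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]  # 1-d coordinates on the learned manifold

# Step 2: maximum-likelihood density estimation in the low-dimensional space.
mu, sigma = z.mean(), z.std()  # well-posed Gaussian MLE on the coordinates
```

Here `sigma` recovers the scale of the latent coordinate (about sqrt(6), the length of the embedding direction), whereas any maximum-likelihood fit of a 3-d density to `X` would chase the singular covariance instead of the distribution on the manifold.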
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space. In this paper we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove that degenerate optima are achieved wherein the manifold itself is learned but not the distribution on it, a phenomenon we call manifold overfitting. We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by implicit models, such as generative adversarial networks, hence addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.
Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini · work_j72pv6vph5hclotfvbqricoxgy · Wed, 10 Aug 2022 00:00:00 GMT

Development of a three-dimensional grid refinement method for the application of the Lattice Boltzmann Method to high Reynolds flows
https://scholar.archive.org/work/pulike433ramljkzjvchs64vb4
The lattice Boltzmann method (LBM) models fluid dynamics based on kinetic theory. It discretises the continuous Boltzmann equation in velocity, space and time and solves a transport equation for particle populations. When solved on uniform Cartesian grids, this results in a highly scalable algorithm which can accurately and efficiently model unsteady and turbulent flows. However, the reliance on uniform grids makes the LBM prohibitively expensive for the simulation of high Reynolds number flows. To reduce computational costs, grid refinement is required. A hierarchical grid refinement method with regularised coupling is introduced in this thesis, and its implementation in the open-source LBM solver OpenLB is explained. The regularised coupling restricts the information exchange at the refinement interface to the leading-order terms of the particle populations. This ensures that local viscous stresses are conserved across the interface and prevents numerical instabilities. The method is validated against two benchmarks: the vortex-shedding flow around a circular cylinder and the lid-driven flow inside a cubic cavity. It is demonstrated that the grid-refined methodology accurately captures the flow features at different Reynolds numbers, including two instability modes during the cylinder wake transition. Refinement reduces the number of active cells by a factor of 28 for the cylinder at Reynolds number Re=300. For the lid-driven cavity case at Re=3200, the performance efficiency and continuity across the refinement interface are assessed. The regularised coupling diminishes the transmission of spurious waves and improves numerical stability. Finally, this method is applied with a Smagorinsky subgrid model to an open shallow cavity at Re=50,000. The LBM solution shows good agreement with reference experimental and numerical data in terms of time-averaged velocities and turbulence statistics.
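The collide-and-stream cycle that the thesis above builds on can be sketched on a single uniform periodic D2Q9 lattice, without the grid refinement that is its actual contribution. Grid size, relaxation time, and the initial density perturbation below are arbitrary illustrative choices; the only property checked is that the cycle conserves mass exactly.

```python
import numpy as np

# D2Q9 lattice: nine discrete velocities and their quadrature weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium populations."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = (u ** 2).sum(-1)[..., None]
    return rho[..., None] * w * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, tau=0.6):
    rho = f.sum(-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau          # BGK collision
    for q in range(9):                               # streaming, periodic boundaries
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
    return f

nx = ny = 32
x = np.arange(nx)
rho0 = 1.0 + 0.01 * np.sin(2 * np.pi * x / nx)[:, None] * np.ones((1, ny))
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
mass0 = f.sum()
for _ in range(100):
    f = lbm_step(f)   # a damped sound wave; total mass stays constant
```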
This successful application demonstrates that the newly implemented methods achieve accurate and stable LBM simulations [...]
Zhishang Xu, Sina Stapelfeldt, Ricardo Puente Rico, China Scholarship Council · work_pulike433ramljkzjvchs64vb4 · Wed, 10 Aug 2022 00:00:00 GMT

On symmetric simplicial (super)string backgrounds, (super-)WZW defect fusion and the Chern-Simons theory
https://scholar.archive.org/work/ypb62pe4grgbjair3skxeu7oti
The super-σ-model of dynamics of the super-charged loop in an ambient supermanifold in the presence of worldsheet defects of arbitrary topology is formalised within Gawȩdzki's higher-cohomological approach, drawing inspiration from the precursor arXiv:0808.1419 [hep-th]. A distinguished class of the corresponding backgrounds (supertargets with additional bicategorial supergeometric data), organised into simplicial hierarchies, is considered. To these, configurational (super)symmetry of the bulk field theory is lifted coherently, whereby the notion of a maximally (super)symmetric background, and in particular that of a simplicial Lie background, arises as the target structure requisite for the definition of the super-σ-model with defects fully transmissive to the currents of the bulk (super)symmetry. The formal concepts are illustrated in two settings of physical relevance: that of the WZW σ-model of the bosonic string in a compact simple 1-connected Lie group and that of the GS super-σ-model of the superstring in the Minkowski super-space. In the former setting, the structure of the background is fixed through a combination of simplicial, symmetry(-reducibility) and cohomological arguments, and a novel link between fusion of the maximally symmetric WZW defects of Fuchs et al. and the 3d CS theory with timelike Wilson lines with fixed holonomy is established. Moreover, a purely geometric interpretation of the Verlinde fusion rules is proposed. In the latter setting, a multiplicative structure compatible with supersymmetry is shown to exist on the GS super-1-gerbe of arXiv:1706.05682 [hep-th], and subsequently used in a novel construction of a class of maximally (rigidly) supersymmetric bi-branes whose elementary fusion is also studied.
Rafał R. Suszek · work_ypb62pe4grgbjair3skxeu7oti · Wed, 10 Aug 2022 00:00:00 GMT

Generalized Reinforcement Learning: Experience Particles, Action Operator, Reinforcement Field, Memory Association, and Decision Concepts
https://scholar.archive.org/work/xiq7d3eqmvezdnktb7eykrxgsm
Learning a control policy that involves time-varying and evolving system dynamics often poses a great challenge to mainstream reinforcement learning algorithms. In most standard methods, actions are often assumed to be a rigid, fixed set of choices that are sequentially applied to the state space in a predefined manner. Consequently, without resorting to substantial re-learning processes, the learned policy lacks the ability to adapt to variations in the action set and the action's "behavioral" outcomes. In addition, the standard action representation and the action-induced state transition mechanism inherently limit how reinforcement learning can be applied in complex, real-world applications, primarily due to the intractability of the resulting large state space and the lack of facility to generalize the learned policy to the unknown part of the state space. This paper proposes a Bayesian-flavored generalized reinforcement learning framework by first establishing the notion of a parametric action model to better cope with uncertainty and fluid action behaviors, followed by introducing the notion of a reinforcement field as a physics-inspired construct established through "polarized experience particles" maintained in the learning agent's working memory. These particles effectively encode the dynamic learning experience that evolves over time in a self-organizing way.
On top of the reinforcement field, we further generalize the policy learning process to incorporate high-level decision concepts by considering the past memory as having an implicit graph structure, in which the past memory instances (or particles) are interconnected with similarity between decisions defined, and thereby the "associative memory" principle can be applied to augment the learning agent's world model.
Po-Hsiang Chiu, Manfred Huber · work_xiq7d3eqmvezdnktb7eykrxgsm · Tue, 09 Aug 2022 00:00:00 GMT

DDPG-Driven Deep-Unfolding with Adaptive Depth for Channel Estimation with Sparse Bayesian Learning
https://scholar.archive.org/work/sy5lnpbphbau5m27b4olsvgxeu
Deep-unfolding neural networks (NNs) have received great attention since they achieve satisfactory performance with relatively low complexity. Typically, these deep-unfolding NNs are restricted to a fixed depth for all inputs. However, the optimal number of layers required for convergence changes with different inputs. In this paper, we first develop a framework of deep deterministic policy gradient (DDPG)-driven deep-unfolding with adaptive depth for different inputs, where the trainable parameters of the deep-unfolding NN are learned by DDPG rather than updated directly by the stochastic gradient descent algorithm. Specifically, the optimization variables, trainable parameters, and architecture of the deep-unfolding NN are designed as the state, action, and state transition of DDPG, respectively. This framework is then employed to deal with the channel estimation problem in massive multiple-input multiple-output systems. First, we formulate the channel estimation problem with an off-grid basis and develop a sparse Bayesian learning (SBL)-based algorithm to solve it. Secondly, the SBL-based algorithm is unfolded into a layer-wise structure with a set of introduced trainable parameters. Thirdly, the proposed DDPG-driven deep-unfolding framework is employed to solve this channel estimation problem based on the unfolded structure of the SBL-based algorithm. To realize adaptive depth, we design the halting score to indicate when to stop, which is a function of the channel reconstruction error. Furthermore, the proposed framework is extended to realize the adaptive depth of general deep neural networks (DNNs). Simulation results show that the proposed algorithm outperforms the conventional optimization algorithms and DNNs with fixed depth with a much smaller number of layers.
Qiyu Hu, Shuhan Shi, Yunlong Cai, Guanding Yu · work_sy5lnpbphbau5m27b4olsvgxeu · Tue, 09 Aug 2022 00:00:00 GMT

The low-rank hypothesis of complex systems
https://scholar.archive.org/work/acal7tsbinfx3m3gl4fujcc3nq
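A common way to make the notion of effective rank in the abstract below concrete is the entropy-based effective rank of Roy and Vetterli, computed from the normalized singular-value distribution; the low-rank "interaction matrix" used here is synthetic, and this particular definition is an assumption on my part rather than necessarily the one the paper adopts.

```python
import numpy as np

def effective_rank(A):
    """Entropy-based effective rank (Roy & Vetterli): the exponential of the
    Shannon entropy of the normalized singular-value distribution."""
    s = np.linalg.svd(A, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
# A 200x200 "network" of rank 5: its effective rank stays near 5,
# far below its number of vertices.
A = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 200))
```

For an identity matrix all singular values are equal, so the effective rank equals the dimension; for the rank-5 product above it collapses to a small single-digit value, which is the kind of gap the paper measures on real networks.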
Complex systems are high-dimensional nonlinear dynamical systems with intricate interactions among their constituents. To make interpretable predictions about their large-scale behavior, it is typically assumed, without a clear statement, that these dynamics can be reduced to a small number of equations involving a low-rank matrix describing the network of interactions -- we call this the low-rank hypothesis. By leveraging fundamental theorems on singular value decomposition, we verify the hypothesis for various random network models, either by making explicit their low-rank formulation or by demonstrating the exponential decrease of their singular values. More importantly, we validate the hypothesis experimentally for real networks by showing that their effective rank is considerably lower than their number of vertices. We then introduce a dimension reduction procedure for general dynamical systems on networks that yields optimal low-dimensional dynamics. Notably, we find that recurrent neural networks can be exactly reduced. We use our empirical and theoretical results on the low-rank hypothesis to shed light on the dimension reduction of nonlinear dynamics on real networks, from microbial, to neuronal, to epidemiological dynamics. Finally, we prove that higher-order interactions naturally emerge from the dimension reduction, thus providing theoretical insights into the origin of higher-order interactions in complex systems.
Vincent Thibeault, Antoine Allard, Patrick Desrosiers · work_acal7tsbinfx3m3gl4fujcc3nq · Tue, 09 Aug 2022 00:00:00 GMT

Quantum-Classical Hybrid Systems and their Quasifree Transformations
https://scholar.archive.org/work/7ccxqn7e4jg3ngsa77wgmjcmzq
We study continuous variable systems, in which quantum and classical degrees of freedom are combined and treated on the same footing. Thus all systems, including the inputs or outputs to a channel, may be quantum-classical hybrids. This allows a unified treatment of a large variety of quantum operations involving measurements or dependence on classical parameters. The basic variables are given by canonical operators with scalar commutators. Some variables may commute with all others and hence generate a classical subsystem. We systematically study the class of "quasifree" operations, which are characterized equivalently either by an intertwining condition for phase-space translations or by the requirement that, in the Heisenberg picture, Weyl operators are mapped to multiples of Weyl operators. This includes the well-known Gaussian operations, evolutions with quadratic Hamiltonians, and "linear Bosonic channels", but allows for much more general kinds of noise. For example, all states are quasifree. We sketch the analysis of quasifree preparation, measurement, repeated observation, cloning, teleportation, dense coding, the setup for the classical limit, and some aspects of irreversible dynamics, together with the precise salient tradeoffs of uncertainty, error, and disturbance. Although the spaces of observables and states are infinite dimensional for every non-trivial system that we consider, we treat the technicalities related to this in a uniform and conclusive way, providing a calculus that is both easy to use and fully rigorous.
Lars Dammeier, Reinhard F. Werner · work_7ccxqn7e4jg3ngsa77wgmjcmzq · Tue, 09 Aug 2022 00:00:00 GMT

Deciding All Behavioral Equivalences at Once: A Game for Linear-Time–Branching-Time Spectroscopy
https://scholar.archive.org/work/jjeatgzwwfhehd6uawfx5uer4y
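The spectroscopy game in the abstract below decides all equivalences of the spectrum at once; as a sketch of just one corner of that spectrum, here is a naive greatest-fixpoint check for strong bisimilarity on a finite labelled transition system, applied to the textbook pair a.(b+c) versus a.b+a.c, which have equal traces but are not bisimilar. The encoding and state names are made up for illustration, and this is not the paper's algorithm.

```python
from itertools import product

def bisimilar(states, trans, s0, t0):
    """Greatest-fixpoint check for strong bisimilarity on a finite LTS.
    trans maps (state, action) to the set of successor states."""
    actions = {a for (_, a) in trans}
    R = set(product(states, states))  # start from the full relation and refine
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            for a in actions:
                succ_s = trans.get((s, a), set())
                succ_t = trans.get((t, a), set())
                # every a-move of s must be matched by t, and vice versa
                ok = (all(any((s2, t2) in R for t2 in succ_t) for s2 in succ_s)
                      and all(any((s2, t2) in R for s2 in succ_s) for t2 in succ_t))
                if not ok:
                    R.discard((s, t))
                    changed = True
                    break
    return (s0, t0) in R

# a.(b + c) versus a.b + a.c: equal traces, different branching.
states = {'p', 'p1', 'pb', 'pc', 'q', 'q1', 'q2', 'qb', 'qc'}
trans = {('p', 'a'): {'p1'}, ('p1', 'b'): {'pb'}, ('p1', 'c'): {'pc'},
         ('q', 'a'): {'q1', 'q2'}, ('q1', 'b'): {'qb'}, ('q2', 'c'): {'qc'}}
```

Bisimilarity is the finest equivalence in the spectrum, so `p` and `q` being distinguished here leaves open exactly the question the spectroscopy game answers: which coarser preorders and equivalences still identify them.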
We introduce a generalization of the bisimulation game that finds distinguishing Hennessy-Milner logic formulas from every finitary, subformula-closed language in van Glabbeek's linear-time--branching-time spectrum between two finite-state processes. We identify the relevant dimensions that measure expressive power to yield formulas belonging to the coarsest distinguishing behavioral preorders and equivalences; the compared processes are equivalent in each coarser behavioral equivalence from the spectrum. We prove that the induced algorithm can determine the best fit of (in)equivalences for a pair of processes.
Benjamin Bisping, David N. Jansen, Uwe Nestmann · work_jjeatgzwwfhehd6uawfx5uer4y · Mon, 08 Aug 2022 00:00:00 GMT

A Survey on Non-Geostationary Satellite Systems: The Communication Perspective
https://scholar.archive.org/work/gfzgk7gienfrjgn3pm4hrztboq
The next phase of satellite technology is being characterized by a new evolution in non-geostationary orbit (NGSO) satellites, which conveys exciting new communication capabilities to provide non-terrestrial connectivity solutions and to support a wide range of digital technologies from various industries. NGSO communication systems are known for a number of key features such as lower propagation delay, smaller size, and lower signal losses in comparison to the conventional geostationary orbit (GSO) satellites, which can potentially enable latency-critical applications to be provided through satellites. NGSO promises a substantial boost in communication speed and energy efficiency, thus tackling the main factors that inhibit commercializing GSO satellites for broader utilization. The promised improvements of NGSO systems have motivated this paper to provide a comprehensive survey of the state-of-the-art NGSO research focusing on the communication prospects, including physical layer and radio access technologies along with the networking aspects and the overall system features and architectures. Beyond this, there are still many NGSO deployment challenges to be addressed to ensure seamless integration not only with GSO systems but also with terrestrial networks. These unprecedented challenges are also discussed in this paper, including coexistence with GSO systems in terms of spectrum access and regulatory issues, satellite constellation and architecture designs, resource management problems, and user equipment requirements. Finally, we outline a set of innovative research directions and new opportunities for future NGSO research.
Hayder Al-Hraishawi, Houcine Chougrani, Steven Kisseleff, Eva Lagunas, Symeon Chatzinotas · work_gfzgk7gienfrjgn3pm4hrztboq · Sun, 07 Aug 2022 00:00:00 GMT

Complete classification of global solutions to the obstacle problem
https://scholar.archive.org/work/6lprvwvqbvcp5m5irx5fgz35qq
The characterization of global solutions to the obstacle problem in ℝ^N, or equivalently of null quadrature domains, has been studied for more than 90 years. In this paper we give a conclusive answer to this problem by proving the following long-standing conjecture: the coincidence set of a global solution to the obstacle problem is either a half-space, an ellipsoid, a paraboloid, or a cylinder with an ellipsoid or a paraboloid as base.
Simon Eberle, Alessio Figalli, Georg S. Weiss · work_6lprvwvqbvcp5m5irx5fgz35qq · Fri, 05 Aug 2022 00:00:00 GMT

The Authorization and Glorification of Plunder
https://scholar.archive.org/work/j65qh4whure4dac3gd4pv2uhru
Research in taxation often treats it as a branch of law or economics, but in this thesis I argue that this obscures the fact that tax systems are not based on scientific, techno-rational principles, but are socially constructed phenomena, embodying fundamental, value-based decisions imbricated in power relationships. I demonstrate that throughout history tax systems have reflected the prevailing state form and the dominant power relationships underpinning them, and that we are currently living in a neoliberal state, in which societal relations are determined by economic principles. I therefore argue that the UK tax system tends to be utilized to encourage individuals to engage in economic, entrepreneurial activity and is presented as being governed by techno-rational, economic principles, but is, in fact, a rationalizing discourse for the transfer of power from labour to capital and from poorer to wealthier taxpayers. This transformation is underpinned by the exercise of power, but in a neoliberal state power operates in a covert, capillary fashion through assemblages and the construction of knowledge, rather than in an overt, hierarchical fashion. I demonstrate how the contemporary debates relating to tax simplification and the use of general principles rather than detailed rules in tax legislation have been, or might be, used to further entrench neoliberal values in the tax system; however, the failure to achieve significant simplification, due to its open and transparent nature, demonstrates the limits of power, and the more opaque nature of general principles might have more potential for achieving this.
However, no power can be absolute and I argue that the increased public interest in and awareness of taxation since 2010, which led to the emergence of UK Uncut, demonstrates that there is always the potential for resistance to a hegemonic discourse, which may lead to the emergence of alternative discourses.
Malcolm James · work_j65qh4whure4dac3gd4pv2uhru · Thu, 04 Aug 2022 00:00:00 GMT

AI Descartes: Combining Data and Theory for Derivable Scientific Discovery
https://scholar.archive.org/work/dpvjd3ej6vffna5h46x3orfkxu
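One of the rediscovery targets named in the abstract below, Kepler's third law, can be recovered from data alone by a simple log-log fit. This stand-in shows only the data-driven half of the paper's pipeline, with none of its logical-reasoning component; the planetary values are standard textbook figures in astronomical units and years.

```python
import numpy as np

# Semi-major axis a (AU) and orbital period T (years):
# Mercury, Venus, Earth, Mars, Jupiter, Saturn.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit T = C * a^k on a log-log scale; Kepler's third law predicts k = 3/2.
slope, intercept = np.polyfit(np.log(a), np.log(T), 1)
```

The fitted exponent lands essentially on 3/2. What regression alone cannot do, and what the paper adds, is certify that this formula, rather than another with similar error, is the one consistent with background theory.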
Scientists have long aimed to discover meaningful formulae which accurately describe experimental data. A common approach is to manually create mathematical models of natural phenomena using domain knowledge, and then fit these models to data. In contrast, machine-learning algorithms automate the construction of accurate data-driven models while consuming large amounts of data. The problem of enforcing logic constraints on the functional form of a learned model (e.g., nonnegativity) has been explored in the literature; however, finding models that are consistent with general background knowledge is an open problem. We develop a method for combining logical reasoning with symbolic regression, enabling principled derivations of models of natural phenomena. We demonstrate these concepts for Kepler's third law of planetary motion, Einstein's relativistic time-dilation law, and Langmuir's theory of adsorption, automatically connecting experimental data with background theory in each case. We show that laws can be discovered from few data points when using formal logical reasoning to distinguish the correct formula from a set of plausible formulas that have similar error on the data. The combination of reasoning with machine learning provides generalizable insights into key aspects of natural phenomena. We envision that this combination will enable derivable discovery of fundamental laws of science and believe that our work is a crucial first step towards automating the scientific method.
Cristina Cornelio, Sanjeeb Dash, Vernon Austel, Tyler Josephson, Joao Goncalves, Kenneth Clarkson, Nimrod Megiddo, Bachir El Khadir, Lior Horesh · work_dpvjd3ej6vffna5h46x3orfkxu · Thu, 04 Aug 2022 00:00:00 GMT

Service Modeling and Delay Analysis of Packet Delivery over a Wireless Link
https://scholar.archive.org/work/qdqac3dnarcstk4lghsf3oonvy
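The abstract below leans on "a classical queueing theory result to approximate the mean delay". One such classical result is Kingman's G/G/1 heavy-traffic approximation, sketched here next to a Lindley-recursion simulation of an M/M/1 link, a case where the approximation happens to be exact. The arrival and service rates are illustrative, and this is an assumption about which classical result is meant, not a claim about the paper's exact formula.

```python
import numpy as np

def kingman_mean_wait(lam, ES, ca2, cs2):
    """Kingman's G/G/1 approximation for the mean waiting time:
    E[W] ~ rho/(1-rho) * (ca^2 + cs^2)/2 * E[S],
    with ca2, cs2 the squared coefficients of variation of interarrival
    and service times."""
    rho = lam * ES
    assert rho < 1, "the queue must be stable"
    return rho / (1 - rho) * (ca2 + cs2) / 2 * ES

lam, mu = 0.5, 1.0
approx = kingman_mean_wait(lam, 1 / mu, ca2=1.0, cs2=1.0)
exact = lam / (mu * (mu - lam))    # M/M/1 mean waiting time, where Kingman is exact

# Cross-check with a Lindley-recursion simulation of the same M/M/1 queue.
rng = np.random.default_rng(0)
n = 200_000
A = rng.exponential(1 / lam, n)    # interarrival times
S = rng.exponential(1 / mu, n)     # service times
W = np.zeros(n)
for i in range(1, n):
    W[i] = max(0.0, W[i - 1] + S[i - 1] - A[i])   # waiting time of packet i
```

The simulated mean of `W` agrees with both values; for non-exponential packet service times, which is the realistic wireless case, Kingman's formula remains an approximation fed by the empirical first and second moments.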
For delay analysis of packet delivery over a wireless link, several novel ideas are introduced. The first is to construct an equivalent G/G/1 non-lossy queueing model to ease the analysis, enabled by exploiting empirical models of packet error rate, packet service time and packet loss rate obtained from measurement. The second is to exploit a classical queueing theory result to approximate the mean delay. The third is to make use of the newly developed stochastic network calculus (SNC) theory for estimating the delay distribution. To enable this SNC-based analysis, a stochastic service curve characterization of the link is introduced, relying on a packet service time model obtained from the empirical models. The focus is an 802.15.4 wireless link. Extensive experimental investigation under a wide range of settings was conducted. The proposed ideas are validated with the experiment results. The validation confirms that the proposed approaches, integrating both empirical and analytical models, are effective for service modeling and delay analysis. This suggests an integrated approach, not found previously, for quantitative understanding of the delay performance of packet delivery over a wireless link.
Yan Zhang, Yuming Jiang, Songwei Fu · work_qdqac3dnarcstk4lghsf3oonvy · Thu, 04 Aug 2022 00:00:00 GMT

A Theoretical Framework for Inference and Learning in Predictive Coding Networks
https://scholar.archive.org/work/kc5dpvhnhbartn2nz5zxer4aoa
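The two training phases described in the abstract below can be sketched for a tiny linear PCN: phase 1 relaxes the hidden activity to minimize the prediction-error energy with input and output clamped, and phase 2 applies local weight updates using the relaxed activities. Layer sizes, learning rates, and the single training pair are arbitrary illustrations, not the paper's setup or its full prospective-configuration analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [4, 8, 2]                     # input, hidden, output sizes
W = [rng.normal(0.0, 0.3, (dims[l + 1], dims[l])) for l in range(2)]

def pcn_infer(x_in, y, W, steps=100, lr=0.1):
    """Phase 1: relax the hidden activity to minimize the prediction-error
    energy E = sum_l ||x_{l+1} - W_l x_l||^2 / 2, input and output clamped."""
    x = [x_in, W[0] @ x_in, y]       # feedforward init for the hidden layer
    for _ in range(steps):
        e1 = x[1] - W[0] @ x[0]      # error predicting the hidden layer
        e2 = x[2] - W[1] @ x[1]      # error predicting the (clamped) output
        x[1] = x[1] + lr * (W[1].T @ e2 - e1)   # gradient descent on E w.r.t. x[1]
    return x

def pcn_learn(x, W, eta=0.05):
    """Phase 2: local weight updates consolidating the relaxed activities."""
    W[0] += eta * np.outer(x[1] - W[0] @ x[0], x[0])
    W[1] += eta * np.outer(x[2] - W[1] @ x[1], x[1])
    return W

x_in = rng.normal(size=4)
y_target = np.array([1.0, -1.0])
err_before = np.linalg.norm(y_target - W[1] @ (W[0] @ x_in))
for _ in range(200):
    W = pcn_learn(pcn_infer(x_in, y_target, W), W)
err_after = np.linalg.norm(y_target - W[1] @ (W[0] @ x_in))
```

Both updates are local: each weight change depends only on the pre-synaptic activity and the post-synaptic prediction error, which is the property that distinguishes PC training from backpropagation even when the two converge to similar solutions.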
Predictive coding (PC) is an influential theory in computational neuroscience, which argues that the cortex forms unsupervised world models by implementing a hierarchical process of prediction error minimization. PC networks (PCNs) are trained in two phases. First, neural activities are updated to optimize the network's response to external stimuli. Second, synaptic weights are updated to consolidate this change in activity – an algorithm called prospective configuration. While previous work has shown that, in various limits, PCNs approximate backpropagation (BP), recent work has demonstrated that PCNs operating in this standard regime, which does not approximate BP, nevertheless obtain competitive training and generalization performance relative to BP-trained networks, while outperforming them on tasks such as online, few-shot, and continual learning, where brains are known to excel. Despite this promising empirical performance, little is understood theoretically about the properties and dynamics of PCNs in this regime. In this paper, we provide a comprehensive theoretical analysis of the properties of PCNs trained with prospective configuration. We first derive analytical results concerning the inference equilibrium for PCNs and a previously unknown close connection to target propagation (TP). Secondly, we provide a theoretical analysis of learning in PCNs as a variant of generalized expectation-maximization and use that to prove the convergence of PCNs to critical points of the BP loss function, thus showing that deep PCNs can, in theory, achieve the same generalization performance as BP, while maintaining their unique advantages.
Beren Millidge, Yuhang Song, Tommaso Salvatori, Thomas Lukasiewicz, Rafal Bogacz · work_kc5dpvhnhbartn2nz5zxer4aoa · Wed, 03 Aug 2022 00:00:00 GMT

Some properties of the p-Bergman kernel and metric
https://scholar.archive.org/work/rytyp5cb2nft7hdkrkdciec3ya
The p-Bergman kernel K_p(·) is shown to be of class C^{1,1/2} for 1<p<∞. An unexpected relation between the off-diagonal p-Bergman kernel K_p(·,z) and a certain weighted L^2 Bergman kernel is given for 1≤ p≤ 2. As applications, we show that for each 1≤ p≤ 2, K_p(·,z)∈ L^q(Ω) for q < 2pn/(2n-α(Ω)) and |K_s(z)-K_p(z)| ≲ |s-p||log|s-p|| whenever the hyperconvexity index α(Ω) is positive, as well as an L^p extension theorem from a single point in a complete Kähler domain. Counterexamples for 2<p<∞ are given respectively. We also obtain an optimal upper bound for the holomorphic sectional curvature of the p-Bergman metric for 2≤ p<∞. For bounded C^2 domains, it is shown that the Hardy space and the Bergman space satisfy H^p(Ω)⊂ A^q(Ω) with q=p(1+1/n). Upper bounds of the Banach-Mazur distance between two p-Bergman spaces are given through estimation of the p-Schwarz content.
Bo-Yong Chen, Yuanpu Xiong · work_rytyp5cb2nft7hdkrkdciec3ya · Wed, 03 Aug 2022 00:00:00 GMT