IA Scholar Query: Dense Subset Sum may be the hardest.
https://scholar.archive.org/
Internet Archive Scholar query results feed (info@archive.org, generated by fatcat-scholar; help: https://scholar.archive.org/help). Wed, 30 Nov 2022 00:00:00 GMT.

Partial hyperbolicity and pseudo-Anosov dynamics
https://scholar.archive.org/work/i2rrut2zdjct5e4wyph4fjnvve
We show that if a hyperbolic 3-manifold admits a partially hyperbolic diffeomorphism, then it also admits an Anosov flow. Moreover, we give a complete classification of partially hyperbolic diffeomorphisms in hyperbolic 3-manifolds, as well as of partially hyperbolic diffeomorphisms in Seifert manifolds inducing pseudo-Anosov dynamics in the base. This classification is given in terms of the structure of their center (branching) foliations and the notion of collapsed Anosov flows.
Sergio R. Fenley, Rafael Potrie. Wed, 30 Nov 2022 00:00:00 GMT.

DIGRAC: Digraph Clustering Based on Flow Imbalance
https://scholar.archive.org/work/ey3lgxetkrgsrjjm4uxszkegim
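The abstract below is built around directed flow-imbalance measures. As a rough, hypothetical illustration of the underlying quantity (invented names, not DIGRAC's actual loss), a pairwise flow imbalance between two clusters can be sketched as:

```python
# Toy sketch of a pairwise flow-imbalance score for directed clustering.
# All names here are illustrative; this is not the paper's code.

def flow(edges, src, dst):
    """Total edge weight going from cluster src to cluster dst."""
    return sum(w for (u, v, w) in edges if u in src and v in dst)

def flow_imbalance(edges, a, b):
    """Normalized imbalance in [0, 1]: 0 = symmetric flow, 1 = one-way flow."""
    f_ab, f_ba = flow(edges, a, b), flow(edges, b, a)
    total = f_ab + f_ba
    return abs(f_ab - f_ba) / total if total else 0.0

# A tiny digraph where clusters {0, 1} and {2, 3} have equal density
# but all flow runs one way, so the imbalance is maximal.
edges = [(0, 2, 1.0), (1, 3, 1.0), (0, 3, 1.0)]
print(flow_imbalance(edges, {0, 1}, {2, 3}))  # -> 1.0
```

This is the sense in which directionality itself, rather than edge density, can reveal cluster structure.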
Node clustering is a powerful tool in the analysis of networks. We introduce a graph neural network framework, named DIGRAC, to obtain node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss, which can be used for network clustering. Here, we propose directed flow imbalance measures, which are tightly related to directionality, to reveal clusters in the network even when there is no density difference between clusters. In contrast to standard approaches in the literature, directionality is not treated as a nuisance here, but rather contains the main signal. DIGRAC optimizes directed flow imbalance for clustering without requiring label supervision, unlike existing graph neural network methods, and can naturally incorporate node features, unlike existing spectral methods. Extensive experimental results on synthetic data, in the form of directed stochastic block models, and on real-world data at different scales demonstrate that our method attains state-of-the-art results on directed graph clustering when compared against 10 methods from the literature, across a wide range of noise and sparsity levels, graph structures, and topologies, and even outperforms supervised methods.
Yixuan He, Gesine Reinert, Mihai Cucuringu. Tue, 29 Nov 2022 00:00:00 GMT.

Worst-Case to Expander-Case Reductions
https://scholar.archive.org/work/zaywh425ivcfhncq72xldpfgye
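The abstract below is about the gap between worst-case graphs and expanders. As a hypothetical warm-up (not from the paper), the edge expansion h(G) = min over cuts S with |S| ≤ n/2 of |∂S|/|S| — the quantity expanders keep large — can be computed by brute force on tiny graphs:

```python
from itertools import combinations

def edge_expansion(n, edges):
    """Brute-force edge expansion: minimum over vertex sets S with
    |S| <= n/2 of (# edges leaving S) / |S|.  Exponential time; this
    is only for illustrating the definition on tiny graphs."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for subset in combinations(range(n), k):
            s = set(subset)
            boundary = sum(1 for (u, v) in edges if (u in s) != (v in s))
            best = min(best, boundary / len(s))
    return best

# The 4-cycle has expansion 1; the complete graph K4 has expansion 2.
print(edge_expansion(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 1.0
```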
In recent years, the expander decomposition method was used to develop many graph algorithms, resulting in major improvements to longstanding complexity barriers. This powerful hammer has led the community to (1) believe that most problems are as easy on worst-case graphs as they are on expanders, and (2) suspect that expander decompositions are the key to breaking the remaining longstanding barriers in fine-grained complexity. We set out to investigate the extent to which these two things are true (and for which problems). Towards this end, we put forth the concept of worst-case to expander-case self-reductions. We design a collection of such reductions for fundamental graph problems, verifying belief (1) for them. The list includes k-Clique, 4-Cycle, Maximum Cardinality Matching, Vertex-Cover, and Minimum Dominating Set. Interestingly, for most (but not all) of these problems the proof is via a simple gadget reduction, not via expander decompositions, showing that this hammer is effectively useless against the problem and contradicting (2).
Amir Abboud, Nathan Wallheimer. Thu, 24 Nov 2022 00:00:00 GMT.

Haploid algebras in C^*-tensor categories and the Schellekens list
https://scholar.archive.org/work/r43ee33psfcwrk3g65zxgzbar4
We prove that a haploid associative algebra in a C^*-tensor category 𝒞 is equivalent to a Q-system (a special C^*-Frobenius algebra) in 𝒞 if and only if it is rigid. This allows us to prove the unitarity of all 70 strongly rational holomorphic vertex operator algebras with central charge c=24 and non-zero weight-one subspace, corresponding to entries 1-70 of the so-called Schellekens list. Furthermore, using the recent generalized deep hole construction of these vertex operator algebras, we prove that they are also strongly local in the sense of Carpi, Kawahigashi, Longo and Weiner, and consequently we obtain some new holomorphic conformal nets associated to the entries of the list. Finally, we completely classify the simple CFT type vertex operator superalgebra extensions of the unitary N=1 and N=2 super-Virasoro vertex operator superalgebras with central charge c<3/2 and c<3 respectively, relying on the known classification results for the corresponding superconformal nets.
Sebastiano Carpi, Tiziano Gaudio, Luca Giorgetti, Robin Hillier. Wed, 23 Nov 2022 00:00:00 GMT.

Search for electroweak production of supersymmetric particles in compressed mass spectra with the ATLAS detector at the LHC
https://scholar.archive.org/work/syrsys6kc5cv5ohkityim6kady
Two analyses searching for the production of supersymmetric particles through the electroweak interaction are presented: the chargino search, targeting the pair production of charginos decaying into W bosons and neutralinos, and the displaced track search, looking for charged tracks arising from the decays of higgsinos into pions. These searches target compressed phase spaces, where the mass difference between the next-to-lightest and lightest supersymmetric particle is relatively small. The searches use proton-proton collision data collected at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC. In the chargino search, the targeted mass difference between charginos and neutralinos is close to the mass of the W boson. In such a phase space, chargino pair production is kinematically similar to the WW background, making the chargino signal experimentally challenging to discriminate from the WW background. Machine learning techniques are adopted to separate the supersymmetric signal from the backgrounds. The results exclude chargino masses up to about 140 GeV for mass splittings down to about 100 GeV, superseding the previous results in particularly interesting regions where chargino pair production could have hidden behind the similar-looking WW background. In the displaced track search, the mass difference between the produced sparticles and the lightest neutralinos goes down to 0.3 GeV. The experimental signature is a low-momentum charged track with an origin displaced from the collision point. The results show that the analysis has the sensitivity to exclude different hypotheses for higgsino masses up to 175 GeV if no excess is observed in data. For lower masses, the larger signal cross-section allows higher significance to be achieved for different mass-splitting scenarios. None of these signal hypotheses had been probed by any existing analysis of LHC data.
Eric Ballabene. Tue, 22 Nov 2022 00:00:00 GMT.

CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training
https://scholar.archive.org/work/iwozexo74ve5bo2xhta6xw56s4
Pre-training across 3D vision and language remains underdeveloped because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps, adopting CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and images, as well as the diversity of depth distributions. To address this issue, we propose CLIP2Point, an image-depth pre-training method using contrastive learning to transfer CLIP to the 3D domain and adapt it to point cloud classification. We introduce a new depth rendering setting that produces a better visual effect, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines cross-modality learning, to encourage the depth features to capture expressive visual and textual features, and intra-modality learning, to enhance the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapter fits few-shot tasks well without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification.
Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W.H. Lau, Wanli Ouyang, Wangmeng Zuo. Sun, 20 Nov 2022 00:00:00 GMT.

Discretisations and Preconditioners for Magnetohydrodynamics Models
https://scholar.archive.org/work/e3ywh5aywrhypdoh5slgo3t3lm
The magnetohydrodynamics (MHD) equations are generally known to be difficult to solve numerically, due to their highly nonlinear structure and the strong coupling between the electromagnetic and hydrodynamic variables, especially for high Reynolds and coupling numbers. In the first part of this work, we present a scalable augmented Lagrangian preconditioner for a finite element discretisation of the 𝐁-𝐄 formulation of the incompressible viscoresistive MHD equations. For stationary problems, our solver achieves robust performance with respect to the Reynolds and coupling numbers in two dimensions and good results in three dimensions. Our approach relies on specialised parameter-robust multigrid methods for the hydrodynamic and electromagnetic blocks. The scheme ensures exactly divergence-free approximations of both the velocity and the magnetic field up to solver tolerances. In the second part, we focus on incompressible, resistive Hall MHD models and derive structure-preserving finite element methods for these equations. We present a variational formulation of Hall MHD that enforces the magnetic Gauss's law precisely (up to solver tolerances) and prove the well-posedness of a Picard linearisation. For the transient problem, we present time discretisations that preserve the energy and the magnetic and hybrid helicity precisely in the ideal limit for two types of boundary conditions. In the third part, we investigate anisothermal MHD models. We start by performing a bifurcation analysis for a magnetic Rayleigh–Bénard problem at a high coupling number S=1,000, choosing the Rayleigh number in the range between 0 and 100,000 as the bifurcation parameter. We study the effect of the coupling number on the bifurcation diagram and outline how we create initial guesses to obtain complex solution patterns and disconnected branches for high coupling numbers.
Fabian Laakmann. Sun, 20 Nov 2022 00:00:00 GMT.

The distorting lens of human mobility data
https://scholar.archive.org/work/snbsflvsqrhwzo3mkk3zkj5ava
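The abstract below reports quantifiable differences between displacement distributions measured by different mobility datasets. One simple, standard way to quantify the distance between two empirical distributions (illustrative only, not the paper's methodology) is the two-sample Kolmogorov-Smirnov statistic:

```python
import bisect

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples xs and ys (e.g. trip displacements
    from two different data sources)."""
    xs, ys = sorted(xs), sorted(ys)

    def ecdf(sorted_sample, t):
        # Fraction of the sample that is <= t.
        return bisect.bisect_right(sorted_sample, t) / len(sorted_sample)

    grid = sorted(set(xs) | set(ys))
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)

# Identical samples -> 0; completely separated samples -> 1.
print(ks_statistic([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 0.0
```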
The description of complex human mobility patterns is at the core of many important applications, ranging from urbanism and transportation to epidemic containment. Data about collective human movements, once scarce, have become widely available thanks to new sources such as phone CDRs, GPS devices, and smartphone apps. Nevertheless, it is still common to rely on a single dataset by implicitly assuming that it is a valid instance of universal dynamics, regardless of factors such as data gathering and processing techniques. Here, we test this overarching assumption on an unprecedented scale by comparing human mobility datasets obtained from 7 different data sources, tracing over 500 million individuals in 145 countries. We report wide quantifiable differences in the resulting mobility networks and, in particular, in the displacement distribution previously thought to be universal. These variations -- which do not necessarily imply that human mobility is not universal -- also impact processes taking place on these networks, as we show for the specific case of epidemic spreading. Our results point to the crucial need to disclose the data processing and, more generally, to follow good practices that ensure the robustness and reproducibility of the results.
Riccardo Gallotti, Davide Maniscalco, Marc Barthelemy, Manlio De Domenico. Fri, 18 Nov 2022 00:00:00 GMT.

Near-Term Quantum Computing Techniques: Variational Quantum Algorithms, Error Mitigation, Circuit Compilation, Benchmarking and Classical Simulation
https://scholar.archive.org/work/5cil662o5bclbky4ypzlw2akiq
Quantum computing is a game-changing technology for global academia, research centers and industries, including computational science, mathematics, finance, pharmaceuticals, materials science, chemistry and cryptography. Although it has seen a major boost in the last decade, we are still a long way from reaching the maturity of a full-fledged quantum computer. That said, we will be in the Noisy Intermediate-Scale Quantum (NISQ) era for a long time, working with quantum computing systems of dozens to thousands of qubits. An outstanding challenge, then, is to come up with an application that can reliably carry out a nontrivial task of interest on near-term quantum devices with non-negligible quantum noise. To address this challenge, several near-term quantum computing techniques, including variational quantum algorithms, error mitigation, quantum circuit compilation and benchmarking protocols, have been proposed to characterize and mitigate errors, and to implement algorithms with a certain resistance to noise, so as to enhance the capabilities of near-term quantum devices and explore the boundaries of their ability to realize useful applications. Moreover, the development of near-term quantum devices is inseparable from efficient classical simulation, which plays a vital role in quantum algorithm design and verification, error-tolerant verification and other applications. This review provides a thorough introduction to these near-term quantum computing techniques, reports on their progress, and finally discusses their future prospects, which we hope will motivate researchers to undertake additional studies in this field.
He-Liang Huang, Xiao-Yue Xu, Chu Guo, Guojing Tian, Shi-Jie Wei, Xiaoming Sun, Wan-Su Bao, Gui-Lu Long. Thu, 17 Nov 2022 00:00:00 GMT.

A_r-stable curves and the Chow ring of ℳ_3
https://scholar.archive.org/work/4a4tcr5hengihjpp452pfwqkbq
In this work, we introduce the moduli stack ℳ_g,n^r of n-pointed, A_r-stable curves of genus g and use it to compute the Chow ring of ℳ_3. As a byproduct, we also compute the Chow ring of ℳ_3^7. All Chow rings are taken with coefficients in ℤ[1/6].
Michele Pernice. Thu, 17 Nov 2022 00:00:00 GMT.

Counting Subgraphs in Somewhere Dense Graphs
https://scholar.archive.org/work/rvbsk2qgrbdejbks47ldc45kca
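Counting k-matchings is one of the two central problems in the abstract below. To make the object being counted concrete (a brute-force illustration of the definition, not an FPT algorithm), a k-matching is a set of k pairwise vertex-disjoint edges:

```python
from itertools import combinations

def count_k_matchings(edges, k):
    """Count k-matchings: sets of k edges with all 2k endpoints distinct.
    Brute force over edge subsets -- exponential in general, fine for
    tiny graphs and for illustrating what is being counted."""
    count = 0
    for subset in combinations(edges, k):
        endpoints = [v for e in subset for v in e]
        if len(set(endpoints)) == 2 * k:  # pairwise vertex-disjoint
            count += 1
    return count

# The 4-cycle has exactly two perfect (2-edge) matchings.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_k_matchings(c4, 2))  # -> 2
```

The dichotomy in the abstract says when this count can instead be obtained in FPT time, depending on whether the host class is nowhere dense.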
We study the problems of counting copies and induced copies of a small pattern graph H in a large host graph G. Recent work fully classified the complexity of those problems according to structural restrictions on the patterns H. In this work, we address the more challenging task of analysing the complexity for restricted patterns and restricted hosts. Specifically, we ask which families of allowed patterns and hosts imply fixed-parameter tractability, i.e., the existence of an algorithm running in time f(H)· |G|^O(1) for some computable function f. Our main results present exhaustive and explicit complexity classifications for families that satisfy natural closure properties. Among others, we identify the problems of counting small matchings and independent sets in subgraph-closed graph classes 𝒢 as our central objects of study and establish the following crisp dichotomies as consequences of the Exponential Time Hypothesis: (1) Counting k-matchings in a graph G∈𝒢 is fixed-parameter tractable if and only if 𝒢 is nowhere dense. (2) Counting k-independent sets in a graph G∈𝒢 is fixed-parameter tractable if and only if 𝒢 is nowhere dense. Moreover, we obtain almost tight conditional lower bounds if 𝒢 is somewhere dense, i.e., not nowhere dense. These base cases of our classifications subsume a wide variety of previous results on the matching and independent set problems, such as counting k-matchings in bipartite graphs (Curticapean, Marx; FOCS 14), in F-colourable graphs (Roth, Wellnitz; SODA 20), and in degenerate graphs (Bressan, Roth; FOCS 21), as well as counting k-independent sets in bipartite graphs (Curticapean et al.; Algorithmica 19).
Marco Bressan, Leslie Ann Goldberg, Kitty Meeks, Marc Roth. Wed, 16 Nov 2022 00:00:00 GMT.

Holistic Evaluation of Language Models
https://scholar.archive.org/work/xl5k5dwfrffx5c6m2ivnwuczju
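A headline number in the abstract below is scenario coverage: models were previously evaluated on 17.9% of the core scenarios on average, improved to 96.0%. The bookkeeping behind such a figure is simple (a hypothetical sketch, not HELM's toolkit):

```python
# Hypothetical sketch of per-model scenario coverage, in the spirit of
# the benchmark coverage numbers quoted in the abstract below.

def scenario_coverage(evaluated, core):
    """Fraction of the core scenarios on which a model was evaluated."""
    return len(set(evaluated) & set(core)) / len(core)

def mean_coverage(per_model, core):
    """Average coverage across models, as a fraction in [0, 1]."""
    return sum(scenario_coverage(s, core) for s in per_model.values()) / len(per_model)

core = ["qa", "summarization", "sentiment", "toxicity"]
per_model = {
    "model_a": ["qa"],                                         # sparse evaluation
    "model_b": ["qa", "summarization", "sentiment", "toxicity"]  # dense evaluation
}
print(mean_coverage(per_model, core))  # -> 0.625
```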
Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata) that are of interest for LMs. Then we select a broad subset based on coverage and feasibility, noting what's missing or underrepresented (e.g. question answering for neglected English dialects, metrics for trustworthiness). Second, we adopt a multi-metric approach: we measure 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency) for each of 16 core scenarios when possible (87.5% of the time). This ensures that metrics beyond accuracy don't fall by the wayside, and that trade-offs are clearly exposed. We also perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze specific aspects (e.g. reasoning, disinformation). Third, we conduct a large-scale evaluation of 30 prominent language models (spanning open, limited-access, and closed models) on all 42 scenarios, 21 of which were not previously used in mainstream LM evaluation. Prior to HELM, models on average were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: now all 30 models have been densely benchmarked on the same core scenarios and metrics under standardized conditions. Our evaluation surfaces 25 top-level findings. For full transparency, we release all raw model prompts and completions publicly for further analysis, as well as a general modular toolkit. We intend for HELM to be a living benchmark for the community, continuously updated with new scenarios, metrics, and models.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda. Wed, 16 Nov 2022 00:00:00 GMT.

A Dichotomy Theorem for Linear Time Homomorphism Orbit Counting in Bounded Degeneracy Graphs
https://scholar.archive.org/work/rlpb56f4nfd2jcnuj36wbwuu2e
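The abstract below concerns computing, for each vertex v of G, the number of H-homomorphisms in which v participates. As a brute-force illustration of the counted quantity (exponential time, nothing like the paper's near-linear algorithms), here are per-vertex counts of homomorphisms mapping a fixed root of H to v:

```python
from itertools import product

def rooted_hom_counts(h_edges, h_n, root, g_adj, g_n):
    """counts[v] = number of homomorphisms phi: H -> G with phi(root) = v,
    for undirected H (edge list, h_n vertices) and G (adjacency sets,
    g_n vertices).  Brute force over all vertex maps: O(g_n ** h_n)."""
    counts = [0] * g_n
    for phi in product(range(g_n), repeat=h_n):
        if all(phi[v] in g_adj[phi[u]] for (u, v) in h_edges):
            counts[phi[root]] += 1
    return counts

# H = a single edge rooted at one endpoint; then counts[v] = deg(v).
path_adj = [{1}, {0, 2}, {1}]  # the path 0 - 1 - 2
print(rooted_hom_counts([(0, 1)], 2, 0, path_adj, 3))  # -> [1, 2, 1]
```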
Counting the number of homomorphisms of a pattern graph H in a large input graph G is a fundamental problem in computer science. There are myriad applications of this problem in databases, graph algorithms, and network science. Often, we need more than just the total count. Especially in large network analysis, we wish to compute, for each vertex v of G, the number of H-homomorphisms that v participates in. This problem is referred to as homomorphism orbit counting, as it relates to the orbits of vertices of H under its automorphisms. Given the need for fast algorithms for this problem, we study when near-linear time algorithms are possible. A natural restriction is to assume that the input graph G has bounded degeneracy, a commonly observed property in modern massive networks. Can we characterize the patterns H for which homomorphism orbit counting can be done in linear time? We discover a dichotomy theorem that resolves this problem. For pattern H, let l be the length of the longest induced path between any two vertices of the same orbit (under the automorphisms of H). If l <= 5, then H-homomorphism orbit counting can be done in linear time for bounded degeneracy graphs. If l > 5, then (assuming fine-grained complexity conjectures) there is no near-linear time algorithm for this problem. We build on existing work on dichotomy theorems for counting the total H-homomorphism count. Somewhat surprisingly, there exist (and we characterize) patterns H for which the total homomorphism count can be computed in linear time, but the corresponding orbit counting problem cannot be done in near-linear time.
Daniel Paul-Pena, C. Seshadhri. Wed, 16 Nov 2022 00:00:00 GMT.

'The Barghest o' Whitby': (a genealogical study of) death/doom metal music(al) network in Northern England
https://scholar.archive.org/work/rjbcsm6ibraq7n4mspr3lmv4hi
Metal music has existed in one form or another for about half a century. While the musical style and the culture started out in a relatively unified way, with the 'extreme turn' of the late 80s and 90s, metal culture stratified. Doom metal, one of the oldest styles in this newly formed structure, became even more fragmented, through amalgamations with other music styles or further alterations of these amalgamations; I call these styles extreme doom. Death/doom is such a style. These smaller styles in metal culture have so far been investigated hierarchically. However, the implications of a hierarchy are problematic in this context. Metal music studies is a budding field, so we need to think more critically about the way we conceptualise the history of metal academically in these early years. Yet, so far, this stratification, with its hierarchy, has not been challenged or even discussed in detail. Scholarship often mentions these so-called 'sub-genres' uncritically. In order to challenge this idea, there needs to be a new model. However, because of the size and breadth of metal culture, one single work cannot come even close to covering the styles existing today. In this thesis, I attempt to draw boundaries around only death/doom to propose a way of modelling a new metal history. To achieve this, I define these newer and smaller styles as marginal styles, using Park's idea of marginality. Following this idea, the sociology of music comes to the rescue with Crossley's music worlds. Music worlds, because of its emphasis on the musical style, which is central, is an intriguing perspective from which to look at the fragmented nature of metal music. A metal music world is a social construction performed by the participants, including musicians, fans, engineers, managers, label executives, and the press, around a metal musical style. These smaller styles, then, become ideal candidates for the application of this theory. This thesis treats death/doom in such a way using ethnographic, historical, and musicological methods.
Mehmet Yavuz. Tue, 15 Nov 2022 00:00:00 GMT.

Cross-Modal Contrastive Hashing Retrieval for Infrared Video and EEG
https://scholar.archive.org/work/z5s6bix75nbq5hgfchwb6lnnw4
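The abstract below hashes EEG and infrared-video features into binary codes so that retrieval reduces to comparing short bit strings. The retrieval step itself (illustrative names, not the paper's system) is just nearest-neighbour search under Hamming distance:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes stored
    as Python ints (number of differing bits)."""
    return bin(a ^ b).count("1")

def retrieve(query_code, database, top_k=1):
    """Return the top_k database keys whose binary hash codes are
    closest in Hamming distance to the query code -- the cheap search
    step that binary hashing enables for large-scale retrieval."""
    ranked = sorted(database, key=lambda key: hamming(query_code, database[key]))
    return ranked[:top_k]

# Hypothetical 4-bit codes for two stored EEG segments.
db = {"eeg_segment_1": 0b1010, "eeg_segment_2": 0b0101}
print(retrieve(0b1011, db))  # segment 1 differs in 1 bit, segment 2 in 3
```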
It is essential to estimate sleep quality and diagnose clinical sleep stages in time and at home, because they are closely related to, and important causes of, chronic diseases and daily life dysfunctions. However, the existing "gold-standard" sensing machinery for diagnosis (polysomnography (PSG) with electroencephalogram (EEG) measurements) is almost infeasible to deploy at home in a "ubiquitous" manner. In addition, it is costly to train clinicians to diagnose sleep conditions. In this paper, we propose a novel technical and systematic attempt to tackle these barriers: first, we propose to monitor and sense sleep conditions using infrared (IR) camera videos synchronized with the EEG signal; second, we propose a novel cross-modal retrieval system, termed Cross-modal Contrastive Hashing Retrieval (CCHR), to build the relationship between EEG and IR videos, retrieving the most relevant EEG signal given an infrared video. Specifically, CCHR is novel in the following two respects. Firstly, to eliminate the large cross-modal semantic gap between EEG and IR data, we designed a novel joint cross-modal representation learning strategy using a memory-enhanced hard-negative mining design under the framework of contrastive learning. Secondly, as the sleep monitoring data are large-scale (8 hours long for each subject), a novel contrastive hashing module is proposed to transform the joint cross-modal features into discriminative binary hash codes, enabling efficient storage and inference. Extensive experiments on our collected cross-modal sleep condition dataset validate that the proposed CCHR achieves superior performance compared with existing cross-modal hashing methods.
Jianan Han, Shaoxing Zhang, Aidong Men, Qingchao Chen. Mon, 14 Nov 2022 00:00:00 GMT.

Removing Additive Structure in 3SUM-Based Reductions
https://scholar.archive.org/work/wo3tsh75gvgfvnuilzxocphboq
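The abstract below studies 3SUM on Sidon sets: sets where a+b = c+d forces {a, b} = {c, d}. Both definitions are easy to state in brute-force code (illustrative only; the paper's point is about n^2-o(1) lower bounds, not these naive checks):

```python
from itertools import combinations, combinations_with_replacement

def is_sidon(s):
    """A set is Sidon if all pairwise sums a + b (a <= b, repeats
    allowed) are distinct, i.e. a + b = c + d forces {a, b} = {c, d}."""
    sums = [a + b for a, b in combinations_with_replacement(sorted(s), 2)]
    return len(sums) == len(set(sums))

def has_3sum(s):
    """Brute-force 3SUM over distinct elements: is there a triple
    a, b, c in s with a + b + c = 0?  Naive O(n^3) check."""
    return any(a + b + c == 0 for a, b, c in combinations(s, 3))

print(is_sidon({1, 2, 5, 11}))  # -> True  (a classic Sidon set)
print(is_sidon({1, 2, 3, 4}))   # -> False (1 + 4 = 2 + 3)
```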
Our work explores the hardness of 3SUM instances without certain additive structures, and its applications. As our main technical result, we show that solving 3SUM on a size-n integer set that avoids solutions to a+b=c+d for {a, b} ≠ {c, d} still requires n^2-o(1) time, under the 3SUM hypothesis. Such sets are called Sidon sets and are well-studied in the field of additive combinatorics. - Combined with previous reductions, this implies that the All-Edges Sparse Triangle problem on n-vertex graphs with maximum degree √(n) and at most n^k/2 k-cycles for every k ≥ 3 requires n^2-o(1) time, under the 3SUM hypothesis. This can be used to strengthen the previous conditional lower bounds by Abboud, Bringmann, Khoury, and Zamir [STOC'22] for 4-Cycle Enumeration, Offline Approximate Distance Oracles and Approximate Dynamic Shortest Paths. In particular, we show that no algorithm for the 4-Cycle Enumeration problem on n-vertex m-edge graphs with n^o(1) delays has O(n^2-ε) or O(m^4/3-ε) pre-processing time for ε > 0. We also present a matching upper bound via simple modifications of the known algorithms for 4-Cycle Detection. - A slight generalization of the main result also extends the result of Dudek, Gawrychowski, and Starikovskaya [STOC'20] on the 3SUM hardness of nontrivial 3-Variate Linear Degeneracy Testing (3-LDTs): we show 3SUM hardness for all nontrivial 4-LDTs. The proof of our main technical result combines a wide range of tools: the Balog-Szemerédi-Gowers theorem, a sparse convolution algorithm, and a new almost-linear hash function with an almost 3-universal guarantee for integers that do not have small-coefficient linear relations.
Ce Jin, Yinzhan Xu. Mon, 14 Nov 2022 00:00:00 GMT.

The geometric Satake equivalence for integral motives
https://scholar.archive.org/work/6gmded5btndzlbmza4rkb4knia
We prove the geometric Satake equivalence for mixed Tate motives over the integral motivic cohomology spectrum. This refines previous versions of the geometric Satake equivalence for split groups and power series affine Grassmannians. Our new geometric results include Whitney-Tate stratifications of Beilinson-Drinfeld Grassmannians and cellular decompositions of semi-infinite orbits. With future global applications in mind, we also achieve an equivalence relative to a power of the affine line. Finally, we use our equivalence to give Tannakian constructions of the C-group and a modified form of Vinberg's monoid.
Robert Cass, Thibaud van den Hove, Jakob Scholbach. Wed, 09 Nov 2022 00:00:00 GMT.

Stochastic Average Model Methods
https://scholar.archive.org/work/ylr36l7dhbdzplqkkqpc2qmqt4
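The abstract below samples component functions from a distribution designed to minimize a variance upper bound on the resulting stochastic model. A standard importance-sampling fact in that spirit (a hypothetical sketch, not the paper's exact scheme): a one-sample unbiased estimator of a finite sum has minimal variance bound when the sampling probabilities are proportional to per-component bounds:

```python
import random

def sampling_distribution(bounds):
    """Probabilities proportional to per-component bounds; for the
    one-sample estimator below this minimizes the standard variance
    upper bound (illustrative; not the paper's exact distribution)."""
    total = sum(bounds)
    return [b / total for b in bounds]

def unbiased_estimate(values, probs, rng):
    """One-sample unbiased estimator of sum(values): draw index i with
    probability probs[i] and return values[i] / probs[i]."""
    i = rng.choices(range(len(values)), weights=probs)[0]
    return values[i] / probs[i]

# When the values exactly match the bounds, every sample returns the
# true sum (zero variance), illustrating why these probabilities help.
probs = sampling_distribution([1.0, 3.0])
print(unbiased_estimate([1.0, 3.0], probs, random.Random(0)))  # -> 4.0
```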
We consider the solution of finite-sum minimization problems, such as those appearing in nonlinear least-squares or general empirical risk minimization problems. We are motivated by problems in which the summand functions are computationally expensive and evaluating all summands on every iteration of an optimization method may be undesirable. We present the idea of stochastic average model (SAM) methods, inspired by stochastic average gradient methods. SAM methods sample component functions on each iteration of a trust-region method according to a discrete probability distribution on component functions; the distribution is designed to minimize an upper bound on the variance of the resulting stochastic model. We present promising numerical results concerning an implemented variant extending the derivative-free model-based trust-region solver POUNDERS, which we name SAM-POUNDERS.
Matt Menickelly, Stefan M. Wild. Wed, 09 Nov 2022 00:00:00 GMT.

Nearly optimal independence oracle algorithms for edge estimation in hypergraphs
https://scholar.archive.org/work/cod5s6nvoben7bu6hhympuxzhe
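The abstract below counts hypergraph edges through an independence oracle. In the ordinary-graph case (k = 2), the idea of estimating the edge count from oracle answers alone can be sketched very simply (illustrative only; the paper's algorithms and cost model are far more refined):

```python
import random

def estimate_edges(n, is_independent, samples, rng):
    """Estimate the edge count of an n-vertex graph accessible only
    through an independence oracle, by querying random vertex pairs:
    {u, v} is an edge iff the pair is NOT independent.  This is the
    k = 2 case of the hypergraph setting; a naive sampling sketch,
    not the paper's algorithm."""
    pairs = n * (n - 1) // 2
    hits = 0
    for _ in range(samples):
        u, v = rng.sample(range(n), 2)
        if not is_independent({u, v}):
            hits += 1
    return pairs * hits / samples

# Oracle for the complete graph K4: no pair of vertices is independent,
# so the estimator recovers all 6 edges exactly.
k4_oracle = lambda s: False
print(estimate_edges(4, k4_oracle, 100, random.Random(1)))  # -> 6.0
```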
We study a query model of computation in which an n-vertex k-hypergraph can be accessed only via its independence oracle or via its colourful independence oracle, and each oracle query may incur a cost depending on the size of the query. In each of these models, we obtain oracle algorithms to approximately count the hypergraph's edges, and we unconditionally prove that no oracle algorithm for this problem can have significantly smaller worst-case oracle cost than our algorithms.
Holger Dell, John Lapinskas, Kitty Meeks. Mon, 07 Nov 2022 00:00:00 GMT.

Determining the Reaction Zone Length in Shock Initiated PETN
https://scholar.archive.org/work/ud2l24zn6bbtzdjbrwhlvnrkse
Pentaerythritol tetranitrate (PETN) is a secondary explosive used in electrical detonators in the form of a pressed powder. The reaction zone length of PETN is smaller than that of most other explosives, so there is a lack of data due to the insufficient resolution of existing methods. Furthermore, most prior work has been on steady-state behaviour, so the transition regime before steady state is particularly poorly understood. The research described in this thesis was undertaken in order to characterise the reaction zone length and wave curvature during the evolution from initiation to steady state. The investigation was focused on a detonator setting, so confined cylindrical pellets of a similar scale were used here. To separate the effect of the chemical reaction from the mechanical response to shock, plate impact experiments were performed on an inert simulant: a fine icing sugar with comparable particle size. The shock velocity and rise time were found to exhibit dependence on the thickness of the bed, suggesting that these effects may also play a role in PETN prior to the development of detonation. A fibre-launched laser flyer detonator system was constructed to allow repeatable shock initiation of the target samples with a high throughput. This apparatus could produce a highly tuneable shock without much of the electrical noise present with electrical detonators. High-rate capacitive sensing was applied as a technique for measuring detonation properties in small columns of PETN. Development of the diagnostic incorporated the design of the sensor itself, event synchronisation handling, and noise reduction. A custom-made data processing algorithm was used to extract useful information from the sensor signal. This technology was found to have the temporal and spatial resolution required, as well as being cheaper and easier to implement than competing methods. Experiments using this diagnostic were performed to measure the reaction zone length and curvature for a range of densities and sample sizes. The data could a [...]
James Edgeley, Apollo-University Of Cambridge Repository, Chris Braithwaite. Thu, 03 Nov 2022 00:00:00 GMT.