IA Scholar Query: A Stronger Kolmogorov Zero-One Law for Resource-Bounded Measure.
https://scholar.archive.org/
Internet Archive Scholar query results feed. Language: en. Contact: info@archive.org. Generated: Tue, 29 Nov 2022 by fatcat-scholar. Help: https://scholar.archive.org/help. TTL: 1440.

PurdueThesis_XuejunZhao
https://scholar.archive.org/work/j4izygnldbgxfhthvie3jdiony
This study examines data-driven contract design in the small-data and large-data regimes, and the implications for contract pricing in the pharmaceutical supply chain.
Author: Xuejun Zhao. Date: Tue, 29 Nov 2022.

A Comprehensive Survey on Enterprise Financial Risk Analysis: Problems, Methods, Spotlights and Applications
https://scholar.archive.org/work/vzphdcql3zeodjfz4lpw7q2dpq
Enterprise financial risk analysis aims at predicting enterprises' future financial risk. Due to its wide application, enterprise financial risk analysis has always been a core research issue in finance. Although there are already some valuable and impressive surveys on risk management, these surveys introduce approaches in a relatively isolated way and lack the recent advances in enterprise financial risk analysis. Due to the rapid expansion of the field, especially from the computer science and big data perspective, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize the existing enterprise financial risk research, and to summarize and interpret the mechanisms and strategies of enterprise financial risk analysis in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. This paper provides a systematic literature review of over 300 articles published on enterprise risk analysis modelling over a 50-year period, 1968 to 2022. We first introduce the formal definition of enterprise risk as well as the related concepts. Then, we categorize the representative works in terms of risk type and summarize the three aspects of risk analysis. Finally, we compare the analysis methods used to model enterprise financial risk. Our goal is to clarify current cutting-edge research and its possible future directions in modelling enterprise risk, aiming to fully understand the mechanisms of enterprise risk communication and influence and their application to corporate governance, financial institutions, and government regulation.
Authors: Yu Zhao, Huaming Du. Date: Mon, 28 Nov 2022.

Nonparametric Two-Sample Testing by Betting
https://scholar.archive.org/work/4guievem7jfffnz3cntk4kshjm
We study the problem of designing consistent sequential two-sample tests in a nonparametric setting. Guided by the principle of testing by betting, we reframe this task into that of selecting a sequence of payoff functions that maximize the wealth of a fictitious bettor, betting against the null in a repeated game. In this setting, the relative increase in the bettor's wealth has a precise interpretation as the measure of evidence against the null, and thus our sequential test rejects the null when the wealth crosses an appropriate threshold. We develop a general framework for setting up the betting game for two-sample testing, in which the payoffs are selected by a prediction strategy as data-driven predictable estimates of the witness function associated with the variational representation of some statistical distance measures, such as integral probability metrics (IPMs). We then formally relate the statistical properties of the test (such as consistency, type-II error exponent and expected sample size) to the regret of the corresponding prediction strategy. We construct a practical sequential two-sample test by instantiating our general strategy with the kernel-MMD metric, and demonstrate its ability to adapt to the difficulty of the unknown alternative through theoretical and empirical results. Our framework is versatile, and easily extends to other problems; we illustrate this by applying our approach to construct consistent tests for the following problems: (i) time-varying two-sample testing with non-exchangeable observations, and (ii) an abstract class of "invariant" testing problems, including symmetry and independence testing.
Authors: Shubhanshu Shekhar, Aaditya Ramdas. Date: Mon, 28 Nov 2022.

Iterated Function Systems: A Comprehensive Survey
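The betting construction in the two-sample-testing abstract above can be sketched minimally: multiply a wealth process by bounded payoffs and reject the null once wealth crosses 1/alpha. The fixed `tanh` witness and constant betting fraction below are simplifying assumptions for illustration; the paper selects payoffs adaptively and instantiates them via kernel MMD.

```python
import numpy as np

def betting_two_sample_test(xs, ys, witness, bet=0.5, alpha=0.05):
    """Sequential two-sample test by betting (illustrative sketch).

    Wealth is multiplied by 1 + bet * payoff at each step; for paired
    samples under the null the payoff has mean zero, so the wealth is a
    nonnegative martingale and rejecting once it crosses 1/alpha
    controls the type-I error at level alpha (Ville's inequality).
    """
    wealth = 1.0
    for x, y in zip(xs, ys):
        payoff = 0.5 * (witness(x) - witness(y))  # bounded in (-1, 1)
        wealth *= 1.0 + bet * payoff              # stays positive
        if wealth >= 1.0 / alpha:
            return True, wealth                   # reject the null
    return False, wealth

rng = np.random.default_rng(0)
xs = rng.normal(1.0, 1.0, 2000)  # stream from P
ys = rng.normal(0.0, 1.0, 2000)  # stream from Q (mean-shifted)
reject, wealth = betting_two_sample_test(xs, ys, witness=np.tanh)
```

Because the wealth has a positive growth rate under this mean-shifted alternative, the test stops and rejects well before the streams are exhausted.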
https://scholar.archive.org/work/bbiicvay2zf3zbnix2642tq54e
We provide an overview of iterated function systems (IFS), where randomly chosen state-to-state maps are applied iteratively to a state. We aim to summarize the state of the art and, where possible, identify fundamental challenges and opportunities for further research.
Authors: Ramen Ghosh, Jakub Marecek. Date: Sat, 26 Nov 2022.

Multivariate rank via entropic optimal transport: sample efficiency and generative modeling
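The random iteration described in the IFS survey abstract is easy to sketch: repeatedly apply one of several maps chosen at random. The two affine contractions below are illustrative choices, not examples from the survey; their images tile the unit interval, so orbits started inside [0, 1] stay there.

```python
import random

def iterate_ifs(maps, x0, steps, seed=0):
    """Simulate an IFS orbit: at each step apply one map chosen
    uniformly at random to the current state (illustrative sketch)."""
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(steps):
        x = rng.choice(maps)(x)  # random state-to-state map
        path.append(x)
    return path

# Two contractions whose images [0, 0.5] and [0.5, 1] tile [0, 1],
# so the attractor of this IFS is the whole unit interval.
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]
path = iterate_ifs(maps, x0=0.3, steps=200)
```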
https://scholar.archive.org/work/ogshqrwf6nfe5bj3xmb6e4r2ge
The framework of optimal transport has been leveraged to extend the notion of rank to the multivariate setting while preserving desirable properties of the resulting goodness-of-fit (GoF) statistics. In particular, the rank energy (RE) and rank maximum mean discrepancy (RMMD) are distribution-free under the null, exhibit high power in statistical testing, and are robust to outliers. In this paper, we point to and alleviate some of the practical shortcomings of these proposed GoF statistics, namely their high computational cost, high statistical sample complexity, and lack of differentiability with respect to the data. We show that all these practically important issues are addressed by considering entropy-regularized optimal transport maps in place of the rank map, which we refer to as the soft rank. We consequently propose two new statistics, the soft rank energy (sRE) and soft rank maximum mean discrepancy (sRMMD), which exhibit several desirable properties. Given n sample data points, we provide non-asymptotic convergence rates for the sample estimate of the entropic transport map to its population version that are essentially of the order n^-1/2 when the starting measure is subgaussian and the target measure has compact support. This result is novel compared to existing results, which achieve a rate of n^-1 but crucially rely on both measures having compact support. We leverage this result to demonstrate fast convergence of sample sRE and sRMMD to their population versions, making them useful for high-dimensional GoF testing. Our statistics are differentiable and amenable to popular machine learning frameworks that rely on gradient methods. We leverage these properties to showcase the utility of the proposed statistics for generative modeling on two important problems: image generation and generating valid knockoffs for controlled feature selection.
Authors: Shoaib Bin Masud, Matthew Werenski, James M. Murphy, Shuchin Aeron. Date: Fri, 25 Nov 2022.

Revised mixing coefficient scaling for sheared stably stratified turbulence
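The entropic substitution behind the soft rank can be made concrete with Sinkhorn iterations, which compute the entropy-regularized transport plan between two empirical measures; the soft rank map is then obtained from such a plan by barycentric projection. This is a sketch of the general technique, not the paper's exact estimator: the cost normalization and regularization value are illustrative assumptions.

```python
import numpy as np

def sinkhorn_plan(C, eps=0.5, iters=200):
    """Entropy-regularized OT plan between uniform discrete measures.

    C is an (n, m) cost matrix; returns P = diag(u) K diag(v) with
    K = exp(-C / eps), with u and v alternately rescaled so that the
    row and column marginals match the two uniform measures.
    """
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / eps)
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 2))  # samples from the source measure
y = rng.normal(size=(8, 2))  # samples from the target measure
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
P = sinkhorn_plan(C / C.max())  # normalize costs for stable scaling
```

The plan `P` is dense and differentiable in the data, which is exactly the property the sRE/sRMMD statistics exploit for gradient-based generative modeling.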
https://scholar.archive.org/work/nsbzfws53nfp7gmagpbflui72y
We revisit and extend the turbulent Froude number ($Fr_k$) scaling for the mixing coefficient ($\varGamma$) introduced by Garanaik & Venayagamoorthy (GV) (J. Fluid Mech., vol. 867, 2019, pp. 323–333) by directly incorporating the effects of mean shear through the non-dimensional shear parameter $S_\ast = S k/\epsilon_k$. For flows where the effects of mean shear are stronger than the background vertical stratification, we find $\varGamma \sim Fr_k^{-2} S_\ast^{-1}$ for weakly stratified sheared turbulence and $\varGamma \sim Fr_k^{-1} S_\ast^{-1}$ for moderately stratified sheared turbulence. The scaling procedure is inconclusive for strongly stratified sheared turbulence, but using two independent datasets of homogeneous, sheared, stably stratified turbulence, we empirically observe $\varGamma \sim Fr_k^{-0.5} S_\ast^{-1}$. Our revised scaling better collapses both datasets compared with the original GV scaling, and we note that the moderately stratified sheared regime is extremely narrow (or maybe even non-existent). We also apply our scaling to the time-varying open channel simulations of Issaev et al. (J. Fluid Mech., vol. 935, 2022) and observe $\varGamma \sim Fr_k^{-2} S_\ast^{-1}$ for weakly stratified sheared turbulence, but we observe deviations from our revised scaling for moderate and strong stratifications due to time-varying mean shear and vertical transport. Finally, we apply our revised scaling to field measurements of Conry, Kit & Fernando (Environ. Fluid Mech., vol. 20, 2020, pp. 1177–1197) and observe $\varGamma \sim Fr_k^{-2} S_\ast^{-1}$. We emphasize that our revised scaling is applicable only for stably stratified, vertically sheared turbulence with weak spatio-temporal variations of the mean shear and stratification, and we expect different scaling to apply when additional effects such as depth-varying radiative heating/cooling are present or when the orientation of the mean shear relative to the gravity vector is modified (e.g. horizontal shear).
Authors: Young R. Yi, Jeffrey R. Koseff. Date: Thu, 24 Nov 2022.

Relaxing assumptions in deep probabilistic modelling
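The three power laws quoted in the mixing-coefficient abstract can be collected into one illustrative function. Only the exponents come from the abstract; the regime thresholds ($Fr_k$ of 1 and 0.3) and the order-one prefactor are assumptions made for this sketch, not values from the paper.

```python
def mixing_coefficient_scaling(fr_k, s_star, c=1.0):
    """Revised mixing-coefficient scaling Gamma(Fr_k, S*) (sketch).

    Piecewise power laws from the abstract, all carrying the S*^-1
    shear dependence; thresholds and prefactor c are illustrative.
    """
    if fr_k >= 1.0:      # weakly stratified sheared turbulence
        return c * fr_k ** -2.0 / s_star
    elif fr_k >= 0.3:    # moderately stratified (narrow regime)
        return c * fr_k ** -1.0 / s_star
    else:                # strongly stratified (empirical fit)
        return c * fr_k ** -0.5 / s_star
```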
https://scholar.archive.org/work/shsuzigibvdwtigrvf7cybpdoy
The current generation of deep neural network-based models demonstrate tremendous capacity to learn distributions at scale. Given this success, deep learning and deep generative modelling have progressively been applied across a broader range of increasingly demanding applications, as well as in safety-critical domains such as healthcare. However, existing models are reliant upon restrictive theoretical assumptions, deriving from the longstanding distributions and divergences at their core, which inhibit their continued advance. Leveraging wider distribution and divergence families transfers broader parametric assumptions to deep generative models and increases the scope of the functions they can approximate. In particular, the Kullback-Leibler divergence and the Gaussian distribution are assumed at the heart of variational autoencoders and score-based models, and are central to their limitations. This thesis argues that both assumptions can be viewed through wider lenses: the skew-geometric Jensen-Shannon divergence family and the generalised normal distribution family, respectively. Several contributions are made to both the theory of deep learning, specifically deep generative modelling, and its application to electronic health records (EHRs). Firstly, a new type of variational autoencoder is introduced, capitalising on the flexibility of the skew-geometric Jensen-Shannon divergence, to overcome the prior theoretical shortcomings and lack of interpretability of latent space constraints. JSGα-VAEs lead to better reconstruction and generation when compared to baseline VAEs and utilise a single hyperparameter which can be easily interpreted in latent space. Secondly, heavy-tailed denoising score matching (HTDSM) is proposed, motivated by superior concentration of measure for the noising distribution in high-dimensional space. HTDSM offers improved score estimation, controllable sampling convergence, and more class-balanced unconditional generative performance. Finally, several results which indicate that the generalisat [...]
Authors: Jacob Deasy, Apollo-University Of Cambridge Repository, Pietro Lio, Ari Ercole. Date: Tue, 22 Nov 2022.

Reconstructing the Assembly of Massive Galaxies. II. Galaxies Develop Massive and Dense Stellar Cores as They Evolve and Head Toward Quiescence at Cosmic Noon
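The generalised normal family invoked in the thesis abstract has a simple closed-form density, which makes the widened assumption concrete: beta = 2 recovers the Gaussian and beta = 1 the heavier-tailed Laplace. This is a sketch using the standard parameterisation, which may differ from the thesis's conventions.

```python
import math

def gennorm_pdf(x, beta, alpha=1.0):
    """Density of the generalised normal distribution (sketch).

    f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)**beta).
    beta = 2 with alpha = sqrt(2) is the standard Gaussian; beta = 1
    gives the Laplace; beta < 2 yields the heavier tails motivating
    heavy-tailed denoising score matching.
    """
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * math.exp(-((abs(x) / alpha) ** beta))
```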
https://scholar.archive.org/work/5yeo7vfxqzb6rd536u4ybni4ny
We use the SED-fitting code Prospector to reconstruct the nonparametric star formation history (SFH) of massive (log M_*>10.3) star-forming galaxies (SFGs) and quiescent galaxies (QGs) at redshift z_obs∼2 to investigate the joint evolution of star-formation activity and structural properties. We find significant correlations between the SFH of the galaxies and their morphology. Compared to extended SFGs, compact SFGs are more likely to have experienced multiple star-formation episodes, with the fractional mass formed during the older (≥1 Gyr) episode being larger, suggesting that high-redshift SFGs assembled their central regions earlier and then kept growing in central mass as they became more compact. The SFH of compact QGs does not significantly differ from the average for this category, and shows an early burst followed by a gradual decline of the star formation rate. The SFH of extended QGs, however, is similar to that of post-starburst galaxies, and their morphology is also frequently disturbed. Knowledge of the SFH also enables us to empirically reconstruct the structural evolution of individual galaxies. While the progenitor effect is clearly observed and accounted for in our analysis, it alone is insufficient to explain the observed structural evolution. We show that, as they evolve from the star-forming phase to quiescence, galaxies grow massive dense stellar cores. Quenching begins at the center and then propagates outward to the rest of the structure. We discuss possible physical scenarios for the observed evolution and find that our empirical constraints are in good quantitative agreement with the model predictions from dissipative accretion of gas to the center followed by massive starbursts before final quiescence (wet compaction).
Authors: Zhiyuan Ji, Mauro Giavalisco. Date: Mon, 21 Nov 2022.

Smooth Spatial Modeling of Extreme Mediterranean Precipitation
https://scholar.archive.org/work/cmxc2syjmzat5bnom2e2f4pa2q
Extreme precipitation events can lead to disastrous floods, which are the most significant natural hazards in the Mediterranean regions. Therefore, a proper characterization of these events is crucial. Extreme events defined as annual maxima can be modeled with the generalized extreme value (GEV) distribution. Owing to spatial heterogeneity, the distribution of extremes is non-stationary in space. To take non-stationarity into account, the parameters of the GEV distribution can be viewed as functions of covariates that convey spatial information. Such functions may be implemented as a generalized linear model (GLM) or with a more flexible non-parametric non-linear model such as an artificial neural network (ANN). In this work, we evaluate several statistical models that combine the GEV distribution with a GLM or with an ANN for a spatial interpolation of the GEV parameters. Key issues are the proper selection of the complexity level of the ANN (i.e., the number of hidden units) and the proper selection of spatial covariates. Three sites are included in our study: a region in the French Mediterranean, the Cap Bon area in northeast Tunisia, and the Merguellil catchment in central Tunisia. The comparative analysis aims at assessing the genericity of state-of-the-art approaches to interpolate the distribution of extreme precipitation events.
Authors: Hela Hammami, Julie Carreau, Luc Neppel, Sadok Elasmi, Haifa Feki. Date: Mon, 21 Nov 2022.

Keep Calm and Carry On: The Short- vs. Long-Run Effects of Mindfulness Meditation on (Academic) Performance
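As a worked illustration of the GEV machinery the precipitation abstract builds on: annual-maxima fits are usually summarized through return levels, the quantile exceeded once every T years on average. The sketch below inverts the standard GEV distribution function; any parameter values passed to it are illustrative, not the fitted Mediterranean values.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level from GEV parameters (sketch).

    Solves F(x) = 1 - 1/T for the GEV distribution of annual maxima,
    F(x) = exp(-(1 + xi*(x - mu)/sigma)**(-1/xi)) for xi != 0,
    falling back to the Gumbel limit as xi -> 0.
    """
    y = -math.log(1.0 - 1.0 / T)   # so that F(x) = 1 - 1/T
    if abs(xi) < 1e-12:            # Gumbel limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** -xi - 1.0)
```

In the spatial models of the paper, mu, sigma and xi would themselves be outputs of a GLM or ANN evaluated at the site's covariates.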
https://scholar.archive.org/work/nhk7baxubbgyhb7hhmiehdyz6i
Mindfulness-based meditation practices are becoming increasingly popular in Western societies, including in the business world and in education. While the scientific literature has largely documented the benefits of mindfulness meditation for mental health, little is known about potential spillovers of these practices on other important life outcomes, such as performance. We address this question through a field experiment in an educational setting. We study the causal impact of mindfulness meditation on academic performance through a randomized evaluation of a well-known 8-week mindfulness meditation training delivered to university students on campus. As expected, the intervention improves students' mental health and non-cognitive skills. However, it takes time before students' performance can benefit from mindfulness meditation: we find that, if anything, the intervention marginally decreases average grades in the short run, i.e., during the exam period right after the end of the intervention, whereas it significantly increases academic performance, by about 0.4 standard deviations, in the long run (ca. 6 months after the end of the intervention). We investigate the underlying mechanisms and discuss the implications of our results.
Authors: Lea Cassar, Mira Fischer, Vanessa Valero. Date: Fri, 18 Nov 2022.

Testing for context-dependent changes in neural encoding in naturalistic experiments
https://scholar.archive.org/work/yxflqanfjrcpthmbm2u6zyxwtm
We propose a decoding-based approach to detect context effects on neural codes in longitudinal neural recording data. The approach is agnostic to how information is encoded in neural activity, and can control for a variety of possible confounding factors present in the data. We demonstrate our approach by determining whether it is possible to decode location encoding from prefrontal cortex in the mouse and, further, testing whether the encoding changes due to task engagement.
Authors: Yenho Chen, Carl W. Harris, Xiaoyu Ma, Zheng Li, Francisco Pereira, Charles Y. Zheng. Date: Thu, 17 Nov 2022.

Agafonov's Theorem for finite and infinite alphabets and probability distributions different from equidistribution
https://scholar.archive.org/work/vdwoh3deynhkfk7smdkra5g5he
An infinite sequence α over an alphabet Σ is μ-distributed w.r.t. a probability map μ if, for every finite string w, the limiting frequency of w in α exists and equals μ(w). We study the question of how to characterize the probability maps μ for which μ-distributedness is preserved across finite-state selection, or equivalently, by selection by programs using constant space. We prove the following result for any finite or countably infinite alphabet Σ: every finite-state selector over Σ selects a μ-distributed sequence from every μ-distributed sequence if and only if μ is induced by a Bernoulli distribution on Σ, that is, a probability distribution on the alphabet extended to words by taking the product. The primary, and remarkable, consequence of our main result is a complete characterization of the set of probability maps, on finite and infinite alphabets, for which finite-state selection preserves μ-distributedness. The main positive takeaway is that (the appropriate generalization of) Agafonov's Theorem holds for Bernoulli distributions (rather than just equidistributions) on both finite and countably infinite alphabets. As a further consequence, we obtain a result in the area of symbolic dynamical systems: the shift-invariant measures μ on Σ^ω such that any finite-state selector preserves the property of genericity for μ are exactly the positive Bernoulli measures.
Authors: Thomas Seiller, Jakob Grue Simonsen. Date: Tue, 15 Nov 2022.

Understanding Approximation for Bayesian Inference in Neural Networks
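The frequency-preservation phenomenon in the Agafonov abstract is easy to check empirically. The sketch below implements one simple constant-space selector: select every symbol that immediately follows an occurrence of a fixed pattern. For a Bernoulli(p) sequence the selected subsequence has the same limiting frequency p, as the theorem predicts; the pattern and parameters here are illustrative choices.

```python
import random

def select_after(pattern, seq):
    """Finite-state selection (sketch): select each symbol that
    immediately follows an occurrence of `pattern`. Tracking the last
    len(pattern) symbols needs only constant space."""
    k = len(pattern)
    return [seq[i] for i in range(k, len(seq)) if seq[i - k:i] == pattern]

rng = random.Random(42)
p = 0.7
seq = [1 if rng.random() < p else 0 for _ in range(200000)]  # Bernoulli(p)
sel = select_after([1, 0], seq)      # symbols following the pattern "10"
freq = sum(sel) / len(sel)           # should be close to p
```

By independence, the symbol after any fixed pattern is still Bernoulli(p), which is why `freq` concentrates around 0.7.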
https://scholar.archive.org/work/rvkzdhnl75csjd7m52yfa3ouya
Bayesian inference has theoretical attractions as a principled framework for reasoning about beliefs. However, the motivations of Bayesian inference which claim it to be the only 'rational' kind of reasoning do not apply in practice. They create a binary split in which all approximate inference is equally 'irrational'. Instead, we should ask ourselves how to define a spectrum of more- and less-rational reasoning that explains why we might prefer one Bayesian approximation to another. I explore approximate inference in Bayesian neural networks and consider the unintended interactions between the probabilistic model, approximating distribution, optimization algorithm, and dataset. The complexity of these interactions highlights the difficulty of any strategy for evaluating Bayesian approximations which focuses entirely on the method, outside the context of specific datasets and decision-problems. For given applications, the expected utility of the approximate posterior can measure inference quality. To assess a model's ability to incorporate different parts of the Bayesian framework we can identify desirable characteristic behaviours of Bayesian reasoning and pick decision-problems that make heavy use of those behaviours. Here, we use continual learning (testing the ability to update sequentially) and active learning (testing the ability to represent credence). But existing continual and active learning set-ups pose challenges that have nothing to do with posterior quality, which can distort their ability to evaluate Bayesian approximations. These unrelated challenges can be removed or reduced, allowing better evaluation of approximate inference methods.
Author: Sebastian Farquhar. Date: Fri, 11 Nov 2022.

Evolving provably explainable fuzzy pattern tree classifiers using grammatical evolution
https://scholar.archive.org/work/gmwtgv6kg5ehndxtmhtnlrorou
The central hypothesis of this thesis is that Fuzzy Pattern Trees (FPTs) can be considered a powerful explainable artificial intelligence (XAI) technique and that highly accurate FPTs can be evolved using Grammatical Evolution (GE), an evolutionary computation technique. While no single definition of what constitutes an XAI system is agreed upon, we investigate the suitability of a system to be deemed as XAI based on four core criteria: transparency, accuracy, trustworthiness and the ability to incorporate domain knowledge. We start with a bottom-up system to identify useful subtrees, i.e., building blocks, in a GE run, which were subsequently made available for later generations. While this improved performance, the complicated and unintuitive nature of the full solutions found using GE made them unsuitable for XAI. We then pivot strategy and aim to create an intrinsically interpretable model. Specifically, we evolved FPTs using GE, which we call FGE. We systematically explore their ability to satisfy the core criteria described above. We first show FGE meets the accuracy requirement and investigate the effect ensemble methods have on performance, improving it in half of the benchmarks. Parsimony pressure was shown to reduce the size of the trees with no compromise in performance. Next, the transparency and trustworthiness of FPTs are directly investigated by a domain expert. This is done using a selection of real-world benchmark problem sets. Models with sensible logic, as judged by the expert, outperform models with poor logic, validating that FGE creates interpretable models.
Author: Aidan Murphy. Date: Thu, 10 Nov 2022.

Naming the largest number: Exploring the boundary between mathematics and the philosophy of mathematics
https://scholar.archive.org/work/g723wnhm2nbzbmmwiwigf4jgke
What is the largest number accessible to the human imagination? The question is neither entirely mathematical nor entirely philosophical. Mathematical formulations of the problem fall into two classes: those that fail to fully capture the spirit of the problem, and those that turn it back into a philosophical problem.
Author: David Simmons. Date: Wed, 09 Nov 2022.

The Spatial Scale Dependence of The Hurst Coefficient in Global Annual Precipitation Data, and Its Role in Characterising Regional Precipitation Deficits within a Naturally Changing Climate
https://scholar.archive.org/work/nnudba3ifba4riqrmmupg4wfom
Hurst's seminal characterisation of long-term persistence (LTP) in geophysical records more than seven decades ago continues to inspire investigations into the Hurst phenomenon, not just in hydrology and climatology, but in many other scientific fields. Here, we present a new theoretical development based on stochastic Hurst–Kolmogorov (HK) dynamics that explains the recent finding that the Hurst coefficient increases with the spatial scale of averaging for regional annual precipitation. We also present some further results on the scale dependence of H in regional precipitation, and reconcile an apparent inconsistency between sample results and theory. LTP in average basin-scale precipitation is shown to be consistent with LTP in the annual flows of some large river basins. An analysis of the crossing properties of precipitation deficits in regions exhibiting LTP shows that the Hurst coefficient can be a parsimonious descriptor of the risk of severe precipitation deficits. No evidence is found for any systematic trend in precipitation deficits attributable to anthropogenic climate change across the regions analysed. Future precipitation deficit risk assessments should, in the first instance, be based on stochastic HK simulations that encompass the envelope of uncertainty associated with LTP, and not rely exclusively on GCM projections that may not properly capture long-term natural variability in the climate. Some views and opinions are expressed on the implications for policy making in sustainable water resources management.
Authors: Enda O'Connell, Greg O'Donnell, Demetris Koutsoyiannis. Date: Mon, 07 Nov 2022.

Unclonability and Quantum Cryptanalysis: From Foundations to Applications
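For readers new to the Hurst coefficient discussed above, the classical aggregated-variance estimator makes the idea concrete: for a stationary series the variance of block means scales like m^(2H-2), so H is read off a log-log slope. This is a textbook sketch (shown on white noise, for which H = 0.5), not the HK estimation procedure used in the paper.

```python
import math
import random

def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32)):
    """Aggregated-variance estimate of the Hurst coefficient (sketch).

    Computes the sample variance of non-overlapping block means at
    several block sizes m, then fits log(var) vs log(m) by least
    squares; the slope is 2H - 2.
    """
    logs_m, logs_v = [], []
    for m in scales:
        k = len(x) // m
        means = [sum(x[i * m:(i + 1) * m]) / m for i in range(k)]
        mu = sum(means) / k
        var = sum((v - mu) ** 2 for v in means) / (k - 1)
        logs_m.append(math.log(m))
        logs_v.append(math.log(var))
    n = len(scales)
    mbar, vbar = sum(logs_m) / n, sum(logs_v) / n
    slope = (sum((a - mbar) * (b - vbar) for a, b in zip(logs_m, logs_v))
             / sum((a - mbar) ** 2 for a in logs_m))
    return 1.0 + slope / 2.0

rng = random.Random(7)
white = [rng.gauss(0.0, 1.0) for _ in range(20000)]  # no persistence
H = hurst_aggvar(white)  # close to 0.5 for white noise
```

LTP corresponds to H above 0.5; the paper's finding is that, for precipitation, estimates of H grow as the series are averaged over larger spatial scales.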
https://scholar.archive.org/work/rfthbhoi4zcrff65a2fyzhjubq
The impossibility of creating perfect identical copies of unknown quantum systems is a fundamental concept in quantum theory and one of the main non-classical properties of quantum information. This limitation imposed by quantum mechanics, famously known as the no-cloning theorem, has played a central role in quantum cryptography as a key component in the security of quantum protocols. In this thesis, we look at unclonability in a broader context in physics and computer science and, more specifically, through the lens of cryptography, learnability and hardware assumptions. We introduce new notions of unclonability in the quantum world, namely quantum physical unclonability, and study the relationship with cryptographic properties and assumptions such as unforgeability and quantum pseudorandomness. The purpose of this study is to bring new insights into the field of quantum cryptanalysis and into the notion of unclonability itself. We also discuss several applications of this new type of unclonability as a cryptographic resource for designing provably secure quantum protocols. Furthermore, we present a new practical cryptanalysis technique concerning the problem of approximate cloning of quantum states. We design a quantum machine learning-based cryptanalysis algorithm to demonstrate the power of quantum learning tools as both attack strategies and powerful tools for the practical study of quantum unclonability.
Author: Mina Doosti. Date: Mon, 31 Oct 2022.

Nature of the Galaxies On Top Of Quasars producing MgII absorption
https://scholar.archive.org/work/xlyv6nyt4ngijo47rqd466rtlu
Quasar-galaxy pairs at small separations are important probes of gas flows in the disk-halo interface in galaxies. We study host galaxies of 198 MgII absorbers at 0.39≤ z_abs≤1.05 that show detectable nebular emission lines in the SDSS spectra. We report measurements of impact parameter (5.9≤ D[kpc]≤16.9) and absolute B-band magnitude (-18.7≤ M_B≤ -22.3 mag) of host galaxies of 74 of these absorbers using multi-band images from the DESI Legacy Imaging Survey, more than doubling the number of known host galaxies with D≤17 kpc. This has allowed us to quantify the relationship between MgII rest equivalent width (W_2796) and D, with best-fit parameters of W_2796(D=0) = 3.44 ± 0.20 Angstrom and an exponential scale length of 21.6^+2.41_-1.97 kpc. We find a significant anti-correlation between M_B and D, and M_B and W_2796, consistent with the brighter galaxies producing stronger MgII absorption. We use stacked images to detect average emissions from galaxies in the full sample. Using these images and stacked spectra, we derive the mean stellar mass (9.4≤ log(M_*/M_⊙) ≤ 9.8), star formation rate (2.3≤ SFR[M_⊙ yr^-1] ≤ 4.5), age (2.5-4 Gyr), metallicity (12+log(O/H)∼8.3) and ionization parameter (log q[cm s^-1]∼ 7.7) for these galaxies. The average M_* found is lower than those of MgII absorber hosts studied in the literature. The average SFR and metallicity inferred are consistent with that expected in the main sequence and the known stellar mass-metallicity relation, respectively. High spatial resolution follow-up spectroscopic and imaging observations of this sample are imperative for probing gas flows close to the star-forming regions of high-z galaxies.
Authors: Labanya Kumar Guha, Raghunathan Srianand. Date: Mon, 31 Oct 2022.

SIMPLE-RC: Group Network Inference with Non-Sharp Nulls and Weak Signals
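The quoted fit in the MgII abstract can be turned into a tiny utility for predicting absorption strength at a given impact parameter: an exponential profile with W_2796(0) = 3.44 Angstrom and scale length 21.6 kpc. Treat this as a sketch of the quoted best-fit numbers, not the paper's full analysis.

```python
import math

def w2796_profile(D, w0=3.44, scale=21.6):
    """MgII rest equivalent width vs impact parameter (sketch).

    W_2796(D) = W_2796(0) * exp(-D / scale), with W_2796(0) in
    Angstrom, D and the scale length in kpc, using the best-fit
    values quoted in the abstract.
    """
    return w0 * math.exp(-D / scale)
```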
https://scholar.archive.org/work/fdtshhuqujfwrgzy3pnewsrdde
Large-scale network inference with uncertainty quantification has important applications in natural, social, and medical sciences. The recent work of Fan, Fan, Han and Lv (2022) introduced a general framework of statistical inference on membership profiles in large networks (SIMPLE) for testing the sharp null hypothesis that a pair of given nodes share the same membership profiles. In real applications, there are often groups of nodes under investigation that may share similar membership profiles in the presence of relatively weaker signals than the setting considered in SIMPLE. To address these practical challenges, in this paper we propose a SIMPLE method with random coupling (SIMPLE-RC) for testing the non-sharp null hypothesis that a group of given nodes share similar (not necessarily identical) membership profiles under weaker signals. Utilizing the idea of random coupling, we construct our test as the maximum of the SIMPLE tests for subsampled node pairs from the group. This technique significantly reduces the correlation among individual SIMPLE tests while largely maintaining the power, enabling delicate analysis of the asymptotic distributions of the SIMPLE-RC test. Our method and theory cover both the cases with and without node degree heterogeneity. These new theoretical developments are empowered by a second-order expansion of spiked eigenvectors under the ℓ_∞-norm, built upon our work for random matrices with weak spikes. Our theoretical results and the practical advantages of the newly suggested method are demonstrated through several simulation and real data examples.
Authors: Jianqing Fan, Yingying Fan, Jinchi Lv, Fan Yang. Date: Mon, 31 Oct 2022.

Unstructured Grid Dynamical Modeling of Planetary Atmospheres using planetMPAS: The Influence of the Rigid Lid, Computational Efficiency, and Examples of Martian and Jovian Application
https://scholar.archive.org/work/bmsq4sknxndfpp6qoeaqy7juvu
We present a new planetary global circulation model, planetMPAS, based on the state-of-the-art NCAR MPAS General Circulation Model. Taking advantage of the cross compatibility between WRF and MPAS, planetMPAS includes most of the planetWRF physics parameterization schemes for terrestrial planets such as Mars and Titan. PlanetMPAS also includes a set of physics that represents radiative transfer, dry convection, moist convection and its associated microphysics for the Jovian atmosphere. We demonstrate that, despite the rigid-lid approximation, planetMPAS is suitable to simulate the climate systems in Martian and Jovian atmospheres, with potential application to slowly rotating planets such as Titan. Simulations using planetMPAS show that the new model can reproduce many aspects of the observed features on Mars and Jupiter, such as the seasonal CO2 cycle, polar argon enrichment, zonal mean temperature, and qualitative dust opacity on Mars, as well as the equatorial superrotation and banded zonal wind patterns on Jupiter.
Authors: Yuan Lian, Mark I. Richardson. Date: Mon, 31 Oct 2022.