IA Scholar Query: A General Framework for Probabilistic Characterizing Formulae.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en) · info@archive.org · Fri, 23 Dec 2022 00:00:00 GMT · fatcat-scholar · https://scholar.archive.org/help

Classification of COVID-19 on Chest X-Ray Images Through the Fusion of HOG and LPQ Feature Sets
https://scholar.archive.org/work/kycoff7nlfgahbsq4k4qbd624e
COVID-19 is a contagious disease that affects people's everyday life, personal health, and national economies. According to clinical studies, COVID-19-infected individuals most commonly develop severe symptoms following the primary infection. The chest radiograph (also known as the chest X-ray or CXR) and the chest CT scan are reliable imaging methods for diagnosing COVID-19-infected individuals. This article proposes a novel technique for classifying CXR scan images as healthy or COVID-19-affected by fusing features extracted using the Histogram of Oriented Gradients (HOG) and Local Phase Quantization (LPQ). This experimental study employed 7232 CXR images from a COVID-19 Radiography dataset as training and testing data. Models were built on both the individual and the fused feature sets and fed into machine learning classifiers. The testing results reveal that the fused architecture outperforms current methods for identifying COVID-19 patients in terms of accuracy, reaching 97.15%.

Rebin Hamaamin, Shakhawan Wady, Ali Kareem · Fri, 23 Dec 2022 00:00:00 GMT

Comparison of Single Transmit Queuing System Including Proportions of Execution Using Fuzzy Queuing Model and Intuitionistic Fuzzy Queuing Model with Two Classes
https://scholar.archive.org/work/5muydvk3u5hwvhoxnovv353tsu
This research presents a two-class single transmit queuing model and computes the model's execution proportions under a vague (uncertain) environment. The main purpose of this inquiry is to compare the results of a single transmit queuing model based on fuzzy queuing theory with those based on intuitionistic fuzzy queuing theory. Triangular fuzzy numbers and triangular intuitionistic fuzzy numbers are used to describe the entry (arrival) and administration (service) rates. The fuzzy queuing theory model's evaluation metrics are supplied as a range of values, whereas the intuitionistic fuzzy queuing theory model offers a wider range of values. An analysis is offered to discover quality measures using a proposed methodology in which the fuzzy values are retained as is, without being converted into crisp values; hence the proposed method can be used to draw scientific conclusions in an uncertain environment. Two numerical problems are solved to demonstrate the sustainability of the suggested method. The model components were then subjected to sensitivity analyses; sensitivity testing is used to find discrepancies between the two classes when calculating their execution proportions. We employed the triangular fuzzy number in an intuitionistic fuzzy environment, accounting for the degrees of acceptance and rejection so that the sum of both values is always less than 1. For this type of fuzzy number, we gave various non-normal arithmetic procedures. The proposed formulations are simple and direct, having been devised using classical algebraic mathematics, and this strategy is straightforward to use in actual situations. The nearest interval number is then used to round a TIFN (triangular intuitionistic fuzzy number). The key benefit of this approach is that, using a multi-section algorithm, we can quickly solve a bounded unconstrained optimization problem with coefficients given as TIFNs.
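The triangular-fuzzy arithmetic behind queuing measures like these can be illustrated with a minimal sketch. The pointwise evaluation below is an assumption of this sketch, not the paper's method (the paper keeps fuzzy values unaltered via its own multi-section algorithm); the function names are hypothetical and the queue-length formula is the standard M/M/1 result L = λ/(μ − λ).

```python
# Minimal sketch (assumptions mine): a triangular fuzzy number (TFN) is a
# triple (lower, modal, upper). We evaluate the classical M/M/1 expected
# queue length L = lambda/(mu - lambda) at the TFN endpoints, which is a
# simplification of full fuzzy-queuing analysis.

def mm1_queue_length(arrival, service):
    """Fuzzy expected queue length from TFN arrival/service rates.

    L is increasing in the arrival rate and decreasing in the service
    rate, so the lower bound pairs the lowest arrival rate with the
    highest service rate, and vice versa for the upper bound.
    """
    lo = arrival[0] / (service[2] - arrival[0])
    mid = arrival[1] / (service[1] - arrival[1])
    hi = arrival[2] / (service[0] - arrival[2])
    return (lo, mid, hi)

# Fuzzy arrival rate "about 2 per unit time", fuzzy service rate "about 5"
lam = (1.0, 2.0, 3.0)
mu = (4.0, 5.0, 6.0)
L = mm1_queue_length(lam, mu)
print(L)  # (0.2, 0.666..., 3.0): a TFN-like (lower, modal, upper) triple
```

The spread of the resulting triple is one way to see how input vagueness propagates into the performance measure, which is the kind of comparison the abstract describes between fuzzy and intuitionistic fuzzy models.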
The present methodologies and strategies are intended to apply to a variety of contemporary decision-making challenges in economics, finance, administration, and ecology, which are the focus of our future study.

Mon, 31 Oct 2022 00:00:00 GMT

Descriptive Combinatorics and Distributed Algorithms
https://scholar.archive.org/work/pjgjlnfrkzd5vmd7p7c5yr66pe
In this article we shall explore a fascinating area called descriptive combinatorics and its recently discovered connections to distributed algorithms, a fundamental part of computer science that is becoming increasingly important in the modern era of decentralized computation. The interdisciplinary nature of these connections means that there is very little common background shared by the researchers who are interested in them. With this in mind, this article was written under the assumption that the reader would have close to no background in either descriptive set theory or computer science. The reader will judge to what degree this endeavor was successful. The article comprises two parts. In the first part we give a brief introduction to some of the central notions and problems of descriptive combinatorics. The second part is devoted to a survey of some of the results concerning the [...]

Anton Bernshteyn · Sat, 01 Oct 2022 00:00:00 GMT

Isadore M. Singer (1924–2021) In Memoriam Part 1: Scientific Works
https://scholar.archive.org/work/aejx3oq2lvch5gdpwoqpzzlbqe
Robert Bryant, Jean-Michel Bismut, Jeff Cheeger, Phillip Griffiths, Simon Donaldson, Nigel Hitchin, H. Blaine Lawson, Michail Gromov, Adam Marcus, Daniel Spielman, Nikhil Srivastava, Edward Witten · Sat, 01 Oct 2022 00:00:00 GMT

Neutrosophic Fuzzy Association Rule Generation-Based Big Data Mining Analysis Algorithm
https://scholar.archive.org/work/wbrjynbtdjgx7bphh3uf7hnsde
As a very common and classic big data (BD) mining algorithm, the association rule data mining (DM) algorithm is often used to determine the internal correlation between different items, with a threshold set to determine the strength of the correlation. However, the traditional association rule algorithm is better suited to establishing Boolean association rules between items of different data types, and the sharp boundaries it imposes on the data degrade the performance of the association rules. To overcome this shortcoming of classic DM, this article introduces association rules, support and confidence, the Apriori algorithm, and fuzzy association rules based on the neutrosophic fuzzy association rule (NFAR). Using a supermarket purchase database, the paper draws radar charts to describe the correlations between different goods, computes the support of different item sets, and calculates confidence from the support of the association rules; finally, the association rules are generated. Comparing the results produced by NFAR and ordinary association rules, the accuracy of the NFAR algorithm on small data sets is 88.48%, while the accuracy of the traditional association rules algorithm is only 80.87%, nearly 8 percentage points lower. On large data sets, the prediction accuracy of the neutrosophic fuzzy association rules algorithm is 95.68%, while that of the traditional method is only 89.63%. Therefore, the NFAR algorithm can improve the accuracy and effectiveness of DM and has great application prospects and development space in big data mining and analysis.

Qunfeng Wei, Bin Qi, Raghavan Dhanasekaran · Sun, 25 Sep 2022 00:00:00 GMT

Quantized Constrained Molecular Chains: Vibrations, Internal Rotations and Polymerization
https://scholar.archive.org/work/k5v2beg5o5adjdxg2kbok4djei
The present work has a twofold purpose. (a) It proposes a quantum-mechanical approach to constrained molecular chains and their small vibrations and rotations, employing in a compact way vector variables and operators associated with the constituent units of the chain. The methods here differ from standard approaches based upon Cartesian coordinates and normal modes, and generalize previous quantum Hamiltonians describing only rotational degrees of freedom. Several models in D = 2, 3 spatial dimensions, with new Hermitean Hamiltonians, are formulated and analyzed. The chains studied successively display an increasing number of constraints: freely-jointed, freely-rotating and with constrained torsions. Conservation of total orbital angular momentum is analyzed. As a partial test of the present approach, the vibrational frequencies of certain triatomic molecules (water vapour, hydrogen sulfide, heavy water and sulfur dioxide) are computed and shown to be consistent with experimental data. (b) A new quantum-mechanical analysis of polymerization, namely the growth of a freely-jointed molecular chain (of the kind considered above) by binding an additional unit 1 to the chain, is presented. Chain and unit move in a very dilute solution in a fluid at rest in thermal equilibrium at about room temperature. The analysis is based upon a mixed (quantum-classical) distribution function in phase space: a quantum Wigner-like one for unit 1 and a classical Liouville one for the chain. That leads to an approximate Smoluchowski equation for unit 1 alone and, through it, to the mean first passage time (MFPT) for unit 1 to become bound by the chain. The resulting MFPT displays a temperature dependence consistent with the Arrhenius formula for rate constants in chemical reactions.

Ramon F. Alvarez-Estrada · Fri, 23 Sep 2022 00:00:00 GMT

Proceedings of the 2nd International Workshop on Learning to Quantify (LQ 2022)
https://scholar.archive.org/work/76ayl25qfralpmcm3u2rpofkcq
The 2nd International Workshop on Learning to Quantify (LQ 2022 – https://lq-2022.github.io/) was held in Grenoble, FR, on September 23, 2022, as a satellite workshop of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2022). While the 1st edition of the workshop (LQ 2021 – https://cikmlq2021.github.io/, co-located with the 30th ACM International Conference on Information and Knowledge Management (CIKM 2021)) had to be an entirely online event, LQ 2022 was a hybrid event, with presentations given in person and with both in-person and remote attendees.

Juan José Del Coz, Pablo González, Alejandro Moreo, Fabrizio Sebastiani · Fri, 23 Sep 2022 00:00:00 GMT

Boosting Simple Learners
https://scholar.archive.org/work/6bt5pb6tojhwfjybbqmmnnzzpu
Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules-of-thumb" from an "easy-to-learn class" (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that Ω(1/γ^2) weak hypotheses with γ-margin are sometimes necessary, our new method requires only Õ(1/γ) weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters which capture expressivity in boosting. As a corollary we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps.
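The majority-vote aggregation that the abstract contrasts with its deeper aggregation rules can be illustrated with classical AdaBoost over decision stumps (weak hypotheses of bounded VC dimension). This is a minimal sketch of the classical algorithm the paper improves upon, not the paper's new method; all function names here are illustrative.

```python
import numpy as np

# Minimal sketch of classical boosting: AdaBoost with 1-D threshold stumps.
# The final predictor is a weighted majority vote over weak hypotheses,
# which is exactly the aggregation style the paper's result goes beyond.

def best_stump(X, y, w):
    """Return (threshold, sign, error) minimizing weighted 0/1 error."""
    best = (None, None, 1.0)
    for t in np.unique(X):
        for s in (1, -1):
            pred = np.where(X >= t, s, -s)
            err = np.sum(w[pred != y])
            if err < best[2]:
                best = (t, s, err)
    return best

def adaboost(X, y, rounds=20):
    n = len(X)
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    stumps = []
    for _ in range(rounds):
        t, s, err = best_stump(X, y, w)
        err = max(err, 1e-12)        # avoid division by zero on perfect fit
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X >= t, s, -s)
        w *= np.exp(-alpha * y * pred)  # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, t, s))
    return stumps

def predict(stumps, X):
    """Weighted majority vote over the learned stumps."""
    agg = sum(a * np.where(X >= t, s, -s) for a, t, s in stumps)
    return np.sign(agg)

X = np.array([0.1, 0.3, 0.45, 0.6, 0.8, 0.9])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost(X, y)
print(predict(model, X))  # separable data, so all labels are recovered
```

On margin-γ data, analyses of this scheme need on the order of 1/γ^2 rounds, which is the lower bound the abstract says its deeper (non-majority-vote) aggregation circumvents.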
Along the way, we establish and exploit connections with Discrepancy Theory.

Noga Alon, Alon Gonen, Elad Hazan, Shay Moran · Thu, 22 Sep 2022 00:00:00 GMT

A Novel Hybrid Machine Learning Model for Wind Speed Probabilistic Forecasting
https://scholar.archive.org/work/pareuqmqqnhhhmytubcxiqdiui
Accurately capturing wind speed fluctuations and quantifying the uncertainties has important implications for energy planning and management. This paper proposes a novel hybrid machine learning model for the probabilistic prediction of wind speed. The model couples the light gradient boosting machine (LGB) model with the Gaussian process regression (GPR) model, where the LGB model provides high-precision deterministic wind speed predictions and the GPR model provides reliable probabilistic predictions. The proposed model was applied to predict wind speeds for a real wind farm in the United States. Eight contrasting models are compared in terms of deterministic and probabilistic prediction, respectively. The experimental results show that the LGB-GPR model improves the point forecast accuracy (RMSE) by up to 20.0% and the probabilistic forecast reliability (CRPS) by up to 21.5% compared to a single GPR model. This research is of great significance for improving the reliability of probabilistic wind speed predictions and for the sustainable development of new energy.

Guanjun Liu, Chao Wang, Hui Qin, Jialong Fu, Qin Shen · Thu, 22 Sep 2022 00:00:00 GMT

Optimization with Constraint Learning: A Framework and Survey
https://scholar.archive.org/work/bg52kjeirvd57kmetymsfpkv7i
Many real-life optimization problems contain one or more constraints or objectives for which there are no explicit formulas. If data is available, however, it can be used to learn the constraints. The benefits of this approach are clear, but the process needs to be carried out in a structured manner. This paper therefore provides a framework for Optimization with Constraint Learning (OCL) which we believe will help to formalize and direct the process of learning constraints from data. This framework includes the following steps: (i) setup of the conceptual optimization model, (ii) data gathering and preprocessing, (iii) selection and training of predictive models, (iv) resolution of the optimization model, and (v) verification and improvement of the optimization model. We then review the recent OCL literature in light of this framework, and highlight current trends as well as areas for future research.

Adejuyigbe Fajemisin, Donato Maragno, Dick den Hertog · Thu, 22 Sep 2022 00:00:00 GMT

Multipartite channel assemblages
https://scholar.archive.org/work/654v6kjz3ncadplpwkurpp64jm
Motivated by the recent studies on post-quantum steering, we generalize the notion of bipartite channel steering by introducing the concept of multipartite no-signaling channel assemblages. We first show that beyond the bipartite case, the no-signaling and quantum descriptions of channel assemblages do not coincide. Using the Choi-Jamiołkowski isomorphism, we present a complete characterization of these classes of assemblages and use this characterization to provide sufficient conditions for extremality of quantum channel assemblages within the set of all no-signaling channel assemblages. Finally, we introduce and discuss a relaxed version of channel steering where only certain subsystems obey the no-signaling constraints. In this latter asymmetric scenario we show the possibility of certifying a perfect key bit that is secure against a general no-signaling eavesdropper.

Michał Banacki, Ravishankar Ramanathan, Paweł Horodecki · Thu, 22 Sep 2022 00:00:00 GMT

Gaussian Agency problems with memory and Linear Contracts
https://scholar.archive.org/work/uovonylqgvew3fnausghv7ijpe
Can a principal still offer optimal dynamic contracts that are linear in end-of-period outcomes when the agent controls a process that exhibits memory? We provide a positive answer by considering a general Gaussian setting where the output dynamics are not necessarily semi-martingales or Markov processes. We introduce a rich class of principal-agent models that encompasses dynamic agency models with memory. From the mathematical point of view, we develop a methodology to deal with the possible non-Markovianity and non-semimartingality of the control problem, which can no longer be directly solved by means of the usual Hamilton-Jacobi-Bellman equation. Our main contribution is to show that, for one-dimensional models, this setting always allows for optimal linear contracts in end-of-period observable outcomes with a deterministic optimal level of effort. In higher dimensions, we show that linear contracts are still optimal when the effort cost function is radial, and we quantify the gap between linear contracts and optimal contracts for more general quadratic costs of effort.

Eduardo Abi Jaber · Thu, 22 Sep 2022 00:00:00 GMT

A formal algebraic approach for the quantitative modeling of connectors in architectures
https://scholar.archive.org/work/trxi3vrtdbehvgx6p2qxwg4jiq
In this paper we propose an algebraic formalization of connectors in the quantitative setting, in order to address their non-functional features in architectures of component-based systems. We first present a weighted Algebra of Interactions over a set of ports and a commutative and idempotent semiring, which is proved sufficient for modeling well-known coordination schemes in the weighted setup. In turn, we study a weighted Algebra of Connectors over a set of ports and a commutative and idempotent semiring, which extends the weighted Algebra of Interactions with types that encode Rendezvous and Broadcast synchronization. We show the expressiveness of the algebra by modeling the weighted connectors of several coordination schemes. Moreover, we derive two subalgebras, namely the weighted Algebra of Synchrons and the weighted Algebra of Triggers, and study their properties. Finally, we introduce a concept of congruence relation for connectors in the weighted setup and provide conditions for proving such a congruence.

Christina Chrysovalanti Fountoukidou, Maria Pittou · Wed, 21 Sep 2022 00:00:00 GMT

Jet thermalization in QCD kinetic theory
https://scholar.archive.org/work/rt5ux5vfpzd7hedtrqighx4rsy
We perform numerical studies in QCD kinetic theory to investigate the energy and angular profiles of a high energy parton, as a proxy for a jet produced in heavy ion collisions, passing through a Quark-Gluon Plasma (QGP). We find that the fast parton loses energy to the plasma mainly via a radiative turbulent gluon cascade that transports energy locally from the jet down to the temperature scale, where dissipation takes place. In this first stage, the angular structure of the turbulent cascade is found to be relatively collimated. However, once the lost energy reaches the plasma temperature it is rapidly transported to large angles w.r.t. the jet axis and thermalizes. We investigate the contribution of the soft jet constituents to the total jet energy. We show that for jet opening angles of about 0.3 rad or smaller the effect is negligible. Conversely, larger opening angles become increasingly sensitive to the thermal component of the jet and thus to medium response. Our results showcase the importance of the jet cone size in mitigating or enhancing the details of dissipation in jet quenching observables.

Y. Mehtar-Tani, S. Schlichting, I. Soudi · Wed, 21 Sep 2022 00:00:00 GMT

Hopf algebras and non-associative algebras in the study of iterated-integral signatures and rough paths
https://scholar.archive.org/work/vri56bn4anbznfnl6s7cmyo5fq
Over the course of three different collaborative projects, we gather evidence of how Hopf, Lie and pre-Lie, Zinbiel and dendriform, as well as Tortkara algebras appear in and influence the systematic combinatorial treatment of iterated-integral signatures of paths and rough paths. First, we investigate how Lie and pre-Lie structures of Lie polynomials and trees give rise to Hopf algebra homomorphisms which one can use to translate the higher orders of rough paths, and thus in a sense renormalize them. We obtain an interplay at the level of the rough differential equations (RDEs) driven by the translated rough path versus the original rough path, and furthermore explore how this translation-renormalization is in bijection with a renormalization group of a corresponding regularity structure. Secondly, we answer a question by Bernd Sturmfels of how the signature of a path under a polynomial map p(X) can be retrieved from the signature of the original path X. After discussing this with elementary means, we explain how it can be seen as a corollary of a much more general statement on homomorphisms of the halfshuffle Zinbiel algebra versus the iterated-integral signature, which in turn is immediately equivalent to the classic halfshuffle relation of the signature. Finally, we study how the signed area enclosed by a two-dimensional path and the connecting line between its starting point and end point corresponds to an algebraic anticommutative area operation satisfying the Tortkara identity. We revisit the work of Rocha on coordinates of the first kind, which led him to introduce such an area operation for the first time; this work can be formulated in terms of a dendriform algebra and the pre-Lie, symmetrized pre-Lie, Lie and associative operations it canonically induces.
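The iterated-integral signature and the signed-area operation discussed above can be made concrete with a small numeric sketch for piecewise-linear paths in the plane. This is a standard textbook computation (Chen's identity for concatenation, signed area as the antisymmetric part of the level-2 signature), not code from the thesis; the function names are illustrative.

```python
import numpy as np

# Minimal sketch: level-1 and level-2 iterated-integral signature terms of
# a piecewise-linear path in R^2, and the signed (Levy) area, which is the
# antisymmetric part of the level-2 signature.

def signature_level2(points):
    """S^i = total increment; S^{ij} = iterated integral of dx^i dx^j.

    Built segment by segment via Chen's identity: for a concatenation,
    level-2 of the whole = sum of segment level-2 terms plus
    (accumulated level-1) tensor (segment increment). A single linear
    segment with increment dx contributes 0.5 * dx (x) dx at level 2.
    """
    pts = np.asarray(points, dtype=float)
    s1 = np.zeros(2)
    s2 = np.zeros((2, 2))
    for a, b in zip(pts[:-1], pts[1:]):
        dx = b - a
        s2 += np.outer(s1, dx) + 0.5 * np.outer(dx, dx)
        s1 += dx
    return s1, s2

def signed_area(points):
    """Area enclosed by the path and its start-to-end connecting line."""
    _, s2 = signature_level2(points)
    return 0.5 * (s2[0, 1] - s2[1, 0])

# The unit square traversed counterclockwise encloses signed area +1.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(signed_area(square))  # 1.0
```

Reversing the orientation of the path flips the sign of the area while leaving the symmetric part of the level-2 signature unchanged, which is one elementary way to see why the area operation is anticommutative.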
Our main result in this project is that the whole shuffle algebra can be expressed in terms of shuffle polynomials of area polynomials, which answers a conjecture by Lyons that the knowledge of all areas of areas suffices to compute [...]

Rosa Lili Dora Preiß, Technische Universität Berlin, Peter K. Friz · Wed, 21 Sep 2022 00:00:00 GMT

Fokker-Planck Multi-species Equations in the Adiabatic Asymptotics
https://scholar.archive.org/work/fsj2qppcsvabdf6sstfi75xgqy
The main concern of the present paper is the study of the multi-scale dynamics of thermonuclear fusion plasmas via a multi-species Fokker-Planck kinetic model. One goal is the generalization of the standard Fokker-Planck collision operator to a multi-species one, conserving mass, total momentum and energy, as well as satisfying Boltzmann's H-theorem. Secondly, the paper investigates in more detail the reduced model used for the electron description in present simulations, which considers the electrons in a thermodynamic equilibrium (adiabatic regime) whereas the ions are kept kinetic. On the one hand, we perform some mathematical asymptotic limits to obtain, in the electron/ion low mass ratio limit, the above-mentioned electron adiabatic regime. On the other hand, we develop a numerical scheme, based on a Hermite spectral method, and perform numerical simulations to illustrate and investigate this asymptotics in more detail.

Francis Filbet · Wed, 21 Sep 2022 00:00:00 GMT

Towards a Rigorous Statistical Analysis of Empirical Password Datasets
https://scholar.archive.org/work/cwfgt4joa5g7lgtcjiqjv6k4w4
A central challenge in password security is to characterize the attacker's guessing curve, i.e., the probability that the attacker will crack a random user's password within the first G guesses. A key difficulty is that the guessing curve depends on the attacker's guessing strategy and the distribution of user passwords, both of which are unknown to us. In this work we aim to follow Kerckhoffs' principle and analyze the performance of an optimal attacker who knows the password distribution. Let λ_G denote the probability that such an attacker can crack a random user's password within G guesses. We develop several statistically rigorous techniques to upper and lower bound λ_G given N independent samples from the unknown distribution. We show that our bounds hold with high confidence and apply our techniques to analyze eight password datasets. Our empirical analysis shows that even state-of-the-art password cracking models are often significantly less guess-efficient than an attacker who can optimize its attack based on its (partial) knowledge of the password distribution. We also apply our techniques to re-examine the empirical password distribution and Zipf's Law. We find that the empirical distribution closely matches our bounds on λ_G when G is not too large, i.e., G ≪ N. However, for larger values of G our empirical analysis rigorously demonstrates that the empirical distribution (resp. Zipf's Law) overestimates the attacker's success rate. We apply our techniques to upper/lower bound the effectiveness of password throttling mechanisms (key-stretching) which are used to reduce the number of attacker guesses G.
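The empirical (plug-in) estimate of λ_G that the abstract critiques for large G can be sketched in a few lines. This naive estimator is not the paper's method (the paper develops rigorous upper and lower bounds precisely because this estimate is only trustworthy when G ≪ N); the sample data and function name below are illustrative.

```python
from collections import Counter

# Minimal sketch: the plug-in estimate of lambda_G from N sampled
# passwords. An optimal attacker guesses passwords in decreasing order of
# (empirical) probability, so the estimate is the total frequency mass of
# the G most common passwords in the sample.

def plugin_lambda_G(samples, G):
    counts = Counter(samples)
    top = sorted(counts.values(), reverse=True)[:G]
    return sum(top) / len(samples)

# Toy sample of N = 100 passwords with a heavy-tailed shape.
samples = (["123456"] * 40 + ["password"] * 25 + ["qwerty"] * 15
           + [f"unique{i}" for i in range(20)])
print(plugin_lambda_G(samples, 1))  # 0.4 (top guess cracks 40%)
print(plugin_lambda_G(samples, 3))  # 0.8 (top 3 guesses crack 80%)
```

For G approaching or exceeding N, every singleton sample inflates this estimate (each once-seen password contributes 1/N even though its true probability may be far smaller), which is the overestimation phenomenon the paper's bounds make rigorous.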
Finally, if we make an additional assumption about the way users respond to password restrictions, we can use our techniques to evaluate the effectiveness of password composition policies which restrict the passwords users may select.

Jeremiah Blocki, Peiyuan Liu · Wed, 21 Sep 2022 00:00:00 GMT

Addressing selection bias and measurement error in COVID-19 case count data using auxiliary information
https://scholar.archive.org/work/xgrodqs3c5gfzdbrnknup4pmgm
Coronavirus case-count data has influenced government policies and drives most epidemiological forecasts. Limited testing is cited as the key driver behind the minimal information on the COVID-19 pandemic. While expanded testing is laudable, measurement error and selection bias are the two greatest problems limiting our understanding of the COVID-19 pandemic; neither can be fully addressed by increased testing capacity. In this paper, we demonstrate their impact on estimation of point prevalence and the effective reproduction number. We show that estimates based on the millions of molecular tests in the US have the same mean square error as a small simple random sample. To address this, a procedure is presented that combines case-count data and random samples over time to estimate selection propensities based on key covariate information. We then combine these selection propensities with epidemiological forecast models to construct a doubly robust estimation method that accounts for both measurement error and selection bias. This method is then applied to estimate Indiana's active infection prevalence using case-count, hospitalization, and death data with demographic information, a statewide random molecular sample collected from April 25–29th, and Delphi's COVID-19 Trends and Impact Survey. We end with a series of recommendations based on the proposed methodology.

Walter Dempsey · Wed, 21 Sep 2022 00:00:00 GMT

Stochasticity of Cosmic Rays from Supernova Remnants and the Ionization Rates in Molecular Clouds
https://scholar.archive.org/work/xa2ft45hcbdzlbd2s32dqroige
Cosmic rays are the only agent able to penetrate into the interior of dense molecular clouds. Depositing (part of) their energy through ionization, cosmic rays play an essential role in determining the physical and chemical evolution of star-forming regions. To a first approximation their effect can be quantified by the cosmic-ray induced ionization rate. Interestingly, theoretical estimates of the ionization rate assuming the cosmic-ray spectra observed in the local interstellar medium result in an ionization rate that is one to two orders of magnitude below the values inferred from observations. However, due to the discrete nature of sources, the local spectra of MeV cosmic rays are in general not representative of the spectra elsewhere in the Galaxy. Such stochasticity effects have the potential to reconcile modelled ionization rates with measured ones. Here, we model the distribution of low-energy cosmic-ray spectra expected from a statistical population of supernova remnants in the Milky Way. The corresponding distribution for the ionization rate is derived and confronted with data. We find that the stochastic uncertainty helps to explain the surprisingly high ionization rates observed in many molecular clouds.

Vo Hong Minh Phan, Sarah Recchia, Philipp Mertsch, Stefano Gabici · Wed, 21 Sep 2022 00:00:00 GMT

Cauchy Slice Holography: A New AdS/CFT Dictionary
https://scholar.archive.org/work/hbtgpbboxrh33dkgscmrsbyp2a
We investigate a new approach to holography in asymptotically AdS spacetimes, in which time rather than space is the emergent dimension. By making a sufficiently large T^2-deformation of a Euclidean CFT, we define a holographic theory that lives on Cauchy slices of the Lorentzian bulk. (More generally, for an arbitrary Hamiltonian constraint equation that closes, we show how to obtain it by an irrelevant deformation from a CFT with suitable anomalies.) The partition function of this theory defines a natural map between the bulk canonical quantum gravity Hilbert space and the Hilbert space of the usual (undeformed) boundary CFT. We argue for the equivalence of the ADM and CFT Hamiltonians. We also explain how bulk unitarity emerges naturally, even though the boundary theory is not reflection-positive. This allows us to reformulate the holographic principle in the language of Wheeler-DeWitt canonical quantum gravity. Along the way, we outline a procedure for obtaining a bulk Hilbert space from the gravitational path integral with Dirichlet boundary conditions. Following previous conjectures, we postulate that this finite-cutoff gravitational path integral agrees with the T^2-deformed theory living on an arbitrary boundary manifold, at least near the semiclassical regime. However, the T^2-deformed theory may be easier to UV complete, in which case it would be natural to take it as the definition of nonperturbative quantum gravity.

Goncalo Araujo-Regado, Rifath Khan, Aron C. Wall · Wed, 21 Sep 2022 00:00:00 GMT