IA Scholar Query: Probabilistic interpolative decomposition.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en). Contact: info@archive.org. Thu, 29 Sep 2022 00:00:00 GMT. Generator: fatcat-scholar. Docs: https://scholar.archive.org/help. TTL: 1440.

Workshop Numerische Methoden in der Geotechnik : 12th & 13th of September 2022 Hamburg, Germany : conference proceedings
https://scholar.archive.org/work/lixdp5rcefdbjft3thqnvuwfp4
Numerical methods have become a standard tool in the analysis of geotechnical structures. With steadily increasing computing power, hybrid and continuum approaches supported by sophisticated material models are gaining ever greater importance. The workshop "Numerische Methoden in der Geotechnik 2022" of the Hamburg University of Technology (TUHH), held with the participation of the AK Numerik (DGGT) and the Bundesanstalt für Wasserbau (BAW), brings together international scientists and practitioners to present and discuss the latest findings on the development of numerical methods in geotechnical engineering. These proceedings contain the various topics presented at the workshop. They are intended to preserve the insights gained for future scientific and practical applications and to share them with the geotechnical community.
Jürgen Grabe, Sascha Henke, Marius Milatz, Gertraud Medicus, Torsten Wichtmann, Merita Tafili, Jan Machacek, Patrick Staubach, Luis Felipe Prada Sarmiento, Anne Stark, Michael Hicks, Ronald Brinkgreve, Sandro Brasile, Bart van Paassen, Thomas Nijssen, Salazar Rivera, Hauke Jürgens, Tim Pucker, Kristian Krabbenhoft, Hans-Peter Daxer, Franz Tschuchnigg, Helmut Schweiger, Antonia Nitsch, Carlos Eduardo Grandas Tavera, Alba Yerro, Alexander Chmelnizkij, Christoph Goniva, Marcel Kwakkel, Giovanni Viciconte, Christoph Kloss, Robert Seifried, Timo Hendrik Schmidt, Benedikt Kriegesmann, Elnaz Hadjiloo, Hatice Kaya-Sandt, Tobias Engel, Matthias Römer, Kurt-M. Borchert, Diaa Alkateeb, Thomas Meier, Jörg-Martin Hohberg, TUHH Universitätsbibliothek
work_lixdp5rcefdbjft3thqnvuwfp4
Thu, 29 Sep 2022 00:00:00 GMT

An integrated geological-geophysical approach to subsurface interface reconstruction of muon tomography measurements in high alpine regions
https://scholar.archive.org/work/ricux5erhrhe7dhkcevl5l3iuy
Muon tomography is an imaging technique that has emerged over the last decades. The principal concept is similar to X-ray tomography, where one determines the spatial distribution of material densities by means of penetrating photons. It differs from this well-known technology only by the type of particle. Muons are continuously produced in the Earth's atmosphere when primary cosmic rays (mostly protons) interact with the atmosphere's molecules. Depending on their energies, these muons can penetrate materials up to several hundreds of metres (or even kilometres). Consequently, they have been used for the imaging of larger objects, including large geological objects such as volcanoes, caves and fault systems. This research project aimed at applying this technology to an alpine glacier in Central Switzerland to determine its bedrock geometry and, if possible, to gain information on the bedrock erosion mechanism. To this end, two major experimental studies have been conducted with the aim of reconstructing the bedrock geometries of two different glaciers. Given this framework, I present in this thesis my contribution to the project, in which I worked for 5 years. Most of the technological know-how of muon tomography still lies within physics institutes, which were the key drivers in the development of this method. As the geophysical/geological community is nowadays an important user of this technology, it is important that non-physicists, too, familiarise themselves with the theory and concepts behind muon tomography. This can be seen as an effective way to bring more geoscientists to utilize this new technology for their own research. The first part of this thesis is designed to tackle this problem with a review article on the principles of muon tomography and a guide to best practice. A second important aspect is the reconstruction of the bedrock topography given muon flux measurements at various locations.
Many reconstruction algorithms to date include supplementary geological information such as density and/or compositional me [...]
Alessandro Diego Lechmann
work_ricux5erhrhe7dhkcevl5l3iuy
Thu, 29 Sep 2022 00:00:00 GMT

On the Existence and Applicability of Extremal Principles in the Theory of Irreversible Processes: A Critical Review
https://scholar.archive.org/work/hdmh4rxgfrdwnebbsxdl57fxia
A brief review of the development of ideas on extremal principles in the theory of heat and mass transfer processes (including those in reacting media) is given. The extremal principles of non-equilibrium thermodynamics are critically examined. Examples are shown in which the mechanical use of entropy production-based principles turns out to be inefficient and even contradictory. The main problem of extremal principles in the theory of irreversible processes is the impossibility of their generalization, often even within the framework of a class of problems. Alternative extremal formulations are considered: variational principles for heat and mass transfer equations and other dissipative systems. Several extremal principles are singled out, which make it possible to simplify the numerical solution of the initial equations. Criteria are proposed that allow one to classify extremal principles according to their areas of applicability. Possible directions for further research in the search for extremal principles in the theory of irreversible processes are given.
Igor Donskoy
work_hdmh4rxgfrdwnebbsxdl57fxia
Wed, 28 Sep 2022 00:00:00 GMT

Artificial Intelligence and Advanced Materials
https://scholar.archive.org/work/tkf566mg6zf77a7xan6anloxvu
Artificial intelligence is gaining strength, and materials science can both contribute to and profit from it. In a simultaneous progress race, new materials, systems and processes can be devised and optimized thanks to machine learning techniques, and such progress can be turned into innovative computing platforms. Future materials scientists will profit from understanding how machine learning can boost the conception of advanced materials. This review covers aspects of computation from the fundamentals to directions taken and repercussions produced by computation, in order to account for the origins, procedures and applications of artificial intelligence. Machine learning and its methods are reviewed to provide basic knowledge on its implementation and its potential. The materials and systems used to implement artificial intelligence with electric charges are finding serious competition from other information-carrying and -processing agents. The impact these techniques are having on the inception of new advanced materials is so deep that a new paradigm is developing, where implicit knowledge is being mined to conceive materials and systems for functions, instead of finding applications for materials already found. How far this trend can be carried is hard to fathom, as exemplified by the power to discover unheard-of materials or physical laws buried in data.
Cefe López
work_tkf566mg6zf77a7xan6anloxvu
Wed, 28 Sep 2022 00:00:00 GMT

Constrained Polynomial Likelihood
https://scholar.archive.org/work/uubdfmzryrdujldvzrlrlbhgw4
We develop a non-negative polynomial minimum-norm likelihood ratio (PLR) of two distributions of which only moments are known, under shape restrictions. The PLR converges to the true, unknown likelihood ratio under mild conditions. We establish asymptotic theory for the PLR coefficients and present two empirical applications. The first develops a PLR for the unknown transition density of a jump-diffusion process. The second modifies the Hansen-Jagannathan pricing kernel framework to accommodate non-negative polynomial return models consistent with no-arbitrage while simultaneously nesting the linear return model. In both cases, we show the value of implementing the non-negative restriction.
Paul Schneider, Caio Almeida
work_uubdfmzryrdujldvzrlrlbhgw4
Wed, 28 Sep 2022 00:00:00 GMT

Hitchhiker's Guide to Super-Resolution: Introduction and Recent Advances
https://scholar.archive.org/work/mbqbywkyonhhxmiwn6xpa6jh7y
With the advent of Deep Learning (DL), Super-Resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter in order to facilitate a global understanding of the trends in the field. This review is ultimately aimed at helping researchers to push the boundaries of DL applied to SR.
Brian Moser, Federico Raue, Stanislav Frolov, Jörn Hees, Sebastian Palacio, Andreas Dengel
work_mbqbywkyonhhxmiwn6xpa6jh7y
Tue, 27 Sep 2022 00:00:00 GMT

Human-controllable and structured deep generative models
https://scholar.archive.org/work/2feku6i5y5dbbcdwe7p66fkkai
Deep generative models are a class of probabilistic models that attempt to learn the underlying data distribution. These models are usually trained in an unsupervised way and thus do not require any labels. Generative models such as Variational Autoencoders and Generative Adversarial Networks have made astounding progress over recent years. These models have several benefits: eased sampling and evaluation, efficient learning of low-dimensional representations for downstream tasks, and better understanding through interpretable representations. However, even though the quality of these models has improved immensely, the ability to control their style and structure is limited. Structured and human-controllable representations of generative models are essential for human-machine interaction and other applications, including fairness, creativity, and entertainment. This thesis investigates learning human-controllable and structured representations with deep generative models. In particular, we focus on generative modelling of 2D images. In the first part, we focus on learning clustered representations. We propose semi-parametric hierarchical variational autoencoders to estimate the intensity of facial action units. The semi-parametric model forms a hybrid generative-discriminative model and leverages both a parametric Variational Autoencoder and a non-parametric Gaussian Process autoencoder. We show superior performance in comparison with existing facial action unit estimation approaches. Based on the results and analysis of the learned representation, we then focus on learning Mixture-of-Gaussians representations in an autoencoding framework. We deviate from the conventional autoencoding framework and consider a regularized objective with the Cauchy-Schwarz divergence. The Cauchy-Schwarz divergence admits a closed-form solution for Mixture-of-Gaussians distributions and thus allows the autoencoding objective to be optimized efficiently.
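As context for the closed-form property just mentioned: for two single one-dimensional Gaussians, the Cauchy-Schwarz divergence reduces to Gaussian overlap integrals with a known closed form. The thesis treats Mixture-of-Gaussians; the sketch below is only a generic one-dimensional illustration, and all function names in it are ours, not the authors'.

```python
import numpy as np

def gauss_overlap(m1, v1, m2, v2):
    # Closed form: integral of N(x; m1, v1) * N(x; m2, v2) dx = N(m1; m2, v1 + v2)
    v = v1 + v2
    return np.exp(-0.5 * (m1 - m2) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def cs_divergence(m1, v1, m2, v2):
    # D_CS(p, q) = -log( int p q / sqrt(int p^2 * int q^2) ); zero iff p == q.
    cross = gauss_overlap(m1, v1, m2, v2)
    return -np.log(cross / np.sqrt(gauss_overlap(m1, v1, m1, v1) *
                                   gauss_overlap(m2, v2, m2, v2)))
```

The Mixture-of-Gaussians case follows by summing such pairwise overlap terms, which is what makes the divergence tractable in this setting.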
We show that our model outperforms existing Variational Autoencoders in density estimation, clu [...]
Dieu Linh Tran, Maja Pantic
work_2feku6i5y5dbbcdwe7p66fkkai
Tue, 27 Sep 2022 00:00:00 GMT

Quantifying and visualizing model similarities for multi-model methods
https://scholar.archive.org/work/hakolbblw5azlk7osjx5afunim
Modeling environmental systems is typically limited by an incomplete system understanding due to scarce and imprecise measurements. This leads to different types of uncertainties, among which conceptual uncertainty plays a key role but is difficult to address. Conceptual uncertainty refers to the problem of finding the most appropriate model representation of the physical system. This includes the problem of choosing from several plausible model hypotheses, but also the problem that the true system description might not even be among this set of hypotheses. In this thesis, I address the first of these issues, the uncertainty of choosing a model from a finite set. To account for this uncertainty of model choice, modelers typically use multi-model methods. This means that they consider not only one but several models and apply statistical methods to either combine them or select the most appropriate one. For any of these methods, it is crucial to know how similar the individual models are. But even though multi-model methods have become increasingly popular, no methods were available that quantify the similarities between models and visualize them intuitively. This dissertation aims at closing these gaps. In particular, it tackles the challenges of judging whether simplified models are a suitable replacement for a more detailed model, and of visualizing model similarities in a way that helps modelers to gain an intuitive understanding of the model set. I defined three research questions that address these challenges and form the basis of this thesis:
1. How can we systematically assess how similar conceptually simplified model versions are compared to an original, more detailed model?
2. How can we extend the similarity analysis so it is suitable for computationally expensive models?
3. How can we visualize the similarities between probabilistic model predictions?
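As a generic illustration of research question 3 (not taken from the dissertation): similarities between probabilistic model predictions can be quantified by a statistical distance between predictive samples, for instance the energy distance.

```python
import numpy as np

def energy_distance(xs, ys):
    # Squared energy distance between two 1-D predictive samples:
    # D^2 = 2 E|X - Y| - E|X - X'| - E|Y - Y'|; zero iff the distributions agree.
    dxy = np.abs(xs[:, None] - ys[None, :]).mean()
    dxx = np.abs(xs[:, None] - xs[None, :]).mean()
    dyy = np.abs(ys[:, None] - ys[None, :]).mean()
    return 2.0 * dxy - dxx - dyy

rng = np.random.default_rng(1)
model_a = rng.normal(0.0, 1.0, 2000)   # predictive samples of one model
model_b = rng.normal(0.0, 1.0, 2000)   # a similar model
model_c = rng.normal(2.0, 1.0, 2000)   # a dissimilar model
```

A matrix of such pairwise distances can then be embedded in two dimensions (e.g. via multidimensional scaling) to visualize the model set.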
With the first contribution, I show that the so-called model confusion matrix can be used to quantify model similarities and thus identify the best conc [...]
Aline Schäfer Rodrigues Silva, Universität Stuttgart
work_hakolbblw5azlk7osjx5afunim
Tue, 27 Sep 2022 00:00:00 GMT

Stochastic Future Prediction in Real World Driving Scenarios
https://scholar.archive.org/work/uiik7tbssjabbgavmt5ilauyim
Uncertainty plays a key role in future prediction. The future is uncertain, which means there may be many possible futures. A future prediction method should cover the full range of possibilities to be robust. In autonomous driving, covering multiple modes in the prediction part is crucially important for making safety-critical decisions. Although computer vision systems have advanced tremendously in recent years, future prediction remains difficult today. Reasons include the uncertainty of the future, the requirement of full scene understanding, and the noisy output space. In this thesis, we propose solutions to these challenges by modeling the motion explicitly in a stochastic way and learning the temporal dynamics in a latent space.
Adil Kaan Akan
work_uiik7tbssjabbgavmt5ilauyim
Tue, 27 Sep 2022 00:00:00 GMT

A Survey on Graph Neural Networks and Graph Transformers in Computer Vision: A Task-Oriented Perspective
https://scholar.archive.org/work/hrto4mikbnfltaqwtehpjnojky
Graph Neural Networks (GNNs) have gained momentum in graph representation learning and boosted the state of the art in a variety of areas, such as data mining (e.g., social network analysis and recommender systems), computer vision (e.g., object detection and point cloud learning), and natural language processing (e.g., relation extraction and sequence learning), to name a few. With the emergence of Transformers in natural language processing and computer vision, graph Transformers embed a graph structure into the Transformer architecture to overcome the limitations of local neighborhood aggregation while avoiding strict structural inductive biases. In this paper, we present a comprehensive review of GNNs and graph Transformers in computer vision from a task-oriented perspective. Specifically, we divide their applications in computer vision into five categories according to the modality of input data, i.e., 2D natural images, videos, 3D data, vision + language, and medical images. In each category, we further divide the applications according to a set of vision tasks. Such a task-oriented taxonomy allows us to examine how each task is tackled by different GNN-based approaches and how well these approaches perform. Based on the necessary preliminaries, we provide the definitions and challenges of the tasks, in-depth coverage of the representative approaches, as well as discussions regarding insights, limitations, and future directions.
Chaoqi Chen, Yushuang Wu, Qiyuan Dai, Hong-Yu Zhou, Mutian Xu, Sibei Yang, Xiaoguang Han, Yizhou Yu
work_hrto4mikbnfltaqwtehpjnojky
Tue, 27 Sep 2022 00:00:00 GMT

Sparse Bayesian Learning for Complex-Valued Rational Approximations
https://scholar.archive.org/work/lw47eslxibf3lka6r3n3d7vwim
Surrogate models are used to alleviate the computational burden in engineering tasks that require the repeated evaluation of computationally demanding models of physical systems, such as the efficient propagation of uncertainties. For models that show a strongly non-linear dependence on their input parameters, standard surrogate techniques, such as polynomial chaos expansion, are not sufficient to obtain an accurate representation of the original model response. By applying a rational approximation instead, the approximation error can be efficiently reduced for models whose non-linearity is accurately described by a rational function. Specifically, our aim is to approximate complex-valued models. A common approach to obtaining the coefficients of the surrogate is to minimize the sample-based error between model and surrogate in the least-squares sense. In order to obtain an accurate representation of the original model and to avoid overfitting, the sample set has to be two to three times the number of polynomial terms in the expansion. For models that require a high polynomial degree or are high-dimensional in terms of their input parameters, this number often exceeds the affordable computational cost. To overcome this issue, we apply a sparse Bayesian learning approach to the rational approximation. Through a specific prior distribution structure, sparsity is induced in the coefficients of the surrogate model. The denominator polynomial coefficients as well as the hyperparameters of the problem are determined through a type-II maximum likelihood approach. We apply a quasi-Newton gradient-descent algorithm in order to find the optimal denominator coefficients and derive the required gradients through application of ℂℝ-calculus.
Felix Schneider, Iason Papaioannou, Gerhard Müller
work_lw47eslxibf3lka6r3n3d7vwim
Tue, 27 Sep 2022 00:00:00 GMT

Shallow shadows: Expectation estimation using low-depth random Clifford circuits
https://scholar.archive.org/work/reqhx6dizrhprovpst4bsujfyy
We provide practical and powerful schemes for learning many properties of an unknown n-qubit quantum state using a sparing number of copies of the state. Specifically, we present a depth-modulated randomized measurement scheme that interpolates between two known classical shadows schemes based on random Pauli measurements and random Clifford measurements. These can be seen within our scheme as the special cases of zero and infinite depth, respectively. We focus on the regime where depth scales logarithmically in n and provide evidence that this retains the desirable properties of both extremal schemes whilst, in contrast to the random Clifford scheme, also being experimentally feasible. We present methods for two key tasks: estimating expectation values of certain observables from generated classical shadows, and computing upper bounds on the depth-modulated shadow norm, thus providing rigorous guarantees on the accuracy of the output estimates. We consider observables that can be written as a linear combination of poly(n) Paulis and observables that can be written as a low-bond-dimension matrix product operator. For the former class of observables, both tasks are solved efficiently in n. For the latter class, we do not guarantee efficiency but present a method that works in practice: by variationally computing a heralded approximate inverse of a tensor network, which can then be used for efficiently executing both these tasks.
Christian Bertoni, Jonas Haferkamp, Marcel Hinsche, Marios Ioannou, Jens Eisert, Hakop Pashayan
work_reqhx6dizrhprovpst4bsujfyy
Mon, 26 Sep 2022 00:00:00 GMT

Best-Response dynamics in two-person random games with correlated payoffs
https://scholar.archive.org/work/s6tblxobzzdphlsjfusraommhm
We consider finite two-player normal form games with random payoffs. Player A's payoffs are i.i.d. from a uniform distribution. Given p in [0, 1], for any action profile, player B's payoff coincides with player A's payoff with probability p and is i.i.d. from the same uniform distribution with probability 1-p. This model interpolates between the model of i.i.d. random payoffs used in most of the literature and the model of random potential games. First, we study the number of pure Nash equilibria in the above class of games. Then we show that, for any positive p, asymptotically in the number of available actions, best-response dynamics reaches a pure Nash equilibrium with high probability.
Hlafo Alfie Mimun, Matteo Quattropani, Marco Scarsini
work_s6tblxobzzdphlsjfusraommhm
Mon, 26 Sep 2022 00:00:00 GMT

A Monte-Carlo based relativistic radiation hydrodynamics code with a higher-order scheme
https://scholar.archive.org/work/aasdi4zurzfh7gdrmepjyrptau
We develop a new relativistic radiation hydrodynamics code based on the Monte-Carlo algorithm. In this code, we implement a new scheme to achieve second-order accuracy in time, in the limit of a large packet number, for solving the interaction between matter and radiation. This higher-order time integration scheme is implemented in a manner that guarantees energy-momentum conservation to the precision of the geodesic integrator. The spatial dependence of radiative processes, such as packet propagation, emission, absorption, and scattering, is also taken into account up to second-order accuracy. We validate our code by solving various test problems following previous studies: one-zone thermalization, dynamical diffusion, radiation dragging, a radiation-mediated shock tube, a shock tube in the optically thick limit, and Eddington-limit problems. We show that our code reproduces physically appropriate results with reasonable accuracy and also demonstrate that second-order accuracy in time and space is indeed achieved with our implementation for one-zone and one-dimensional problems.
Kyohei Kawaguchi, Sho Fujibayashi, Masaru Shibata
work_aasdi4zurzfh7gdrmepjyrptau
Mon, 26 Sep 2022 00:00:00 GMT

Bypassing the quadrature exactness assumption of hyperinterpolation on the sphere
https://scholar.archive.org/work/x434fcndvzaqpl5xagcw57vmze
This paper focuses on the approximation of continuous functions on the unit sphere by spherical polynomials of degree n via hyperinterpolation. Hyperinterpolation of degree n is a discrete approximation of the L^2-orthogonal projection of degree n with its Fourier coefficients evaluated by a positive-weight quadrature rule that exactly integrates all spherical polynomials of degree at most 2n. This paper aims to bypass this quadrature exactness assumption by replacing it with the Marcinkiewicz–Zygmund property proposed in a previous paper. Consequently, hyperinterpolation can be constructed by a positive-weight quadrature rule (not necessarily with quadrature exactness). This scheme is referred to as unfettered hyperinterpolation. This paper provides a reasonable error estimate for unfettered hyperinterpolation. The error estimate generally consists of two terms: a term representing the error estimate of the original hyperinterpolation of full quadrature exactness and another introduced as compensation for the loss of exactness degrees. A guide to controlling the newly introduced term in practice is provided. In particular, if the quadrature points form a quasi-Monte Carlo (QMC) design, then there is a refined error estimate. Numerical experiments verify the error estimates and the practical guide.
Congpei An, Hao-Ning Wu
work_x434fcndvzaqpl5xagcw57vmze
Mon, 26 Sep 2022 00:00:00 GMT

A unified framework for dataset shift diagnostics
https://scholar.archive.org/work/ich7cgxymbforkh2vvhkyb64am
Most machine learning (ML) methods assume that the data used in the training phase comes from the target population. However, in practice one often faces dataset shift, which, if not properly taken into account, may decrease the predictive performance of the ML models. In general, if the practitioner knows which type of shift is taking place -- e.g., covariate shift or label shift -- they may apply transfer learning methods to obtain better predictions. Unfortunately, current methods for detecting shift are only designed to detect specific types of shift or cannot formally test for their presence. We introduce a general and unified framework that gives insights on how to improve prediction methods by detecting the presence of different types of shift and quantifying how strong they are. Our approach can be used for any data type (tabular/image/text) and for both classification and regression tasks. Moreover, it uses formal hypothesis tests that control false alarms. We illustrate how our framework is useful in practice using both artificial and real datasets, including an example of how our framework leads to insights that indeed improve the predictive power of a supervised model. Our package for dataset shift detection can be found at https://github.com/felipemaiapolo/detectshift.
Felipe Maia Polo, Rafael Izbicki, Evanildo Gomes Lacerda Jr, Juan Pablo Ibieta-Jimenez, Renato Vicente
work_ich7cgxymbforkh2vvhkyb64am
Mon, 26 Sep 2022 00:00:00 GMT

Deep generative model super-resolves spatially correlated multiregional climate data
https://scholar.archive.org/work/rs6jnvkvmvhs5knrif3yfyysky
Super-resolving the coarse outputs of global climate simulations, termed downscaling, is crucial in making political and social decisions on systems requiring long-term climate change projections. Existing fast super-resolution techniques, however, have yet to preserve the spatially correlated nature of climatological data, which is particularly important when we address systems with spatial expanse, such as the development of transportation infrastructure. Herein, we show that adversarial-network-based machine learning enables us to correctly reconstruct the inter-regional spatial correlations in downscaling with a high magnification of up to fifty, while maintaining pixel-wise statistical consistency. Direct comparison with the measured meteorological data of temperature and precipitation distributions reveals that integrating climatologically important physical information is essential for accurate downscaling, which prompts us to call our approach πSRGAN (Physics-Informed Super-Resolution Generative Adversarial Network). The present method has a potential application to the inter-regionally consistent assessment of climate change impact.
Norihiro Oyama, Noriko N. Ishizaki, Satoshi Koide, Hiroaki Yoshida
work_rs6jnvkvmvhs5knrif3yfyysky
Mon, 26 Sep 2022 00:00:00 GMT

Self-supervised Denoising via Low-rank Tensor Approximated Convolutional Neural Network
https://scholar.archive.org/work/2rcpczhbkfdmfjhupxdykzrqne
Noise is ubiquitous during image acquisition. Sufficient denoising is often an important first step in image processing. In recent decades, deep neural networks (DNNs) have been widely used for image denoising. Most DNN-based image denoising methods require a large-scale dataset or focus on supervised settings, in which single/pairs of clean images or a set of noisy images are required. This poses a significant burden on the image acquisition process. Moreover, denoisers trained on datasets of limited scale may incur over-fitting. To mitigate these issues, we introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation. With the proposed design, we are able to characterize our denoiser with fewer parameters and train it on a single image, which considerably improves the model's generalizability and reduces the cost of data acquisition. Extensive experiments on both synthetic and real-world noisy images have been conducted. Empirical results show that our proposed method outperforms existing non-learning-based methods (e.g., low-pass filter, non-local mean) and single-image unsupervised denoisers (e.g., DIP, NN+BM3D) evaluated on both in-sample and out-of-sample datasets. The proposed method even achieves performance comparable to that of some supervised methods (e.g., DnCNN).
Chenyin Gao, Shu Yang, Anru R. Zhang
work_2rcpczhbkfdmfjhupxdykzrqne
Mon, 26 Sep 2022 00:00:00 GMT

Convergence of score-based generative modeling for general data distributions
https://scholar.archive.org/work/4dscc3ehkvfqribm3r4lbud4iq
We give polynomial convergence guarantees for denoising diffusion models that do not rely on the data distribution satisfying functional inequalities or strong smoothness assumptions. Assuming an L^2-accurate score estimate, we obtain Wasserstein distance guarantees for any distribution of bounded support or sufficiently decaying tails, as well as TV guarantees for distributions under further smoothness assumptions.
Holden Lee, Jianfeng Lu, Yixin Tan
work_4dscc3ehkvfqribm3r4lbud4iq
Mon, 26 Sep 2022 00:00:00 GMT

Bayesian Fixed-domain Asymptotics for Covariance Parameters in a Gaussian Process Model
https://scholar.archive.org/work/wi3m43pzn5fblhvut64pnxrdy4
Gaussian process models typically contain finite-dimensional parameters in the covariance function that need to be estimated from the data. We study the Bayesian fixed-domain asymptotics for the covariance parameters in a universal kriging model with an isotropic Matérn covariance function, which has many applications in spatial statistics. We show that when the dimension of the domain is less than or equal to three, the joint posterior distribution of the microergodic parameter and the range parameter can be factored independently into the product of their marginal posteriors under fixed-domain asymptotics. The posterior of the microergodic parameter is asymptotically close in total variation distance to a normal distribution with shrinking variance, while the posterior distribution of the range parameter does not converge to any point mass distribution in general. Our theory allows an unbounded prior support for the range parameter and flexible designs of sampling points. We further study the asymptotic efficiency and convergence rates of posterior prediction for the Bayesian kriging predictor with covariance parameters randomly drawn from their posterior distribution. In the special case of the one-dimensional Ornstein-Uhlenbeck process, we derive explicitly the limiting posterior of the range parameter and the posterior convergence rate for asymptotic efficiency in posterior prediction. We verify these asymptotic results in numerical experiments.
Cheng Li
work_wi3m43pzn5fblhvut64pnxrdy4
Sun, 25 Sep 2022 00:00:00 GMT