44,340 Hits in 4.3 sec

Interpretable deep Gaussian processes with moments [article]

Chi-Ken Lu, Scott Cheng-Hsin Yang, Xiaoran Hao, Patrick Shafto
2019 arXiv   pre-print
Deep Gaussian Processes (DGPs) combine the expressiveness of Deep Neural Networks (DNNs) with the quantified uncertainty of Gaussian Processes (GPs).  ...  Consequently, our approach admits interpretation both as NNs with specified activation functions and as a variational approximation to a DGP.  ...  The straightforward approach propagates the expectation of the second moment from the output layer back to the input layer; with approximation, Eq. (17) is obtained.  ...
arXiv:1905.10963v3 fatcat:7vrmyucvajhyljxdc6zvlqxdvq
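
The moment propagation described in the snippet can be illustrated by its basic building block: the expectation of a kernel under a Gaussian input, which is available in closed form for the squared-exponential kernel. A minimal sketch (this is not the paper's Eq. (17); the isotropic Gaussian input and the Monte Carlo check are assumptions made here):

```python
import numpy as np

def expected_rbf_kernel(mu, x2, ell, sigma2):
    """Closed form for E[k(x, x2)] with x ~ N(mu, sigma2 * I) and
    k(x, x2) = exp(-||x - x2||^2 / (2 ell^2))."""
    d = mu.shape[0]
    s = ell ** 2 + sigma2
    return (ell ** 2 / s) ** (d / 2) * np.exp(-np.sum((mu - x2) ** 2) / (2 * s))

rng = np.random.default_rng(0)
d, ell, sigma2 = 3, 1.5, 0.5
mu, x2 = rng.normal(size=d), rng.normal(size=d)

# Monte Carlo check of the closed form.
x = mu + np.sqrt(sigma2) * rng.normal(size=(200_000, d))
mc = np.mean(np.exp(-np.sum((x - x2) ** 2, axis=1) / (2 * ell ** 2)))
print(expected_rbf_kernel(mu, x2, ell, sigma2), mc)  # should agree closely
```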

New scaling model for variables and increments with heavy-tailed distributions

Monica Riva, Shlomo P. Neuman, Alberto Guadagnini
2015 Water Resources Research  
We propose a new model that does so upon treating Y(x) as a random function of a coordinate x in the Euclidean (spatial) domain or time, forming a stationary random field (or process) with constant ensemble  ...  In the classical sub-Gaussian form, Y'(x) = U G(x), where G(x) is a zero-mean stationary Gaussian random field (or process) and the subordinator U is an independent non-negative random variable (Samorodnitsky and Taqqu).  ...
doi:10.1002/2015wr016998 fatcat:mmqyrnn72namndkmhnif6cd4ia
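
The classical sub-Gaussian construction quoted in the snippet is straightforward to simulate: draw one Gaussian field sample path and scale it by an independent non-negative heavy-tailed subordinator. A sketch under assumed choices (exponential covariance, and a Pareto subordinator purely for illustration; the classical construction uses a totally skewed positive stable variable):

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid and exponential covariance for the stationary Gaussian field G(x).
x = np.linspace(0.0, 10.0, 200)
corr_len = 1.0
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))

# G(x): one zero-mean stationary Gaussian sample path.
G = L @ rng.normal(size=x.size)

# U: independent non-negative heavy-tailed subordinator (Pareto here,
# an illustrative stand-in for the positive stable variable).
alpha = 1.5
U = (1.0 - rng.uniform()) ** (-1.0 / alpha)

Y = U * G  # one sub-Gaussian sample path Y'(x)
```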

The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective [article]

Geoff Pleiss, John P. Cunningham
2021 arXiv   pre-print
Our analysis in this paper decouples capacity and width via the generalization of neural networks to Deep Gaussian Processes (Deep GPs), a class of nonparametric hierarchical models that subsume neural  ...  Surprisingly, we prove that even nonparametric Deep GPs converge to Gaussian processes, effectively becoming shallower without any increase in representational power.  ...  Acknowledgments and Disclosure of Funding: We would like to thank Elliott Gordon-Rodriguez for his help with the proofs.  ...
arXiv:2106.06529v2 fatcat:my7nbo52yzgp5h2fkg76hlvcje

Understanding Priors in Bayesian Neural Networks at the Unit Level [article]

Mariia Vladimirova, Jakob Verbeek, Pablo Mesejo, Julyan Arbel
2019 arXiv   pre-print
We investigate deep Bayesian neural networks with Gaussian weight priors and a class of ReLU-like nonlinearities.  ...  Bayesian neural networks with Gaussian priors are well known to induce an L2, "weight decay", regularization.  ...  Acknowledgements We would like to thank Stéphane Girard for fruitful discussions on Weibull-like distributions and Cédric Févotte for pointing out the potential relationship of our heavy-tail result with  ... 
arXiv:1810.05193v2 fatcat:sllyypl3gjh5fgvs27o2pciybe
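
The heavier-than-Gaussian tails this paper attributes to deeper units can be checked numerically: sample a ReLU network repeatedly from its Gaussian prior and track the excess kurtosis of a fixed pre-activation at each depth (a Gaussian has excess kurtosis 0). A rough illustration; the width, depth, and He-style weight scaling are arbitrary choices made here:

```python
import numpy as np

rng = np.random.default_rng(2)
width, depth, n_draws = 128, 4, 2000
x = rng.normal(size=width)  # one fixed input

# Record a fixed unit's pre-activation at each layer, across prior draws.
samples = np.zeros((depth, n_draws))
for n in range(n_draws):
    h = x
    for l in range(depth):
        W = rng.normal(scale=np.sqrt(2.0 / h.size), size=(width, h.size))
        pre = W @ h
        samples[l, n] = pre[0]
        h = np.maximum(pre, 0.0)  # ReLU

# Excess kurtosis grows with depth, consistent with heavier tails.
for l in range(depth):
    z = samples[l]
    kurt = np.mean((z - z.mean()) ** 4) / np.var(z) ** 2 - 3.0
    print(f"layer {l + 1}: excess kurtosis ~ {kurt:.2f}")
```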

Quantitative Gaussian Approximation of Randomly Initialized Deep Neural Networks [article]

Andrea Basteri, Dario Trevisan
2022 arXiv   pre-print
Given any deep fully connected neural network, initialized with random Gaussian parameters, we bound from above the quadratic Wasserstein distance between its output distribution and a suitable Gaussian process.  ...  With the above notation for the deep neural network outputs $(f^{(\ell)}[X])_{\ell=1}^{L}$ with random weights and biases, and the associated Gaussian processes $(G^{(\ell)}[X])_{\ell=1}^{L}$, evaluated at $k$ inputs $X = \{x_i\}_{i=1}^{k}$  ...
arXiv:2203.07379v1 fatcat:i65suplofbhgvj5dvb2ra5hsja
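
The paper bounds the quadratic (2-)Wasserstein distance between a finite network's output law and a Gaussian process. When both distributions are Gaussian, W2 has a standard closed form, which the following sketch implements (this is the textbook Gaussian formula, not the paper's bound):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussians(m1, C1, m2, C2):
    """Quadratic Wasserstein distance between N(m1, C1) and N(m2, C2):
    W2^2 = ||m1 - m2||^2 + Tr(C1 + C2 - 2 (C2^{1/2} C1 C2^{1/2})^{1/2})."""
    s2 = sqrtm(C2)
    cross = np.real(sqrtm(s2 @ C1 @ s2))  # discard tiny imaginary residue
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(C1 + C2 - 2.0 * cross))

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)); C1 = A @ A.T + np.eye(3)
B = rng.normal(size=(3, 3)); C2 = B @ B.T + np.eye(3)
print(w2_gaussians(np.zeros(3), C1, np.ones(3), C2))
```

In practice one can fit a Gaussian to samples of a finite-width network's outputs and compare it against the limiting Gaussian-process covariance with this formula.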

Higher-Order Spectrum in Understanding Nonlinearity in EEG Rhythms

Cauchy Pradhan, Susant K. Jena, Sreenivasan R. Nadar, N. Pradhan
2012 Computational and Mathematical Methods in Medicine  
Linear stochastic models and spectral estimates are the most common methods for the analysis of EEG because of their robustness, simplicity of interpretation, and apparent association with rhythmic behavioral  ...  The higher-order spectrum is an extension of the Fourier spectrum that uses higher moments for spectral estimates.  ...  The use of higher-order moments nullifies all Gaussian random effects of the process, and the bicoherence can then quantify the degree of the remaining nonlinear coupling.  ...
doi:10.1155/2012/206857 pmid:22400046 pmcid:PMC3287025 fatcat:65vxw5el5bfnfp7kup3bdnlpw4
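
The bicoherence mentioned in the snippet normalizes the segment-averaged bispectrum so that Gaussian (linear) signals score near zero while quadratic phase coupling scores near one. A minimal estimator sketch, with one of several common normalizations assumed:

```python
import numpy as np

def bicoherence(x, nseg=64, seg_len=256):
    """Segment-averaged bicoherence estimate over the (f1, f2) plane."""
    segs = x[: nseg * seg_len].reshape(nseg, seg_len)
    X = np.fft.rfft(segs * np.hanning(seg_len), axis=1)
    n = X.shape[1] // 2                      # keep f1 + f2 within range
    f1, f2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    T = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])   # triple product
    num = np.abs(T.mean(axis=0)) ** 2
    den = (np.abs(X[:, f1] * X[:, f2]) ** 2).mean(axis=0) \
        * (np.abs(X[:, f1 + f2]) ** 2).mean(axis=0)
    return num / (den + 1e-12)

# A phase-coupled triad (0.2 + 0.3 = 0.5 rad/sample) in noise should
# produce a bicoherence peak at the corresponding frequency pair.
t = np.arange(64 * 256)
x = np.cos(0.2 * t) + np.cos(0.3 * t) + 0.5 * np.cos(0.5 * t)
b = bicoherence(x + 0.1 * np.random.default_rng(4).normal(size=t.size))
```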

Gaussian Process Behaviour in Wide Deep Neural Networks [article]

Alexander G. de G. Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, Zoubin Ghahramani
2018 arXiv   pre-print
In this paper, we study the relationship between random, wide, fully connected, feedforward networks with more than one hidden layer and Gaussian processes with a recursive kernel definition.  ...  We then compare finite Bayesian deep networks from the literature to Gaussian processes in terms of the key predictive quantities of interest, finding that in some cases the agreement can be very close  ...  Empirical Comparison of Bayesian Deep Networks to Gaussian Processes: In this section we compare the behaviour of finite Bayesian deep networks of the form considered in this paper with their Gaussian process  ...
arXiv:1804.11271v2 fatcat:n5rcge5kf5dnbfyytipjickjbm
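
For ReLU networks, the recursive kernel definition the snippet refers to has a closed form (the arc-cosine recursion). A compact sketch, with He-style variance parameters assumed as defaults:

```python
import numpy as np

def relu_nngp_kernel(X, depth, sigma_w2=2.0, sigma_b2=0.0):
    """Recursive kernel of an infinitely wide fully connected ReLU network.
    Each step applies E[relu(u) relu(v)] for jointly Gaussian (u, v):
    sqrt(k_uu k_vv) (sin t + (pi - t) cos t) / (2 pi), cos t = correlation."""
    K = sigma_b2 + sigma_w2 * (X @ X.T) / X.shape[1]
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        cos_t = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(cos_t)
        J = (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
        K = sigma_b2 + sigma_w2 * np.outer(d, d) * J
    return K

X = np.random.default_rng(5).normal(size=(4, 10))
print(relu_nngp_kernel(X, depth=3))
```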

Testing the minimum variance method for estimating large-scale velocity moments

Shankar Agarwal, Hume A. Feldman, Richard Watkins
2012 Monthly notices of the Royal Astronomical Society  
Previously, we have developed an optimal 'minimum variance' (MV) weighting scheme for using peculiar velocity data to estimate bulk flow moments for idealized, dense and isotropic surveys with Gaussian  ...  These moments are designed to be easy to interpret and are comparable between surveys. In this paper, we test the robustness of our MV estimators using numerical simulations.  ...  We are also grateful to Román Scoccimarro and the LasDamas collaboration and Changbom Park and the Horizon Run collaboration for providing us with the simulations.  ...
doi:10.1111/j.1365-2966.2012.21345.x fatcat:ymt53uxvgrgw7irmpr25ioqua4
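
For orientation, a bulk flow estimate from radial peculiar velocities can be written as a weighted least-squares fit of three velocity components; the paper's MV scheme refines the weights, which this baseline sketch does not attempt:

```python
import numpy as np

def bulk_flow_mle(rhat, s, sigma):
    """Maximum-likelihood bulk flow U from radial peculiar velocities
    s_i = rhat_i . U + noise_i with noise_i ~ N(0, sigma_i^2)."""
    A = rhat / sigma[:, None] ** 2
    cov = np.linalg.inv(rhat.T @ A)   # 3x3 error covariance of U
    U = cov @ (A.T @ s)
    return U, cov

# Synthetic survey: 2000 galaxies on random sight lines, 250 km/s errors.
rng = np.random.default_rng(6)
n = 2000
rhat = rng.normal(size=(n, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)
U_true = np.array([300.0, -100.0, 50.0])          # km/s
sigma = np.full(n, 250.0)
s = rhat @ U_true + sigma * rng.normal(size=n)
U_hat, cov = bulk_flow_mle(rhat, s, sigma)
print(U_hat, np.sqrt(np.diag(cov)))
```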

Static Activation Function Normalization [article]

Pierre H. Richemond, Yike Guo
2019 arXiv   pre-print
Recent seminal work at the intersection of deep neural network practice and random matrix theory has linked the convergence speed and robustness of these networks with the combination of random weight  ...  Building on those principles, we introduce a process to transform an existing activation function into another one with better properties. We term such a transform static activation normalization.  ...  Hopefully, that work will enable proactive activation scoring based on such properties as their Gaussian moments.  ...
arXiv:1905.01369v1 fatcat:hkhjmvgxuja5bp4cqmnlfa4tb4
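
One plausible reading of such a transform: shift and rescale an activation so that its output has zero mean and unit variance under a standard Gaussian input, computed from its Gaussian moments. A sketch under that assumption (the paper's exact transform may differ):

```python
import numpy as np

def normalize_activation(phi, n_quad=200):
    """Return a version of `phi` with zero-mean, unit-variance output
    under a standard Gaussian input, via Gauss-Hermite quadrature."""
    # hermegauss uses weight exp(-z^2 / 2); rescale to the N(0,1) density.
    z, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / np.sqrt(2 * np.pi)
    mu = np.sum(w * phi(z))
    var = np.sum(w * (phi(z) - mu) ** 2)
    return lambda x: (phi(x) - mu) / np.sqrt(var)

# Example: normalized ReLU (mean 1/sqrt(2*pi), variance 1/2 - 1/(2*pi)).
relu_n = normalize_activation(lambda x: np.maximum(x, 0.0))
print(relu_n(np.array([-1.0, 0.0, 1.0])))
```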

Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning

Chi-Ken Lu, Patrick Shafto
2021 Entropy  
It is desirable to combine the expressive power of deep learning with Gaussian Processes (GPs) in one expressive Bayesian learning model.  ...  Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by the hyperdata and the exposed GP remains zero mean.  ...  Deep hierarchical SVMs and PCAs were introduced in [37]. Moment matching is a way to approximate a complex distribution with, for instance, a Gaussian, by capturing the mean and the second moment.  ...
doi:10.3390/e23111387 pmid:34828085 pmcid:PMC8618322 fatcat:ozm3uarzdnfcfpvncva2omd6c4
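
The moment-matching step described in the snippet reduces, for a Gaussian approximating family, to matching the mean and second moment of the target. A toy numpy illustration, with a Gaussian mixture standing in for the "complex distribution":

```python
import numpy as np

rng = np.random.default_rng(7)

# Target: a two-component Gaussian mixture, sampled.
pick = rng.uniform(size=100_000) < 0.3
z = np.where(pick,
             rng.normal(-2.0, 0.5, size=100_000),
             rng.normal(1.0, 1.0, size=100_000))

# Moment matching: the Gaussian approximation just takes the target's
# mean and second moment.
mu, m2 = z.mean(), (z ** 2).mean()
sigma2 = m2 - mu ** 2
print(f"matched Gaussian: N({mu:.3f}, {sigma2:.3f})")
```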

Conditional Deep Gaussian Processes: empirical Bayes hyperdata learning [article]

Chi-Ken Lu, Patrick Shafto
2021 arXiv   pre-print
It is desirable to combine the expressive power of deep learning with Gaussian Processes (GPs) in one expressive Bayesian learning model.  ...  Here, we propose the conditional Deep Gaussian Process (DGP), in which the intermediate GPs in the hierarchical composition are supported by the hyperdata and the exposed GP remains zero mean.  ...  Deep hierarchical SVMs and PCAs were introduced in [37]. Moment matching is a way to approximate a complex distribution with, for instance, a Gaussian, by capturing the mean and the second moment.  ...
arXiv:2110.00568v1 fatcat:2sk2uzdre5aztjb3hvzufzvrl4

A Comparative Study of Object Classification Methods Using 3D Zernike Moment on 3D Point Clouds

Erdal Özbay, Ahmet Çınar
2019 Traitement du signal  
The Fine Gaussian SVM gives the best accuracy (96.0%) according to built-in cross-validation results.  ...  Object classification has been applied to a dataset of labeled 3D Zernike moment features obtained from the 3D point cloud.  ...  The reconstruction of a 3D point cloud from a group of 3D Zernike moments is a simple and efficient process.  ...
doi:10.18280/ts.360610 fatcat:2xsybrxrrbdwxf36n5fpdetrzu
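
A cross-validated Gaussian-kernel SVM of the kind reported corresponds, in scikit-learn terms, to an RBF SVM with a small kernel scale (large gamma). A sketch with entirely placeholder data; `features`, `labels`, and the gamma value are assumptions made here, not the paper's setup:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder matrix standing in for 3D Zernike moment descriptors.
rng = np.random.default_rng(8)
features = rng.normal(size=(300, 49))   # hypothetical moment features
labels = rng.integers(0, 4, size=300)   # hypothetical object classes

# "Fine Gaussian" roughly means an RBF kernel with a small kernel scale,
# i.e. a relatively large gamma; the value below is illustrative.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=0.25, C=1.0))
scores = cross_val_score(clf, features, labels, cv=5)
print(scores.mean())
```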

Online Algorithms for Sum-Product Networks with Continuous Variables

Priyank Jaini, Abdullah Rashwan, Han Zhao, Yue Liu, Ershad Banijamali, Zhitang Chen, Pascal Poupart
2016 European Workshop on Probabilistic Graphical Models  
Sum-product networks (SPNs) have recently emerged as an attractive representation due to their dual interpretation as a special type of deep neural network with clear semantics and a tractable probabilistic  ...  More specifically, we consider SPNs with Gaussian leaf distributions and show how to derive an online Bayesian moment matching algorithm to learn from streaming data.  ...  Most deep neural networks are function approximators without any generative capability and cannot be interpreted as probabilistic graphical models.  ... 
dblp:conf/pgm/JainiRZLBCP16 fatcat:oegmhwckqfcrbjhk22dkdawp3u
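
The online flavor of learning Gaussian leaf parameters from streaming data can be seen in a one-pass mean/variance recursion (Welford's algorithm); the paper's Bayesian moment matching additionally maintains a posterior over the leaf parameters, which this sketch does not:

```python
import numpy as np

class OnlineGaussianLeaf:
    """Streaming mean and variance of a Gaussian leaf, one sample at a time."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's numerically stable running-moment update.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def var(self):
        return self.m2 / self.n if self.n > 1 else float("nan")

leaf = OnlineGaussianLeaf()
for x in np.random.default_rng(9).normal(2.0, 3.0, size=10_000):
    leaf.update(x)
print(leaf.mean, leaf.var)  # close to 2.0 and 9.0
```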

Interpretable and Differentially Private Predictions [article]

Frederik Harder, Matthias Bauer, Mijung Park
2020 arXiv   pre-print
This raises the central question addressed in this paper: Can models be interpretable without compromising privacy?  ...  Interpretable predictions, where it is clear why a machine learning model has made a particular decision, can compromise privacy by revealing the characteristics of individual data points.  ...  Park is grateful to Gilles Barthe for inspiring the study of the trade-off between interpretability and privacy.  ...
arXiv:1906.02004v4 fatcat:gp7o6tq6vbhoris2mtbk6rvpdm

Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks

Randall Balestriero, Sebastien Paris, Richard G. Baraniuk
2020 Neural Information Processing Systems  
Deep Generative Networks (DGNs) with probabilistic modeling of their output and latent space are currently trained via Variational Autoencoders (VAEs).  ...  We exploit the Continuous Piecewise Affine property of modern DGNs to derive their posterior and marginal distributions as well as the latter's first two moments.  ...  The Gaussian integral on a region ω (and its moments) cannot in general be obtained by direct integration unless ω is a rectangular region [46, 47] or is polytopal with at most S faces [48].  ...
dblp:conf/nips/BalestrieroPB20 fatcat:6w43a4khhvaotd3qcvbp7fupcq
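
The rectangular special case cited in the snippet is the one where the Gaussian integral factorizes into one-dimensional CDF differences; for a general polytopal region ω no such factorization exists, which is the point of the cited results. A sketch of the rectangular case, assuming a diagonal covariance:

```python
import numpy as np
from scipy.stats import norm

def gaussian_rect_prob(mu, sigma, a, b):
    """P(a <= X <= b) coordinate-wise for X ~ N(mu, diag(sigma^2)):
    the rectangular region where the Gaussian integral factorizes."""
    return np.prod(norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma))

mu = np.array([0.0, 1.0])
sigma = np.array([1.0, 2.0])
print(gaussian_rect_prob(mu, sigma,
                         np.array([-1.0, 0.0]), np.array([1.0, 3.0])))
```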
Showing results 1 — 15 out of 44,340 results