
A Scalable Method for Exact Sampling from Kronecker Family Models

Sebastian Moreno, Joseph J. Pfeiffer, Jennifer Neville, Sergey Kirshner
2014 2014 IEEE International Conference on Data Mining  
Notably, our mKPGM algorithm is the first available scalable sampling method for this model, and our KPGM algorithm is both faster and more accurate than previous scalable methods.  ...  To address this issue, we develop a new representation that exploits the structure of Kronecker models and facilitates the development of novel grouped sampling methods that are provably correct.  ...  In the future, we will apply the GP sampling ideas to develop scalable sampling methods for other statistical network models that sample edges from a probability matrix.  ... 
doi:10.1109/icdm.2014.148 dblp:conf/icdm/MorenoPNK14 fatcat:aiiqggnqubfufbueydlp2bwffe
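
As background for this entry: in a Kronecker product graph model (KPGM), the edge-probability matrix is the k-th Kronecker power of a small initiator matrix, and a graph is drawn by one independent Bernoulli trial per entry. Below is a minimal sketch of that naive O(n^2) exact sampler, i.e. the baseline the paper's grouped sampling methods improve on, not the authors' algorithm; all names are illustrative.

```python
# Naive exact sampling from a KPGM: materialise the full probability
# matrix as a Kronecker power and flip one coin per potential edge.
import numpy as np

def kpgm_sample_naive(theta, k, rng=None):
    """theta: b x b initiator of probabilities in [0, 1]; k: Kronecker power."""
    rng = np.random.default_rng() if rng is None else rng
    P = np.array([[1.0]])
    for _ in range(k):                      # P = theta (x) theta (x) ... (x) theta
        P = np.kron(P, theta)
    return rng.random(P.shape) < P          # boolean adjacency of one sample

theta = np.array([[0.9, 0.5],
                  [0.5, 0.3]])
A = kpgm_sample_naive(theta, k=8)           # 256 x 256 graph
print(A.sum(), "edges")
```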

Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP) [article]

Andrew Gordon Wilson, Hannes Nickisch
2015 arXiv   pre-print
We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs).  ...  SKI also provides a mechanism to create new scalable kernel methods, through choosing different kernel interpolation strategies.  ...  To test SKI and FITC for kernel learning, we sample data from a GP which uses a known ground truth kernel, and then attempt to learn this kernel from the data.  ... 
arXiv:1503.01057v1 fatcat:axtiojwk4na2xdpaq663sy5pb4
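
A minimal 1-D sketch of the SKI construction the abstract describes: the exact kernel matrix K(X, X) is approximated by W K_UU W^T, where U is a regular grid of inducing points and W holds sparse interpolation weights (linear here; KISS-GP itself uses local cubic interpolation). Function names below are illustrative, not the authors' API.

```python
# Structured kernel interpolation in 1-D: approximate rbf(x, x) by
# interpolating each input onto a regular grid of inducing points.
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def interp_weights(x, grid):
    """Linear interpolation of each x onto its two neighbouring grid points."""
    W = np.zeros((len(x), len(grid)))
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    frac = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
    W[np.arange(len(x)), idx] = 1 - frac
    W[np.arange(len(x)), idx + 1] = frac
    return W

x = np.sort(np.random.rand(50) * 10)
grid = np.linspace(0, 10, 100)
W = interp_weights(x, grid)                # sparse: two nonzeros per row
K_ski = W @ rbf(grid, grid) @ W.T          # approximates rbf(x, x)
print(np.abs(K_ski - rbf(x, x)).max())     # small interpolation error
```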

Estimating Model Uncertainty of Neural Networks in Sparse Information Form [article]

Jongseok Lee, Matthias Humt, Jianxiang Feng, Rudolph Triebel
2020 arXiv   pre-print
As a result, we show that the information form can be scalably applied to represent model uncertainty in DNNs.  ...  We present a sparse representation of model uncertainty for Deep Neural Networks (DNNs) where the parameter posterior is approximated with an inverse formulation of the Multivariate Normal Distribution  ...  Jianxiang Feng is supported by the Munich School for Data Science (MUDS) and Rudolph Triebel is a member of MUDS.  ... 
arXiv:2006.11631v1 fatcat:2fwwrpi7ere2djavxzcz627xmy
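
A toy sketch of what "information form" means in this entry: the Laplace posterior is kept as a precision (information) matrix Lambda = H + tau*I rather than a covariance, and sparsity is imposed on Lambda directly. The Hessian proxy, the thresholding rule, and tau below are all illustrative assumptions, not the paper's construction.

```python
# Laplace posterior in information (precision) form, sparsified directly.
import numpy as np

rng = np.random.default_rng(0)
d = 20
J = rng.standard_normal((100, d))
H = J.T @ J / 100                      # GGN/Fisher-style Hessian stand-in
Lam = H + 1.0 * np.eye(d)              # information matrix, prior precision 1
Lam[np.abs(Lam) < 0.05] = 0.0          # impose sparsity in information form
Lam = 0.5 * (Lam + Lam.T)              # keep it symmetric

# Sampling: if Lam = L L^T, then theta = theta_MAP + L^{-T} eps
# has covariance Lam^{-1}, so we never invert or store the covariance.
L = np.linalg.cholesky(Lam)
theta_map = np.zeros(d)
sample = theta_map + np.linalg.solve(L.T, rng.standard_normal(d))
```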

A Scalable Laplace Approximation for Neural Networks

Hippolyt Ritter, Aleksandar Botev, David Barber
2018 International Conference on Learning Representations  
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network.  ...  We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network.  ...  We thank the anonymous reviewers for their feedback and Harshil Shah for his comments on an earlier draft of this paper.  ... 
dblp:conf/iclr/RitterBB18 fatcat:jxqccjfezfedvflrl22e4xau7e
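
A minimal sketch of sampling one layer's weights from a Kronecker-factored Laplace posterior of the kind this entry describes: the precision over vec(W) is approximated as a Kronecker product of an activation factor A and a pre-activation-gradient factor G (KFAC-style). The damping scheme and the random stand-in factors below are illustrative.

```python
# Matrix-normal sampling under a Kronecker-factored precision (A_d (x) G_d).
import numpy as np

rng = np.random.default_rng(1)
n_out, n_in, tau = 5, 8, 1e-2
A = np.cov(rng.standard_normal((n_in, 200)))    # stand-in activation factor
G = np.cov(rng.standard_normal((n_out, 200)))   # stand-in gradient factor
A_d = A + np.sqrt(tau) * np.eye(n_in)           # illustrative damping
G_d = G + np.sqrt(tau) * np.eye(n_out)

# If precision = A_d (x) G_d, a matrix-normal sample is
#   W = M + chol(G_d^{-1}) E chol(A_d^{-1})^T,  E ~ N(0, I).
L_out = np.linalg.cholesky(np.linalg.inv(G_d))
L_in = np.linalg.cholesky(np.linalg.inv(A_d))
M = np.zeros((n_out, n_in))                     # MAP weights of the layer
W = M + L_out @ rng.standard_normal((n_out, n_in)) @ L_in.T
```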

Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition [article]

Pavel Izmailov, Alexander Novikov, Dmitry Kropotov
2018 arXiv   pre-print
We propose a method (TT-GP) for approximate inference in Gaussian Process (GP) models.  ...  A neural network learns a multidimensional embedding for the data, which is used by the GP to make the final prediction.  ...  Discussion: We proposed the TT-GP method for scalable inference in Gaussian process models for regression and classification.  ... 
arXiv:1710.07324v2 fatcat:bwtgs7udy5hrniyw5qidv5vxzu
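
A minimal sketch of the storage idea behind TT-GP: inducing inputs lie on a multidimensional grid (a Cartesian product of 1-D grids), so their count grows exponentially with dimension, while vectors over the grid, such as the variational mean, are held in tensor-train (TT) format as small 3-D cores. Shapes and ranks below are illustrative; the full expansion is avoided entirely in practice.

```python
# Tensor-train storage of a vector over a multidimensional grid.
import numpy as np

d, m, r = 4, 10, 3                       # dims, grid points per dim, TT-rank
rng = np.random.default_rng(2)
# TT cores: shapes (1,m,r), (r,m,r), (r,m,r), (r,m,1) -> 10^4 grid values
cores = [rng.standard_normal((1 if i == 0 else r, m,
                              1 if i == d - 1 else r)) * 0.1
         for i in range(d)]

def tt_entry(cores, idx):
    """Evaluate one entry of the TT-represented vector without expanding it."""
    v = cores[0][:, idx[0], :]
    for core, j in zip(cores[1:], idx[1:]):
        v = v @ core[:, j, :]
    return v[0, 0]

print(tt_entry(cores, (3, 1, 4, 1)))     # one of m**d = 10,000 grid values
# Storage: sum of core sizes ~ d*m*r^2 numbers vs m**d for the full grid.
```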

Deep Kernel Learning [article]

Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing
2015 arXiv   pre-print
On a large and diverse collection of applications, including a dataset with 2 million examples, we show improved performance over scalable Gaussian processes with flexible kernel learning models, and stand-alone  ...  methods) for a scalable kernel representation.  ... 
arXiv:1511.02222v1 fatcat:guzfr767yfaupjvorzo2rycmzy
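
A minimal sketch of a deep kernel as described above: a network g embeds the inputs and a standard RBF kernel is applied to the embeddings, k(x, x') = k_rbf(g(x), g(x')). In deep kernel learning the network and kernel hyperparameters are trained jointly through the GP marginal likelihood; that training loop is omitted here, and the tiny random MLP is purely illustrative.

```python
# A deep kernel: RBF applied to neural-network embeddings of the inputs.
import numpy as np

rng = np.random.default_rng(3)
W1, W2 = rng.standard_normal((10, 5)), rng.standard_normal((2, 10))

def g(X):                                # tiny MLP embedding: 5 -> 10 -> 2
    return np.tanh(X @ W1.T) @ W2.T

def deep_kernel(X, Z, ls=1.0):
    E, F = g(X), g(Z)
    sq = ((E[:, None, :] - F[None, :, :])**2).sum(-1)
    return np.exp(-0.5 * sq / ls**2)

X = rng.standard_normal((20, 5))
K = deep_kernel(X, X)                    # 20 x 20 PSD kernel matrix
```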

Back to the Past: Source Identification in Diffusion Networks from Partially Observed Cascades [article]

Mehrdad Farajtabar and Manuel Gomez-Rodriguez and Nan Du and Mohammad Zamani and Hongyuan Zha and Le Song
2015 arXiv   pre-print
In this paper, we tackle this problem by developing a two-stage framework, which first learns a continuous-time diffusion network model based on historical diffusion traces and then identifies the source  ...  Being able to do so is critical for curtailing the spread of malicious information, and reducing the potential losses incurred.  ...  Acknowledgements This work was supported in part by NSF/NIH BIGDATA 1R01GM108341, NSF IIS-1116886, NSF CAREER IIS-1350983 and a Raytheon Faculty Fellowship to L.S.  ... 
arXiv:1501.06582v1 fatcat:jwev2e2xvzgn3o5effvl5tosaa
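
A minimal Monte Carlo stand-in for the second stage of the framework described above (source identification): given an already-learned continuous-time diffusion model, here reduced to a matrix of exponential transmission rates, each candidate source is scored by how well cascades simulated from it reproduce the observed infection pattern. This simulation-based scoring is an illustrative substitute for the paper's actual inference procedure.

```python
# Score candidate sources by agreement of simulated and observed cascades.
import numpy as np

rng = np.random.default_rng(4)
n = 30
rates = rng.random((n, n)) * (rng.random((n, n)) < 0.1)  # sparse rate matrix

def simulate(src, horizon=3.0):
    """Draw pairwise exponential delays, then relax infection times."""
    samp = rng.exponential(1.0, (n, n))
    delay = np.where(rates > 0, samp / np.maximum(rates, 1e-12), np.inf)
    t = np.full(n, np.inf); t[src] = 0.0
    for _ in range(n):                   # Bellman-Ford-style relaxation
        t = np.minimum(t, (t[:, None] + delay).min(axis=0))
    return t <= horizon                  # who is infected by the horizon

observed = simulate(src=7)               # pretend these are the observations
scores = [np.mean([(simulate(s) == observed).mean() for _ in range(20)])
          for s in range(n)]
print("guessed source:", int(np.argmax(scores)))
```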

Scalable Large Near-Clique Detection in Large-Scale Networks via Sampling

Michael Mitzenmacher, Jakub Pachocki, Richard Peng, Charalampos Tsourakakis, Shen Chen Xu
2015 Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15  
We believe that our work is a significant advance in routines with rigorous theoretical guarantees for scalable extraction of large near-cliques from networks.  ...  We also use our methods to study how the k-clique densest subgraphs change as a function of time in time-evolving networks for various small values of k.  ...  As an example of the utility of our method, we compare our collection of realworld networks against stochastic Kronecker graphs [44] , a popular random graph model that mimics real-world networks in certain  ... 
doi:10.1145/2783258.2783385 dblp:conf/kdd/MitzenmacherPPT15 fatcat:gmn7sydbezhx3etolrwjameehu
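
A minimal sketch of the sampling idea for the k = 3 (triangle) case of near-clique detection: sample wedges (length-2 paths) uniformly and check whether they close into triangles, yielding per-vertex triangle-count estimates that could drive a peeling heuristic. This is an illustrative toy, not the authors' algorithm; the peeling step is omitted.

```python
# Estimate per-vertex triangle counts by sampled wedge closure.
import numpy as np

rng = np.random.default_rng(5)
n = 50
A = np.triu(rng.random((n, n)) < 0.15, 1)
A = A | A.T                              # symmetric adjacency, no self-loops
neigh = [np.flatnonzero(A[v]) for v in range(n)]

counts = np.zeros(n)
for _ in range(20000):                   # sample a wedge u - v - w at random
    v = rng.integers(n)
    if len(neigh[v]) < 2:
        continue
    u, w = rng.choice(neigh[v], size=2, replace=False)
    if A[u, w]:                          # wedge closes: found a triangle
        counts[[u, v, w]] += 1

print("top near-clique candidates:", np.argsort(counts)[-5:])
```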

Discretely Relaxing Continuous Variables for tractable Variational Inference [article]

Trefor W. Evans, Prasanth B. Nair
2019 arXiv   pre-print
We explore a new research direction in Bayesian variational inference with discrete latent variable priors where we exploit Kronecker matrix algebra for efficient and exact computations of the evidence  ...  The DIRECT approach is not practical for all likelihoods; however, we identify a popular model structure for which it is practical, and demonstrate accurate inference using latent variables discretized as extremely  ...  Samples from the DIRECT models on the electric dataset are over 99.6% sparse.  ... 
arXiv:1809.04279v3 fatcat:ure3avdkujbszkrtluxjzyrt6y
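
A minimal sketch of the Kronecker trick underlying DIRECT: with independent discrete latent variables, the joint support has s^d configurations, but expectations of separable (Kronecker-structured) functions reduce to a product of d small sums, so evidence terms can be computed exactly without enumeration. The per-variable factor below is illustrative.

```python
# Exact expectation over an exponentially large discrete support,
# exploiting separability: E[prod_i f(w_i)] = prod_i E[f(w_i)].
import numpy as np

rng = np.random.default_rng(6)
d, s = 10, 3                               # 3^10 = 59,049 joint configurations
vals = np.array([-1.0, 0.0, 1.0])          # support of each discrete variable
probs = rng.dirichlet(np.ones(s), size=d)  # independent categorical q(w_i)

f = np.exp(vals)                           # any separable per-variable factor
exact = np.prod(probs @ f)                 # d small sums, no enumeration

# brute-force check on the exponentially large support (only feasible here)
grids = np.stack(np.meshgrid(*[vals] * d, indexing="ij"), -1).reshape(-1, d)
pr = np.prod([probs[i][np.searchsorted(vals, grids[:, i])] for i in range(d)], 0)
brute = (pr * np.exp(grids).prod(1)).sum()
print(np.isclose(exact, brute))            # True
```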

Scalable Betweenness Centrality Maximization via Sampling

Ahmad Mahmoody, Charalampos E. Tsourakakis, Eli Upfal
2016 Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16  
Then, we compare the sampling method used by the state-of-the-art algorithm with our method.  ...  Finally, we compare the performance of the stochastic Kronecker model [28] to real data, and observe that it generates a similar growth pattern.  ...  We also provide a comparison between the method in [40] and our sampling method. Applications. Our scalable algorithm enables us to study some interesting characteristics of the central nodes.  ... 
doi:10.1145/2939672.2939869 dblp:conf/kdd/MahmoodyTU16 fatcat:n624rdn2yfdvfh5elfjweyqdfe
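
A minimal sketch of the sampling recipe this entry builds on: sample source/target pairs, record one shortest path per pair via BFS, and greedily select the k vertices that cover the most sampled paths (the standard greedy for this submodular coverage surrogate). The graph, sample sizes, and k below are illustrative.

```python
# Greedy betweenness-style coverage of sampled shortest paths.
import numpy as np
from collections import deque

rng = np.random.default_rng(7)
n = 60
adj = [set() for _ in range(n)]
for _ in range(150):
    u, v = rng.integers(n, size=2)
    if u != v:
        adj[u].add(v); adj[v].add(u)

def bfs_path(s, t):
    """One shortest s-t path; returns its interior vertices, or None."""
    par = {s: None}; q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None: path.append(u); u = par[u]
            return set(path[1:-1])       # interior vertices only
        for w in adj[u]:
            if w not in par: par[w] = u; q.append(w)
    return None

paths = [p for _ in range(500)
         for p in [bfs_path(*rng.integers(n, size=2))] if p]
chosen = set()
for _ in range(5):                       # greedy coverage of sampled paths
    gains = [(sum((v in p) and not (p & chosen) for p in paths), v)
             for v in range(n) if v not in chosen]
    chosen.add(max(gains)[1])
print("central set:", chosen)
```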

Scaling Multidimensional Inference for Structured Gaussian Processes [article]

Elad Gilboa, Yunus Saatçi, John P. Cunningham
2012 arXiv   pre-print
We present new methods for additive GPs, showing a novel connection between the classic backfitting method and the Bayesian framework.  ...  Exact Gaussian Process (GP) regression has O(N^3) runtime for data size N, making it intractable for large N.  ...  We compare the exact GP-grid method from Section 2.3 to the naive Full-GP method and show an application for this method in image reconstruction.  ... 
arXiv:1209.4120v2 fatcat:w5tjhbxgmzdgxhzsmrmd37g3zm
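
A minimal sketch of exact GP-grid inference as described above: when inputs form a Cartesian grid, K = K1 (x) K2, and eigendecomposing the small factors yields (K + sigma^2 I)^{-1} y through Kronecker matrix-vector products without ever forming the full N x N kernel. The dense comparison at the end is only feasible because the example is tiny.

```python
# Exact GP solve on a grid via Kronecker eigen-structure.
import numpy as np

def rbf(a, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - a[None, :])**2 / ls**2)

x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)   # 1200-point grid
K1, K2 = rbf(x1), rbf(x2)
s1, Q1 = np.linalg.eigh(K1)
s2, Q2 = np.linalg.eigh(K2)

y = np.random.default_rng(8).standard_normal(30 * 40)
sigma2 = 0.1
# alpha = (K1 (x) K2 + sigma^2 I)^{-1} y, using the factor eigensystems:
Y = y.reshape(30, 40)
T = Q1.T @ Y @ Q2                        # apply (Q1 (x) Q2)^T as matrix products
T /= (np.outer(s1, s2) + sigma2)         # divide by eigenvalues + noise
alpha = (Q1 @ T @ Q2.T).ravel()          # apply (Q1 (x) Q2)

# check against the naive dense solve
K = np.kron(K1, K2)
print(np.allclose(alpha, np.linalg.solve(K + sigma2 * np.eye(1200), y)))
```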

Blitzkriging: Kronecker-structured Stochastic Gaussian Processes [article]

Thomas Nickson, Tom Gunter, Chris Lloyd, Michael A Osborne, Stephen Roberts
2015 arXiv   pre-print
We present Blitzkriging, a new approach to fast inference for Gaussian processes, applicable to regression, optimisation and classification.  ...  State-of-the-art (stochastic) inference for Gaussian processes on very large datasets scales cubically in the number of 'inducing inputs', variables introduced to factorise the model.  ...  Figure 2: Periodic signal reconstruction from non-gridded data. Figure 3: Likelihood compared to run time for Blitzkriging and the SVGP from GPy on samples drawn from a GP.  ... 
arXiv:1510.07965v2 fatcat:25oti5vhbfftbblrntg4u2xh3q

Bayesian Optimization Meets Laplace Approximation for Robotic Introspection [article]

Matthias Humt, Jongseok Lee, Rudolph Triebel
2020 arXiv   pre-print
This impedes the potential deployments of DL methods for long-term autonomy.  ...  Therefore, in this paper we introduce a scalable Laplace Approximation (LA) technique to make Deep Neural Networks (DNNs) more introspective, i.e. to enable them to provide accurate assessments of their  ...  By exploiting Kronecker factorization of the Hessian, more expressive posterior families than the Bernoulli distribution or diagonal approximations of the covariance matrix can further be modelled, even  ... 
arXiv:2010.16141v1 fatcat:qsp4dbqttvfwphkiu4wjqkmd4y
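
A minimal sketch of the tuning problem this entry addresses: the Laplace approximation leaves hyperparameters (here a single prior precision tau) that strongly affect predictive uncertainty, and they can be chosen by maximizing validation log-likelihood. A plain sweep stands in below for the paper's Bayesian optimization, and the 1-D linear-Gaussian model is purely illustrative.

```python
# Choose the Laplace prior precision by validation log-likelihood.
import numpy as np

rng = np.random.default_rng(9)
x = rng.standard_normal(100); y = 2.0 * x + 0.3 * rng.standard_normal(100)
xv, yv = x[80:], y[80:]; x, y = x[:80], y[:80]
sig2 = 0.3**2                              # assumed known noise variance

def val_loglik(tau):
    H = (x**2).sum() / sig2 + tau          # posterior precision of the slope
    w = (x * y).sum() / (sig2 * H)         # posterior mean (MAP) slope
    var = xv**2 / H + sig2                 # predictive variance on validation
    return -0.5 * np.sum((yv - w * xv)**2 / var + np.log(2 * np.pi * var))

taus = np.logspace(-3, 3, 25)              # BO would search this space instead
print("best tau:", taus[np.argmax([val_loglik(t) for t in taus])])
```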

Eigenvalue Corrected Noisy Natural Gradient [article]

Juhan Bae, Guodong Zhang, Roger Grosse
2018 arXiv   pre-print
The proposed method computes the full diagonal re-scaling factor in the Kronecker-factored eigenbasis.  ...  A recently proposed method, noisy natural gradient, is a surprisingly simple method to fit expressive posteriors by adding weight noise to regular natural gradient updates.  ...  Sampling from an eigenvalue-corrected matrix-variate distribution is also a special case of sampling from a multivariate Gaussian distribution.  ... 
arXiv:1811.12565v1 fatcat:qp37ticqxbc65et4pzuhwk5kxu
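
A minimal sketch of the sampling step quoted above: with covariance (Q_V (x) Q_U) diag(s) (Q_V (x) Q_U)^T, i.e. a full diagonal rescaling in a Kronecker-factored eigenbasis, a matrix-variate sample is M + Q_U (sqrt(s) * E) Q_V^T with E standard normal, which is indeed a special case of multivariate Gaussian sampling. Shapes and the random orthogonal stand-in factors below are illustrative.

```python
# Sample weights under a full diagonal rescaling in a Kronecker eigenbasis.
import numpy as np

rng = np.random.default_rng(10)
n_out, n_in = 6, 9
Q_U, _ = np.linalg.qr(rng.standard_normal((n_out, n_out)))  # eigenbasis factors
Q_V, _ = np.linalg.qr(rng.standard_normal((n_in, n_in)))
s = rng.random((n_out, n_in)) + 0.1      # full per-direction variances

M = np.zeros((n_out, n_in))              # mean weights
E = rng.standard_normal((n_out, n_in))
W = M + Q_U @ (np.sqrt(s) * E) @ Q_V.T   # one posterior weight sample
```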

Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin
2020 Neural Information Processing Systems  
GPFADS also provides a probabilistic generalization of jPCA, a method originally developed for identifying latent rotational dynamics in neural data.  ...  This problem can be approached using Gaussian process (GP)-based methods which provide uncertainty quantification and principled model selection.  ...  Moreover, BMI algorithms often need to be run online, which the scalability of our method would also permit.  ... 
dblp:conf/nips/RuttenBSH20 fatcat:3em6fsahxjbqvderl3p53ni4fa
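
A minimal sketch of the jPCA core that GPFADS generalizes probabilistically: fit a skew-symmetric dynamics matrix M (pure rotation, no expansion or contraction) to state/derivative data by least squares, min_M ||dX - X M||^2 subject to M^T = -M, handled here by solving in a basis of skew-symmetric matrices. Data and dimensions are illustrative.

```python
# jPCA-style constrained least squares: skew-symmetric dynamics fit.
import numpy as np

rng = np.random.default_rng(11)
d, T = 4, 200
M_true = rng.standard_normal((d, d)); M_true = M_true - M_true.T
X = rng.standard_normal((T, d))
dX = X @ M_true + 0.01 * rng.standard_normal((T, d))

pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]
basis = []
for i, j in pairs:                       # E_ij - E_ji spans the skew matrices
    B = np.zeros((d, d)); B[i, j], B[j, i] = 1.0, -1.0
    basis.append(B)
A = np.stack([(X @ B).ravel() for B in basis], axis=1)
coef, *_ = np.linalg.lstsq(A, dX.ravel(), rcond=None)
M_hat = sum(c * B for c, B in zip(coef, basis))
print(np.abs(M_hat - M_true).max())      # close to zero

```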
Showing results 1 — 15 out of 1,120 results