
Interpretable VAEs for nonlinear group factor analysis [article]

Samuel Ainsworth, Nicholas Foti, Adrian KC Lee, Emily Fox
2018 arXiv   pre-print
We combine a structured VAE composed of group-specific generators with a sparsity-inducing prior.  ...  However, traditional LFMs are limited by assuming a linear correlation structure.  ...  The authors also gratefully acknowledge the support of NVIDIA Corporation for the donated GPU used for this research.  ... 
arXiv:1802.06765v1 fatcat:3pmdbre6avbcniijnbmpjccruu

Advances in Bayesian network modelling: Integration of modelling technologies

Bruce G. Marcot, Trent D. Penman
2019 Environmental Modelling & Software  
Increasingly, BN models are being integrated with: management decision networks; structural equation modeling of causal networks; Bayesian neural networks; combined discrete and continuous variables; object-oriented  ...  Advances include improving areas of Bayesian classifiers and machine-learning algorithms for model structuring and parameterization, and development of time-dynamic models.  ...  Acknowledgments Inspiration for this paper comes from a keynote address given by the senior author in 2017 at the Joint Conference of the Australasian Bayesian Network Modelling Society and the Society  ... 
doi:10.1016/j.envsoft.2018.09.016 fatcat:r3r75adpbva3lbqfl5fijmg7ki

Nonparametric Variational Auto-encoders for Hierarchical Representation Learning [article]

Prasoon Goyal, Zhiting Hu, Xiaodan Liang, Chenyu Wang, Eric Xing
2017 arXiv   pre-print
The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods.  ...  The resulting model induces a hierarchical structure of latent semantic concepts underlying the data corpus, and infers accurate representations of data instances.  ...  This work was partly funded by NSF IIS1563887, NSF IIS1447676, ONR N000141410684, and ONR N000141712463. Xiaodan Liang is supported by the Department of Defense under Contract No.  ... 
arXiv:1703.07027v2 fatcat:fllrzupobrfhjpjeejjgzgmve4

Probabilistic Data Analysis with Probabilistic Programming [article]

Feras Saad, Vikash Mansinghka
2016 arXiv   pre-print
Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs.  ...  First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines  ...  This research was supported by DARPA (PPAML program, contract number FA8750-14-2-0004), IARPA (under research contract 2015-15061000003), the Office of Naval Research (under research contract N000141310333  ... 
arXiv:1608.05347v1 fatcat:cy3ddgzb5rdzxctz7lfzoecm4u

Dense Uncertainty Estimation [article]

Jing Zhang, Yuchao Dai, Mochu Xiang, Deng-Ping Fan, Peyman Moghadam, Mingyi He, Christian Walder, Kaihao Zhang, Mehrtash Harandi, Nick Barnes
2021 arXiv   pre-print
Bayesian Neural Networks) or including latent variables (i.e. generative models) to explore the contribution of latent variables for model predictions, leading to stochastic predictions during testing.  ...  In this way, a specific weights set is estimated while ignoring any uncertainty that may occur in the proper weight space.  ...  Bayesian Neural Networks aim to learn a distribution over each of the network parameters by placing a prior probability distribution over network weights.  ... 
arXiv:2110.06427v1 fatcat:a4f3tcjyz5ftdokazwntegqojm
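The snippet above describes the core mechanism of Bayesian Neural Networks: placing a prior distribution over network weights, which makes predictions stochastic at test time. A minimal Monte Carlo sketch of that idea (the toy one-layer architecture, `prior_std`, and all names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def bnn_predict(x, n_samples=200, prior_std=1.0):
    """Sketch of a Bayesian one-layer network: each forward pass samples
    weights from a Gaussian prior, so repeated passes yield a distribution
    of predictions rather than a single point estimate."""
    preds = []
    for _ in range(n_samples):
        W1 = rng.normal(0.0, prior_std, size=(1, 16))  # hidden weights ~ prior
        b1 = rng.normal(0.0, prior_std, size=16)
        W2 = rng.normal(0.0, prior_std, size=(16, 1))  # output weights ~ prior
        h = np.tanh(x @ W1 + b1)
        preds.append((h @ W2).item())
    preds = np.asarray(preds)
    # Stochastic prediction: report both a mean and a spread (uncertainty).
    return preds.mean(), preds.std()

mean, std = bnn_predict(np.array([[0.5]]))
```

In a real BNN the prior would be combined with data to form a posterior over weights; here sampling from the prior alone is enough to show why the predictions become stochastic.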

Bayesian Structure Adaptation for Continual Learning [article]

Abhishek Kumar, Sunabha Chatterjee, Piyush Rai
2020 arXiv   pre-print
We present a novel Bayesian approach to continual learning based on learning the structure of deep neural networks, addressing the shortcomings of both these approaches.  ...  The proposed model learns the deep structure for each task by learning which weights to use, and supports inter-task transfer through the overlapping of different sparse subsets of weights learned  ...  LEARNED NETWORK STRUCTURES In this section, we analyse the network structures that were learned after training our model.  ... 
arXiv:1912.03624v2 fatcat:bk4nd7dazjhizhaj2dbzogo23i

Review of Statistical Learning Methods in Integrated Omics Studies (An Integrated Information Science)

Irene Sui Lan Zeng, Thomas Lumley
2018 Bioinformatics and Biology Insights  
The penalty term induces sparsity in the weighting matrix for the latent variables and achieves simplicity of the clusters.  ...  Conesa et al. [7] proposed a multiway approach to identify the underlying components that interconnect with different omics variables, with explicit modeling of 3-way latent structure.  ...  Author Contributions IZ conducted the review and writing of the first draft of the paper. TL provided insightful suggestions to the structure and edited the paper.  ... 
doi:10.1177/1177932218759292 pmid:29497285 pmcid:PMC5824897 fatcat:nbknjl4qq5awrldy7natmg3h6y

Towards Robust and Versatile Causal Discovery for Business Applications

Giorgos Borboudakis, Ioannis Tsamardinos
2016 Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16  
Causal discovery algorithms can induce some of the causal relations from the data, commonly in the form of a causal network such as a causal Bayesian network.  ...  ETIO is compared against the state-of-the-art and is shown to be more effective in terms of speed, with only a slight degradation in terms of learning accuracy, while incorporating all the features above. The  ...  This work was funded by the ERC Consolidator Grant No 617393 CAUSALPATH.  ... 
doi:10.1145/2939672.2939872 dblp:conf/kdd/BorboudakisT16 fatcat:6sjw5zcgjzc2bfqo3hh66xbwze

Learning Representation for Bayesian Optimization with Collision-free Regularization [article]

Fengxue Zhang, Brian Nord, Yuxin Chen
2022 arXiv   pre-print
Recent works attempt to handle such input by applying neural networks ahead of the classical Gaussian process to learn a latent representation.  ...  We show that even with proper network design, such a learned representation often leads to collisions in the latent space: two points with significantly different observations collide in the learned latent  ...  The project was supported in part by NSF grant #2037026 and a JTFI AI + Science Grant provided by the Center for Data and Computing (CDAC) at the University of Chicago.  ... 
arXiv:2203.08656v1 fatcat:idtgwpvndnchjlgn2544i75f6a
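The "collision" this snippet describes — distinct observations being mapped to nearly identical latent points — can be made concrete with a toy penalty that grows when colliding pairs occur. This is only an illustrative sketch of the general idea; the paper's actual regularizer and its form are not reproduced here, and `eps` is a made-up smoothing constant:

```python
import numpy as np

def collision_penalty(z, y, eps=1e-3):
    """Toy collision-style penalty: large when two points with very
    different observations y sit close together in latent space z."""
    n = len(z)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_latent = np.linalg.norm(z[i] - z[j])  # distance in latent space
            d_obs = abs(y[i] - y[j])                # difference in observations
            penalty += d_obs / (d_latent + eps)     # blows up on collisions
    return penalty

# Two points with different observations, nearly colliding in latent space:
pen = collision_penalty(np.array([[0.0, 0.0], [1e-3, 0.0]]), [0.0, 1.0])
```

Minimizing such a term during representation learning pushes points with different observations apart in the latent space, which is the intuition behind collision-free regularization.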

Bayesian Blind Source Separation for Data with Network Structure

Katrin Illner, Christiane Fuchs, Fabian J. Theis
2014 Journal of Computational Biology  
In this work we focus on multivariate signaling data, where the structure of the data is induced by a known regulatory network.  ...  To extract signals of interest we assume a blind source separation (BSS) model, and we capture the structure of the source signals in terms of a Bayesian network.  ...  The strength of dependence is parameterized by the graph-delayed covariance, and we learn the mixing parameters in a Bayesian framework.  ... 
doi:10.1089/cmb.2014.0117 pmid:25302766 pmcid:PMC4224047 fatcat:vz7aa26j3bgntfaxjm2vqby2iy

Probabilistic Logic Learning [chapter]

2014 Encyclopedia of Social Network Analysis and Mining  
This paper provides an introductory survey and overview of the state-of-the-art in probabilistic logic learning through the identification of a number of important probabilistic, logical and learning concepts  ...  A rich variety of different formalisms and learning techniques have been developed.  ...  The authors thank Lise Getoor for providing the graphical representation of the PRM in Figure 10 .  ... 
doi:10.1007/978-1-4614-6170-8_100530 fatcat:3wfefnqcavhopd4xewydxsutya

Probabilistic logic learning

Luc De Raedt, Kristian Kersting
2003 SIGKDD Explorations  
This paper provides an introductory survey and overview of the state-of-the-art in probabilistic logic learning through the identification of a number of important probabilistic, logical and learning concepts  ...  A rich variety of different formalisms and learning techniques have been developed.  ...  The authors thank Lise Getoor for providing the graphical representation of the PRM in Figure 10 .  ... 
doi:10.1145/959242.959247 fatcat:m7xplwu6effohcvrrspqhbqbcq

Bayesian networks in biomedicine and health-care

Peter J.F. Lucas, Linda C. van der Gaag, Ameen Abu-Hanna
2004 Artificial Intelligence in Medicine  
We are thankful to all of them for their devotion to achieving success for both the workshop and this special issue.  ...  The discovery of latent, or hidden, variables in data for inclusion in Bayesian networks is the topic of the paper by Nevin Zhang, Thomas Nielsen and Finn Jensen, which is titled "Latent variable discovery  ...  As developing a Bayesian network is a creative process, the various stages are iterated in a cyclic fashion where each stage may, on each iteration, induce further refinement of the network under construction  ... 
doi:10.1016/j.artmed.2003.11.001 pmid:15081072 fatcat:r52evw5rofdozp5tzs24sft46m

Bayesian network–response regression

Lu Wang, Daniele Durante, Rex E Jung, David B Dunson, Robert Murphy
2017 Bioinformatics  
We develop a Bayesian semiparametric model, which combines low-rank factorizations and flexible Gaussian process priors to learn changes in the conditional expectation of a network-valued random variable  ...  The model is applied to learn how human brain networks vary across individuals with different intelligence scores.  ...  Vogelstein for getting us interested in statistical and computational methods for analysis of brain networks.  ... 
doi:10.1093/bioinformatics/btx050 pmid:28165112 fatcat:ul23aegfyvdizmkbdm3n72qws4

A Bayesian Model of node interaction in networks [article]

Ingmar Schuster
2015 arXiv   pre-print
We are concerned with modeling the strength of links in networks by taking into account how often those links are used.  ...  As priors for latent attributes of network nodes we explore the Chinese Restaurant Process (CRP) and a multivariate Gaussian with fixed dimensionality.  ...  This could be achieved by replacing the linear function represented by the weight matrix W by a Gaussian Process with a kernel based on a matrix-norm induced distance measure between the latent representations  ... 
arXiv:1402.4279v2 fatcat:2eaoa4tn4bbylmebt4qthrj2te
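This entry mentions the Chinese Restaurant Process (CRP) as a prior over latent node attributes. The CRP's generative rule is simple enough to sketch in a few lines: item i joins an existing cluster ("table") with probability proportional to that cluster's current size, or opens a new one with probability proportional to a concentration parameter alpha. A minimal sampler (the function name and interface are illustrative, not from the paper):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a random partition of n items from a CRP(alpha):
    at step i the total unnormalized weight is i + alpha, split between
    existing table sizes and alpha for a fresh table."""
    random.seed(seed)
    tables = []      # tables[k] = number of items currently at table k
    assignment = []  # assignment[i] = table index of item i
    for i in range(n):
        weights = tables + [alpha]        # existing sizes, then the new table
        r = random.random() * (i + alpha) # uniform over total weight
        k, acc = 0, weights[0]
        while r > acc:                    # walk the cumulative weights
            k += 1
            acc += weights[k]
        if k == len(tables):
            tables.append(1)              # open a new table
        else:
            tables[k] += 1
        assignment.append(k)
    return assignment

part = crp_partition(20, alpha=1.0)
```

Because tables are created in order, the labels come out as 0, 1, 2, ..., and the expected number of tables grows roughly as alpha * log(n), which is what makes the CRP attractive as a nonparametric prior over node attributes.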
Showing results 1 — 15 out of 3,546 results