Design of Multithreaded Estimation of Distribution Algorithms
[chapter]
2003
Lecture Notes in Computer Science
More specifically, the paper describes a method for parallel construction of Bayesian networks with local structures in the form of decision trees in the Mixed Bayesian Optimization Algorithm. ...
Estimation of Distribution Algorithms (EDAs) use a probabilistic model of promising solutions found so far to obtain new candidate solutions of an optimization problem. ...
This research was partially supported by the Grant Agency of the Czech Republic under research grant GA 102/02/0503 "Parallel system performance prediction and tuning". ...
doi:10.1007/3-540-45110-2_1
fatcat:5lcgosmya5edvhwkcfknqxyage
Parallel Mixed Bayesian Optimization Algorithm: A Scaleup Analysis
[article]
2004
arXiv
pre-print
More specifically, the paper discusses how to predict performance of parallel Mixed Bayesian Optimization Algorithm (MBOA) that is based on parallel construction of Bayesian networks with decision trees ...
Estimation of Distribution Algorithms have been proposed as a new paradigm for evolutionary optimization. This paper focuses on the parallelization of Estimation of Distribution Algorithms. ...
p(X_0, ..., X_{n−1}) = ∏_{i=0}^{n−1} p(X_i | Π_i). (4) The well-known EDA instances with a Bayesian network model are the Bayesian Optimization Algorithm (BOA) [5], the Estimation of Bayesian Network Algorithm (EBNA) [12 ...
arXiv:cs/0406007v1
fatcat:62ybfeyb6vdulnvrwt5ms5t52m
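The factorization quoted in the entry above says the joint distribution over the decision variables splits into conditionals p(X_i | Π_i), where Π_i are the parents of X_i in the learned Bayesian network; new candidate solutions are then generated by ancestral sampling. A minimal sketch of that sampling step (a hypothetical three-variable binary network with made-up conditional probability tables, not the MBOA code):

    import numpy as np

    # Minimal illustration: ancestral sampling from a Bayesian network
    # factorization p(X_0, ..., X_{n-1}) = prod_i p(X_i | Pi_i).
    # Variables are binary; parents[i] lists the parent indices Pi_i and
    # cpt[i] maps a tuple of parent values to P(X_i = 1 | parents).
    parents = {0: [], 1: [0], 2: [0, 1]}   # tiny hypothetical network, topologically ordered
    cpt = {
        0: {(): 0.6},
        1: {(0,): 0.2, (1,): 0.8},
        2: {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.9},
    }

    def sample_candidate(rng):
        """Draw one candidate solution by sampling each variable given its parents."""
        x = {}
        for i in sorted(parents):
            pa = tuple(x[j] for j in parents[i])
            x[i] = int(rng.random() < cpt[i][pa])
        return [x[i] for i in sorted(x)]

    rng = np.random.default_rng(0)
    print([sample_candidate(rng) for _ in range(3)])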
Surrogate-assisted parallel tempering for Bayesian neural learning
[article]
2020
arXiv
pre-print
Markov Chain Monte-Carlo (MCMC) methods typically implement Bayesian inference, which faces several challenges given a large number of parameters, complex and multimodal posterior distributions, and computational ...
The method has applications for a Bayesian inversion and uncertainty quantification for a broad range of numerical models. ...
Dietmar Muller and Danial Azam for discussions and support during the course of this research project. We sincerely thank the editors and anonymous reviewers for their valuable comments. ...
arXiv:1811.08687v3
fatcat:yzsduvrojjaajihutzyrcnz5fy
High-Performance FPGA-based Accelerator for Bayesian Neural Networks
[article]
2021
arXiv
pre-print
Considering partial Bayesian inference, an automatic framework is proposed, which explores the trade-off between hardware and algorithmic performance. ...
In comparison, Bayesian neural networks (BNNs) are able to express uncertainty in their prediction via a mathematical grounding. ...
trade-off and uncertainty estimation provided by partial Bayesian neural network design (Section IV). • A comprehensive evaluation of algorithmic and hardware performance on different datasets with respect ...
arXiv:2105.09163v3
fatcat:eqw4qypfe5gp3e4t44k7amihke
SBI – A toolkit for simulation-based inference
[article]
2020
arXiv
pre-print
We present sbi, a PyTorch-based package that implements SBI algorithms based on neural networks. sbi facilitates inference on black-box simulators for practising scientists and engineers by providing a ...
In contrast to conventional Bayesian inference, SBI is also applicable when one can run model simulations, but no formula or algorithm exists for evaluating the probability of data given parameters, i.e ...
flexible choice of network architectures and flow-based density estimators. ...
arXiv:2007.09114v2
fatcat:es3xbur3bjaehlq374dvy7j7qi
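The sbi entry above concerns the setting where a model can only be simulated, not evaluated as a likelihood. As a library-agnostic illustration of that setting (plain rejection ABC with a made-up Gaussian simulator, summary statistic, and tolerance, not the sbi package API):

    import numpy as np

    # Minimal sketch of simulation-based inference via rejection ABC: keep the
    # prior draws whose simulated data resemble the observed data.
    rng = np.random.default_rng(1)

    def simulator(theta, n=50):
        """Black-box model: draws n observations given parameter theta."""
        return rng.normal(loc=theta, scale=1.0, size=n)

    observed = simulator(2.0)                  # pretend this is real data
    summary = lambda x: x.mean()               # summary statistic

    accepted = []
    for _ in range(20000):
        theta = rng.uniform(-5.0, 5.0)         # sample from the prior
        x = simulator(theta)
        if abs(summary(x) - summary(observed)) < 0.1:   # accept if simulation matches data
            accepted.append(theta)

    print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} accepted samples")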
PBPI: a High Performance Implementation of Bayesian Phylogenetic Inference
2006
ACM/IEEE SC 2006 Conference (SC'06)
This paper describes the implementation and performance of PBPI, a parallel implementation of a Bayesian phylogenetic inference method for DNA sequence data. ...
We evaluated the performance and accuracy of PBPI using a simulated dataset on System X, a terascale supercomputer at Virginia Tech. ...
We emphasize that the contribution of this work includes implementation of improved versions of our algorithm (MCMC strategies and sample algorithms), design and validation of a framework for Bayesian ...
doi:10.1109/sc.2006.47
fatcat:4hihqgsl3rdaddl676syatmfbu
The Parallel Bayesian Optimization Algorithm
[chapter]
2000
The State of the Art in Computational Intelligence
The Bayesian Optimization Algorithm incorporates methods for learning Bayesian networks and uses these to model the promising solutions and generate new ones. ...
In the last few years there has been a growing interest in the field of Estimation of Distribution Algorithms (EDAs), where crossover and mutation genetic operators are replaced by probability estimation ...
Introduction: The proposed algorithm belongs to the EDA (Estimation of Distribution Algorithm) class of algorithms [1], based on probability theory and statistics. ...
doi:10.1007/978-3-7908-1844-4_11
fatcat:tyux3a2vxvghxhcvdcmqws2lau
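Both BOA entries above describe the same loop: select promising solutions, fit a probabilistic model to them, and sample new candidates from the model. A minimal sketch of that loop, using a univariate marginal model (a UMDA-style simplification of the Bayesian network BOA would actually learn; the objective, population sizes, and generation count are hypothetical):

    import numpy as np

    # Minimal EDA loop sketch: model the promising solutions, sample new ones.
    rng = np.random.default_rng(2)
    n_bits, pop_size, n_select, n_gens = 30, 200, 50, 40

    onemax = lambda pop: pop.sum(axis=1)                       # toy objective: count of ones

    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gens):
        fitness = onemax(pop)
        best = pop[np.argsort(fitness)[-n_select:]]            # select promising solutions
        p = best.mean(axis=0).clip(1 / n_bits, 1 - 1 / n_bits) # estimate marginal probabilities
        pop = (rng.random((pop_size, n_bits)) < p).astype(int) # sample new candidates from the model

    print("best fitness:", onemax(pop).max(), "of", n_bits)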
CMA-ES for Hyperparameter Optimization of Deep Neural Networks
[article]
2016
arXiv
pre-print
We provide a toy example comparing CMA-ES and state-of-the-art Bayesian optimization algorithms for tuning the hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel ...
CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. ...
TPE with Gaussian priors showed the best performance. Figure 2 (right) shows the results of all tested algorithms when solutions are evaluated in parallel on 30 GPUs. ...
arXiv:1604.07269v1
fatcat:7hdozj4bevhtrdmwysjiw3daje
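The CMA-ES entry above stresses that the algorithm is friendly to parallel evaluation: each generation's candidates are independent, so the ask/tell loop can farm them out to workers. A minimal sketch assuming the pycma package is installed, with a toy objective standing in for an expensive hyperparameter-evaluation job (e.g., training a network on one GPU):

    import multiprocessing as mp
    import numpy as np
    import cma  # pycma; assumed available (pip install cma)

    def objective(x):
        # stand-in for an expensive evaluation of one hyperparameter setting
        return float(np.sum(np.asarray(x) ** 2))

    if __name__ == "__main__":
        es = cma.CMAEvolutionStrategy(x0=8 * [0.5], sigma0=0.3)
        with mp.Pool(processes=4) as pool:                 # stand-in for 30 parallel workers
            while not es.stop():
                candidates = es.ask()                      # one generation of solutions
                values = pool.map(objective, candidates)   # evaluate in parallel
                es.tell(candidates, values)                # update the search distribution
        print("best value:", es.result.fbest)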
Langevin-gradient parallel tempering for Bayesian neural learning
[article]
2018
arXiv
pre-print
Bayesian neural learning features a rigorous approach to estimation and uncertainty quantification via the posterior distribution of weights that represent knowledge of the neural network. ...
First, parallel tempering is used to explore multiple modes of the posterior distribution and is implemented in a multi-core computing architecture. ...
Acknowledgements We would like to thank Artemis high performance computing support at the University of Sydney and Arpit Kapoor for providing technical support. ...
arXiv:1811.04343v1
fatcat:lxgnqwhjurcb7o45m4mlneh7ku
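The parallel tempering described above runs replicas of the chain at several temperatures and occasionally swaps neighbouring replicas, so the hotter chains help the cold chain move between posterior modes. A minimal single-process sketch on a toy bimodal target (temperatures, step size, and target are hypothetical; the paper's setting maps each replica to a core and uses neural-network weights as the state):

    import numpy as np

    # Minimal parallel tempering sketch on a toy bimodal density.
    rng = np.random.default_rng(3)

    def log_target(x):
        # mixture of two Gaussians centred at -3 and +3
        return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

    temps = np.array([1.0, 2.0, 4.0, 8.0])   # replica temperatures (cold -> hot)
    x = np.zeros(len(temps))                  # one state per replica
    cold_samples = []

    for step in range(20000):
        # within-replica Metropolis update at each temperature
        for i, T in enumerate(temps):
            prop = x[i] + rng.normal(scale=1.0)
            if np.log(rng.random()) < (log_target(prop) - log_target(x[i])) / T:
                x[i] = prop
        # propose a swap between a random pair of neighbouring replicas
        i = rng.integers(len(temps) - 1)
        log_accept = (1 / temps[i] - 1 / temps[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < log_accept:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold_samples.append(x[0])

    print("fraction of cold-chain samples near +3:", np.mean(np.array(cold_samples) > 0))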
sbi: A toolkit for simulation-based inference
2020
Journal of Open Source Software
We present sbi, a PyTorch-based package that implements SBI algorithms based on neural networks. sbi facilitates inference on black-box simulators for practising scientists and engineers by providing a ...
In contrast to conventional Bayesian inference, SBI is also applicable when one can run model simulations, but no formula or algorithm exists for evaluating the probability of data given parameters, i.e ...
Acknowledgements We are grateful to Artur Bekasov, George Papamakarios and Iain Murray for making nflows (Durkan et al., 2019) available, a package for normalizing flow-based density estimation which ...
doi:10.21105/joss.02505
fatcat:vsz4bpdbeva4zpdeg6egrbyf5y
Parallel Algorithms for Bayesian Indoor Positioning Systems
2007
Proceedings of the International Conference on Parallel Processing
We present two parallel algorithms and their Unified Parallel C implementations for Bayesian indoor positioning systems. Our approaches are founded on Markov Chain Monte Carlo simulations. ...
We used the LogGP model to analyze our algorithms and predict their performance. ...
Alan George and Adam Leko from UFL for letting us use a 16-processor SMP, as well as the Berkeley UPC group for their help. This work was supported in part by NSF grant CNS-0448062. ...
doi:10.1109/icpp.2007.64
dblp:conf/icpp/KleisourisM07
fatcat:yqdyomgi6jb4bor6doebbqu7yi
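The LogGP model mentioned above predicts communication cost from a handful of machine parameters: latency L, per-message overhead o, and per-byte gap G (plus the per-message gap g and processor count P, omitted here). A minimal sketch of the standard point-to-point estimate, with made-up parameter values:

    # LogGP-style estimate for one k-byte message:
    # sender overhead + per-byte gap + network latency + receiver overhead.
    L, o, G = 5e-6, 1e-6, 2e-9       # seconds; hypothetical machine parameters

    def send_time(k_bytes):
        return o + (k_bytes - 1) * G + L + o

    print(f"8 KiB message: {send_time(8 * 1024) * 1e6:.1f} us")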
Parallel Bayesian Network Structure Learning for Genome-Scale Gene Networks
2014
SC14: International Conference for High Performance Computing, Networking, Storage and Analysis
Learning Bayesian networks is NP-hard. Even with recent progress in heuristic and parallel algorithms, modeling capabilities still fall short of the scale of the problems encountered. ...
In this paper, we present a massively parallel method for Bayesian network structure learning, and demonstrate its capability by constructing genome-scale gene networks of the model plant Arabidopsis thaliana ...
We wish to acknowledge Yutong Lu from the National University of Defense Technology and Bill Barth from the Texas Advanced Computing Center for their invaluable assistance with our performance evaluations ...
doi:10.1109/sc.2014.43
dblp:conf/sc/MisraVPCDXAA14
fatcat:vkg3jcd52feqvk6yubnvg3tajm
New Heuristics for Parallel and Scalable Bayesian Optimization
[article]
2018
arXiv
pre-print
However, due to the dynamic nature of research in Bayesian approaches, and the evolution of computing technology, using Bayesian optimization in a parallel computing environment remains a challenge for ...
In addition, I propose practical ways to avoid a few of the pitfalls of Bayesian optimization, such as oversampling of edge parameters and over-exploitation of high performance parameters. ...
This work was supported by the Center for Theoretical Neuroscience's NeuroNex award and by the charitable contribution of the Gatsby Foundation. ...
arXiv:1807.00373v2
fatcat:otctzpnrcbatfheizjhru3mppi
BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search
[article]
2020
arXiv
pre-print
Using all of our analyses, we develop a final algorithm called BANANAS, which achieves state-of-the-art performance on NAS search spaces. ...
Recent work has proposed different instantiations of this framework, for example, using Bayesian neural networks or graph convolutional networks as the predictive model within BO. ...
Acknowledgments We thank Jeff Schneider, Naveen Sundar Govindarajulu, and Liam Li for their help with this project. ...
arXiv:1910.11858v3
fatcat:pgrwhrstw5fjtoce2xayohydvq
DOSA: design optimizer for scientific applications
2008
Proceedings, International Parallel and Distributed Processing Symposium (IPDPS)
As an illustration we demonstrate speedup for two applications: Parallel Exact Inference and Community Identification in large-scale networks. ...
for speed (or power) at design-time and use a run-time optimizer. ...
In recent work, we present new parallel algorithms and implementations that enable the design of several high-performance complex graph analysis kernels for small-world networks. ...
doi:10.1109/ipdps.2008.4536426
dblp:conf/ipps/BaderP08
fatcat:sayugcp2gncefhenbt5k5f73mm
Showing results 1 — 15 out of 38,103 results