82 Hits in 2.9 sec

Distributed Bayesian optimization of deep reinforcement learning algorithms

M. Todd Young, Jacob Hinkle, Ramakrishnan Kannan, Arvind Ramanathan
2020 Journal of Parallel and Distributed Computing  
We provide an open-source, distributed Bayesian model-based optimization algorithm, HyperSpace, and show that it consistently outperforms standard hyperparameter optimization techniques across three DRL ... , our distributed hyperparameter optimization software. ... We have developed a distributed Bayesian SMBO algorithm, HyperSpace, for optimizing machine learning model hyperparameters. ...
doi:10.1016/j.jpdc.2019.07.008 fatcat:gwibvufibjhwjojaiqa7pj2xde

Hyperparameter Optimization Using Sustainable Proof of Work in Blockchain

Anshul Mittal, Swati Aggarwal
2020 Frontiers in Blockchain  
We address this aspect through the framework of Bayesian optimization, which is an effective methodology for the global optimization of functions with expensive evaluations. ... We call our work Proof of Deep Learning with Hyperparameter Optimization (PoDLwHO). ... Optimization Using Bayesian Constructs and Optimization Algorithms) for generating a superior set of hyperparameters. ...
doi:10.3389/fbloc.2020.00023 fatcat:fuebtfkxjbdmfnwd3zdc7tb7le

On the Value of Oversampling for Deep Learning in Software Defect Prediction [article]

Rahul Yedida, Tim Menzies
2021 arXiv   pre-print
Specifically, when we preprocess data with a novel oversampling technique called fuzzy sampling, as part of a larger pipeline called GHOST (Goal-oriented Hyper-parameter Optimization for Scalable Training  ...  However, while Bayesian optimization in practice does find an optimal set of hyperparameters for deep learners, it takes a long time to run. For example, Feurer et al.  ...  For example, "AutoML" methods seek the best combination of preprocessors and hyperparameters for a given dataset. These are typically based on Bayesian optimization, as in [47] , [48] .  ... 
arXiv:2008.03835v3 fatcat:prw35p6trbgefe34qg422k4z5a
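The entry above preprocesses imbalanced defect data by oversampling the minority class before training. As a sketch of that preprocessing idea only, here is plain random oversampling with replacement; the paper's actual "fuzzy sampling" technique is different, and the function and parameter names below are illustrative.

```python
# Plain random oversampling of the minority class: duplicate randomly chosen
# minority examples until both classes have the same number of rows.
# This is a simple stand-in, not the paper's fuzzy sampling.
import random

def oversample(rows, labels, minority=1, seed=0):
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    # How many extra minority rows are needed to match the majority count.
    deficit = (len(labels) - len(minority_idx)) - len(minority_idx)
    extra = [rng.choice(minority_idx) for _ in range(max(0, deficit))]
    new_rows = rows + [rows[i] for i in extra]
    new_labels = labels + [labels[i] for i in extra]
    return new_rows, new_labels

balanced_rows, balanced_labels = oversample(list(range(10)), [0] * 8 + [1] * 2)
```

After the call, the two classes are equally represented, which is the property the defect-prediction pipeline relies on before training the deep learner.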

CertainNet: Sampling-free Uncertainty Estimation for Object Detection [article]

Stefano Gasperini, Jan Haug, Mohammad-Ali Nikouei Mahani, Alvaro Marcos-Ramiro, Nassir Navab, Benjamin Busam, Federico Tombari
2021 arXiv   pre-print
Another group of works is that of Bayesian methods, which infer the probability distribution of the model parameters [1]. ... W_c f_θ(x_{t,i}) are the predicted hyperspace coordinates, with f_θ being the feature extractor and W_c the hyperspace transformation for class c. γ is a hyperparameter, defined as the centroid momentum ...
arXiv:2110.01604v1 fatcat:76anf3xm3faldbieq4yip4swhy

Data Assimilation and Online Parameter Optimization in Groundwater Modeling using Nested Particle Filters

M. Ramgraber, C. Albert, M. Schirmer
2019 Water Resources Research  
Among these, many are capable not only of assimilating real-time data to correct their predictive shortcomings but also of improving their future performance through self-optimization.  ...  The performance of the resulting optimizer is demonstrated in a synthetic test case for three such geological configurations and compared to two Ensemble Kalman Filter setups.  ...  We made use of hyperparameterized field generators to reduce the dimensionality of the optimization problem and to guarantee conformance to a prescribed geology throughout the optimization process, using  ... 
doi:10.1029/2018wr024408 fatcat:ajd4e7jszvdkblb3a6e6wlcrmu
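The entry above assimilates real-time observations with nested particle filters. A minimal bootstrap particle filter for a scalar state conveys the predict/weight/resample cycle; the random-walk process model and the noise levels here are illustrative assumptions, not the paper's groundwater setup.

```python
# Minimal bootstrap particle filter: propagate particles through a process
# model, weight them by the observation likelihood, then resample.
import math
import random

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0, seed=1):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: random-walk process model.
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # Update: Gaussian observation likelihood as the particle weight.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample: draw a new particle set proportional to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

state_estimates = particle_filter([3.0] * 20)
```

With repeated observations of 3.0, the particle cloud drifts from its prior around 0 toward the observed state, which is the self-correcting behavior the snippet describes.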

Probabilistic optimized ranking for multimedia semantic concept detection via RVM

Yan-Tao Zheng, Shi-Yong Neo, Tat-Seng Chua, Qi Tian
2008 Proceedings of the 2008 international conference on Content-based image and video retrieval - CIVR '08  
To tackle this problem, we exploit the sparse Bayesian kernel model, namely the relevance vector machine (RVM), as the classifier for semantic concept detection.  ...  This inference output is optimal for ranking the target video shots, according to the Probabilistic Ranking Principle.  ...  However, in practice, the distribution of shots in feature space is fairly complicated and the shot samples are not linearly separable, even in the hyperspace projected through non-linear kernel functions  ... 
doi:10.1145/1386352.1386378 dblp:conf/civr/ZhengNCT08 fatcat:66as6qbcqzdpfahxlqkcy2k6vm

Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks in Molecular Graph Analysis [article]

Katsuhiko Ishiguro, Shin-ichi Maeda, Masanori Koyama
2019 arXiv   pre-print
Bayesian optimization is employed to select the best set of hyperparameters for each experiment, using the Optuna package. ... However, for the experiments with BO-optimized hyperparameters (i.e. ...
arXiv:1902.01020v4 fatcat:fku42r4m5vgwtgrlmsrk6u5j54

Probabilistic optimization of engineering system with prescribed target design in a reduced parameter space

A. Kundu, H.G. Matthies, M.I. Friswell
2018 Computer Methods in Applied Mechanics and Engineering  
A novel probabilistic robust design optimization framework is presented here using a Bayesian inference framework. ... The posterior probabilities on the reduced or important parameters conditioned on prescribed target distributions of the output quantities of interest are derived using the Bayesian inference framework. ... {ξ_1, …, ξ_n} in the n-dimensional hyperspace. ...
doi:10.1016/j.cma.2018.03.041 fatcat:i4un4kqalvhxdgnwxxtml4mrhq

A scalable constructive algorithm for the optimization of neural network architectures [article]

Massimiliano Lupo Pasini, Junqi Yin, Ying Wai Li, Markus Eisenbach
2021 arXiv   pre-print
by the selected neural network architecture, and time-to-solution for the hyperparameter optimization to complete.  ...  Numerical results performed on benchmark datasets show that, for these datasets, our method outperforms state-of-the-art hyperparameter optimization algorithms in terms of attainable predictive performance  ...  Section 4 presents numerical experiments where we compare the performance of our HPO algorithm with Bayesian Optimization and Tree-Parzen Estimator.  ... 
arXiv:1909.03306v3 fatcat:5z6yt2i4pvhvpo2pgokt5gvfsa

Advanced Algorithms of Bayesian Network Learning and Probabilistic Inference from Inconsistent Prior Knowledge and Sparse Data with Applications in Computational Biology and Computer Vision [chapter]

Rui Chang
2010 Bayesian Network  
The learned single best model is called the maximum a posteriori (MAP) estimate, which is computed from the data likelihood and the prior distribution. ... evaluation in top-down methods). This score function is often the posterior probability function of a Bayesian network structure and parameters given the training data. ... These knowledge components define multiple Bayesian model classes in the hyperspace. Within each class, a set of constraints on the ground Bayesian model space can be generated. ...
doi:10.5772/46967 fatcat:ijic5ya535bzdhk6vgv4hunyia

parSMURF, a High Performance Computing tool for the genome-wide detection of pathogenic variants [article]

Alessandro Petrini, Marco Mesiti, Max Schubach, Marco Frasca, Daniel Danis, Matteo Re, Giuliano Grossi, Luca Cappelletti, Tiziana Castrignano', Peter N. Robinson, Giorgio Valentini
2020 biorxiv/medrxiv   pre-print
The synergy between Bayesian optimization techniques and the parallel nature of parSMURF enables efficient and user-friendly automatic tuning of the hyper-parameters of the algorithm, and allows specific  ...  The other strategy is Bayesian optimization-based and aims to find a near-optimal hyperparameter combination in a fraction of the time compared to the grid search strategy.  ...  with the default, grid-optimized, and Bayesian-optimized set of hyper-parameters.  ... 
doi:10.1101/2020.03.18.994079 fatcat:66yzfmrwwrcbboklkfzei2vzwy

parSMURF, a high-performance computing tool for the genome-wide detection of pathogenic variants

Alessandro Petrini, Marco Mesiti, Max Schubach, Marco Frasca, Daniel Danis, Matteo Re, Giuliano Grossi, Luca Cappelletti, Tiziana Castrignanò, Peter N Robinson, Giorgio Valentini
2020 GigaScience  
The synergy between Bayesian optimization techniques and the parallel nature of parSMURF enables efficient and user-friendly automatic tuning of the hyper-parameters of the algorithm, and allows specific  ...  The other strategy is Bayesian optimization-based and aims to find a near-optimal hyperparameter combination in a fraction of the time compared to the grid search strategy.  ...  with the default, grid-optimized, and Bayesian-optimized set of hyper-parameters.  ... 
doi:10.1093/gigascience/giaa052 pmid:32444882 fatcat:ch2pblup6fdsvaiqtrh4jdw6hi

Input Beam Matching and Beam Dynamics Design Optimization of the IsoDAR RFQ using Statistical and Machine Learning Techniques [article]

Daniel Koser, Loyd Waites, Daniel Winklehner, Matthias Frey, Andreas Adelmann, Janet Conrad
2021 arXiv   pre-print
These could potentially be used as on-line feedback tools during beam commissioning and operation, and to optimize the RFQ beam dynamics design prior to construction. ... Ultimately, this presents a computationally inexpensive and time-efficient method to perform sensitivity studies and an optimization of the crucial RFQ beam output parameters like transmission and emittances ... The SwissFEL was tuned using Bayesian optimization [6, 7]. Bayesian optimization using Gaussian process models was also used for the Linac Coherent Light Source (LCLS) [8]. ...
arXiv:2112.02579v1 fatcat:mdnunw7fqfcjrobb3ahw6okvby

Genealogical Population-Based Training for Hyperparameter Optimization [article]

Antoine Scardigli, Paul Fournier, Matteo Vilucchio, David Naccache
2021 arXiv   pre-print
Hyperparameter optimization aims at finding, more rapidly and efficiently, the best hyperparameters (HPs) of learning models such as neural networks. ... GPBT significantly outperforms all other approaches to HP optimization on all supervised learning experiments tested, in terms of both speed and performance. ... Model-based algorithms, such as BO (Bayesian Optimization), aim at modelling performances of HPs on the search space. ...
arXiv:2109.14925v1 fatcat:macrldkmezhxpf6574dnvggppa

Custom Convolutional Neural Network with Data Augmentation and Bayesian Optimization for Gram-Negative Bacteria Classification

Budi Satoto, Utoyo Mohammad, Riries Rulaningtyas, Eko Koendhori, University of Airlangga Surabaya
2020 International Journal of Intelligent Engineering and Systems  
Bayesian optimization helps to find the momentum value and initial learning rate. The data source of this research is the primary image of Gram-negative bacteria from pneumonia patients. ... Data distribution includes training, validation, and testing, divided by percentage and proportional distribution of the number of files. ... The Bayesian optimization approach is to use a Gaussian distribution. ...
doi:10.22266/ijies2020.1031.46 fatcat:37hk4hkl4rbjfejuncv4jdlhva
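Several of the entries above share the same core loop: fit a Gaussian-process surrogate to the evaluations seen so far, then pick the next hyperparameter via an acquisition rule. A minimal sketch of that loop over one hyperparameter follows; the toy objective, RBF length-scale, and upper-confidence-bound rule are illustrative assumptions, not any paper's actual setup.

```python
# Minimal Bayesian optimization of one hyperparameter with a Gaussian-process
# surrogate (RBF kernel) and the upper-confidence-bound acquisition rule.
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """GP posterior mean and standard deviation at x_query."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    var = 1.0 - np.einsum("ij,ji->i", K_s.T @ K_inv, K_s)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def bayes_opt(objective, bounds=(0.0, 1.0), n_init=3, n_iter=12, kappa=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=n_init)          # random initial design
    y = np.array([objective(v) for v in x])
    grid = np.linspace(*bounds, 200)               # candidate hyperparameters
    for _ in range(n_iter):
        mean, std = gp_posterior(x, y, grid)
        nxt = grid[np.argmax(mean + kappa * std)]  # UCB acquisition
        x = np.append(x, nxt)
        y = np.append(y, objective(nxt))
    return x[np.argmax(y)], y.max()

# Toy objective: a score that peaks when the hyperparameter equals 0.7.
best_x, best_y = bayes_opt(lambda v: -(v - 0.7) ** 2)
```

The surrogate makes each expensive evaluation count toward both exploring uncertain regions (the `std` term) and exploiting promising ones (the `mean` term), which is why these papers report large savings over grid search.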
Showing results 1 — 15 out of 82 results