5,207 Hits in 3.4 sec

A System for Massively Parallel Hyperparameter Tuning [article]

Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar
2020 arXiv   pre-print
for massive parallelism, as demonstrated on a task with 500 workers.  ...  tuning as a service.  ...  hyperparameters. In these modern regimes, four trends motivate the need for production-quality systems that support massively parallel hyperparameter tuning: 1. High-dimensional search spaces.  ...
arXiv:1810.05934v5 fatcat:n4353ue7jzehnif6qtnychynzi
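
The system described above evaluates many hyperparameter configurations concurrently across hundreds of workers. The paper's own contribution is the ASHA algorithm; the sketch below shows only the simpler surrounding pattern of asynchronous parallel random search with a process pool, where `objective` and `sample_config` are hypothetical placeholders for a real training job and search space.

```python
# Sketch of asynchronously evaluating random hyperparameter configurations
# on a pool of workers. objective() is a stand-in for a real training job;
# the paper's own algorithm (ASHA) additionally promotes/stops trials early.
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def objective(config):
    # Placeholder "validation loss"; a real worker would train a model here.
    return (config["lr"] - 0.01) ** 2 + config["dropout"]

def sample_config():
    return {"lr": 10 ** random.uniform(-5, -1), "dropout": random.uniform(0.0, 0.5)}

if __name__ == "__main__":
    best_loss, best_config = float("inf"), None
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = {}
        for _ in range(64):                      # 64 trials, 8 concurrent workers
            cfg = sample_config()
            futures[pool.submit(objective, cfg)] = cfg
        for fut in as_completed(futures):        # consume results as they finish
            loss = fut.result()
            if loss < best_loss:
                best_loss, best_config = loss, futures[fut]
    print(best_config, best_loss)
```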

Accelerating Hyperparameter Tuning in Machine Learning for Alzheimer's Disease With High Performance Computing

Fan Zhang, Melissa Petersen, Leigh Johnson, James Hall, Sid E. O'Bryant
2021 Frontiers in Artificial Intelligence  
The high-performance hyperparameter tuning model can also be applied to other ML algorithms such as random forest, logistic regression, XGBoost, etc.  ...  This work reports a multicore, high-performance support vector machine (SVM) hyperparameter tuning workflow with 100 times repeated 5-fold cross-validation for speeding up ML for AD.  ...  parallel SVM hyperparameter tuning.  ...
doi:10.3389/frai.2021.798962 pmid:34957393 pmcid:PMC8692864 fatcat:ejd4wdlivjhunikyl2nn6iua44
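
The workflow above tunes an SVM with 100-times-repeated 5-fold cross-validation on multiple cores; the authors implement it in R on an HPC system. A hedged scikit-learn analogue on synthetic data (not the paper's code or dataset) looks roughly like this:

```python
# Hedged scikit-learn analogue of multicore SVM hyperparameter tuning with
# repeated k-fold cross-validation; the paper's workflow is in R on an HPC
# cluster, and the dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=17, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
# 100 repeats x 5 folds x 16 configurations = 8000 fits; heavy, but it mirrors
# the "100 times repeated 5-fold" protocol described in the entry.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=100, random_state=0)

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv,
                      scoring="roc_auc", n_jobs=-1)  # n_jobs=-1: use all cores
search.fit(X, y)
print(search.best_params_, search.best_score_)
```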

Hyperparameter Tuning with High Performance Computing Machine Learning for Imbalanced Alzheimer's Disease Data

Fan Zhang, Melissa Petersen, Leigh Johnson, James Hall, Sid E. O'Bryant
2022 Applied Sciences  
We applied a single-node multicore parallel mode to hyperparameter tuning of gamma, cost, and class weight using a support vector machine (SVM) model with 10 times repeated fivefold cross-validation.  ...  Our results show that a single-node multicore parallel structure and high-performance SVM hyperparameter tuning model can deliver efficient and fast computation and achieve outstanding agility, simplicity  ...  R Pseudocode for Parallel SVM Hyperparameter Tuning. Table 2. Description of the 17 variables.  ...
doi:10.3390/app12136670 doaj:0bd2a92ff341451e80de217ed653e1d4 fatcat:ksuqb23ocbhjjb3u7txclwyyou
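
This entry tunes gamma, cost, and class weight for an SVM on imbalanced data using 10-times-repeated fivefold cross-validation on a single multicore node, and the paper gives R pseudocode for it. A hedged scikit-learn sketch of the same idea, with a synthetic imbalanced dataset standing in for the 17-variable AD data:

```python
# Hedged sketch: tuning gamma, cost (C), and class weight for an SVM on
# imbalanced data with 10x repeated 5-fold CV, parallelized over cores.
# The paper provides R pseudocode; this is a scikit-learn analogue.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=17,
                           weights=[0.9, 0.1], random_state=0)  # 90/10 imbalance

param_grid = {
    "C": [0.1, 1, 10],
    "gamma": [1e-3, 1e-2, 1e-1],
    "class_weight": [None, "balanced", {0: 1, 1: 5}, {0: 1, 1: 10}],
}
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv,
                      scoring="balanced_accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```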

Autotune

Patrick Koch, Oleg Golovidov, Steven Gardner, Brett Wujek, Joshua Griffin, Yan Xu
2018 Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining - KDD '18  
For hyperparameter tuning, machine learning algorithms are complex black-boxes.  ...  Machine learning applications often require hyperparameter tuning. The hyperparameters usually drive both the efficiency of the model training process and the resulting model quality.  ...  CONCLUSIONS In this paper, we have presented the hybrid derivative-free optimization framework Autotune for automated parallel hyperparameter tuning.  ... 
doi:10.1145/3219819.3219837 dblp:conf/kdd/KochGGWGX18 fatcat:m5bpqabztjbxpdqdv32qg46boi
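
Autotune is described as a hybrid derivative-free optimization framework. As a hedged illustration of one derivative-free building block only (Nelder-Mead via SciPy, not Autotune's actual hybrid strategy or its parallel evaluation layer), with a placeholder validation-loss surface:

```python
# Hedged illustration of a derivative-free local search over hyperparameters
# (Nelder-Mead via SciPy); Autotune itself hybridizes several such methods
# and evaluates candidate configurations in parallel.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    log_lr, log_reg = x
    # Placeholder validation loss over log10(learning rate), log10(regularization).
    return (log_lr + 2.0) ** 2 + 0.5 * (log_reg + 3.0) ** 2

result = minimize(objective, x0=np.array([-1.0, -1.0]), method="Nelder-Mead")
print(result.x, result.fun)
```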

Distributed tuning of machine learning algorithms using MapReduce Clusters

Yasser Ganjisaffar, Thomas Debeauvais, Sara Javanmardi, Rich Caruana, Cristina Videira Lopes
2011 Proceedings of the Third Workshop on Large Scale Data Mining Theory and Applications - LDMTA '11  
Parameter optimization is computationally challenging for learning methods with many hyperparameters.  ...  In this paper we show that MapReduce clusters are particularly well suited for parallel parameter optimization.  ...  In section 2, we describe how easy it is to perform massively parallel grid search on MapReduce clusters.  ...
doi:10.1145/2002945.2002947 fatcat:z7kvlcqqi5g63ca56ziaa6qfdq
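
The snippet notes that grid search maps naturally onto MapReduce: each map task trains and evaluates one parameter combination, and a reduce step keeps the best result. A minimal sketch of that structure, using a local process pool as a stand-in for the cluster and a hypothetical `evaluate` function in place of real model training:

```python
# Grid search expressed as a map step (evaluate one configuration per task)
# followed by a reduce step (keep the best result). A local process pool
# stands in for the MapReduce cluster used in the paper.
import itertools
from functools import reduce
from multiprocessing import Pool

def evaluate(config):
    lr, depth = config
    # Placeholder loss; a mapper in the real system trains a model here.
    loss = (lr - 0.05) ** 2 + abs(depth - 6) * 0.01
    return (loss, config)

def keep_best(a, b):
    return a if a[0] <= b[0] else b

if __name__ == "__main__":
    grid = list(itertools.product([0.001, 0.01, 0.05, 0.1], [2, 4, 6, 8]))
    with Pool() as pool:
        results = pool.map(evaluate, grid)            # "map" phase
    best_loss, best_config = reduce(keep_best, results)  # "reduce" phase
    print(best_config, best_loss)
```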

Fluid: Resource-aware Hyperparameter Tuning Engine

Peifeng Yu, Jiachen Liu, Mosharaf Chowdhury
2021 Conference on Machine Learning and Systems  
By abstracting a hyperparameter tuning job as a sequence of TrialGroup, Fluid can boost the performance of diverse hyperparameter tuning solutions.  ...  In this paper, we present Fluid, a generalized hyperparameter tuning execution engine that coordinates between hyperparameter tuning jobs and cluster resources.  ...  [Table fragment: a percentage and a runtime for 2, 4, 8, 16, and 32 workers: 81.20%/2356, 63.00%/1432, 45.80%/1073, 47.00%/475, 25.20%/432; column headings unclear.]  ...  relying on creating a massive amount of trials as the single source of parallelism is neither enough to saturate individual  ...
dblp:conf/mlsys/YuLC21 fatcat:4nvwv4dbavbbzaef5qc3sedigi

Scalable Artificial Intelligence for Earth Observation Data Using Hopsworks

Desta Haileselassie Hagos, Theofilos Kakantousis, Sina Sheikholeslami, Tianze Wang, Vladimir Vlassov, Amir Hossein Payberah, Moritz Meister, Robin Andersson, Jim Dowling
2022 Remote Sensing  
We show that using Maggy for hyperparameter tuning results in roughly half the wall-clock time required to execute the same number of hyperparameter tuning trials using Spark while providing linear scalability  ...  Furthermore, we demonstrate how AutoAblation facilitates the definition of ablation studies and enables the asynchronous parallel execution of ablation trials.  ...  Hyperparameter tuning. Parameters that define and control a model architecture are called hyperparameters. Hyperparameter tuning is critical to achieving the best accuracy for ML and DL models.  ... 
doi:10.3390/rs14081889 fatcat:jozdfnpulbfgljbcngn4ys4phe

Scalable Bayesian Optimization Using Deep Neural Networks [article]

Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat, Ryan P. Adams
2015 arXiv   pre-print
This allows us to achieve a previously intractable degree of parallelism, which we apply to large-scale hyperparameter optimization, rapidly finding competitive models on benchmark object recognition tasks  ...  However, since GPs scale cubically with the number of observations, it has been challenging to handle objectives whose optimization requires many evaluations, and as such, massively parallelizing the optimization  ...  We demonstrate empirically that it is especially well suited to massively parallel hyperparameter optimization.  ...
arXiv:1502.05700v2 fatcat:ehxfos7ylnanrf53e742wjx5qe
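
The abstract explains that Gaussian-process surrogates scale cubically in the number of observations, which is why the paper swaps in a neural-network surrogate to propose and evaluate configurations in parallel. The sketch below shows only the generic GP-based Bayesian optimization loop being scaled, on a toy one-dimensional objective; it is not the paper's model:

```python
# Generic Bayesian-optimization loop with a GP surrogate, shown only to
# illustrate the structure the paper scales by replacing the GP with a
# neural-network surrogate. The objective is a 1-D toy function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return np.sin(3 * x) + 0.1 * x ** 2           # toy function to minimize

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(3, 1))               # initial design
y = objective(X).ravel()

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(-3, 3, size=(500, 1))      # candidate configurations
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(X[np.argmin(y)], y.min())
```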

A Deep Learning Based Fault Diagnosis Method with Hyperparameter Optimization by Using Parallel Computing

Chaozhong Guo, Lin Li, Yuanyuan Hu, Jihong Yan
2020 IEEE Access  
This paper proposes an intelligent fault diagnosis method for rolling bearings based on a deep belief network (DBN) with hyperparameter optimization by using parallel computing.  ...  INDEX TERMS Deep belief network, hyperparameter optimization, parallel computing, fault diagnosis.  ...  of parallel computation in the parameter optimization of deep learning models for massive data.  ...
doi:10.1109/access.2020.3009644 fatcat:fo5kvym6jnbxhpt6y34gstqjse

SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization [article]

Jeff Kinnison, Nathaniel Kremer-Herman, Douglas Thain, Walter Scheirer
2018 arXiv   pre-print
In this paper, we introduce a framework for massively Scalable Hardware-Aware Distributed Hyperparameter Optimization (SHADHO).  ...  Existing hyperparameter optimization methods are highly parallel but make no effort to balance the search across heterogeneous hardware or to prioritize searching high-impact spaces.  ...  Tuning a neural network architecture for a particular learning problem presents a different set of challenges from standard hyperparameter tuning, including selecting neural network layers and connections  ...
arXiv:1707.01428v2 fatcat:tyxbvw2vs5hplk722kya7p65za
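
SHADHO balances the search across heterogeneous hardware. A hedged toy illustration of the underlying idea, allocating trials to workers in proportion to measured throughput (the numbers and the proportional rule are illustrative, not SHADHO's actual scheduler):

```python
# Hedged sketch of hardware-aware allocation: assign more trials to the
# workers with higher measured throughput. The worker names, throughput
# figures, and proportional rule are illustrative only.
throughput = {"gpu_node": 8.0, "cpu_node_a": 2.0, "cpu_node_b": 1.0}  # trials/min
n_trials = 44

total = sum(throughput.values())
allocation = {worker: round(n_trials * rate / total)
              for worker, rate in throughput.items()}
print(allocation)  # {'gpu_node': 32, 'cpu_node_a': 8, 'cpu_node_b': 4}
```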

Hyperparameter Tuning for Deep Reinforcement Learning Applications [article]

Mariam Kiran, Melis Ozyildirim
2022 arXiv   pre-print
In comparison to other neural network architectures, deep RL has not witnessed much hyperparameter tuning, due to its algorithmic complexity and the simulation platforms needed.  ...  In this paper, we propose a distributed variable-length genetic algorithm framework to systematically tune hyperparameters for various RL applications, improving training time and robustness of the architecture  ...  Other hyperparameter tuning libraries such as Ray, HyperBand, or Optuna allow hyperparameter tuning, but not for deep RL applications.  ...
arXiv:2201.11182v1 fatcat:ilhx5djtlzbcdcohcax6mj5dda
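
The proposed framework is a distributed variable-length genetic algorithm over RL hyperparameters. The sketch below is a minimal fixed-length, single-machine GA (selection, crossover, mutation) with parallel fitness evaluation; `fitness` is a placeholder for training an RL agent and returning its reward, and the parameter bounds are illustrative:

```python
# Minimal fixed-length genetic algorithm over hyperparameter vectors
# (the paper's version is distributed and variable-length). fitness() is a
# placeholder; in the paper it would be the return of a trained RL agent.
import random
from concurrent.futures import ProcessPoolExecutor

BOUNDS = {"lr": (1e-5, 1e-2), "gamma": (0.90, 0.999), "batch": (16, 256)}

def fitness(ind):
    # Higher is better; stand-in for training an RL agent with these values.
    return -((ind["lr"] - 1e-3) ** 2 + (ind["gamma"] - 0.99) ** 2)

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in BOUNDS}

def mutate(ind, rate=0.2):
    return {k: (random.uniform(*BOUNDS[k]) if random.random() < rate else v)
            for k, v in ind.items()}

if __name__ == "__main__":
    pop = [random_individual() for _ in range(20)]
    with ProcessPoolExecutor() as pool:
        for _ in range(10):                        # generations
            scores = list(pool.map(fitness, pop))  # parallel evaluation
            ranked = [ind for _, ind in sorted(zip(scores, pop),
                                               key=lambda t: t[0], reverse=True)]
            parents = ranked[:10]                  # truncation selection
            pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                             for _ in range(10)]
    print(max(pop, key=fitness))
```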

Convolutional neural network hyperparameter optimization applied to land cover classification

Vladyslav Yaloveha, Andrii Podorozhniak, Heorhii Kuchuk
2022 RADIOELECTRONIC AND COMPUTER SYSTEMS  
The automated hyperparameter tuning techniques are divided into two groups: black-box optimization techniques (such as Grid Search, Random Search) and multi-fidelity optimization techniques (HyperBand,  ...  Therefore, the hyperparameter optimization problem becomes a key issue in a deep learning algorithm.  ...  It often requires a massive amount of time to optimize the objective function of a model with a reasonable number of hyperparameter configurations [12].  ...
doi:10.32620/reks.2022.1.09 fatcat:tw4l4qqo6zdiffywsclaapdzda
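
Among the multi-fidelity techniques mentioned, HyperBand is built around successive halving: many configurations receive a small budget, and only the best fraction is promoted to larger budgets. A minimal sketch with a placeholder `evaluate` function standing in for partial training:

```python
# Minimal successive-halving sketch (the inner loop of HyperBand): evaluate
# many configurations at a small budget, keep the best third, triple the
# budget, and repeat. evaluate() is a placeholder for partial model training.
import random

def evaluate(config, budget):
    # Placeholder "validation loss" that improves with budget (e.g. epochs).
    return (config["lr"] - 0.01) ** 2 + 1.0 / budget

def successive_halving(n_configs=27, min_budget=1, eta=3):
    configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        losses = [(evaluate(c, budget), c) for c in configs]
        losses.sort(key=lambda t: t[0])
        configs = [c for _, c in losses[: max(1, len(configs) // eta)]]
        budget *= eta                     # promote survivors to a larger budget
    return configs[0]

print(successive_halving())
```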

Efficient State-space Exploration in Massively Parallel Simulation Based Inference [article]

Sourabh Kulkarni, Csaba Andras Moritz
2021 arXiv   pre-print
The key idea is to replace the notion of a single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step-sizes sampled from a tuned Beta distribution  ...  While primarily run on CPUs in HPC clusters, these algorithms have been shown to scale in performance when developed to be run on massively parallel architectures such as GPUs.  ...  Parallel ABC-SMC with MCMC Tuned Step-Size: In earlier work [14], [15], we explored the possibility of massively parallel simulations of the epidemiology model for COVID-19.  ...
arXiv:2106.15508v1 fatcat:2wwff3mkcrhqvnarjluamhey2i
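
The quoted idea replaces a single fixed step-size hyperparameter with step sizes drawn per proposal from a tuned Beta distribution. A minimal NumPy sketch of drawing and rescaling such step sizes (the shape parameters and range are illustrative, not taken from the paper):

```python
# Minimal sketch: instead of one fixed step size, draw a step size for each
# proposal from a Beta(a, b) distribution and rescale it into [min, max].
# The shape parameters and range below are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0                      # tuned shape parameters of the Beta
step_min, step_max = 0.01, 0.5       # range of admissible step sizes

def sample_step_sizes(n):
    return step_min + (step_max - step_min) * rng.beta(a, b, size=n)

steps = sample_step_sizes(1000)      # one step size per parallel proposal
print(steps.mean(), steps.min(), steps.max())
```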

Training-free hyperparameter optimization of neural networks for electronic structures in matter [article]

Lenz Fiedler, Nils Hoffmann, Parvez Mohammed, Gabriel A. Popoola, Tamar Yovell, Vladyslav Oles, J. Austin Ellis, Siva Rajamanickam, Attila Cangi
2022 arXiv   pre-print
This, however, causes a massive computational overhead in addition to that of data generation.  ...  The grand challenge therein is finding a suitable machine-learning model during a process called hyperparameter optimization.  ...  [Table fragment: Method | Optimized Hyperparameters | Parallelization, e.g. Direct search | Architecture and Optimization | upper limit for parallelization is the number of potential values per hyperparameter.]  ...
arXiv:2202.09186v3 fatcat:z6ywisf56ndefibehxdobjz4em

Performance Benchmarking of Data Augmentation and Deep Learning for Tornado Prediction

Carlos A. Barajas, Matthias K. Gobbert, Jianwu Wang
2019 2019 IEEE International Conference on Big Data (Big Data)  
PARALLELISM OF HYPERPARAMETER TUNING  ...  learning hyperparameter tuning.  ...
doi:10.1109/bigdata47090.2019.9006531 dblp:conf/bigdataconf/BarajasGW19 fatcat:eppg72yqnzby7c3pcpzuitdaie
Showing results 1 — 15 out of 5,207 results