Bayesian optimization of hyper-parameters in reservoir computing [article]

Jan Yperman, Thijs Becker
2017 arXiv   pre-print
In addition to a set of optimal hyper-parameters, the method also provides a probability distribution of the cost function as a function of the hyper-parameters.  ...  Due to its automated nature, this method significantly reduces the need for expert knowledge when optimizing the hyper-parameters in reservoir computing.  ...  We checked that these small differences do not lead to qualitative differences in behaviour.  ... 
arXiv:1611.05193v3 fatcat:ayx6f5tukbhbncfdsoqdaiduhi
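
The snippet describes a Gaussian-process surrogate that yields a full predictive distribution of the cost over the hyper-parameters. A minimal sketch of that idea in Python follows; the cost function and the single hyper-parameter are placeholders, not the paper's reservoir-computing pipeline.

```python
# Minimal sketch of GP-based Bayesian optimization over one hyper-parameter
# (say, a reservoir's spectral radius). The cost function is a toy stand-in;
# nothing here reproduces the paper's reservoir pipeline.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(rho):                        # placeholder for the validation cost
    return np.sin(3 * rho) + 0.1 * rho

X = np.array([[0.2], [0.8], [1.4]])   # hyper-parameter values tried so far
y = cost(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# The GP is the "probability distribution of the cost function": a mean and
# standard deviation at every candidate, used here for expected improvement.
cand = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mu, sigma = gp.predict(cand, return_std=True)
imp = y.min() - mu
z = imp / np.maximum(sigma, 1e-9)
ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
print("next hyper-parameter to evaluate:", cand[np.argmax(ei)][0])
```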

Genetic CFL: Optimization of Hyper-Parameters in Clustered Federated Learning [article]

Shaashwat Agrawal, Sagnik Sarkar, Mamoun Alazab, Praveen Kumar Reddy Maddikunta, Thippa Reddy Gadekallu, Quoc-Viet Pham
2021 arXiv   pre-print
Then, we introduce an algorithm that drastically increases the individual cluster accuracy by integrating density-based clustering and genetic hyper-parameter optimization.  ...  To overcome these limitations, we propose a novel hybrid algorithm, namely genetic clustered FL (Genetic CFL), that clusters edge devices based on the training hyper-parameters and genetically modifies  ...  Since optimizing each client's model parameters individually is not feasible, we propose to do so for each cluster.  ... 
arXiv:2107.07233v2 fatcat:7wv4s2nk3ffk7ijhl5kzfyfbju
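
A rough sketch of the two ingredients the snippet names: density-based clustering of clients by their training hyper-parameters, followed by a simple genetic mutation per cluster. The client vectors and the mutation operator below are illustrative stand-ins, not the Genetic CFL implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# each row: (log10 learning rate, log2 batch size) for one edge device
clients = np.column_stack([rng.uniform(-4, -1, 20), rng.uniform(4, 8, 20)])

labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(clients)

def mutate(params, scale=0.1):
    """Gaussian mutation: the genetic operator applied per cluster."""
    return params + rng.normal(0.0, scale, size=params.shape)

for c in set(labels) - {-1}:           # -1 marks DBSCAN noise points
    centroid = clients[labels == c].mean(axis=0)
    print(f"cluster {c}: evolved hyper-parameters {mutate(centroid)}")
```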

Hyper-parameter Selection in Convolutional Neural Networks Using Microcanonical Optimization Algorithm

Ayla Gulcu, Zeki Kus
2020 IEEE Access  
In this study, the Microcanonical Optimization algorithm, a variant of the Simulated Annealing method, is used for hyper-parameter optimization and architecture selection for Convolutional Neural Networks  ...  It is shown that the proposed method is able to achieve classification results competitive with state-of-the-art architectures.  ...  In this study, an architecture that is allowed to extend or shrink dynamically is adopted, and a wide range of hyper-parameters is selected for optimization.  ... 
doi:10.1109/access.2020.2981141 fatcat:v7txal3yybebbhmpcyyugrqdmi
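
Microcanonical optimization replaces simulated annealing's temperature with a "demon" that carries a finite energy budget: a move is accepted only if the demon can pay for the cost increase. A toy version over one hyper-parameter, with the CNN evaluation replaced by a dummy cost, might look like this:

```python
import random

def cost(x):                 # stand-in for the validation error of a CNN config
    return (x - 0.3) ** 2 + 0.05 * random.random()

x = 0.9
e = cost(x)
demon = 0.2                  # demon energy plays the role of temperature
best_x, best_e = x, e
for _ in range(500):
    x_new = min(1.0, max(0.0, x + random.uniform(-0.1, 0.1)))
    e_new = cost(x_new)
    delta = e_new - e
    if delta <= demon:       # accept only if the demon can pay the increase
        demon -= delta       # demon absorbs or releases the energy difference
        x, e = x_new, e_new
        if e < best_e:
            best_x, best_e = x, e
print("best hyper-parameter:", round(best_x, 3), "cost:", round(best_e, 4))
```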

Deep Learning Hyper-Parameter Optimization for Video Analytics in Clouds

Muhammad Usman Yaseen, Ashiq Anjum, Omer Rana, Nikolaos Antonopoulos
2018 IEEE Transactions on Systems, Man & Cybernetics. Systems  
A key focus in this work is on tuning hyper-parameters associated with the deep learning algorithm used to construct the model.  ...  Subsequently, the parameters that contribute to the best performance are selected for the video object classification pipeline.  ...  Some researchers have also proposed methods to perform automatic hyper-parameter optimization.  ... 
doi:10.1109/tsmc.2018.2840341 fatcat:y6fp7xy6ufen5pae4qjctuc6au

Multi-objective simulated annealing for hyper-parameter optimization in convolutional neural networks

Ayla Gülcü, Zeki Kuş
2021 PeerJ Computer Science  
In this study, we model a CNN hyper-parameter optimization problem as a bi-criteria optimization problem, where the first objective is the classification accuracy and the second objective is the  ...  For this bi-criteria optimization problem, we develop a Multi-Objective Simulated Annealing (MOSA) algorithm for obtaining high-quality solutions in terms of both objectives.  ...  In this study, we present a single-stage HPO method to optimize the hyper-parameters of CNNs for object recognition problems considering two competing objectives, classification accuracy and the computational  ... 
doi:10.7717/peerj-cs.338 pmid:33816989 pmcid:PMC7924536 fatcat:vsq5q2raujgqtivpswymsicram
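
A minimal sketch of the bi-criteria setup: each configuration is scored on (error, cost), a weighted-sum scalarization drives the annealing acceptance, and an archive keeps the non-dominated solutions. Both objectives are toy functions standing in for the paper's CNN training, and the scalarization is an assumption, not necessarily the MOSA rule used there.

```python
import math, random

def objectives(x):
    error = (x - 0.4) ** 2            # stand-in for classification error
    cost = x                          # stand-in for computational cost
    return (error, cost)

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and a != b

x = 0.9
fx = objectives(x)
archive, T = [], 1.0
for _ in range(300):
    y = min(1.0, max(0.0, x + random.uniform(-0.1, 0.1)))
    fy = objectives(y)
    # weighted-sum scalarization for acceptance; archive keeps the Pareto set
    if sum(fy) < sum(fx) or random.random() < math.exp((sum(fx) - sum(fy)) / T):
        x, fx = y, fy
        archive = [p for p in archive if not dominates(fx, p[1])]
        if not any(dominates(p[1], fx) for p in archive) and (x, fx) not in archive:
            archive.append((x, fx))
    T *= 0.99                         # cooling schedule
print(len(archive), "non-dominated configurations kept")
```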

Stage-based Hyper-parameter Optimization for Deep Learning [article]

Ahnjae Shin, Dong-Jin Shin, Sungwoo Cho, Do Yoon Kim, Eunji Jeong, Gyeong-In Yu, Byung-Gon Chun
2019 arXiv   pre-print
As deep learning techniques continue to advance, hyper-parameter optimization has become a major workload in deep learning clusters.  ...  Our preliminary experimental results show that applying stage-based execution to hyper-parameter optimization algorithms outperforms the original trial-based method, saving required GPU-hours and end-to-end  ...  Instead, it is common to parameterize the sequence and set the parameters as hyper-parameters.  ... 
arXiv:1911.10504v1 fatcat:67ya4nnvnjgd5ntiorivapxw7e
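
The stage-based idea can be illustrated by caching stage results keyed on the prefix of stage parameters, so trials that share a prefix pay for it only once. The stage names, cache structure, and unit costs below are invented for illustration; they are not the paper's system.

```python
cache = {}
gpu_hours = 0

def run_stage(prefix, stage_params):
    """Run one stage, reusing a cached result when the prefix repeats."""
    global gpu_hours
    key = prefix + (stage_params,)
    if key not in cache:
        gpu_hours += 1                     # pretend each stage costs 1 GPU-hour
        cache[key] = f"ckpt-{hash(key) & 0xffff:04x}"
    return key

# two trials sharing the same first (data augmentation) stage
trials = [(("aug=flip",), ("lr=0.1",)), (("aug=flip",), ("lr=0.01",))]
for trial in trials:
    prefix = ()
    for stage in trial:
        prefix = run_stage(prefix, stage)
print("stages actually executed:", gpu_hours)  # 3, not 4: the prefix is shared
```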

Performance-Estimation Properties of Cross-Validation-Based Protocols with Simultaneous Hyper-Parameter Optimization [chapter]

Ioannis Tsamardinos, Amin Rakhshani, Vincenzo Lagani
2014 Lecture Notes in Computer Science  
Combining the two tasks is not trivial because when one selects the set of hyper-parameters that seem to provide the best estimated performance, this estimation is optimistic (biased / overfitted) due  ...  hyper-parameters (e.g., K in K-NN), also called model selection, and (b) provide an estimate of the performance of the final, reported model.  ...  Acknowledgements This work was conducted partially in the framework of the BIOSYS research project, Action KRIPIS, project No MIS-448301 (2013SE01380036) by the Greek General Secretariat for Research and  ... 
doi:10.1007/978-3-319-07064-3_1 fatcat:vqncwyxfyfckdntd52venmjgsy
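
The optimism the snippet describes is conventionally removed with nested cross-validation, where hyper-parameter selection happens strictly inside each outer training fold. A minimal sklearn sketch with K-NN, K being the tuned hyper-parameter as in the snippet's own example:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

# inner loop: pick K by cross-validated grid search
inner = GridSearchCV(KNeighborsClassifier(),
                     {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)

# outer loop: never sees the data used to pick K, removing the optimism
outer_scores = cross_val_score(inner, X, y, cv=5)
print("unbiased performance estimate: %.3f" % outer_scores.mean())
```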

Speeding up Hyper-parameter Optimization by Extrapolation of Learning Curves Using Previous Builds [chapter]

Akshay Chandrashekaran, Ian R. Lane
2017 Lecture Notes in Computer Science  
This can be used to terminate poorly performing builds and thus speed up hyper-parameter search with performance comparable to non-augmented hyper-parameter optimization techniques.  ...  We incorporate this termination criterion into an existing hyper-parameter optimization toolkit.  ...  Although we have shown only one run of hyper-parameter optimization for each, we do not expect to see a large variance between multiple runs.  ... 
doi:10.1007/978-3-319-71249-9_29 fatcat:lncd553ecrfjjigkv4o7cvgz7y
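
A bare-bones version of the termination criterion: fit a saturating curve to the partial validation history and stop the build if the extrapolated final accuracy cannot beat the best previous build. The power-law model, horizon, and numbers below are illustrative assumptions, not the paper's exact extrapolation scheme.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    return a - b * t ** (-c)            # saturating accuracy curve

best_previous = 0.90                    # best final accuracy of earlier builds
epochs = np.arange(1, 11, dtype=float)
acc = 0.85 - 0.30 * epochs ** -0.7      # partial curve of the current build

params, _ = curve_fit(power_law, epochs, acc, p0=[0.9, 0.3, 0.5], maxfev=5000)
predicted_final = power_law(100.0, *params)   # extrapolate to epoch 100
if predicted_final < best_previous:
    print("terminate early: predicted %.3f < best %.3f"
          % (predicted_final, best_previous))
```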

Deep Autoencoding GMM-based Unsupervised Anomaly Detection in Acoustic Signals and its Hyper-parameter Optimization [article]

Harsh Purohit, Ryo Tanabe, Takashi Endo, Kaori Suefusa, Yuki Nikaido, Yohei Kawaguchi
2020 arXiv   pre-print
In addition, the DAGMM-HO solves the hyper-parameter sensitivity problem of the conventional DAGMM by performing hyper-parameter optimization based on the gap statistic and the cumulative eigenvalues.  ...  In this work, we propose a new method based on a deep autoencoding Gaussian mixture model with hyper-parameter optimization (DAGMM-HO).  ...  HYPER-PARAMETER OPTIMIZATION In order to develop an acoustic anomaly detection method based on DAGMM, we found that two important hyper-parameters should be tuned accurately.  ... 
arXiv:2009.12042v1 fatcat:h2agxnyrpbhg7k5hvxni5v7whu
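
Of the two criteria the snippet names, the cumulative-eigenvalue one maps directly onto PCA explained variance: choose the smallest latent dimension whose cumulative eigenvalue ratio clears a threshold. A short sketch, where the 0.95 threshold and the synthetic data are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))  # correlated features

pca = PCA().fit(X)
cum = np.cumsum(pca.explained_variance_ratio_)
latent_dim = int(np.searchsorted(cum, 0.95) + 1)   # smallest dim over threshold
print("latent dimension covering 95% of variance:", latent_dim)
```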

Efficient Hyper-parameter Optimization for NLP Applications

Lidan Wang, Minwei Feng, Bowen Zhou, Bing Xiang, Sridhar Mahadevan
2015 Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing  
Hyper-parameter optimization is an important problem in natural language processing (NLP) and machine learning.  ...  In this paper, we explore this problem from a different angle and propose a multi-stage hyper-parameter optimization that breaks the problem into multiple stages with increasing amounts of data.  ...  The common theme is to perform a set of iterative hyper-parameter optimizations, where in each round these methods fit a hyper-parameter response surface using a probabilistic regression function such  ... 
doi:10.18653/v1/d15-1253 dblp:conf/emnlp/WangFZXM15 fatcat:pxgjjon7yjb6jjgsu2mkrdgt6y
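
A compact way to realize the multi-stage idea is successive halving over growing data slices: score every configuration cheaply, keep the top half, and re-score the survivors on more data. The scoring function below is a stand-in for training an NLP model, not the paper's method.

```python
import random

def score(config, n_examples):          # placeholder for model validation
    noise = random.gauss(0, 1.0 / n_examples ** 0.5)  # shrinks with more data
    return -(config - 0.35) ** 2 + noise

configs = [random.random() for _ in range(16)]
for n in [100, 1000, 10000]:            # stages with increasing amounts of data
    ranked = sorted(configs, key=lambda c: score(c, n), reverse=True)
    configs = ranked[: max(1, len(ranked) // 2)]
print("surviving configurations:", [round(c, 3) for c in configs])
```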

Effect of Hyper-Parameter Optimization on the Deep Learning Model Proposed for Distributed Attack Detection in Internet of Things Environment [article]

Md Mohaimenuzzaman, Zahraa Said Abdallah, Joarder Kamruzzaman, Bala Srinivasan
2018 arXiv   pre-print
This paper studies the effect of various hyper-parameters and their selection for the best performance of the deep learning model proposed in [1] for distributed attack detection in the Internet of Things  ...  As a consequence, this study shows that the model's accuracy as reported in the paper is not achievable even with the best selection of parameters, a finding also supported by another recent publication  ...  The values for the parameters marked with an asterisk (*) are not specified in the paper [1] and were chosen by us after rigorous hyper-parameter optimization.  ... 
arXiv:1806.07057v1 fatcat:625fcbhkbzcdxiqoxkf7ai57vq

Selecting Hyper-Parameters of Gaussian Process Regression Based on Non-Inertial Particle Swarm Optimization in Internet of Things

Lanlan Kang, Ruey-Shun Chen, Naixue Xiong, Yeh-Cheng Chen, Yu-Xi Hu, Chien-Ming Chen
2019 IEEE Access  
In this study, a non-inertial particle swarm optimization with elite mutation-Gaussian process regression (NIPSO-GPR) is proposed to optimize the hyper-parameters of GPR.  ...  When compared with several frequently used hyper-parameter optimization algorithms on linear and nonlinear time series sample data, experimental results indicate that GPR with optimized hyper-parameters  ...  It should be noted that, on the one hand, the GPR with hyper-parameters optimized by NIPSO-GPR does not obtain a better fitting error, as can be seen in both Table 2 and Fig. 5.  ... 
doi:10.1109/access.2019.2913757 fatcat:gwauhclpubeynmukiqfodvngvy
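
A sketch of a particle swarm without the inertia term (the "non-inertial" part of the snippet) maximizing a GPR's log marginal likelihood over its kernel length-scale. Elite mutation and the paper's exact update rule are omitted, and the data is synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = np.linspace(0, 10, 40).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=40)

# fit once with a fixed kernel so we can query the likelihood at any theta
gp = GaussianProcessRegressor(kernel=RBF(1.0), optimizer=None).fit(X, y)
fitness = lambda log_ls: gp.log_marginal_likelihood(np.array([log_ls]))

pos = rng.uniform(-2, 2, 8)             # particles: candidate log length-scales
pbest = pos.copy()
gbest = pos[np.argmax([fitness(p) for p in pos])]
for _ in range(30):
    r1, r2 = rng.random(8), rng.random(8)
    vel = 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)  # no inertia term
    pos = pos + vel
    for i in range(8):
        if fitness(pos[i]) > fitness(pbest[i]):
            pbest[i] = pos[i]
    gbest = pbest[np.argmax([fitness(p) for p in pbest])]
print("optimized length-scale:", np.exp(gbest))
```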

Designing Adaptive Neural Networks for Energy-Constrained Image Classification [article]

Dimitrios Stamoulis, Ting-Wu Chin, Anand Krishnan Prakash, Haocheng Fang, Sribhuvan Sajja, Mitchell Bognar, Diana Marculescu
2018 arXiv   pre-print
To address this limitation, we pursue a more powerful design paradigm where the architecture settings of the CNNs are treated as hyper-parameters to be globally optimized.  ...  We cast the design of adaptive CNNs as a hyper-parameter optimization problem with respect to energy, accuracy, and communication constraints imposed by the mobile device.  ...  To enable energy-aware hyper-parameter optimization, we use Bayesian optimization.  ... 
arXiv:1808.01550v2 fatcat:jrhnte3u35d4zczt2aj7g5uyk4
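
One simple way to fold energy constraints into a global hyper-parameter search is to penalize infeasible configurations in the objective, so any optimizer (random search below; Bayesian optimization in the paper) prefers accurate models that respect the budget. The profiler numbers and the width hyper-parameter are made up for illustration.

```python
import random

ENERGY_BUDGET = 50.0                      # assumed device energy limit

def measure(width):                       # stand-ins for profiler + validation
    accuracy = 1.0 - 1.0 / width
    energy = 0.4 * width
    return accuracy, energy

def constrained_score(width):
    acc, energy = measure(width)
    penalty = max(0.0, energy - ENERGY_BUDGET) * 10.0  # punish budget overruns
    return acc - penalty

widths = [random.randint(8, 256) for _ in range(100)]
best = max(widths, key=constrained_score)
print("chosen width:", best, "-> (accuracy, energy):", measure(best))
```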

Methods for Pattern Classification [chapter]

Yizhang Guan
2010 New Advances in Machine Learning  
In the feature space, the optimal decision function, namely the optimal hyper-plane, is determined.  ...  This is because conventional classifiers do not have a mechanism to maximize the margins of class boundaries.  ... 
doi:10.5772/9378 fatcat:omc2p26zejhqpnyx3d7twwkdze
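
The margin-maximizing decision function the snippet refers to is exactly what a linear SVM computes. A minimal sklearn illustration of the optimal hyper-plane and its margin width:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1e3).fit(X, y)   # large C approximates hard margin
w, b = clf.coef_[0], clf.intercept_[0]
margin = 2.0 / (w @ w) ** 0.5                 # geometric margin width = 2/||w||
print("hyper-plane normal:", w, "margin width:", round(margin, 3))
```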

Toward an Efficient Multi-class Classification in an Open Universe [article]

Wajdi Dhifli, Abdoulaye Baniré Diallo
2018 arXiv   pre-print
However, a more realistic scenario that fits real-world applications is to consider the possibility of encountering instances that do not belong to any of the training classes, i.e., an open-set classification  ...  Classification is a fundamental task in machine learning and data mining. Existing classification methods are designed to classify unknown instances within a set of previously known training classes.  ...  Galaxy-X introduces a softening parameter for the adjustment of the minimum bounding hyper-spheres to add more generalization or specialization to the classification models.  ... 
arXiv:1511.00725v3 fatcat:rokddeywyjg6nb3ldfw6v4qo3y
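
A simplified rendering of the hyper-sphere idea from the snippet: each training class gets a minimum bounding sphere, scaled by a softening parameter, and a test point outside every sphere is rejected as unknown. This is a toy approximation, not the Galaxy-X algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
classes = {0: rng.normal(0, 1, (50, 2)), 1: rng.normal(6, 1, (50, 2))}

softening = 1.1          # >1 generalizes the spheres, <1 specializes them
spheres = {}
for label, pts in classes.items():
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max() * softening
    spheres[label] = (center, radius)

def classify(x):
    """Nearest enclosing sphere wins; no sphere means an open-set rejection."""
    hits = [(np.linalg.norm(x - c), label)
            for label, (c, r) in spheres.items() if np.linalg.norm(x - c) <= r]
    return min(hits)[1] if hits else "unknown"

print(classify(np.array([0.2, -0.1])))   # falls inside class 0's sphere
print(classify(np.array([20.0, 20.0])))  # outside every sphere -> "unknown"
```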