
Efficient Hyperparameter Optimization for Differentially Private Deep Learning [article]

Aman Priyanshu, Rakshit Naidu, Fatemehsadat Mireshghallah, Mohammad Malekzadeh
2021 arXiv   pre-print
framework: evolutionary, Bayesian, and reinforcement learning.  ...  Therefore, there is an essential need for algorithms that, within a given search space, can find near-optimal hyperparameters for the best achievable privacy-utility tradeoffs efficiently.  ...  Acknowledgement Mohammad Malekzadeh was partially supported by the UK EPSRC (grant no. EP/T023600/1) within the CHIST-ERA program.  ... 
arXiv:2108.03888v1 fatcat:gomewnongbei3dptgmyniiniz4

Hyperparameter Tuning for Deep Reinforcement Learning Applications [article]

Mariam Kiran, Melis Ozyildirim
2022 arXiv   pre-print
However, choosing the right hyperparameters can have a huge impact on the performance and reliability of the deployed solution, i.e., the inference models produced via RL and used for decision-making.  ...  Reinforcement learning (RL) applications, where an agent can simply learn optimal behaviors by interacting with the environment, are quickly gaining tremendous success in a wide variety of applications  ...  Based on the example, with 4 blocks of individuals, each set is trained with the chosen gym environment and the model is saved.  ... 
arXiv:2201.11182v1 fatcat:ilhx5djtlzbcdcohcax6mj5dda

Quantity vs. Quality: On Hyperparameter Optimization for Deep Reinforcement Learning [article]

Lars Hertel, Pierre Baldi, Daniel L. Gillen
2020 arXiv   pre-print
From our experiments we conclude that Bayesian optimization with a noise robust acquisition function is the best choice for hyperparameter optimization in reinforcement learning tasks.  ...  In particular, we benchmark whether it is better to explore a large quantity of hyperparameter settings via pruning of bad performers, or to aim for quality of collected results by using  ...  From comparing algorithms on two reinforcement learning tasks we have found that model-based search in the form of GP-based Bayesian optimization performs best in terms of the average achieved across optimization  ... 
arXiv:2007.14604v2 fatcat:wiulxfb4c5ha7ink2limugggfe
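The entry above recommends GP-based Bayesian optimization with a noise-robust acquisition function. As an illustration, here is a minimal, self-contained sketch of the classic expected-improvement (EI) acquisition over a Gaussian-process posterior; this is the textbook formulation, not the paper's specific noise-robust variant, and the `mu`/`sigma` inputs are assumed to come from some surrogate model fitted elsewhere.

```python
import math

def expected_improvement(mu, sigma, best):
    """Expected improvement of a candidate whose GP posterior has mean `mu`
    and standard deviation `sigma`, over the incumbent value `best`
    (maximization convention)."""
    if sigma <= 0:
        # Degenerate posterior: improvement is just the (clipped) mean gap.
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (mu - best) * cdf + sigma * pdf
```

Selecting the candidate with the highest EI trades off exploitation (high `mu`) against exploration (high `sigma`); noise-robust variants typically replace the raw incumbent `best` with a statistic that is less sensitive to lucky noisy observations.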

Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization [article]

Aaron Klein, Frank Hutter
2019 arXiv   pre-print
Based on this data, we performed an in-depth analysis to gain a better understanding of the properties of the optimization problem, as well as of the importance of different types of hyperparameters.  ...  Second, we exhaustively compared various different state-of-the-art methods from the hyperparameter optimization literature on these benchmarks in terms of performance and robustness.  ...  Based on the data we generated for these benchmarks, we had a closer look at the difficulty of the optimization problem and the importance of different hyperparameters.  ... 
arXiv:1905.04970v1 fatcat:zdwgjyko35bmta4hlxryh34u4q

Reinforcement Learning for Hyperparameter Tuning in Deep Learning-based Side-channel Analysis [article]

Jorai Rijsdijk, Lichao Wu, Guilherme Perin, Stjepan Picek
2021 IACR Cryptology ePrint Archive  
We mount an investigation on three commonly used datasets and two leakage models where the results show that reinforcement learning can find convolutional neural networks exhibiting top performance while  ...  In this paper, we propose to use reinforcement learning to tune the convolutional neural network hyperparameters.  ...  More precisely, our main contributions are: • We propose the reinforcement learning framework for hyperparameter tuning for deep learning-based SCA.  ... 
dblp:journals/iacr/RijsdijkWPP21 fatcat:spmtmvdq6vb4xbmf3ksrlsx2h4

Deep-Learning Model Selection and Parameter Estimation from a Wind Power Farm in Taiwan

Wen-Hui Lin, Ping Wang, Kuo-Ming Chao, Hsiao-Chung Lin, Zong-Yu Yang, Yu-Huang Lai
2022 Applied Sciences  
Therefore, the present study focuses on determining the proper hyperparameters for DLN models using a Q-learning scheme for four developed models.  ...  In evaluating the effectiveness of hyperparameter selection for the proposed model, the performance of four DLN-based prediction models for power forecasting—TCN, long short-term memory (LSTM), recurrent  ...  Hyperparameter Optimization for Machine Learning Models The performance of deep learning models strongly depends on choosing a set of optimal hyperparameters.  ... 
doi:10.3390/app12147067 fatcat:nidft2wdkngpxiafllawogiyvi

Hyperparameter Optimization for the LSTM Method of AUV Model Identification Based on Q-Learning

Dianrui Wang, Junhe Wan, Yue Shen, Ping Qin, Bo He
2022 Journal of Marine Science and Engineering  
As hyper-parameter values have a significant impact on LSTM performance, it is important to select the optimal combination of hyper-parameters.  ...  A Q-learning approach can optimize the network hyperparameters in the LSTM.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/jmse10081002 fatcat:z55nqzcqlnavfieadmezbw6sru
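The two entries above both cast hyperparameter selection as a Q-learning problem. A minimal sketch of the idea, reduced to a single-step (bandit-style) formulation: each action is one hyperparameter candidate and the reward is its validation score. The candidate learning rates and the toy deterministic `score` function below are illustrative assumptions standing in for real model training, not taken from either paper.

```python
import random

def tune_with_q_learning(candidates, score, episodes=500,
                         alpha=0.1, epsilon=0.2, seed=0):
    """Single-step Q-learning over hyperparameter candidates:
    epsilon-greedy action selection plus the tabular Q-update
    Q(a) <- Q(a) + alpha * (r - Q(a)) (no next state)."""
    rng = random.Random(seed)
    q = [0.0] * len(candidates)
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.randrange(len(candidates))           # explore
        else:
            a = max(range(len(candidates)), key=lambda i: q[i])  # exploit
        r = score(candidates[a])       # stand-in for "train + validate"
        q[a] += alpha * (r - q[a])
    return candidates[max(range(len(candidates)), key=lambda i: q[i])]

# Toy, deterministic "validation score" that peaks sharply at lr = 0.01:
best = tune_with_q_learning(
    [0.1, 0.01, 0.001],
    score=lambda lr: 1.0 - min(1.0, 100 * abs(lr - 0.01)),
)
```

In the papers above the state and reward are far richer (network configurations, prediction error of the trained model), but the update rule is the same tabular one sketched here.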

Self-Learning Tuning for Post-Silicon Validation [article]

Peter Domanski, Dirk Pflüger, Jochen Rivoir, Raphaël Latty
2022 arXiv   pre-print
Therefore, we propose a novel approach based on learn-to-optimize and reinforcement learning in order to solve complex and mixed-type tuning tasks in an efficient and robust way.  ...  Existing approaches are no longer able to cope with the complexity of tasks such as robust performance tuning in post-silicon validation.  ...  Whereas most investigations focus on continuous optimization, we make use of reinforcement learning in the training procedure to enable learning of optimization methods for mixed-type problems.  ... 
arXiv:2111.08995v3 fatcat:crppzo5vmbcj7bq52sm55exnle

AutoHAS: Efficient Hyperparameter and Architecture Search [article]

Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, Quoc V. Le
2021 arXiv   pre-print
AutoHAS learns to alternately update the shared network weights and a reinforcement learning (RL) controller, which learns the probability distribution for the architecture candidates and HP candidates  ...  Efficient hyperparameter or architecture search methods have shown remarkable results, but each of them is only applicable to searching for either hyperparameters (HPs) or architectures.  ...  Acknowledgements We want to thank Gabriel Bender, Hanxiao Liu, Hieu Pham, Ruoming Pang, Barret Zoph and Yanqi Zhou for their help and feedback.  ... 
arXiv:2006.03656v3 fatcat:y3socjtfh5bvrpukno5kmasmwy

ETM: Effective Tuning Method based on Multi-objective and Knowledge Transfer in Image Recognition

Weichun Liu, Chenglin Zhao
2021 IEEE Access  
Especially in terms of latency performance, the proposed method performs best on all tasks (57 data sets) across the three algorithms to be optimized.  ...  However, the use of machine learning and deep learning still poses major challenges: the tuning process is critical and challenging for algorithm performance.  ...  optimization methods based on reinforcement learning.  ... 
doi:10.1109/access.2021.3062366 fatcat:qx4wkbh6jjfzzabjxh3pycevny

Development of prediction model of steel fiber-reinforced concrete compressive strength using random forest algorithm combined with hyperparameter tuning and k-fold cross-validation

Nadia Moneem Al-Abdaly, Salwa R. Al-Taai, Hamza Imran, Majed Ibrahim
2021 Eastern-European Journal of Enterprise Technologies  
The proposed models were developed using ten important material parameters for steel fiber-reinforced concrete characterization.  ...  To determine the optimal hyperparameters for the Random Forest algorithm, the Grid Search Cross-Validation approach was utilized.  ...  In this case, mtry is an important optimization parameter for the RF model.  ... 
doi:10.15587/1729-4061.2021.242986 fatcat:lntqfwkiurhqhe4ngrg2ewvt4i
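The entry above combines grid search with k-fold cross-validation to tune a random forest. A minimal sketch of the two pieces in plain Python, assuming a user-supplied `evaluate(params, train, test)` callback as a hypothetical stand-in for "fit the model on the training indices and score it on the test indices":

```python
from itertools import product

def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds; yield (train, test) pairs."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def grid_search_cv(param_grid, evaluate, n, k=5):
    """Return the parameter combination with the best mean CV score."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, values))
        scores = [evaluate(params, tr, te) for tr, te in kfold_indices(n, k)]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params

# Toy usage: the hypothetical `evaluate` simply rewards larger ensembles.
best = grid_search_cv(
    {"n_estimators": [50, 100, 200], "max_depth": [None, 10]},
    evaluate=lambda params, train, test: params["n_estimators"],
    n=10, k=5,
)
```

Averaging the score over the k held-out folds, as above, is what makes the grid-search comparison robust to a single lucky or unlucky train/test split.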

Reinforcement Learning for Hyperparameter Tuning in Deep Learning-based Side-channel Analysis

Jorai Rijsdijk, Lichao Wu, Guilherme Perin, Stjepan Picek
2021 Transactions on Cryptographic Hardware and Embedded Systems  
We mount an investigation on three commonly used datasets and two leakage models where the results show that reinforcement learning can find convolutional neural networks exhibiting top performance while  ...  Deep learning represents a powerful set of techniques for profiling side-channel analysis.  ...  More precisely, our main contributions are: • We propose the reinforcement learning framework for hyperparameter tuning for deep learning-based SCA.  ... 
doi:10.46586/tches.v2021.i3.677-707 fatcat:i7pw54x7fbgbjmtulawuofbhiy

Reinforcement Learning with Videos: Combining Offline Observations with Interaction [article]

Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, Chelsea Finn
2021 arXiv   pre-print
In this paper, we consider the question: can we perform reinforcement learning directly on experience collected by humans?  ...  Videos of humans, on the other hand, are a readily available source of broad and interesting experiences.  ...  Joint Optimization We jointly optimize the domain adaptation loss with the inverse model loss, L a and the optimization objective of the chosen reinforcement learning algorithm, L RL , according to the  ... 
arXiv:2011.06507v2 fatcat:twqtxfonrjcpteab4i4z6wmwqq

On Effective Scheduling of Model-based Reinforcement Learning [article]

Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, Zhenguo Li
2022 arXiv   pre-print
Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency.  ...  Despite its impressive success so far, it is still unclear how to appropriately schedule the important hyperparameters to achieve adequate performance, such as the real data ratio for policy optimization  ...  Acknowledgments The SJTU team is supported by "New Generation of AI 2030" Major Project (2018AAA0100900), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and National Natural Science  ... 
arXiv:2111.08550v3 fatcat:v6jabk3t7fbzldbixiqzba4b3i

Deep Reinforcement Learning using Cyclical Learning Rates [article]

Ralf Gulde, Marc Tuscher, Akos Csiszar, Oliver Riedel, Alexander Verl
2020 arXiv   pre-print
One of the most influential parameters in optimization procedures based on stochastic gradient descent (SGD) is the learning rate.  ...  Deep Reinforcement Learning (DRL) methods often rely on the meticulous tuning of hyperparameters to successfully resolve problems.  ...  Non-Stationarity of RL Problems Deep reinforcement learning and supervised deep learning differ in an important aspect: while the general optimization methods and models can be quite similar, the data  ... 
arXiv:2008.01171v1 fatcat:o4e7qjws4rdehgaq3idrdreivm
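The last entry applies cyclical learning rates (CLR) to deep RL. A minimal sketch of the standard triangular CLR schedule, which linearly ramps the SGD learning rate between a lower and an upper bound; the specific bounds and step size shown in the comments are illustrative, not the paper's settings:

```python
def triangular_clr(step, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate: ramps linearly from base_lr up to
    max_lr over `step_size` steps, then back down, and repeats.
    E.g. with base_lr=1e-3, max_lr=6e-3, step_size=4, the schedule peaks
    at step 4 and returns to base_lr at step 8."""
    cycle = (step // (2 * step_size)) + 1        # 1-based cycle counter
    x = abs(step / step_size - 2 * cycle + 1)    # position within the cycle
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
```

Because the rate periodically rises again instead of decaying monotonically, the schedule can help the optimizer escape plateaus, which is one motivation for trying it on non-stationary DRL objectives.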