25,614 Hits in 5.1 sec

Hyperparameter Transfer Learning with Adaptive Complexity [article]

Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau
2021 arXiv   pre-print
Bayesian optimization (BO) is a sample-efficient approach to automatically tune the hyperparameters of machine learning models.  ...  In this work, we enable multi-task BO to compensate for this mismatch, such that the transfer learning procedure is able to handle different data regimes in a principled way.  ...  We introduced ABRAC, a probabilistic model for Bayesian optimization with adaptive complexity.  ...
arXiv:2102.12810v1 fatcat:nlj6s6i4mvh6villxindirb24m
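
For context, a minimal sketch of the single-task BO loop that multi-task methods such as ABRAC extend, assuming scikit-learn and SciPy are available; the 1-D objective val_error is a hypothetical stand-in for a model's validation error, not anything from the paper.

```python
# Minimal Bayesian optimization of one hyperparameter (log learning rate).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def val_error(log_lr):
    # Hypothetical 1-D objective: validation error vs. log learning rate.
    return (log_lr + 3.0) ** 2 + 0.1 * np.sin(5 * log_lr)

rng = np.random.default_rng(0)
X = rng.uniform(-6, 0, size=(3, 1))            # initial design: 3 random configs
y = np.array([val_error(x[0]) for x in X])

grid = np.linspace(-6, 0, 500).reshape(-1, 1)  # candidate pool
for _ in range(10):                            # 10 BO iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected-improvement acquisition (minimization form).
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, val_error(x_next[0]))

print("best log-lr found:", X[np.argmin(y)][0])
```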

Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training [article]

Ilija Ilievski, Jiashi Feng
2016 arXiv   pre-print
To address this challenging issue, we propose a method that learns to transfer optimal hyperparameter values for a small source dataset to hyperparameter values with comparable performance on a dataset  ...  Instead, it uses surrogates to model the hyperparameter-error distributions of the two datasets and trains a neural network to learn the transfer function.  ...  To address this problem, we propose a hyperparameter transfer learning algorithm that automatically adapts the hyperparameter values from one dataset to another.  ... 
arXiv:1608.00218v1 fatcat:7ozyfwtsvzct5j7sk3yun6awa4
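
A loose illustration of the surrogate-alignment idea in this entry, not the authors' algorithm: fit one surrogate per dataset over sampled configurations (the error surfaces here are hypothetical), align configurations by the rank of their predicted error, and train a small network to map source configurations to their rank-matched target counterparts.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
H = rng.uniform(0, 1, size=(200, 2))             # sampled hyperparameter configs
err_src = ((H - 0.3) ** 2).sum(axis=1)           # hypothetical source error surface
err_tgt = ((H - 0.6) ** 2).sum(axis=1)           # hypothetical target error surface

src = RandomForestRegressor(random_state=0).fit(H, err_src)  # source surrogate
tgt = RandomForestRegressor(random_state=0).fit(H, err_tgt)  # target surrogate

# Pair configs by the rank of their predicted error under each surrogate,
# then learn the mapping from source configs to rank-matched target configs.
order_src = np.argsort(src.predict(H))
order_tgt = np.argsort(tgt.predict(H))
transfer = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
transfer.fit(H[order_src], H[order_tgt])

h_best_src = H[order_src[0]]                     # best config on the source
print("transferred config:", transfer.predict(h_best_src.reshape(1, -1)))
```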

Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start [article]

Valerio Perrone, Rodolphe Jenatton, Matthias Seeger, Cedric Archambeau
2017 arXiv   pre-print
The multiple Bayesian linear regression models are coupled through a shared feedforward neural network, which learns a joint representation and transfers knowledge across machine learning problems.  ...  We develop a multiple adaptive Bayesian linear regression model as a scalable alternative whose complexity is linear in the number of observations.  ...  We compared single task ABLR-based and standard GP-based hyperparameter optimization (HPO), both denoted by plain, with their transfer learning counterparts, both denoted by transfer.  ... 
arXiv:1712.02902v1 fatcat:fybbni2izfdlnjthmnrwzgpcme
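
A compact sketch of the Bayesian-linear-regression layer behind ABLR: a closed-form posterior over weights on top of a shared feature map. As a simplifying assumption, the learned network is replaced by a fixed random feature map, but the cost stays linear in the number of observations because the solve is in the fixed feature dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 50))                 # random projection: 1-D input -> 50 features

def phi(x):
    return np.tanh(x @ W)                    # shared nonlinear feature map

X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

alpha, beta = 1.0, 25.0                      # prior precision, noise precision
Phi = phi(X)                                 # (n, d) design matrix, d fixed
A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi  # posterior precision (d x d)
m = beta * np.linalg.solve(A, Phi.T @ y)     # posterior mean weights

x_new = np.array([[0.5]])
mu = phi(x_new) @ m                                           # predictive mean
var = 1 / beta + phi(x_new) @ np.linalg.solve(A, phi(x_new).T)  # predictive variance
print(mu.item(), var.item())
```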

Easy Transfer Learning By Exploiting Intra-domain Structures [article]

Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang
2019 arXiv   pre-print
Transfer learning aims at transferring knowledge from a well-labeled domain to a similar but different domain with limited or no labels.  ...  In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.  ...  Transfer learning (TL), or domain adaptation [12], is a promising strategy to enhance learning performance on a target domain with few or no labels by leveraging knowledge from a well-labeled source  ...
arXiv:1904.01376v2 fatcat:7d7agelvd5dibcgtr7cero7nqm
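
As a concrete companion to this entry, a sketch of correlation alignment (CORAL), a tuning-free feature alignment of the kind used for intra-domain alignment in this line of work: re-color source features so their covariance matches the target's. Data here are synthetic.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
Xs = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))  # source features
Xt = rng.normal(size=(400, 5)) @ rng.normal(size=(5, 5))  # target features

def coral(Xs, Xt, eps=1e-5):
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten the centered source covariance, then re-color with the target's.
    Xw = (Xs - Xs.mean(0)) @ fractional_matrix_power(Cs, -0.5)
    return (Xw @ fractional_matrix_power(Ct, 0.5)).real + Xs.mean(0)

Xs_aligned = coral(Xs, Xt)
# Covariances now match up to the small regularizer.
print(np.allclose(np.cov(Xs_aligned, rowvar=False),
                  np.cov(Xt, rowvar=False), atol=1e-3))
```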

ETM: Effective Tuning Method based on Multi-objective and Knowledge Transfer in Image Recognition

Weichun Liu, Chenglin Zhao
2021 IEEE Access  
With the widespread application of machine learning and deep learning, image recognition has advanced rapidly.  ...  INDEX TERMS Image recognition, machine learning, deep learning, tuning, multi-objective, knowledge transfer.  ...  This indicates that the proposed method can be well adapted to complex optimization tasks.  ...
doi:10.1109/access.2021.3062366 fatcat:qx4wkbh6jjfzzabjxh3pycevny

Guest Editorial Evolutionary Computation Meets Deep Learning

Weiping Ding, Witold Pedrycz, Gary G. Yen, Bing Xue
2021 IEEE Transactions on Evolutionary Computation  
When dealing with complex tasks, deep learning adopts a number of transformation stages to deliver an in-depth description and interpretation of the data.  ...  Deep learning approaches with evolutionary computation have frequently been used in a large variety of applications and have started to address complex and challenging issues in deep learning.  ...
doi:10.1109/tevc.2021.3096336 fatcat:ajhq2kzvkbf7zkswroldxrufp4

Optimal Kernel Selection Based on GPR for Adaptive Learning of Mean Throughput Rates in LTE Networks

Joseph Isabona, Agbotiname Lucky Imoize
2021 Journal of Technological Advancements  
One promising computationally efficient and adaptive machine learning method is Gaussian Process Regression (GPR).  ...  Results indicate that GPR training with the Matern52 kernel function achieved the best learning of user throughput data among the ten contending kernel functions.  ...  stochastic processes of varied intricacies and complexities (iii) proficiency in adaptive learning of noisy and non-noisy data (iv) adeptness in handling uncertainties in datasets (v) expert knowledge  ...
doi:10.4018/jta.290350 fatcat:xrknd4lqx5fgrowmtrmczc2ip4
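
A small sketch of the kernel-selection step this entry describes, assuming scikit-learn: fit GPR with several candidate kernels on hypothetical throughput data and rank them by held-out error; Matern(nu=2.5) corresponds to the Matern52 kernel named in the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 120).reshape(-1, 1)               # time index
thr = 5 + np.sin(t[:, 0]) + 0.2 * rng.normal(size=120)   # hypothetical mean throughput
t_tr, t_te, y_tr, y_te = train_test_split(t, thr, random_state=0)

kernels = {"RBF": RBF(), "Matern52": Matern(nu=2.5),
           "Matern32": Matern(nu=1.5), "RationalQuadratic": RationalQuadratic()}
for name, k in kernels.items():
    gpr = GaussianProcessRegressor(kernel=k, normalize_y=True).fit(t_tr, y_tr)
    print(name, mean_squared_error(y_te, gpr.predict(t_te)))  # lower is better
```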

Multi-target Unsupervised Domain Adaptation without Exactly Shared Categories [article]

Huanhuan Yu, Menglei Hu, Songcan Chen
2018 arXiv   pre-print
Unsupervised domain adaptation (UDA) aims to learn the unlabeled target domain by transferring the knowledge of the labeled source domain.  ...  A key ingredient of PA-1SmT is to transfer knowledge through adaptive learning of a common model parameter dictionary, which is completely different from existing popular methods for UDA, such as subspace  ...  Therefore, we propose a model parameter adaptation framework (PA-1SmT) for this scenario to transfer knowledge through adaptive learning of a common model parameter dictionary, and in turn, use the common  ... 
arXiv:1809.00852v2 fatcat:loowptfrxngcnel3bvj6qrr5jm
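
A loose, heavily simplified illustration of the shared-dictionary idea in this entry, not PA-1SmT itself: represent each model's flattened parameter vector as a sparse combination of atoms from a dictionary learned across domains. All data and dimensions are hypothetical.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Rows: flattened parameter vectors of models trained on related domains.
params = rng.normal(size=(8, 100))

dico = DictionaryLearning(n_components=4, alpha=0.5, max_iter=200, random_state=0)
codes = dico.fit_transform(params)   # sparse codes per model
D = dico.components_                 # shared parameter dictionary

# Each model is approximated from the shared dictionary.
recon = codes @ D
print("reconstruction error:", float(np.linalg.norm(params - recon)))
```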

Development of a Multilayer Perceptron Neural Network for Optimal Predictive Modeling in Urban Microcellular Radio Environments

Joseph Isabona, Agbotiname Lucky Imoize, Stephen Ojo, Olukayode Karunwi, Yongsung Kim, Cheng-Chi Lee, Chun-Ta Li
2022 Applied Sciences  
The hyperparameters examined include the number of neurons, the learning rate, and the number of hidden layers.  ...  In detail, the developed MLP model's prediction accuracy using different learning and training algorithms with the best tuned hyperparameter values has been applied to extensive path loss  ...  Identify the adaptive learning algorithms for MLP neural network model training and testing e. Identify the modelling hyperparameters f.  ...
doi:10.3390/app12115713 fatcat:b434kacgkbcn3pluuzw7lbcv64
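
A minimal sketch of the hyperparameter search described here, assuming scikit-learn: grid search over the number of hidden layers, their width, and the initial learning rate for an MLP regressor on synthetic path-loss data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 3))   # e.g. distance, frequency, antenna height
y = 120 + 30 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=300)  # path loss (dB)

grid = {
    "hidden_layer_sizes": [(16,), (32,), (16, 16), (32, 32)],  # neurons / layers
    "learning_rate_init": [1e-3, 1e-2, 1e-1],                  # learning rate
}
search = GridSearchCV(MLPRegressor(max_iter=2000, random_state=0),
                      grid, cv=3, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```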

Handwritten Digit Recognition: Hyperparameters-Based Analysis

Saleh Albahli, Fatimah Alhassan, Waleed Albattah, Rehan Ullah Khan
2020 Applied Sciences  
Choosing the optimal values of hyperparameters requires experience and mastery of the machine learning paradigm.  ...  Neural networks have several useful applications in machine learning.  ...  The hyperparameters tuned for each classification algorithm are: learning rate dynamics, momentum dynamics, learning rate decay, dropping the learning rate on a plateau, and adaptive learning rates.  ...
doi:10.3390/app10175988 fatcat:sfw44dpzj5bmjd2isjbkehv2qm
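
To make the tuned learning-rate dynamics concrete, a short sketch assuming PyTorch: SGD with momentum plus a drop-on-plateau schedule, with an adaptive-rate optimizer as the alternative branch of the same search space. The model and validation losses are hypothetical.

```python
import torch
from torch import nn

model = nn.Linear(784, 10)     # stand-in for a digit classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="min",
                                                   factor=0.1, patience=2)

for epoch in range(10):
    val_loss = 0.5 if epoch < 3 else 0.4   # hypothetical plateau from epoch 3
    sched.step(val_loss)                   # drops the lr once patience runs out
    print(epoch, opt.param_groups[0]["lr"])

# Adaptive learning rates are the other branch of the same search space:
opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3)
```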

A Quantile-based Approach for Hyperparameter Transfer Learning [article]

David Salinas, Huibin Shen, Valerio Perrone
2021 arXiv   pre-print
In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different objectives.  ...  The main idea is to regress the mapping from hyperparameter to objective quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that  ...  Code: https://github.com/geoalgo/A-Quantile-based-Approach-for-Hyperparameter-Transfer-Learning/ and https://github.com/icdishb/hyperparameter-transfer-learning-evaluations/  ...
arXiv:1909.13595v2 fatcat:7uns37deybfihfcptgucjvilf4
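
A compact sketch of the Gaussian-copula transform at the heart of this approach: map each observed objective value to its empirical quantile and then through the Gaussian inverse CDF, which makes objectives from different tasks comparable regardless of scale or outliers. Data are hypothetical.

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_gaussian_copula(y):
    # Empirical CDF via ranks, shrunk away from {0, 1} so ppf stays finite.
    n = len(y)
    u = rankdata(y) / (n + 1)
    return norm.ppf(u)

errors_task_a = np.array([0.9, 0.5, 0.48, 0.47, 2.0])       # with an outlier
errors_task_b = np.array([120.0, 80.0, 79.0, 300.0, 75.0])  # different scale
print(to_gaussian_copula(errors_task_a))
print(to_gaussian_copula(errors_task_b))   # comparable ranges after the transform
```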

CycleStyleGAN-Based Knowledge Transfer for a Machining Digital Twin

Evgeny Zotov, Visakan Kadirkamanathan
2021 Frontiers in Artificial Intelligence  
The novel CycleStyleGAN architecture extends the CycleGAN model with a style-based signal encoding.  ...  The proposed approach is implemented as a domain adaptation algorithm based on the generative adversarial network model.  ...  Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization.  ... 
doi:10.3389/frai.2021.767451 pmid:34901838 pmcid:PMC8657946 fatcat:apu4arpzn5au5mrzqqlo2r7lba
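
A short sketch of adaptive instance normalization (AdaIN), the style-encoding building block this entry cites: normalize content features per channel, then re-scale and re-shift with the style features' statistics. Shapes and tensors are hypothetical.

```python
import torch

def adain(content, style, eps=1e-5):
    # content, style: (batch, channels, length) feature maps.
    c_mu = content.mean(dim=2, keepdim=True)
    c_sd = content.std(dim=2, keepdim=True) + eps
    s_mu = style.mean(dim=2, keepdim=True)
    s_sd = style.std(dim=2, keepdim=True) + eps
    # Strip the content's per-channel stats, impose the style's.
    return s_sd * (content - c_mu) / c_sd + s_mu

content = torch.randn(4, 8, 64)   # hypothetical machining-signal features
style = torch.randn(4, 8, 64)
print(adain(content, style).shape)   # torch.Size([4, 8, 64])
```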

Domain-independent Dominance of Adaptive Methods [article]

Pedro Savarese and David McAllester and Sudarshan Babu and Michael Maire
2020 arXiv   pre-print
We observe that the power of our method is partially explained by a decoupling of learning rate and adaptability, greatly simplifying hyperparameter search.  ...  In light of this observation, we demonstrate that, against conventional wisdom, Adam can also outperform SGD on vision tasks, as long as the coupling between its learning rate and adaptability is taken  ...  We train a ResNet-50 (He et al., 2016b) with SGD and different adaptive methods, transferring the hyperparameters from our original CIFAR-10 results.  ... 
arXiv:1912.01823v3 fatcat:a6oghnsz3jbu5pqxmcu5iondfu
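
A hand-rolled Adam step in NumPy to make the discussed coupling concrete: the update is lr * m_hat / (sqrt(v_hat) + eps), so eps acts as the adaptability knob, and changing it rescales the step and drags the best learning rate with it.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g        # first-moment estimate
    v = b2 * v + (1 - b2) * g * g    # second-moment estimate
    m_hat = m / (1 - b1 ** t)        # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
g = np.array([0.1, -0.2, 0.05])
for eps in (1e-8, 1e-1):   # small eps: highly adaptive; large eps: SGD-like
    print(eps, adam_step(w, g, m.copy(), v.copy(), t=1, eps=eps)[0])
```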

Learning hyperparameter optimization initializations

Martin Wistuba, Nicolas Schilling, Lars Schmidt-Thieme
2015 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA)  
We propose to transfer knowledge by means of an initialization strategy for hyperparameter optimization.  ...  Then, we are able to learn the initial hyperparameter configuration sequence by applying gradient-based optimization techniques. Extensive experiments are conducted on two meta-data sets.  ...  Adaptive Learning Initializations (aLI): Adaptive Learning Initializations is presented in Section IV-G.  ... 
doi:10.1109/dsaa.2015.7344817 dblp:conf/dsaa/WistubaSS15 fatcat:u36imjmhjjcyzmleeytqedmybq
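
A simplified stand-in for the initialization learning described here, assuming a precomputed meta-data matrix of validation errors (datasets x configurations). The paper learns the sequence with gradient-based optimization; this greedy variant is a hedged substitute that picks each next configuration to minimize the average best-so-far error across datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
errors = rng.uniform(size=(10, 50))   # hypothetical meta-data: 10 datasets, 50 configs

def greedy_init_sequence(errors, k=5):
    chosen = []
    best = np.full(errors.shape[0], np.inf)   # best error so far per dataset
    for _ in range(k):
        # Score each unused config by the average best-so-far it would yield.
        scores = [np.minimum(best, errors[:, j]).mean()
                  if j not in chosen else np.inf
                  for j in range(errors.shape[1])]
        j_star = int(np.argmin(scores))
        chosen.append(j_star)
        best = np.minimum(best, errors[:, j_star])
    return chosen

print(greedy_init_sequence(errors))   # first k configs to try on a new dataset
```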

Multi-Stage Transfer Learning with an Application to Selection Process [article]

Andre Mendes, Julian Togelius, Leandro dos Santos Coelho
2020 arXiv   pre-print
Experiments using real-world data demonstrate the efficacy of our approach as it outperforms other state-of-the-art methods for transfer learning and regularization.  ...  By transferring weights from simpler neural networks trained on larger datasets, we are able to fine-tune more complex neural networks in the latter stages without overfitting due to the small sample size.  ...  For domain adaptation methods, we use Domain Adversarial Neural Network (DANN) [12], Transferable Adversarial Training (TAT) [21], and Transfer Kernel Learning (TKL) [23] with an SVM as the base  ...
arXiv:2006.01276v1 fatcat:qapr6vwdbnhlxmwzo55k3u2s3e
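
A minimal sketch of the stage-to-stage weight transfer this entry describes, assuming PyTorch: initialize the layers of a larger network from a smaller one wherever names and shapes match, leaving the rest at their fresh initialization before fine-tuning.

```python
import torch
from torch import nn

small = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
large = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 16),
                      nn.ReLU(), nn.Linear(16, 2))

# Copy weights for layers whose names and shapes match; strict=False
# leaves the remaining layers at their fresh initialization.
state = {k: v for k, v in small.state_dict().items()
         if k in large.state_dict() and v.shape == large.state_dict()[k].shape}
result = large.load_state_dict(state, strict=False)
print("copied from small net:", sorted(state))
print("left fresh:", result.missing_keys)
```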
Showing results 1 — 15 out of 25,614 results