1,075 Hits in 7.2 sec

Guided evolutionary strategies: Augmenting random search with surrogate gradients [article]

Niru Maheswaranathan, Luke Metz, George Tucker, Dami Choi, Jascha Sohl-Dickstein
2019 arXiv   pre-print
We propose Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search.  ...  We define a search distribution for evolutionary strategies that is elongated along a guiding subspace spanned by the surrogate gradients.  ...  DISCUSSION We have introduced guided evolutionary strategies (Guided ES), an optimization algorithm which combines the benefits of first-order methods and random search, when we have access to surrogate  ... 
arXiv:1806.10230v4 fatcat:ws5s7ciry5am3o6idymzhxx6ty
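As a rough illustration of the idea in the snippet above, the sketch below draws ES perturbations from a search distribution elongated along a guiding subspace spanned by a surrogate gradient, and uses antithetic finite differences to estimate an update. The toy quadratic loss, the noisy surrogate, and the values of sigma, alpha, and the learning rate are assumptions for illustration, not the paper's implementation.

```python
# Minimal Guided ES-style sketch (assumptions: toy quadratic loss, a biased
# "surrogate gradient", and hand-picked sigma/alpha/lr values).
import numpy as np

def guided_es_step(theta, loss_fn, surrogate_grad, sigma=0.1, alpha=0.5, lr=0.05, pairs=8):
    n = theta.size
    # Orthonormal basis U of the guiding subspace (here: one surrogate direction).
    U, _ = np.linalg.qr(surrogate_grad.reshape(n, 1))
    k = U.shape[1]
    grad_est = np.zeros(n)
    for _ in range(pairs):
        # Perturbation drawn from a covariance elongated along the guiding subspace.
        eps = (np.sqrt(alpha / n) * np.random.randn(n)
               + np.sqrt((1 - alpha) / k) * U @ np.random.randn(k))
        eps *= sigma
        # Antithetic finite-difference estimate of the gradient.
        grad_est += (loss_fn(theta + eps) - loss_fn(theta - eps)) / (2 * sigma**2) * eps
    return theta - lr * grad_est / pairs

# Toy usage: the true gradient is 2*theta; the surrogate is a corrupted version of it.
theta = np.ones(20)
loss = lambda x: np.sum(x**2)
for _ in range(500):
    surrogate = 2 * theta + 0.5 * np.random.randn(20)   # biased/noisy gradient
    theta = guided_es_step(theta, loss, surrogate)
print(loss(theta))
```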

Online Hyper-parameter Learning for Auto-Augmentation Strategy [article]

Chen Lin, Minghao Guo, Chuming Li, Yuan Xin, Wei Wu, Dahua Lin, Wanli Ouyang, Junjie Yan
2019 arXiv   pre-print
Unlike previous methods on auto-augmentation that search augmentation strategies in an offline manner, our method formulates the augmentation policy as a parameterized probability distribution, thus allowing  ...  In this paper, we propose Online Hyper-parameter Learning for Auto-Augmentation (OHL-Auto-Aug), an economical solution that learns the augmentation policy distribution along with network training.  ...  Previous state-of-the-art methods resort to sampling augmentation strategies with a surrogate model, and then solve the inner optimization problem exactly for each sampled strategy, which raises the time-consuming  ... 
arXiv:1905.07373v2 fatcat:tknjbqyk6bhtfikdmecx7uxjqa
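A minimal sketch of the online-learning idea described above, assuming a tiny categorical policy over made-up augmentation operations and a fake reward standing in for the child network's validation accuracy; the real OHL-Auto-Aug trains the policy distribution jointly with the network.

```python
# Hedged sketch: an augmentation policy as a categorical distribution whose
# logits are updated online with a REINFORCE-style gradient. Operation names,
# reward signal, and step size are illustrative only.
import numpy as np

ops = ["identity", "flip", "rotate", "color_jitter"]      # hypothetical op set
logits = np.zeros(len(ops))                               # policy parameters

def sample_op(logits):
    p = np.exp(logits - logits.max()); p /= p.sum()
    i = np.random.choice(len(p), p=p)
    return i, p

for step in range(1000):
    i, p = sample_op(logits)
    # In the real method the reward comes from the child network's validation
    # accuracy; here we fake it to keep the sketch self-contained.
    reward = np.random.rand() + (0.2 if ops[i] == "flip" else 0.0)
    # REINFORCE: grad of log p(i) w.r.t. logits = one_hot(i) - p
    grad_log_p = -p; grad_log_p[i] += 1.0
    logits += 0.05 * reward * grad_log_p                   # online update

print({op: round(w, 2) for op, w in zip(ops, logits)})
```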

Online Hyper-Parameter Learning for Auto-Augmentation Strategy

Chen Lin, Minghao Guo, Chuming Li, Xin Yuan, Wei Wu, Junjie Yan, Dahua Lin, Wanli Ouyang
2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
Unlike previous methods on auto-augmentation that search augmentation strategies in an offline manner, our method formulates the augmentation policy as a parameterized probability distribution, thus allowing  ...  In this paper, we propose Online Hyper-parameter Learning for Auto-Augmentation (OHL-Auto-Aug), an economical solution that learns the augmentation policy distribution along with network training.  ...  Previous state-of-the-art methods resort to sampling augmentation strategies with a surrogate model, and then solve the inner optimization problem exactly for each sampled strategy, which raises the time-consuming  ... 
doi:10.1109/iccv.2019.00668 dblp:conf/iccv/LinGLYWYLO19 fatcat:phqk7plgf5h45cmini2et47mqi

Alternative infill strategies for expensive multi-objective optimisation

Alma A. M. Rahat, Richard M. Everson, Jonathan E. Fieldsend
2017 Proceedings of the Genetic and Evolutionary Computation Conference on - GECCO '17  
State-of-the-art algorithms therefore construct surrogate model(s) of the mapping from parameter space to objective functions to guide the choice of the next solution to evaluate expensively.  ...  We investigated the performance of these novel strategies on standard multi-objective test problems, and compared them with the popular SMS-EGO and ParEGO methods.  ...  reference vector guided evolutionary (RVEA) framework [4], etc. The only mono-surrogate approach used within the Bayesian EGO framework is ParEGO [20].  ... 
doi:10.1145/3071178.3071276 dblp:conf/gecco/RahatEF17 fatcat:4gliautez5eehcos6vkuybvtvu
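To make the surrogate-guided infill loop concrete, here is a deliberately simplified single-objective sketch (the paper studies multi-objective infill criteria): a Gaussian process surrogate is fitted to the evaluated points and the next expensive evaluation is chosen by expected improvement. The test function, budget, and use of scikit-learn's GaussianProcessRegressor are illustrative assumptions.

```python
# Simplified single-objective infill loop: fit a GP to expensive evaluations,
# then pick the next point by expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_f(x):                      # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.1 * x**2

X = np.random.uniform(-3, 3, size=(5, 1))          # initial design
y = expensive_f(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = np.linspace(-3, 3, 400).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]                        # infill point
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_f(x_next)[0])

print(X[np.argmin(y)], y.min())
```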

Deep Reinforcement Learning Versus Evolution Strategies: A Comparative Survey [article]

Amjad Yousef Majid, Serge Saaybi, Tomas van Rietbergen, Vincent Francois-Lavet, R Venkatesha Prasad, Chris Verhoeven
2021 arXiv   pre-print
Deep Reinforcement Learning (DRL) and Evolution Strategies (ESs) have surpassed human-level control in many sequential decision-making problems, yet many open challenges still exist.  ...  [83] proposed Guided ES: a random search that is augmented using surrogate gradients which are correlated with the true gradient.  ...  The authors showed how to optimally combine the surrogate gradient directions with random search directions and how to iteratively approach the true gradient for linear functions.  ... 
arXiv:2110.01411v1 fatcat:nw47ududyndyljlh4nx2gm73jq

Multilevel optimization strategies based on metamodel-assisted evolutionary algorithms, for computationally expensive problems

I.C. Kampolis, A.S. Zymaris, V.G. Asouti, K.C. Giannakoglou
2007 2007 IEEE Congress on Evolutionary Computation  
...  algorithms where the adjoint method computes the objective function gradient and (c) airfoil parameterizations with different numbers of Bézier control points.  ...  They are all based on the same general-purpose search platform, which employs Hierarchical, Distributed Metamodel-Assisted Evolutionary Algorithms (HDMAEAs).  ...  In this paper, the hybridization of the distributed MAEAs with gradient-based search methods (steepest-descent, conjugate gradients or Newton-like methods supported by the adjoint method to compute the  ... 
doi:10.1109/cec.2007.4425008 dblp:conf/cec/KampolisZAG07 fatcat:i4n4kdafvbf35jff4575s6aohe

Optimal marker placement in hadrontherapy: Intelligent optimization strategies with augmented Lagrangian pattern search

Cristina Altomare, Raffaella Guglielmann, Marco Riboldi, Riccardo Bellazzi, Guido Baroni
2015 Journal of Biomedical Informatics  
Comparison has been performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk.  ...  The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial  ...  [22], who combined genetic algorithm (GA) with Taboo search (TS) in a method called Genetic Evolutionary Taboo Search (GETS).  ... 
doi:10.1016/j.jbi.2014.09.001 pmid:25220865 fatcat:ravhc2mwprdl5m2vsimqnjmzrm
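A very reduced sketch of the flavor of the approach above: simulated annealing over candidate positions with a fixed penalty standing in for the augmented Lagrangian constraint handling. The objective, the "organ at risk" constraint, and all constants are toy placeholders, not the paper's formulation.

```python
# Penalized simulated annealing sketch (toy stand-in for SAPS).
import numpy as np

def objective(x):                 # hypothetical "tracking error" to minimize
    return np.sum((x - 1.0) ** 2)

def constraint_violation(x):      # hypothetical "too close to organ at risk"
    return max(0.0, 0.5 - np.min(np.abs(x)))

def penalized(x, lam=10.0):
    return objective(x) + lam * constraint_violation(x)

x = np.random.uniform(-2, 2, size=3)
best, best_val = x.copy(), penalized(x)
T = 1.0
for step in range(2000):
    cand = x + 0.1 * np.random.randn(3)
    d = penalized(cand) - penalized(x)
    if d < 0 or np.random.rand() < np.exp(-d / T):   # Metropolis acceptance
        x = cand
        if penalized(x) < best_val:
            best, best_val = x.copy(), penalized(x)
    T *= 0.999                                        # cooling schedule
print(best, best_val)
```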

A Survey on Neural Architecture Search [article]

Martin Wistuba and Ambrish Rawat and Tejaswini Pedapati
2019 arXiv   pre-print
...  algorithms along with approaches that incorporate surrogate and one-shot models.  ...  Additionally, we address the new research directions which include constrained and multi-objective architecture search as well as automated data augmentation, optimizer and activation function search.  ...  different augmentation strategies.  ... 
arXiv:1905.01392v2 fatcat:px7iiwwdjzamfhecynvafcimqu

Using a thousand optimization tasks to learn hyperparameter search strategies [article]

Luke Metz, Niru Maheswaranathan, Ruoxi Sun, C. Daniel Freeman, Ben Poole, Jascha Sohl-Dickstein
2020 arXiv   pre-print
By learning this hyperparameter list from data generated using TaskSet, we achieve large speedups in sample efficiency over random search.  ...  Resnet50 and LM1B language modeling with transformers.  ...  First, the resulting search strategies are much more efficient, resulting in large speedups in sample efficiency on unseen tasks over a random search baseline.  ... 
arXiv:2002.11887v3 fatcat:qtir4juehrhbbn5625a3y7jupu
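To show what a learned hyperparameter list buys over random search, here is a tiny sketch in which a fixed, pre-ordered list of configurations is tried first and compared against uniform random sampling under the same budget; the list, the task objective, and the budget are invented for illustration.

```python
# Learned-list vs. random-search comparison on a toy "task loss".
import numpy as np

def task_loss(lr, wd):                      # stand-in for training a model
    return (np.log10(lr) + 3) ** 2 + (np.log10(wd) + 4) ** 2 + 0.1 * np.random.rand()

# Hypothetical learned list, ordered so that early entries cover good regions.
learned_list = [(1e-3, 1e-4), (3e-4, 1e-5), (1e-2, 1e-4), (1e-4, 1e-3)]

budget = 4
best_learned = min(task_loss(lr, wd) for lr, wd in learned_list[:budget])
best_random = min(task_loss(10 ** np.random.uniform(-6, 0),
                            10 ** np.random.uniform(-8, 0))
                  for _ in range(budget))
print("learned list:", best_learned, " random search:", best_random)
```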

Effective Reinforcement Learning through Evolutionary Surrogate-Assisted Prescription [article]

Olivier Francon, Santiago Gonzalez, Babak Hodjat, Elliot Meyerson, Risto Miikkulainen, Xin Qiu, Hormoz Shahrzad
2020 arXiv   pre-print
The surrogate is, for example, a random forest or a neural network trained with gradient descent, and the strategy is a neural network that is evolved to maximize the predictions of the surrogate model  ...  Using this data, it is possible to learn a surrogate model, and with that model, evolve a decision strategy that optimizes the outcomes.  ...  Even with the surrogate, the problem of finding effective decision strategies is still challenging.  ... 
arXiv:2002.05368v2 fatcat:dhl44puysfe7ned4fw3axsdriu
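A minimal sketch of the Evolutionary Surrogate-Assisted Prescription loop described above: a random forest surrogate is fitted on logged (context, action, outcome) data, and a simple linear decision strategy is then evolved to maximize the surrogate's predicted outcome. The data-generating process, the linear policy, and the (mu, lambda)-ES settings are assumptions for illustration.

```python
# Surrogate-then-evolve sketch: fit a surrogate on logged data, evolve a
# decision strategy against the surrogate only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Logged data: the outcome is best when the action equals -context[0] (unknown to us).
C = rng.normal(size=(500, 3))
A = rng.normal(size=(500, 1))
outcome = -((A[:, 0] + C[:, 0]) ** 2) + 0.1 * rng.normal(size=500)

surrogate = RandomForestRegressor(n_estimators=50).fit(np.hstack([C, A]), outcome)

def predicted_return(w):                 # policy: action = w . context
    acts = (C @ w).reshape(-1, 1)
    return surrogate.predict(np.hstack([C, acts])).mean()

# Tiny (mu, lambda)-ES on the policy weights, evaluated only on the surrogate.
w = np.zeros(3)
for gen in range(20):
    pop = [w + 0.3 * rng.normal(size=3) for _ in range(20)]
    w = max(pop, key=predicted_return)
print(w)   # should move toward [-1, 0, 0]
```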

AutoML: A Survey of the State-of-the-Art [article]

Xin He, Kaiyong Zhao, Xiaowen Chu
2020 arXiv   pre-print
First, we introduce AutoML methods according to the pipeline, covering data preparation, feature engineering, hyperparameter optimization, and neural architecture search (NAS).  ...  Gradient Descent: the search strategies above sample neural architectures from a discrete search space.  ...  RL, EA, GD, RS, BO indicate reinforcement learning, evolution-based algorithms, gradient descent, random search, and surrogate model-based optimization, respectively.  ... 
arXiv:1908.00709v5 fatcat:zwlhvujqnzgxja42t2yk75bsx4

Evolution of neural networks

Risto Miikkulainen
2022 Proceedings of the Genetic and Evolutionary Computation Conference Companion  
• Each neuron part of 2-3 subtasks • Robust coding of behavior during search  ...  Advanced NE 2: Evolutionary Strategies • Evolving complete networks with ES (CMA-ES) • Small populations, no crossover  ...  • Allows building on stepping stones • How to guide novelty search towards useful solutions?  ... 
doi:10.1145/3520304.3533656 fatcat:tirtetn4mrd5peb62ew52ahdae

Better call Surrogates: A hybrid Evolutionary Algorithm for Hyperparameter optimization [article]

Subhodip Biswas, Adam D Cobb, Andreea Sistrunk, Naren Ramakrishnan, Brian Jalaian
2020 arXiv   pre-print
In this paper, we propose a surrogate-assisted evolutionary algorithm (EA) for hyperparameter optimization of machine learning (ML) models.  ...  estimates the objective function landscape using RadialBasis Function interpolation, and then transfers the knowledge to an EA technique called Differential Evolution that is used to evolve new solutions guided  ...  Motivated by this, we devise a hybrid search algorithm for adapting EAs to low budget optimization problems, like HPO, by balancing the randomness of EA search moves with strategically generated search  ... 
arXiv:2012.06453v1 fatcat:4vvdswgusjhbtk2xeqjh2rmvzu
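The snippet above describes an RBF surrogate feeding a Differential Evolution search; the sketch below mimics that loop on a toy objective using SciPy's RBFInterpolator and differential_evolution. The budget, kernel choice, and test function are illustrative assumptions, not the paper's setup.

```python
# Surrogate-assisted loop: fit an RBF model to evaluated points, let DE search
# the cheap surrogate, then expensively evaluate DE's proposal.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive_hpo_objective(x):          # stand-in for a real training run
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sin(10 * x).sum()

bounds = [(0, 1), (0, 1)]
X = np.random.uniform(0, 1, size=(8, 2))           # initial evaluations
y = np.array([expensive_hpo_objective(x) for x in X])

for _ in range(10):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    # DE searches the surrogate, not the expensive objective.
    res = differential_evolution(lambda x: float(surrogate(x.reshape(1, -1))[0]),
                                 bounds, maxiter=50, tol=1e-6, seed=0)
    x_next = res.x
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_hpo_objective(x_next))

print(X[np.argmin(y)], y.min())
```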

Generalize Robot Learning From Demonstration to Variant Scenarios With Evolutionary Policy Gradient

Junjie Cao, Weiwei Liu, Yong Liu, Jian Yang
2020 Frontiers in Neurorobotics  
With demonstration guiding the evolutionary process, the robot can accelerate goal-oriented exploration to generalize its capability to variant scenarios.  ...  Our Evolutionary Policy Gradient combines parameter perturbation with the policy gradient method in the framework of Evolutionary Algorithms (EAs) and can fuse the benefits of both, achieving effective and  ...  The objective of imitation learning can be augmented with that of behavior cloning to guide the evolutionary process with demonstration.  ... 
doi:10.3389/fnbot.2020.00021 pmid:32372940 pmcid:PMC7188386 fatcat:lodwo6wq2ngvlcfccuhzaa5fay
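A rough, self-contained sketch of the combination described in the abstract above: an ES-style population of policy parameters in which each offspring is also refined by a REINFORCE-style policy-gradient step before selection. The one-step task, the Gaussian policy, and the coefficients are invented for illustration.

```python
# ES population + per-offspring policy-gradient refinement on a toy task.
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 0.2                                  # policy (action) noise

def rollout(w, n=64):
    s = rng.normal(size=n)
    a = w * s + SIGMA * rng.normal(size=n)
    r = -(a - 2.0 * s) ** 2                  # optimal policy is w = 2
    return s, a, r

def pg_step(w, lr=0.05):
    s, a, r = rollout(w)
    # REINFORCE estimator with a mean baseline for variance reduction.
    grad = np.mean((r - r.mean()) * (a - w * s) / SIGMA ** 2 * s)
    return w + lr * grad

w = 0.0
for gen in range(50):
    # ES part: perturb the parameters; PG part: refine each offspring.
    offspring = [pg_step(w + 0.1 * rng.normal()) for _ in range(10)]
    # Select by average return (fresh rollouts to evaluate fitness).
    w = max(offspring, key=lambda wi: rollout(wi)[2].mean())
print(w)   # should approach 2.0
```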

Combining Evolution and Deep Reinforcement Learning for Policy Search: a Survey [article]

Olivier Sigaud
2022 arXiv   pre-print
...  ., 2019) combines ddpg with Augmented Random Search (ars), a finite-difference algorithm which can be seen as a simplified version of evolution strategies (Mania et al., 2018). fidi-rl uses the erl  ...  Evolution improved with RL mechanisms: without using a full RL part, a few algorithms augment an evolutionary approach with components taken from RL.  ... 
arXiv:2203.14009v5 fatcat:5vqkpzmmmvfvpgifnnd4hoxtle
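Since the snippet leans on Augmented Random Search (ARS) as the finite-difference component, here is a short sketch of an ARS-style update on a linear "policy": antithetic random directions are evaluated, the best directions are kept, and the parameters move along a reward-weighted average. The stand-in episode_return and all hyperparameters are assumptions, not taken from the survey.

```python
# ARS-style finite-difference update on linear policy weights.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([1.0, -2.0, 0.5])          # unknown optimal weights

def episode_return(w):                        # stand-in for an RL rollout
    return -np.sum((w - TARGET) ** 2)

w = np.zeros(3)
step, nu, n_dirs, top_k = 0.05, 0.1, 8, 4
for it in range(200):
    deltas = rng.normal(size=(n_dirs, 3))
    r_plus = np.array([episode_return(w + nu * d) for d in deltas])
    r_minus = np.array([episode_return(w - nu * d) for d in deltas])
    # Keep only the best-performing directions, as in ARS V1-t.
    order = np.argsort(np.maximum(r_plus, r_minus))[-top_k:]
    sigma_r = np.concatenate([r_plus[order], r_minus[order]]).std() + 1e-8
    update = ((r_plus[order] - r_minus[order])[:, None] * deltas[order]).sum(0)
    w += step / (top_k * sigma_r) * update
print(w)   # should approach TARGET
```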
Showing results 1 — 15 out of 1,075 results