Explicit Gradient Learning for Black-Box Optimization

Elad Sarafian, Mor Sinay, Yoram Louzoun, Noa Agmon, Sarit Kraus
2020 International Conference on Machine Learning  
Black-Box Optimization (BBO) methods can find optimal policies for systems that interact with complex environments with no analytical representation.  ...  Here we present a BBO method, termed Explicit Gradient Learning (EGL), that is designed to optimize high-dimensional ill-behaved functions.  ...  Black-Box Optimization (BBO) algorithms (Audet & Hare, 2017; Golovin et al., 2017) are designed to solve such problems, when the analytical formulation is missing, by repeatedly querying the Black-Box  ... 
dblp:conf/icml/SarafianSLAK20 fatcat:btvvzpjzffap7ovzn4ywgcxqoi
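
The EGL idea summarized above, learning a parametric model of the gradient from black-box queries and descending along it, can be sketched roughly as follows. This is a minimal illustration under assumed choices (a small PyTorch network `g_theta`, a Taylor-style regression on sampled function differences), not the authors' exact algorithm.

```python
# Minimal EGL-style sketch: fit g_theta(x) so that g_theta(x)^T u predicts
# f(x + u) - f(x) for sampled perturbations u, then descend along g_theta.
import torch
import torch.nn as nn

def egl_step(f, x, g_theta, opt_g, n_pairs=64, eps=0.1, lr=0.05):
    d = x.numel()
    u = eps * torch.randn(n_pairs, d)          # perturbations around x
    fx = f(x)
    fxu = torch.stack([f(x + u[i]) for i in range(n_pairs)])
    # Regress g_theta(x)^T u onto the observed black-box differences.
    pred = (g_theta(x.unsqueeze(0)) * u).sum(dim=1)
    loss = ((pred - (fxu - fx)) ** 2).mean()
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    # Descend along the learned gradient estimate.
    with torch.no_grad():
        return x - lr * g_theta(x.unsqueeze(0)).squeeze(0)

d = 10
g_theta = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, d))
opt_g = torch.optim.Adam(g_theta.parameters(), lr=1e-3)
f = lambda x: (x ** 2).sum()                   # black-box stand-in
x = torch.randn(d)
for _ in range(200):
    x = egl_step(f, x, g_theta, opt_g)
```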

Learning to Learn by Zeroth-Order Oracle [article]

Yangjun Ruan, Yuanhao Xiong, Sashank Reddi, Sanjiv Kumar, Cho-Jui Hsieh
2020 arXiv   pre-print
Our learned optimizer outperforms hand-designed algorithms in terms of convergence rate and final solution on both synthetic and practical ZO optimization tasks (in particular, the black-box adversarial  ...  In this paper, we extend the L2L framework to zeroth-order (ZO) optimization setting, where no explicit gradient information is available.  ...  Furthermore, it is not suitable for solving black-box optimization problems of high dimensions.  ... 
arXiv:1910.09464v2 fatcat:2hdpy6qqtvf4vbxz3swkruydcm
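
Where no explicit gradient is available, learned optimizers of this kind consume zeroth-order gradient estimates built purely from function evaluations. A minimal two-point estimator, with illustrative defaults for the smoothing radius and step size, might look like this:

```python
# Two-point zeroth-order gradient oracle: only function values are needed.
import numpy as np

def zo_gradient(f, x, mu=1e-3):
    u = np.random.randn(*x.shape)              # random Gaussian direction
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Example: plug the ZO estimate into a plain gradient step.
f = lambda x: np.sum(x ** 2)                   # black-box stand-in
x = np.ones(5)
for _ in range(500):
    x -= 0.05 * zo_gradient(f, x)
```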

Faster Gradient-Free Proximal Stochastic Methods for Nonconvex Nonsmooth Optimization

Feihu Huang, Bin Gu, Zhouyuan Huo, Songcan Chen, Heng Huang
2019 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference  
However, in some machine learning problems such as the bandit model and the black-box learning problem, proximal gradient method could fail because the explicit gradients of these problems are difficult  ...  Proximal gradient method has been playing an important role to solve many machine learning tasks, especially for the nonsmooth problems.  ...  In fact, there exist many nonconvex machine learning tasks, whose explicit gradients are not available, such as the nonconvex black-box learning problems (Chen et al., 2017; Liu et al., 2018c) .  ... 
doi:10.1609/aaai.v33i01.33011503 fatcat:jaudjs4vobbo3fitemtbzkqvdu
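
The gradient-free proximal idea, replacing the true gradient in a proximal step with a zeroth-order estimate, can be illustrated for an L1-regularized objective. The sketch below shows only the basic step; the paper's methods add stochastic sampling and variance reduction that are omitted here.

```python
# Generic zeroth-order proximal gradient step for min_x f(x) + lam * ||x||_1,
# where f is a black box queried by function evaluations only.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def zo_prox_step(f, x, lam, step=0.1, mu=1e-3):
    u = np.random.randn(*x.shape)
    grad_est = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u   # ZO gradient estimate
    return soft_threshold(x - step * grad_est, step * lam)      # prox of the L1 norm
```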

OPT-GAN: A Broad-Spectrum Global Optimizer for Black-box Problems by Learning Distribution [article]

Minfang Lu, Shuai Ning, Shuangrong Liu, Fengyang Sun, Bo Zhang, Bo Yang, Lin Wang
2022 arXiv   pre-print
Black-box optimization (BBO) algorithms are concerned with finding the best solutions for problems with missing analytical details.  ...  Most classical methods for such problems are based on strong and fixed a priori assumptions, such as Gaussianity.  ...  Explicit Gradient Learning (EGL) [55] directly estimates the gradient ∇f by learning the parametric weights.  ... 
arXiv:2102.03888v5 fatcat:jasjmzqjkbhdzgi5zk5qmd2doy

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning [article]

Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney
2020 arXiv   pre-print
Moreover, we demonstrate promising applications of ZO optimization, such as evaluating robustness and generating explanations from black-box deep learning models, and efficient online sensor management  ...  It is used for solving optimization problems similarly to gradient-based methods. However, it does not require the gradient, using only function evaluations.  ...  Optimization corresponding to these types of problems falls into the category of zeroth-order (ZO) optimization with respect to black-box models, where explicit expressions of the gradients are difficult  ... 
arXiv:2006.06224v2 fatcat:fx624eqhifbqpp5hbd5a5cmsny
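
A standard building block covered by such primers is the randomized gradient estimator that averages several forward differences along random Gaussian directions; the sketch below uses illustrative defaults for the number of directions and the smoothing parameter.

```python
# Multi-point (averaged) random-direction estimator of the Gaussian-smoothed
# gradient; more directions q trade extra queries for lower variance.
import numpy as np

def zo_gradient_avg(f, x, q=10, mu=1e-3):
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(q):
        u = np.random.randn(*x.shape)
        g += (f(x + mu * u) - fx) / mu * u     # forward-difference estimate
    return g / q
```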

Bayesian optimization

Ivo Couckuyt, Sebastian Rojas Gonzalez, Juergen Branke
2022 Proceedings of the Genetic and Evolutionary Computation Conference Companion  
Efficient global optimization of expensive black-box functions. Journal of Global Optimization] Efficient Global Optimization. Example: Knowledge Gradient [Frazier, Powell, and Dayanik (2009).  ...  The knowledge-gradient policy for correlated normal beliefs.  ...  Expected Hypervolume Improvement. Conclusion: black-box optimization requires balancing exploitation and exploration; Bayesian optimization uses a surrogate model that predicts mean and uncertainty  ... 
doi:10.1145/3520304.3533654 fatcat:ylythh7kebgwpnbvf5x36z2qii
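
The closing point of the snippet, a surrogate that predicts mean and uncertainty combined with an acquisition function that balances exploration and exploitation, can be illustrated with a small expected-improvement loop. The GP kernel, candidate grid, and test function below are arbitrary stand-ins, not taken from the tutorial.

```python
# Minimal Bayesian optimization loop: GP surrogate + expected improvement (minimization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best_y):
    z = (best_y - mu) / np.maximum(sigma, 1e-9)
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3 * x) + 0.1 * x ** 2            # black-box stand-in
X = np.random.uniform(-3, 3, (5, 1)); y = f(X).ravel()
candidates = np.linspace(-3, 3, 200).reshape(-1, 1)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next))
```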

Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [article]

Yun-Yun Tsai and Pin-Yu Chen and Tsung-Yi Ho
2020 arXiv   pre-print
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses without knowing the model architecture or changing  ...  reprogramming (BAR), that repurposes a well-trained black-box ML model (e.g., a prediction API or a proprietary software) for solving different ML tasks, especially in the scenario with scarce data and  ...  Pin-Yu Chen would like to thank Payel Das at IBM Research for her inputs on the autism spectrum disorder classification task.  ... 
arXiv:2007.08714v2 fatcat:3aw6fwpatzfjrpbvzbpaabcba4
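
A rough sketch of the reprogramming idea described above: a trainable input perturbation is updated with zeroth-order estimates of a loss computed only from the black box's output probabilities, and several source labels are averaged per target label as a simple stand-in for multi-label mapping. Names and the label grouping are illustrative, not BAR's exact procedure.

```python
# Generic black-box reprogramming step driven by input-output queries only.
import numpy as np

def target_probs(black_box, x, label_groups):
    p = black_box(x)                           # source-class probabilities
    return np.array([p[g].mean() for g in label_groups])

def reprogram_step(black_box, x, y, delta, label_groups, mu=1e-2, lr=0.1):
    # Negative log-probability of the target label y under the mapped outputs.
    loss = lambda d: -np.log(target_probs(black_box, x + d, label_groups)[y] + 1e-9)
    u = np.random.randn(*delta.shape)
    g = (loss(delta + mu * u) - loss(delta - mu * u)) / (2 * mu) * u
    return delta - lr * g                      # update the input "program"
```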

Learning to Learn without Gradient Descent by Gradient Descent [article]

Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
2017 arXiv   pre-print
We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process  ...  We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent.  ...  et al., 2011) Learning Black-box Optimization A black-box optimization algorithm can be summarized by the following loop: 1.  ... 
arXiv:1611.03824v6 fatcat:cxjizfyrzze3jn5z7ys4gu7i54
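
The black-box optimization loop alluded to at the end of the snippet can be driven by a recurrent network that observes the previous query and its value and proposes the next query. The sketch below fixes arbitrary sizes and omits the meta-training of the LSTM, which the paper performs by gradient descent over distributions of functions.

```python
# LSTM optimizer sketch: maps (previous query, observed value, state) -> next query.
import torch
import torch.nn as nn

class RNNOptimizer(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.lstm = nn.LSTMCell(dim + 1, hidden)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x, y, state):
        # x: (1, dim) previous query, y: (1, 1) its observed function value
        h, c = self.lstm(torch.cat([x, y], dim=1), state)
        return x + self.out(h), (h, c)

dim = 3
opt_net = RNNOptimizer(dim)
f = lambda x: (x ** 2).sum(dim=1, keepdim=True)        # black-box stand-in
x = torch.randn(1, dim)
state = (torch.zeros(1, 32), torch.zeros(1, 32))
for _ in range(20):
    x, state = opt_net(x, f(x), state)
```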

Adversarial Distributional Training for Robust Deep Learning [article]

Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
2020 arXiv   pre-print
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.  ...  ADT is formulated as a minimax optimization problem, where the inner maximization aims to learn an adversarial distribution to characterize the potential adversarial examples around a natural one under  ...  gradient) rather than the black-box setting.  ... 
arXiv:2002.05999v2 fatcat:eziluirfnrhnbgeyddofsiapou

Collective Model Fusion for Multiple Black-Box Experts

Quang Minh Hoang, Trong Nghia Hoang, Bryan Kian Hsiang Low, Carl Kingsford
2019 International Conference on Machine Learning  
This paper presents the first collective model fusion framework for multiple experts with heterogeneous black-box architectures.  ...  The proposed method will enable this by addressing the key issues of how black-box experts interact to understand the predictive behaviors of one another; how these understandings can be represented and  ...  Collective Learning via Black-Box Imitation This section will present our Collective Learning via Black-Box Imitation (COLBI) algorithm for transferring expertise from black-box models to white-box surrogates  ... 
dblp:conf/icml/HoangHLK19 fatcat:cab3xd4g4rd6jb7war4i7cjbai
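
The transfer from black-box experts to white-box surrogates can be pictured, in its simplest form, as distillation on the experts' probability outputs. The sketch below shows only this generic imitation step, not the paper's COLBI algorithm.

```python
# Generic imitation/distillation: train a white-box surrogate on the soft labels
# produced by a black-box expert queried on unlabeled inputs.
import torch
import torch.nn.functional as F

def distill(black_box, surrogate, inputs, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    with torch.no_grad():
        targets = black_box(inputs)            # expert probabilities (soft labels)
    for _ in range(epochs):
        loss = F.kl_div(F.log_softmax(surrogate(inputs), dim=1), targets,
                        reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return surrogate
```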

On Training and Evaluation of Neural Network Approaches for Model Predictive Control [article]

Rebecka Winqvist, Arun Venkitaraman, Bo Wahlberg
2020 arXiv   pre-print
We consider the study of neural network architectures in PyTorch with the explicit MPC constraints implemented as a differentiable optimization layer using CVXPY.  ...  However, a general framework for characterization of learning approaches in terms of both model validation and efficient training data generation is lacking in literature.  ...  We consider the following three network architectures: 1) Black box NN (BBNN) which refers to the black box neural network which is agnostic to any MPC specific information but learns only from the input-output  ... 
arXiv:2005.04112v1 fatcat:nvxjf63xefa7jmjxltqhkujsvy
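
A differentiable optimization layer of the kind described can be built with CVXPY and the cvxpylayers package; the tiny box-constrained projection below is an illustrative stand-in for actual MPC constraints, assuming cvxpylayers is available.

```python
# Embed a small convex problem as a differentiable PyTorch layer.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 2
u = cp.Variable(n)                      # decision (control input)
u_ref = cp.Parameter(n)                 # unconstrained target from the network
objective = cp.Minimize(cp.sum_squares(u - u_ref))
constraints = [u >= -1, u <= 1]         # box constraints on the control
layer = CvxpyLayer(cp.Problem(objective, constraints),
                   parameters=[u_ref], variables=[u])

u_ref_t = torch.tensor([1.5, -0.3], requires_grad=True)
(u_opt,) = layer(u_ref_t)               # solution, differentiable w.r.t. u_ref_t
u_opt.sum().backward()
```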

Meta-Learning for Black-box Optimization [article]

Vishnu TV, Pankaj Malhotra, Jyoti Narwariya, Lovekesh Vig, Gautam Shroff
2019 arXiv   pre-print
black-box function optimization.  ...  Recurrent neural networks (RNNs) trained to optimize a diverse set of synthetic non-convex differentiable functions via gradient descent have been effective at optimizing derivative-free black-box functions  ...  Our work builds upon the meta-learning approach for learning black-box optimizers proposed in [5] .  ... 
arXiv:1907.06901v2 fatcat:hquwfi365bh47brr56emm77ycy

MetricOpt: Learning to Optimize Black-Box Evaluation Metrics [article]

Chen Huang, Shuangfei Zhai, Pengsheng Guo, Josh Susskind
2021 arXiv   pre-print
The learned value function is easily pluggable into existing optimizers like SGD and Adam, and is effective for rapidly finetuning a pre-trained model.  ...  Our method, named MetricOpt, operates in a black-box setting where the computational details of the target metric are unknown.  ...  In [43] , gradient interpolation is performed via black-box differentiation.  ... 
arXiv:2104.10631v1 fatcat:xldmvsgmrzfilidugbsytvosnq
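
The idea of a learned value function for a black-box metric can be pictured as fitting a small network to sampled (parameter, metric) pairs and then using it as a differentiable surrogate loss. The sketch below captures only this high-level idea, not MetricOpt's actual parameterization or adaptation scheme.

```python
# Fit a differentiable value function to samples of a non-differentiable metric.
import torch
import torch.nn as nn

def fit_value_fn(samples, dim, epochs=200):
    # samples: list of (theta, metric_value) pairs gathered by querying the metric
    v = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(v.parameters(), lr=1e-3)
    thetas = torch.stack([t for t, _ in samples])
    metrics = torch.tensor([[m] for _, m in samples])
    for _ in range(epochs):
        loss = ((v(thetas) - metrics) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return v

# The fitted surrogate v can then be minimized with a standard optimizer
# such as SGD or Adam in place of the original black-box metric.
```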

Poisoning Attacks on Algorithmic Fairness [article]

David Solans, Battista Biggio, Carlos Castillo
2020 arXiv   pre-print
In this work, we introduce an optimization framework for poisoning attacks against algorithmic fairness, and develop a gradient-based poisoning attack aimed at introducing classification disparities among  ...  We empirically show that our attack is effective not only in the white-box setting, in which the attacker has full access to the target model, but also in a more challenging black-box scenario in which  ...  Castillo thanks La Caixa project LCF/PR/PR16/11110009 for partial support. B.  ... 
arXiv:2004.07401v3 fatcat:eyk6uiwkqzgcdhc2eycfbh4k5y

Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness [article]

Chihuang Liu, Joseph JaJa
2019 arXiv   pre-print
The regularizer encourages the model to extract similar features for the natural and adversarial images, effectively ignoring the added perturbation.  ...  In addition to evaluating the robustness of our model, we provide justification for the attention module and propose a novel experimental strategy that quantitatively demonstrates that our model is almost  ...  The improvement is nearly 3% for white box attack and 1% for black box attack.  ... 
arXiv:1810.02424v3 fatcat:kivfovilkbdelig33qme22kgyu
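
The regularizer described in the snippet, pulling the features of natural and adversarial images together, can be written generically as a feature-matching penalty added to the classification loss. The feature extractor and weight below are placeholders, not the paper's exact formulation.

```python
# Classification loss on adversarial inputs plus a feature-matching regularizer.
import torch.nn.functional as F

def robust_loss(model, features, x, x_adv, y, lam=1.0):
    ce = F.cross_entropy(model(x_adv), y)              # train on adversarial examples
    reg = F.mse_loss(features(x_adv), features(x))     # pull features together
    return ce + lam * reg
```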