Black-Box Optimization (BBO) methods can find optimal policies for systems that interact with complex environments with no analytical representation. As such, they are of interest in many Artificial Intelligence (AI) domains. Yet classical BBO methods fall short in high-dimensional non-convex problems, and they are thus often overlooked in real-world AI tasks. Here we present a BBO method, termed Explicit Gradient Learning (EGL), that is designed to optimize high-dimensional ill-behaved functions. We derive EGL by identifying weak spots in methods that fit the objective function with a parametric Neural Network (NN) model and obtain the gradient signal by calculating the parametric gradient. Instead of fitting the function, EGL trains an NN to estimate the objective gradient directly. We prove the convergence of EGL to a stationary point and its robustness in the optimization of integrable functions. We evaluate EGL and achieve state-of-the-art results on two challenging problems: (1) the COCO test suite, against an assortment of standard BBO methods; and (2) a high-dimensional non-convex image generation task.
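To illustrate the idea the abstract describes, the sketch below trains a gradient network g_theta directly from function evaluations, rather than fitting a surrogate of f and differentiating it. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the Taylor-difference loss, the toy objective, the network architecture, and all hyperparameters (sampling radius eps, step size alpha) are illustrative assumptions.

```python
import torch
import torch.nn as nn

def f(x):
    """Illustrative black-box objective (Rosenbrock-style); only its values are queried."""
    return ((1 - x[..., :-1]) ** 2 + 100 * (x[..., 1:] - x[..., :-1] ** 2) ** 2).sum(-1)

dim = 8
# g_theta(x): a network trained to output an estimate of grad f(x).
g_net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, dim))
opt = torch.optim.Adam(g_net.parameters(), lr=1e-3)

x = torch.zeros(dim)        # current candidate solution
eps, alpha = 0.1, 1e-2      # sampling radius and descent step (assumed values)

for step in range(200):
    # Sample point pairs in an eps-ball around x and fit g_theta so that the
    # first-order Taylor relation f(u) - f(v) ~ g_theta((u+v)/2)^T (u - v) holds.
    u = x + eps * torch.randn(64, dim)
    v = x + eps * torch.randn(64, dim)
    with torch.no_grad():
        df = f(u) - f(v)    # uses function values only, never an analytic gradient
    pred = (g_net((u + v) / 2) * (u - v)).sum(-1)
    loss = ((pred - df) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

    # Descend along the learned gradient estimate.
    with torch.no_grad():
        x -= alpha * g_net(x)
```

Under these assumptions, the gradient model is supervised purely by finite differences of f over a local ball, which is what makes the scheme applicable when f has no analytical representation.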
Title of host publication: 37th International Conference on Machine Learning, ICML 2020
Editors: Hal Daumé III, Aarti Singh
Publisher: International Machine Learning Society (IMLS)
Number of pages: 11
State: Published - 2020
Conference: 37th International Conference on Machine Learning, ICML 2020 - Virtual, Online
Duration: 13 Jul 2020 → 18 Jul 2020
Bibliographical note
Funding Information: This work was supported in part by the Ministry of Science & Technology, Israel.
Copyright © 2020 by the Authors. All rights reserved.