Easing non-convex optimization with neural networks
2018
International Conference on Learning Representations
Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent. In this note, we use a deep neural network with D parameters to parametrize the input space of a generic d-dimensional non-convex optimization problem. Our experiments show that minimizing over the D ≫ d variables provided by the overparametrized deep neural network eases and accelerates the optimization of various non-convex test functions.
dblp:conf/iclr/Lopez-PazS18