Approximating Continuous Functions by ReLU Nets of Minimal Width
arXiv pre-print, 2018
This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d_in ≥ 1, what is the minimal width w so that neural nets with ReLU activations, input dimension d_in, hidden layer widths at most w, and arbitrary depth can approximate any continuous, real-valued function of d_in variables arbitrarily well? It turns out that this minimal width is exactly equal to d_in + 1. That is, if all hidden layer widths are bounded by d_in, then even in the infinite-depth limit such nets can express only a limited class of continuous functions, whereas nets whose hidden layers all have width d_in + 1 can approximate any continuous function of d_in variables arbitrarily well.
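Read as an architecture statement, the result concerns deep ReLU nets whose hidden layers all have width d_in + 1. The following is a minimal sketch of such a narrow, deep net in PyTorch; the choice of depth, the target function, and the training loop are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch (not from the paper): a deep feed-forward ReLU net whose
# hidden layers all have width d_in + 1, the minimal width the article
# identifies as sufficient for universal approximation of continuous
# functions of d_in variables. Depth, target, and training setup are
# illustrative assumptions.
import torch
import torch.nn as nn


def narrow_relu_net(d_in: int, depth: int) -> nn.Sequential:
    """ReLU net with input dimension d_in, every hidden layer of width
    d_in + 1, and a scalar output; `depth` counts the hidden layers."""
    width = d_in + 1
    layers = [nn.Linear(d_in, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)


if __name__ == "__main__":
    d_in, depth = 3, 16                                        # illustrative choices
    net = narrow_relu_net(d_in, depth)
    target = lambda x: torch.sin(x.sum(dim=1, keepdim=True))   # a continuous target

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        x = torch.rand(256, d_in)                              # samples from the unit cube
        loss = ((net(x) - target(x)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final mean-squared error: {loss.item():.4f}")
```

The sketch only illustrates the width-(d_in + 1) architecture; the paper's statement is about expressivity in the infinite-depth limit, not about whether gradient descent finds a good approximation.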
arXiv:1710.11278v2