5 Hits in 1.9 sec

Depth-Width Trade-offs for ReLU Networks via Sharkovsky's Theorem [article]

Vaggos Chatziafratis and Sai Ganesh Nagarajan and Ioannis Panageas and Xiao Wang
2019 arXiv   pre-print
In this work, we point to a new connection between DNN expressivity and Sharkovsky's Theorem from dynamical systems, which enables us to characterize the depth-width trade-offs of ReLU networks for representing  ...  Understanding the representational power of Deep Neural Networks (DNNs) and how their structural properties (e.g., depth, width, type of activation unit) affect the functions they can compute has been  ...
arXiv:1912.04378v1 fatcat:r2nftf2i6nehng3uge7q6iwkxm
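
The mechanism behind this connection can be made concrete: the tent map is computed exactly by a single two-unit ReLU layer, and composing it t times (a depth-t network) yields 2^t monotone pieces, which a shallow network can only reproduce with exponentially large width. Below is a minimal counting sketch in Python/numpy; it is an illustration under those assumptions, not the paper's construction, and the helper names (tent, count_monotone_pieces) are ad hoc.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def tent(x):
        # A single two-unit ReLU layer computes the tent map on [0, 1]:
        # tent(x) = 2*relu(x) - 4*relu(x - 0.5)
        return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

    def count_monotone_pieces(f, depth, grid=100001):
        # Evaluate the depth-fold composition f^depth on a fine grid of [0, 1]
        # and count monotone pieces via sign changes of consecutive differences.
        y = np.linspace(0.0, 1.0, grid)
        for _ in range(depth):
            y = f(y)
        s = np.sign(np.diff(y))
        s = s[s != 0]                    # guard against exact zeros at peaks
        return 1 + int(np.sum(s[1:] != s[:-1]))

    for t in range(1, 6):
        print(t, count_monotone_pieces(tent, t))   # 2, 4, 8, 16, 32

Since a one-hidden-layer ReLU network of width w computes at most w + 1 linear pieces on the real line, matching the depth-t composition forces width exponential in t; Sharkovsky's theorem is what lets this style of argument start from the mere existence of periodic orbits.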

Depth-Width Trade-offs for Neural Networks via Topological Entropy [article]

Kaifeng Bu, Yaobo Zhang, Qingxian Luo
2020 arXiv   pre-print
In this work, we show a new connection between the expressivity of deep neural networks and topological entropy from dynamical systems, which can be used to characterize depth-width trade-offs of neural  ...  One of the central problems in the study of deep learning theory is to understand how structural properties, such as depth, width and the number of nodes, affect the expressivity of deep neural networks  ...
arXiv:2010.07587v1 fatcat:al6x7yugybhi7glqxvodc3qvem
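
For piecewise-monotone interval maps, topological entropy has a concrete combinatorial description (the Misiurewicz-Szlenk formula): h_top(f) = lim_{t -> inf} (1/t) log L(f^t), where the lap number L(g) counts the monotone pieces of g. The following self-contained sketch evaluates that quantity for the tent map, whose entropy is log 2; it is a numerical illustration of the definition, not code from the paper, and the function names are ad hoc.

    import numpy as np

    def tent(x):
        # tent map; realizable by one ReLU layer as 2*relu(x) - 4*relu(x - 0.5)
        return np.where(x <= 0.5, 2.0 * x, 2.0 - 2.0 * x)

    def lap_number(f, t, grid=256001):
        # L(f^t): number of monotone pieces of the t-fold composition on [0, 1]
        y = np.linspace(0.0, 1.0, grid)
        for _ in range(t):
            y = f(y)
        s = np.sign(np.diff(y))
        s = s[s != 0]
        return 1 + int(np.sum(s[1:] != s[:-1]))

    # Misiurewicz-Szlenk: h_top(f) = lim (1/t) * log L(f^t)
    for t in (2, 4, 6, 8):
        print(t, np.log(lap_number(tent, t)) / t)   # log(2^t)/t = log 2 ~ 0.693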

Better Depth-Width Trade-offs for Neural Networks through the lens of Dynamical Systems [article]

Vaggos Chatziafratis and Sai Ganesh Nagarajan and Ioannis Panageas
2020 arXiv   pre-print
Recently, depth separation results for ReLU networks were obtained via a new connection with dynamical systems, using a generalized notion of fixed points of a continuous map f, called periodic points.  ...  A byproduct of our results is that there exists a universal constant characterizing the depth-width trade-offs, as long as f has odd periods.  ...
arXiv:2003.00777v2 fatcat:bdglrep4l5ghvkowqbb7tio2y4
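
"Periodic points" are points with f^k(x) = x for some k > 1 while f(x) != x; period 3 tops Sharkovsky's ordering, so its presence forces periodic points of every other period. Below is a quick numerical illustration using a standard textbook example, the logistic map at r = 3.83 (inside its period-3 window); it is unrelated to the paper's proofs, and the helper names are ad hoc.

    import numpy as np

    def f(x):
        # logistic map in its period-3 window: a map with an odd period
        return 3.83 * x * (1.0 - x)

    def f3(x):
        return f(f(f(x)))

    # Bracket roots of g(x) = f^3(x) - x by a sign-change scan, then bisect.
    xs = np.linspace(0.0, 1.0, 20001)
    g = f3(xs) - xs
    brackets = np.where(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]

    roots = []
    for i in brackets:
        lo, hi = xs[i], xs[i + 1]
        for _ in range(60):                      # plain bisection
            mid = 0.5 * (lo + hi)
            if (f3(lo) - lo) * (f3(mid) - mid) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))

    # Genuine period-3 points satisfy f^3(x) = x but f(x) != x.
    cycle_points = [x for x in roots if abs(f(x) - x) > 1e-6]
    print(np.round(np.sort(cycle_points), 4))    # six points: two 3-cycles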

Expressivity of Neural Networks via Chaotic Itineraries beyond Sharkovsky's Theorem [article]

Clayton Sanford, Vaggos Chatziafratis
2021 arXiv   pre-print
Recent works examine this basic question on neural network expressivity through the lens of dynamical systems and provide novel "depth-vs-width" tradeoffs for a large family of functions f.  ...  Our work, by further deploying dynamical systems concepts, illuminates a more subtle connection between periodicity and expressivity: we prove that periodic points alone lead to suboptimal depth-width  ...
arXiv:2110.10295v1 fatcat:x7l5mlospnd5ddrxmbm7zub3qa

Depth separation beyond radial functions [article]

Luca Venturi, Samy Jelassi, Tristan Ozuch, Joan Bruna
2021 arXiv   pre-print
High-dimensional depth separation results for neural networks show that certain functions can be efficiently approximated by two-hidden-layer networks but not by one-hidden-layer ones in high dimensions  ...  rate for any fixed error threshold.  ...
arXiv:2102.01621v4 fatcat:am7gz6o5hjfu3cwzj2wq4ccwue