Learning Sub-Patterns in Piecewise Continuous Functions [article]

Anastasis Kratsios, Behnoosh Zamanlooy
2021 arXiv pre-print
Most stochastic gradient descent algorithms can optimize neural networks that are sub-differentiable in their parameters; however, this implies that the neural network's activation function must exhibit a degree of continuity, which limits the neural network model's uniform approximation capacity to continuous functions.  ...  We first identify approximation-theoretic limitations of commonly deployed feedforward neural networks (FFNNs), i.e., those with continuous activation functions, and then fill this gap with a new deep neural  ...
arXiv:2010.15571v4 fatcat:lyn4wu3zvrhyjm7dhwp56x645u
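
As a quick illustration of the claim in this snippet (a minimal sketch of ours, not code from the paper; the one-neuron sigmoid stand-in and the steepness values are illustrative assumptions): a network built from a continuous activation is itself continuous, so its uniform (sup-norm) error against a unit jump stays bounded below by 1/2 no matter how steep the fit becomes, which is exactly the obstruction a discontinuous activation can remove.

import numpy as np

def step(x):
    # Target: a piecewise continuous function with a unit jump at x = 0.
    return (x >= 0).astype(float)

def sigmoid_net(x, steepness):
    # Stand-in for a one-neuron FFNN with a continuous activation.
    z = np.clip(steepness * x, -50.0, 50.0)  # clip to avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

xs = np.linspace(-1.0, 1.0, 200001)  # dense grid straddling the jump
for k in (1e1, 1e3, 1e6):
    err = np.max(np.abs(step(xs) - sigmoid_net(xs, k)))
    print(f"steepness={k:.0e}  sup-norm error ~ {err:.3f}")

# Every run reports an error of about 0.5: just left of x = 0 a continuous
# model must still be near 1/2, so no parameter choice closes the gap.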

A Canonical Transform for Strengthening the Local L^p-Type Universal Approximation Property [article]

Anastasis Kratsios, Behnoosh Zamanlooy
2021 arXiv pre-print
Applications to feedforward networks, convolutional neural networks, and polynomial bases are explored.  ...  In the general case, where ℱ may contain non-analytic functions, we provide an abstract form of these results guaranteeing that there always exists some function space in which ℱ-tope is dense but ℱ is  ...  The model class modification was motivated by the structure approximation problem of learning any composite pattern while preserving its composite pattern structure.  ...
arXiv:2006.14378v3 fatcat:442jq46ya5gxjajfqvhfei4ieu
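
For orientation, the local L^p-type universal approximation property the title refers to can be stated as follows (paraphrased in our own notation; the symbols ℱ, K, ε are ours, not quoted from the paper): a model class ℱ has the property when, on every compact set, some member comes within any prescribed L^p distance of the target.

% Local L^p universal approximation property (standard definition, paraphrased):
\forall f \in L^{p}_{\mathrm{loc}}(\mathbb{R}^{d}),\;
\forall K \subset \mathbb{R}^{d}\ \text{compact},\;
\forall \varepsilon > 0,\;
\exists g \in \mathcal{F}:\quad
\Big( \int_{K} |f(x) - g(x)|^{p}\, dx \Big)^{1/p} < \varepsilon.

Reading the title together with the snippet, the ℱ-tope appears to be the class obtained from ℱ by the paper's canonical transform, i.e. the class whose density the results guarantee even where ℱ itself fails to be dense.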