
On Sharpness of Error Bounds for Multivariate Neural Network Approximation [article]

Steffen Goebbels
2020 arXiv   pre-print
The paper is based on univariate results in (Goebbels, St.: On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks.  ...  The paper deals with best non-linear approximation by such sums of ridge functions. Error bounds are presented in terms of moduli of smoothness.  ...  There are some published attempts to show sharpness of error bounds for neural network approximation in terms of moduli of smoothness based on inverse theorems.  ... 
arXiv:2004.02203v3 fatcat:uepovwg4arf6fkyctiwpwpfydq

On sharpness of error bounds for multivariate neural network approximation

Steffen Goebbels
2020 Ricerche di Matematica  
The paper deals with best non-linear approximation by such sums of ridge functions. Error bounds are presented in terms of moduli of smoothness.  ...  Single hidden layer feedforward neural networks can represent multivariate functions that are sums of ridge functions.  ...  There are some published attempts to show sharpness of error bounds for neural network approximation in terms of moduli of smoothness based on inverse theorems.  ...
doi:10.1007/s11587-020-00549-x fatcat:hfksfbrmbbbavoa2mrlgbjr36u

On Sharpness of Error Bounds for Univariate Approximation by Single Hidden Layer Feedforward Neural Networks

Steffen Goebbels
2020 Results in Mathematics  
Single hidden layer feedforward neural networks with one input node perform such operations.  ...  A new non-linear variant of a quantitative extension of the uniform boundedness principle is used to show sharpness of error bounds for univariate approximation by sums of sigmoid and ReLU functions.  ...  I would like to thank an anonymous reviewer, Michael Gref, Christian Neumann, Peer Ueberholz, and Lorens Imhof for their valuable comments.  ...
doi:10.1007/s00025-020-01239-8 fatcat:oht2l7dowvdo3a342lxg5p5uz4
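Several of the entries above state approximation error bounds in terms of moduli of smoothness. As a rough illustration (not taken from any of the listed papers), the first-order modulus ω(f, δ) = sup over 0 < h ≤ δ and x of |f(x+h) − f(x)| can be estimated numerically; the grid resolution and step sampling below are arbitrary choices:

```python
import numpy as np

def modulus_of_smoothness(f, a, b, delta, n=10_000):
    """Grid-based estimate of the first-order modulus of smoothness
    w(f, delta) = sup over 0 < h <= delta and x in [a, b-h] of |f(x+h) - f(x)|.
    Illustrative only: a uniform grid can underestimate the true supremum."""
    x = np.linspace(a, b, n)
    best = 0.0
    for h in np.linspace(delta / 10, delta, 10):   # sample step sizes up to delta
        xs = x[x + h <= b]                          # keep x + h inside [a, b]
        best = max(best, float(np.max(np.abs(f(xs + h) - f(xs)))))
    return best

# For f(x) = sqrt(x) on [0, 1], w(f, delta) = sqrt(delta), attained at x = 0:
w = modulus_of_smoothness(np.sqrt, 0.0, 1.0, 0.01)
```

For Lipschitz f the estimate scales linearly in δ, while for merely Hölder-continuous f (like the square root above) it decays more slowly, which is exactly the distinction the moduli-of-smoothness bounds exploit.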

Synthesis of feedforward networks in supremum error bound

K.J. Cios, J.P. Sacha, K. Ciesielski
2000 IEEE Transactions on Neural Networks  
The main result of this paper is a constructive proof of a formula for the upper bound of the approximation error (in the supremum norm) of multidimensional functions by feedforward networks with one hidden layer.  ...  The result can also be used to estimate the complexity of the maximum-error network and/or to initialize that network's weights. An example of the network synthesis is given.  ...  He gave the upper bound of the approximation error for networks with one-dimensional (1-D) input using step and ramp functions.  ...
doi:10.1109/72.883398 pmid:18249848 fatcat:4kyumdws3bg4davxrnpscxjj7y

Optimal Function Approximation with Relu Neural Networks [article]

Bo Liu, Yi Liang
2019 arXiv   pre-print
ReLU neural network architectures are then presented to generate these optimal approximations.  ...  We consider in this paper the optimal approximations of convex univariate functions with feed-forward ReLU neural networks.  ...  For instance, the approximation error in [10] is defined with the L2 norm.  ...
arXiv:1909.03731v2 fatcat:p52lswioffg4bizjbzfzhminqu
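As context for entries like the one above (an illustrative sketch, not the paper's own construction): a one-hidden-layer ReLU network with n units can exactly represent any continuous piecewise-linear function with n breakpoints, so simply interpolating a smooth convex f at uniform knots already achieves sup-norm error on the order of 1/n². The function names and knot placement here are arbitrary:

```python
import numpy as np

def relu_interpolant(f, a, b, n):
    """One-hidden-layer ReLU net equal to the piecewise-linear interpolant
    of f at n + 1 uniform knots on [a, b] (illustrative sketch)."""
    knots = np.linspace(a, b, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)
    # net(x) = f(a) + sum_k c_k * ReLU(x - knots[k]); the coefficients c_k are
    # the slope changes, so summing the active units reproduces each linear piece.
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    def net(x):
        hidden = np.maximum(x[:, None] - knots[None, :-1], 0.0)  # ReLU features
        return vals[0] + hidden @ coeffs
    return net

net = relu_interpolant(lambda t: t**2, 0.0, 1.0, 16)
x = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(net(x) - x**2))   # about h^2/4 with h = 1/16 for f = x^2
```

The network is exact at the knots and its worst-case error sits at the interval midpoints; optimal constructions such as the paper's improve on the constant and on knot placement, not on this basic mechanism.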

On Deep Learning for Inverse Problems

Jaweria Amjad, Jure Sokolic, Miguel R.D. Rodrigues
2018 2018 26th European Signal Processing Conference (EUSIPCO)  
The proposed bounds show that the sparse approximation performance of deep neural networks can be potentially superior to that of classical sparse reconstruction algorithms, with reconstruction errors  ...  This paper analyses the generalization behaviour of deep neural networks with a focus on their use in inverse problems.  ...  The following theorem, building upon the previous one, now characterizes a bound on the generalization error of a d-layer neural network. Theorem 3.  ...
doi:10.23919/eusipco.2018.8553376 dblp:conf/eusipco/AmjadSR18 fatcat:uoza6mqulvaeth2r6m62dzvy3u

The Estimate for Approximation Error of Neural Network with Two Weights

Fanzi Zeng, Yuting Tang
2013 The Scientific World Journal  
For this neural network, the activation function is not confined to the odd functions.  ...  This extends the nonlinear approximation ability of the traditional BP neural network and RBF neural network.  ...  Using these operators as approximation tools, the upper bounds of the estimation errors were derived.  ...
doi:10.1155/2013/935312 pmid:24470796 pmcid:PMC3891538 fatcat:5gkffgsstvahtaoorxxizowpja

Why Deep Neural Networks for Function Approximation? [article]

Shiyu Liang, R. Srikant
2017 arXiv   pre-print
First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of ε uniformly over the interval.  ...  needed by a deep network for a given degree of function approximation.  ...  For this multilayer network, the approximation error is  ...  In Theorem 2, we have shown an upper bound on the size of a multilayer neural network for approximating polynomials.  ...
arXiv:1610.04161v2 fatcat:2s6w3rtulvbrlmsad77s4wtmve
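The depth-helps phenomenon behind results of this kind can be illustrated with the well-known sawtooth construction (a Yarotsky-style generic sketch, not necessarily the exact construction of the paper above): composing a tent map with itself m times yields a depth-m piecewise-linear approximation of x² on [0, 1] whose sup-norm error is 4^(−(m+1)), i.e. it decays exponentially in depth while a shallow network of the same size decays only polynomially:

```python
import numpy as np

def tent(x):
    """Tent map g(x) = 2*min(x, 1 - x) on [0, 1]; realizable with two ReLUs:
    g(x) = 2*relu(x) - 4*relu(x - 0.5)."""
    return 2.0 * np.minimum(x, 1.0 - x)

def square_approx(x, m):
    """Depth-m piecewise-linear approximation of x**2 on [0, 1]:
    x - sum_{s=1..m} g^(s)(x) / 4**s, with sup-norm error 4**-(m+1)."""
    out = x.copy()
    g = x.copy()
    for s in range(1, m + 1):
        g = tent(g)          # one more composed tent layer
        out -= g / 4.0**s
    return out

x = np.linspace(0.0, 1.0, 1001)
errs = [np.max(np.abs(square_approx(x, m) - x**2)) for m in (2, 4, 6)]
# each increase of m by 2 shrinks the sup-norm error by a factor of 16
```

Each composition doubles the number of linear pieces, which is why depth buys accuracy at an exponential rate here.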

(Not) Bounding the True Error

John Langford, Rich Caruana
2001 Neural Information Processing Systems  
In this paper we demonstrate the method on artificial neural networks, with results showing an order-of-magnitude improvement vs. the best deterministic neural net bounds.  ...  We present a new approach to bounding the true error rate of a continuous valued classifier based upon PAC-Bayes bounds.  ...  We exhibit a technique which will likely give nontrivial true error rate bounds for Bayesian neural networks regardless of approximation or prior modeling errors.  ...
dblp:conf/nips/LangfordC01 fatcat:ig7iqafx6vh7ri4oafr7erwcta

Monotone Approximation by Quadratic Neural Network of Functions in Lp Spaces for p<1

Hawraa Abbas Almurieb, Eman Samir Bhaya
2020 Iraqi Journal of Science  
We study the essential approximation rate of any Lebesgue-integrable monotone function by a neural network of quadratic activation functions.  ...  Some researchers are interested in using the flexible and applicable properties of quadratic functions as activation functions for FNNs.  ...  A neural network operator is defined in (4), and the set of all neural networks of this type is named.  ...
doi:10.24996/ijs.2020.61.4.20 fatcat:d3wxkcgejrc7rkrttnufzwn25q

Error analysis for deep neural network approximations of parametric hyperbolic conservation laws [article]

Tim De Ryck, Siddhartha Mishra
2022 arXiv   pre-print
We derive rigorous bounds on the error resulting from the approximation of the solution of parametric hyperbolic scalar conservation laws with ReLU neural networks.  ...  In addition, we provide an explicit upper bound on the generalization error in terms of the training error, number of training samples and the neural network size.  ...  The main results on the approximation error for parametric hyperbolic conservation laws by ReLU neural networks can be found in Section 3.  ... 
arXiv:2207.07362v1 fatcat:cqlyz476yrar7gedqesjqvyvm4

Fuzzy Identification Using Fuzzy Neural Networks With Stable Learning Algorithms

W. Yu, X. Li
2004 IEEE transactions on fuzzy systems  
This paper suggests new learning laws for Mamdani and Takagi-Sugeno-Kang type fuzzy neural networks based on an input-to-state stability approach.  ...  The calculation of the learning rate does not need any prior information such as an estimate of the modeling error bounds.  ...  Let us define the identification error vector as in (12). We will use the modeling error to train the fuzzy neural networks (8) online so that they can approximate the system.  ...
doi:10.1109/tfuzz.2004.825067 fatcat:3uqhafvucrb5rlubiodihovdi4

When Neurons Fail

El Mahdi El Mhamdi, Rachid Guerraoui
2017 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS)  
Our bound is on a quantity we call the Forward Error Propagation, which captures how much error is propagated by a neural network when a given number of components is failing, computing this quantity only  ...  We show how our bound can be leveraged to quantify the effect of memory cost reduction on the accuracy of a neural network, to estimate the amount of information any neuron needs from its preceding layer  ...  Acknowledgment The authors would like to thank Julien Stainer for useful bibliographic pointers and comments on the manuscript.  ...
doi:10.1109/ipdps.2017.66 dblp:conf/ipps/MhamdiG17 fatcat:uqkuqrvquze3hozg7vcq6evsti

On-line learning of dynamical systems in the presence of model mismatch and disturbances

Jun Wang, Danchi Jiang
2000 IEEE Transactions on Neural Networks  
Convergence properties are given to show that the weight parameters of the recurrent neural network are bounded and the state estimation error converges exponentially to a bounded set, which depends on the modeling error and the disturbance bound.  ...  It is very effective for the cases where the exact model of a dynamical system to be learned differs from the neural-network model and the approximation error is bounded.  ...
doi:10.1109/72.883420 pmid:18249853 fatcat:zqpeecepozh5nk2xduv74eupuq

Analysis of Deep Neural Networks with Quasi-optimal polynomial approximation rates [article]

Joseph Daws, Clayton Webster
2019 arXiv   pre-print
The construction of the proposed neural network is based on a quasi-optimal polynomial approximation.  ...  We show the existence of a deep neural network capable of approximating a wide class of high-dimensional functions.  ...  Acknowledgments We would like to thank Anton Dereventsov, Armenak Petrosyan, and Viktor Reshniak for many helpful discussions during the formulation of this work.  ...
arXiv:1912.02302v1 fatcat:b57j3j3ow5dwblyaqdfspsrfiq