60,394 Hits in 4.1 sec

Approximation and Estimation Bounds for Artificial Neural Networks [chapter]

A.R. BARRON
1991 COLT  
For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by O(C_f^2/n) + O((nd/N) log N), where n is the number of nodes, d is  ...  Approximation error refers to the distance between the target function and the closest neural network function of a given architecture and estimation error refers to the distance between this ideal network  ...  Acknowledgements This work was supported by ONR contracts N00014-89-J-1811 and N00014-93-1-0085.  ... 
doi:10.1016/b978-1-55860-213-7.50025-0 fatcat:y7hzs4lwdfayrp4g7632dso53i

Approximation and estimation bounds for artificial neural networks

Andrew R. Barron
1994 Machine Learning  
For a common class of artificial neural networks, the mean integrated squared error between the estimated network and a target function f is shown to be bounded by O(C_f^2/n) + O((nd/N) log N), where n is the number of nodes, d is  ...  Consequently, it is seen that for the class of functions considered here, (adaptive) neural network estimation has approximation and estimation properties that are superior to traditional linear expansions  ...  Acknowledgements This work was supported by ONR contracts N00014-89-J-1811 and N00014-93-1-0085.  ... 
doi:10.1007/bf00993164 fatcat:pdvlyivhqndfnjwa7nbrctnffu
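
Editor's note: for readers scanning the two Barron entries above, the bound that the truncated abstracts refer to has the following shape (restated from Barron's 1994 abstract; constants are absorbed into the O-terms):

    \[
      \mathbb{E}\,\lVert f - \hat f_{n,N} \rVert_2^2
        \;\le\; O\!\left(\frac{C_f^2}{n}\right)
        \;+\; O\!\left(\frac{n d}{N}\,\log N\right),
    \]

where n is the number of hidden nodes, d the input dimension, N the number of training observations, and C_f the first absolute moment of the Fourier magnitude distribution of f. The first term is the approximation error, the second the estimation error; choosing n on the order of C_f (N / (d log N))^{1/2} balances the two.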

Page 261 of Neural Computation Vol. 7, Issue 2 [page]

1995 Neural Computation  
In Symposium on the Interface: Statistics and Computing Science, Reston, Virginia. Barron, A. R. 1991. Approximation and estimation bounds for artificial neural networks. Tech.  ...  Approximation and estimation bounds for artificial neural networks. Machine Learn. 14, 115-133. Baum, E. B. 1988. On the capabilities of multilayer perceptrons. J. Complex. 4, 193-215. Baum, E.  ... 

Page 840 of Neural Computation Vol. 8, Issue 4 [page]

1996 Neural Computation  
Approximation and estimation bounds for artificial neural networks. Mach. Learn. 14, 115-133. Barron, A., and Cover, T. 1991. Minimum complexity density estimation. IEEE Trans. Inform. Theory 37(4). Baum, E.  ...  In Artificial Neural Networks for Speech and Vision, R. J. Mammone, ed., pp. 97-113. Chapman & Hall, London. Girosi, F., Jones, M., and Poggio, T. 1995.  ... 

Penalized least squares, model selection, convex hull classes and neural nets

Gerald H. L. Cheang, Andrew R. Barron
2001 The European Symposium on Artificial Neural Networks  
We develop improved risk bounds for function estimation with models such as single hidden layer neural nets, using a penalized least squares criterion to select the size of the model.  ...  These results show the estimator achieves the best order of balance between approximation error and penalty relative to the sample size.  ...  ESANN'2001 proceedings, European Symposium on Artificial Neural Networks, Bruges (Belgium), 25-27 April 2001, D-Facto, ISBN 2-930307-01-3, pp. 371-376.  ... 
dblp:conf/esann/CheangB01 fatcat:2e53mpdb3fbn5kzqkc6zfo6cci
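
Editor's note: as a rough schematic of the kind of criterion and guarantee this entry describes (generic form only; the paper's exact penalty terms and constants differ), penalized least squares over a collection of models {F_m} selects

    \[
      \hat f \;=\; \arg\min_{m}\ \min_{f \in \mathcal F_m}\
        \frac{1}{N}\sum_{i=1}^{N}\bigl(Y_i - f(X_i)\bigr)^2
        \;+\; \frac{\operatorname{pen}_N(m)}{N},
    \]

and the associated risk bound is of the oracle type,

    \[
      \mathbb{E}\,\lVert f^{*} - \hat f \rVert^{2}
        \;\le\; C \inf_{m}\Bigl[\,\inf_{f \in \mathcal F_m}\lVert f^{*} - f\rVert^{2}
        \;+\; \frac{\operatorname{pen}_N(m)}{N}\Bigr],
    \]

i.e. the best achievable balance between approximation error and penalty relative to the sample size, which is the balance the abstract refers to.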

[Neural Networks: A Review from Statistical Perspective]: Comment

Andrew R. Barron
1994 Statistical Science  
Many aspects of artificial neural networks are in need of further investigation. Here, I comment on approximation and computation issues and their impact on statistical estimation of functions.  ... 
doi:10.1214/ss/1177010640 fatcat:cd3thnaubrfz7llwrqfyspukfe

Rainfall estimation using artificial neural network group

Ming Zhang, John Fulcher, Roderick A. Scofield
1997 Neurocomputing  
We first develop artificial neural network group theory, then proceed to show how neural network groups are able to approximate any kind of piecewise continuous function, and to any degree of accuracy.  ...  Neural network group theory is a step towards bridging this gap between simple models and complex systems.  ...  An Artificial Neural network expert system for Satellite-derived Estimation of Rainfall (ANSER) has been developed for estimating rainfall in NOAA (National Oceanic and Atmospheric Administration, US Department  ... 
doi:10.1016/s0925-2312(96)00022-7 fatcat:vffezy77ojaehod4ydqzg2q3by
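
Editor's note: the core claim quoted here, that a group of networks can approximate any piecewise continuous function to arbitrary accuracy by letting each member handle one piece, can be illustrated with a toy sketch. Everything below is illustrative (hypothetical break points, random-feature member nets); it is not the ANSER system or the construction used in the paper.

    import numpy as np

    def fit_member_net(x, y, hidden=50, rng=None):
        """Fit a one-hidden-layer net with random hidden weights:
        tanh features, output weights by least squares."""
        rng = rng or np.random.default_rng(0)
        W = rng.normal(size=(hidden, 1))          # hidden weights
        b = rng.normal(size=hidden)               # hidden biases
        H = np.tanh(x[:, None] * W.T + b)         # (n_samples, hidden) features
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return lambda t: np.tanh(t[:, None] * W.T + b) @ beta

    # Illustrative piecewise continuous target with a jump at x = 0.5.
    def target(x):
        return np.where(x < 0.5, np.sin(2 * np.pi * x), 2.0 + 0.5 * x)

    rng = np.random.default_rng(42)
    pieces = [(0.0, 0.5), (0.5, 1.0)]             # assumed known break point
    members = []
    for lo, hi in pieces:
        xs = rng.uniform(lo, hi, 400)
        members.append((lo, hi, fit_member_net(xs, target(xs), rng=rng)))

    def group_predict(x):
        """Route each query point to the member responsible for its piece."""
        out = np.empty_like(x)
        for lo, hi, net in members:
            mask = (x >= lo) & (x < hi) if hi < 1.0 else (x >= lo) & (x <= hi)
            out[mask] = net(x[mask])
        return out

    x_test = np.linspace(0.0, 1.0, 1000)
    err = np.max(np.abs(group_predict(x_test) - target(x_test)))
    print(f"max abs error of the two-member group: {err:.3f}")

A single smooth network struggles near the jump, whereas the group sidesteps it because no member ever has to fit a discontinuity, which is the intuition behind the group-theoretic approximation result.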

Page 6696 of Mathematical Reviews Vol. , Issue 94k [page]

1994 Mathematical Reviews  
This book is necessary reading for students of approximation and learning theory in neural networks, as well as for those who are interested in the connections between statistics and neural networks.  ...  Rigorous approximation results are obtained for a variety of activation functions, which include the commonly used sigmoid, for networks with bounded weights, and for approximating an unknown mapping  ... 

Precision and Approximate Flatness in Artificial Neural Networks

Maxwell B. Stinchcombe
1995 Neural Computation  
Communicated by Vera Kurkova.  ...  Section 3 gives sufficient conditions for approximate flatness in a wide variety of artificial neural network contexts, and ends with a flatness-based comparison of artificial neural networks and other  ... 
doi:10.1162/neco.1995.7.5.1021 fatcat:rkyutl7c4jbrdcml7c7wptkifi

CONSTRUCTIVE ESTIMATION OF APPROXIMATION FOR TRIGONOMETRIC NEURAL NETWORKS

JIANJUN WANG, WEIHUA XU, BIN ZOU
2012 International Journal of Wavelets, Multiresolution and Information Processing  
For the three-layer artificial neural networks with trigonometric weight coefficients, the upper bound and lower bound of approximating 2π-periodic pth-order Lebesgue integrable functions L^p_{2π} are obtained  ...  The theorems obtained provide explicit equational representations of these approximating networks, the specification for their numbers of hidden-layer units, the lower bound estimation of approximation,  ...  In order to get such an essential order of approximation of a neural network, besides upper bound estimation, a lower bound estimation that characterizes the worst approximation precision of the network  ... 
doi:10.1142/s021969131250021x fatcat:rtfnqndk6zfjblgcc7whbotyha
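
Editor's note on the terminology used in this entry (a generic schematic, not the paper's specific theorem): writing dist_p for the best approximation of f by a network with n hidden units,

    \[
      \operatorname{dist}_p(f, \mathcal N_n)
        \;=\; \inf_{N \in \mathcal N_n} \lVert f - N \rVert_{L^p_{2\pi}},
      \qquad
      c_1\,\varphi(n)
        \;\le\; \sup_{f \in K} \operatorname{dist}_p(f, \mathcal N_n)
        \;\le\; c_2\,\varphi(n),
    \]

an "essential order of approximation" φ(n) on a function class K is established only when matching upper and lower bounds of this form hold: the upper bound shows the networks can achieve the rate, and the lower bound certifies that no network in the class does better in the worst case.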

Statistical Guarantees for the Robustness of Bayesian Neural Networks

Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
2019 Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence  
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction  ...  a priori error and confidence bounds.  ...  A. Experimental Settings: We report details of the training procedure for the three inference methods analysed in the main text.  ... 
doi:10.24963/ijcai.2019/789 dblp:conf/ijcai/CardelliKLPPW19 fatcat:gculcybdrba2jaljkxbkctnp2e
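
Editor's note: the measure defined in this entry is a probability over the BNN's posterior, and the authors estimate it with a priori error and confidence guarantees; a minimal Monte Carlo sketch of that idea (hypothetical toy posterior, brute-force perturbation search, and the generic Hoeffding sample-size bound rather than the paper's exact scheme) could look like this:

    import math
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "posterior": weight samples of a tiny 2-layer net, a toy
    # stand-in for an actual BNN posterior (e.g. from variational inference).
    def sample_weights():
        return {"W1": rng.normal(0, 1, (8, 2)), "b1": rng.normal(0, 1, 8),
                "W2": rng.normal(0, 1, 8), "b2": rng.normal(0, 1)}

    def predict(w, x):
        return float(np.tanh(w["W1"] @ x + w["b1"]) @ w["W2"] + w["b2"]) > 0.0

    def violates(w, x, eps=0.1, grid=5):
        """True if some perturbation within the L_inf ball of radius eps
        flips the prediction (brute-force grid search, illustration only)."""
        ref = predict(w, x)
        deltas = np.linspace(-eps, eps, grid)
        return any(predict(w, x + np.array([dx, dy])) != ref
                   for dx in deltas for dy in deltas)

    def violation_probability(x, err=0.05, delta=0.05):
        """Estimate P_w[ exists x' near x with a different prediction ] so that
        |estimate - true| <= err with probability >= 1 - delta (Hoeffding)."""
        n = math.ceil(math.log(2.0 / delta) / (2.0 * err ** 2))
        hits = sum(violates(sample_weights(), x) for _ in range(n))
        return hits / n, n

    p_hat, n_samples = violation_probability(np.array([0.3, -0.2]))
    print(f"estimated violation probability {p_hat:.3f} from {n_samples} samples")

The key point is that the number of posterior samples needed for a given error and confidence is fixed before sampling, which is what makes the guarantee "a priori".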

Sample Complexity Bounds for RNNs with Application to Combinatorial Graph Problems (Student Abstract)

Nil-Jana Akpinar, Bernhard Kratzwald, Stefan Feuerriegel
2020 Proceedings of the AAAI Conference on Artificial Intelligence  
While such derivations have been made earlier for feed-forward and convolutional neural networks, our work presents the first such attempt for recurrent neural networks.  ...  As demonstrated based on the NP-hard edge clique cover number, recurrent neural networks (RNNs) are particularly suited for this task and can even outperform state-of-the-art heuristics.  ...  Recent examples of bounding sample complexities involve, for instance, binary feed-forward neural networks (FNNs) (Harvey, Liaw, and Mehrabian 2017) and convolutional neural networks (Du et al. 2018  ... 
doi:10.1609/aaai.v34i10.7144 fatcat:x7c6gep3abgm3ll7lz4hwmou3i

Rethinking Influence Functions of Neural Networks in the Over-Parameterized Regime

Rui Zhang, Shihua Zhang
2022 Proceedings of the AAAI Conference on Artificial Intelligence  
To this end, we utilize the neural tangent kernel (NTK) theory to calculate IF for the neural network trained with regularized mean-square loss, and prove that the approximation error can be arbitrarily  ...  Understanding the black-box prediction for neural networks is challenging.  ...  Acknowledgments We thank reviewers for their helpful advice. Rui Zhang would like to thank Xingbo Du for his valuable and detailed feedback to improve the writing. This  ... 
doi:10.1609/aaai.v36i8.20893 fatcat:yjxo5xeo4fgbfcgcyoskjdsexe
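
Editor's note: for context, the influence function (IF) that this entry analyses is the classical first-order estimate of how upweighting a training point z changes the loss at a test point z_test; the standard formula (following Koh and Liang; the paper's NTK-based analysis of its accuracy is not reproduced here) is

    \[
      \mathcal I(z, z_{\mathrm{test}})
        \;=\; -\,\nabla_\theta L(z_{\mathrm{test}}, \hat\theta)^{\top}
              H_{\hat\theta}^{-1}\,
              \nabla_\theta L(z, \hat\theta),
      \qquad
      H_{\hat\theta} \;=\; \frac{1}{N}\sum_{i=1}^{N}
              \nabla_\theta^{2} L(z_i, \hat\theta),
    \]

where \hat\theta is the trained parameter vector and H the Hessian of the empirical risk; the entry's contribution is to compute the IF via NTK theory for over-parameterized networks trained with regularized mean-square loss and to bound the resulting approximation error.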

Integral Equations and Machine Learning [article]

Alexander Keller, Ken Dahm
2019 arXiv   pre-print
In the light of the recent advances in reinforcement learning for playing games, we investigate the representation of an approximate solution of an integral equation by artificial neural networks and derive  ...  The resulting Monte Carlo and quasi-Monte Carlo methods train neural networks with standard information instead of linear information and naturally are able to generate an arbitrary number of training  ...  Acknowledgements The authors thank Anton Kaplanyan, Thomas Müller, and Fabrice Rouselle for profound discussions and advice.  ... 
arXiv:1712.06115v3 fatcat:co7l5padjbawjmsbgvkyeck2lu
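
Editor's note: as background for this entry (a generic formulation, not necessarily the exact objective used in the preprint), a Fredholm integral equation of the second kind and a Monte Carlo residual objective for a network u_θ, with the integral replaced by samples y_1, …, y_M drawn from a density p, read

    \[
      u(x) \;=\; g(x) + \int_{\mathcal D} k(x, y)\, u(y)\, \mathrm d y,
      \qquad
      \min_{\theta}\ \mathbb E_{x}\Bigl[\Bigl(
        u_\theta(x) - g(x)
        - \frac{1}{M}\sum_{j=1}^{M} \frac{k(x, y_j)\,u_\theta(y_j)}{p(y_j)}
      \Bigr)^{2}\Bigr].
    \]

Training on such residuals relies on point evaluations of u_θ rather than arbitrary linear functionals of the unknown solution, which appears to be the distinction the abstract draws between standard and linear information; note also that squaring a single Monte Carlo estimate of the integral biases the objective, so independent sample batches are commonly used in practice.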

Page 1029 of Neural Computation Vol. 7, Issue 5 [page]

1995 Neural Computation  
The same set of conclusions about the power of tests to detect alternatives arises in the literature on testing for arbitrary misspecifications  ...  This section concludes with a flatness-based comparison of artificial neural networks with other methods of finding functional relations between inputs and outputs. 3.1 Single Hidden Layer Feedforward  ... 
Showing results 1 — 15 out of 60,394 results