145,471 Hits in 2.7 sec

Layered Neural Networks with Gaussian Hidden Units as Universal Approximations

Eric J. Hartman, James D. Keeler, Jacek M. Kowalski
1990 Neural Computation  
Communicated by Halbert White. Microelectronics and Computer Technology Corp.  ...  Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions. Conf. Neural Networks, Washington, D.C., IEEE and INNS, 1, 613.  ... 
doi:10.1162/neco.1990.2.2.210 fatcat:olbe7vontrgw3nrcr34edosrlq
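To make the Gaussian-hidden-unit idea concrete, here is a minimal sketch (ours, not the paper's construction): a single hidden layer of Gaussian units on a 1-D grid, with output weights fit by linear least squares. The grid of centers and the shared width are illustrative assumptions.

```python
# Minimal sketch: single-hidden-layer network with Gaussian hidden units,
# output weights fit by linear least squares on a 1-D target.
import numpy as np

x = np.linspace(-3, 3, 200)
y = np.sin(2 * x)                      # target function to approximate

centers = np.linspace(-3, 3, 20)       # hidden-unit centers (assumed grid)
width = 0.5                            # shared Gaussian width (assumed)

# Hidden activations: phi[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2))
phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# Solve for the output weights in the least-squares sense.
alpha, *_ = np.linalg.lstsq(phi, y, rcond=None)
print("max abs error:", np.max(np.abs(phi @ alpha - y)))
```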

Intelligent reconfigurable universal fuzzy flip-flop

Essam Koshak, Afzel Noore, Rita Lovassy
2010 IEICE Electronics Express  
When integrated with a multilayer neural network, the resulting reconfigurable fuzzy-neural structure showed excellent learning ability.  ...  The sigmoid activation function of neurons in the hidden layers of the multilayer neural network was replaced by the quasi-sigmoidal transfer characteristics of the universal fuzzy flip-flop in the reconfigurable  ...  We integrated the reconfigurable universal fuzzy flip-flop in the hidden layers of a multilayer neural network.  ... 
doi:10.1587/elex.7.1119 fatcat:3kp45ztivbambhfh5qswfw7jhy
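As a rough illustration of the quasi-sigmoidal transfer idea (an assumption-laden sketch, not the paper's exact construction): build a fuzzy J-K flip-flop from the crisp characteristic equation Q+ = JQ' + K'Q with algebraic fuzzy operations, wire K = 1 - J, and iterate. The choice of norms, the K = 1 - J wiring, and the iteration count are our assumptions; the result is a monotone quasi-sigmoidal input-output curve.

```python
# Hedged sketch: fuzzy J-K flip-flop via the crisp characteristic equation
# Q+ = (J AND NOT Q) OR (NOT K AND Q), with algebraic product t-norm and
# probabilistic-sum s-norm. With K = 1 - J, iteration gives a monotone
# quasi-sigmoidal curve; the paper's exact norms and wiring may differ.
def s_norm(a, b):          # probabilistic sum
    return a + b - a * b

def t_norm(a, b):          # algebraic product
    return a * b

def fuzzy_jk_step(j, k, q):
    return s_norm(t_norm(j, 1.0 - q), t_norm(1.0 - k, q))

def quasi_sigmoid(j, steps=8, q0=0.5):
    q = q0
    for _ in range(steps):
        q = fuzzy_jk_step(j, 1.0 - j, q)
    return q

for j in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(j, round(quasi_sigmoid(j), 3))   # monotone from 0 toward 1
```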

Why Deep Neural Networks: A Possible Theoretical Explanation [chapter]

Chitta Baral, Olac Fuentes, Vladik Kreinovich
2017 Studies in Systems, Decision and Control  
In the past, the most widely used neural networks were 3-layer ones.  ...  These networks were preferred since one of the main advantages of biological neural networks (which motivated the use of neural networks in computing) is their parallelism, and 3-layer networks provide  ...  It is also known that 2-layer neural networks do not have the universal approximation property. As a result, 3-layer networks used to be the most frequently used.  ... 
doi:10.1007/978-3-319-61753-4_1 fatcat:jxfveprfmnetje6ohvrc53ga7a
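A quick check of the 2-layer claim (our illustration; in this counting a 2-layer network has an input and an output layer but no hidden layer): such a network computes a single ridge function, and even XOR defeats it. With a monotone activation s, XOR would need s(b) and s(w_1 + w_2 + b) low while s(w_1 + b) and s(w_2 + b) are high; the latter forces w_1 > 0 and w_2 > 0, contradicting the former.

```latex
y(\mathbf{x}) = s\!\left(\mathbf{w}^{\top}\mathbf{x} + b\right),
\qquad
w_1, w_2 > 0 \;\Longrightarrow\; s(w_1 + w_2 + b) \ge s(w_1 + b),
```

so no choice of weights reproduces the XOR pattern 0, 1, 1, 0 on the four corners of the unit square, and universal approximation fails without a hidden layer.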

Fourier Neural Networks for Function Approximation [article]

R Subhash Chandra Bose, Kakarla Yaswanth
2021 arXiv   pre-print
Insight into how they work can be obtained by studying the universal approximation property of neural networks. It has been proved extensively that neural networks are universal approximators.  ...  Further, we found that a Fourier neural network performs fairly well with only two layers.  ...  networks as universal approximators, unlike the single-hidden-layer FNN, which is a universal approximator.  ... 
arXiv:2111.08438v1 fatcat:7uxy35ww3rcl3peoiovqkq5u2m
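For intuition, a two-layer cosine-unit network in the spirit of the abstract (a random-feature sketch under our own assumptions; the paper's architecture and training procedure may differ):

```python
# Hedged sketch of a Fourier-style two-layer network: cosine hidden units
# with random frequencies and phases, linear output fit by least squares.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 400)
y = np.abs(x)                          # target with a kink

n_hidden = 64
w = rng.normal(0.0, 2.0, n_hidden)     # random frequencies (assumed)
b = rng.uniform(0, 2 * np.pi, n_hidden)

phi = np.cos(x[:, None] * w[None, :] + b[None, :])
alpha, *_ = np.linalg.lstsq(phi, y, rcond=None)
print("max abs error:", np.max(np.abs(phi @ alpha - y)))
```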

On Approximation Capabilities of ReLU Activation and Softmax Output Layer in Neural Networks [article]

Behnam Asadi, Hui Jiang
2020 arXiv   pre-print
In this paper, we have extended the well-established universal approximator theory to neural networks that use the unbounded ReLU activation function and a nonlinear softmax output layer.  ...  Moreover, our theoretical results have shown that a large enough neural network using a nonlinear softmax output layer can also approximate any indicator function in $L^1$, which is equivalent to mutually-exclusive  ...  Theorem 1 (Universal Approximation with ReLU): when ReLU is used as the activation function, the output of a single-hidden-layer neural network $g(x) = \sum_{j=1}^{n} \alpha_j \cdot \mathrm{ReLU}(w_j x + b_j)$ (1) is dense in $L^1$  ... 
arXiv:2002.04060v1 fatcat:2etnq2kbqrfbzb6t2l7lg7ce3y
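The form in Theorem 1 can be exercised directly. A standard construction (not the paper's proof) writes the piecewise-linear interpolant of a continuous 1-D target exactly as a constant plus a sum of ReLU units, one per knot, with coefficients equal to the slope changes:

```python
# Illustration of g(x) = sum_j alpha_j * ReLU(x - t_j): a piecewise-linear
# interpolant of a continuous target expressed exactly as a sum of ReLUs
# with kinks on a grid (standard construction, not the paper's proof).
import numpy as np

f = np.cos                             # target on [0, 2*pi]
knots = np.linspace(0, 2 * np.pi, 30)
x = np.linspace(0, 2 * np.pi, 1000)

def relu(z):
    return np.maximum(z, 0.0)

# Each ReLU turns on at a knot and contributes the *change* in slope there.
slopes = np.diff(f(knots)) / np.diff(knots)
alphas = np.diff(slopes, prepend=0.0)

g = f(knots[0]) + sum(a * relu(x - t) for a, t in zip(alphas, knots[:-1]))
print("max abs error:", np.max(np.abs(g - f(x))))   # ~6e-3 with 30 knots
```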

Approximation Capabilities of Neural Networks using Morphological Perceptrons and Generalizations [article]

William Chang, Hassan Hamad, Keith M. Chugg
2022 arXiv   pre-print
These neural networks are known to have universal function approximation capabilities.  ...  Standard artificial neural networks (ANNs) use sum-product or multiply-accumulate node operations with a memoryless nonlinear activation.  ...  A well-known fact of neural networks is that they are universal approximators.  ... 
arXiv:2207.07832v1 fatcat:i75suhmyqbd27hnmuiqcrty6ru
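For contrast with the sum-product node the snippet mentions, here is a minimal max-plus (dilation-style) node of the kind morphological perceptrons use; the paper's generalizations mix such nodes with conventional ones, and the details below are our own illustrative choices.

```python
# Hedged sketch of a morphological perceptron node: max-plus algebra
# (additions inside, max instead of sum) replacing multiply-accumulate.
import numpy as np

def morph_node(x, w):
    # Dilation-style node: y = max_j (x_j + w_j)
    return np.max(x + w)

x = np.array([0.2, -1.0, 0.7])
w = np.array([0.1,  2.0, 0.0])
print(morph_node(x, w))               # max(0.3, 1.0, 0.7) = 1.0
```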

On the Universal Approximation Property and Equivalence of Stochastic Computing-based Neural Networks and Binary Neural Networks [article]

Yanzhi Wang, Zheng Zhan, Jiayu Li, Jian Tang, Bo Yuan, Liang Zhao, Wujie Wen, Siyue Wang, Xue Lin
2018 arXiv   pre-print
Specific forms of binary neural networks (BNNs) and stochastic computing based neural networks (SCNNs) are particularly appealing to hardware implementations since they can be implemented almost entirely  ...  Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity.  ...  Universal Approximation Property For feedforward neural networks with one hidden layer, [24] and [25] have proved separately the universal approximation property, which guarantees that for any given  ... 
arXiv:1803.05391v2 fatcat:xoeqb25unzaxnh35qn4cwhrowq
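The hardware appeal rests on primitives like the following (a textbook stochastic-computing sketch, not the paper's circuits): in unipolar coding a value p in [0, 1] becomes a random bitstream with P(bit = 1) = p, so a single AND gate multiplies two independent streams, with precision governed by the bit length M.

```python
# Stochastic-computing multiplication: AND of two independent unipolar
# bitstreams estimates the product of the encoded values.
import numpy as np

rng = np.random.default_rng(2)
M = 4096                               # bitstream length (illustrative)

def to_stream(p):
    return rng.random(M) < p           # P(bit = 1) = p

a, b = 0.7, 0.4
prod_stream = to_stream(a) & to_stream(b)   # one AND gate = multiplier
print("estimate:", prod_stream.mean(), "exact:", a * b)
```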

Universal Approximation Property and Equivalence of Stochastic Computing-Based Neural Networks and Binary Neural Networks

Yanzhi Wang, Zheng Zhan, Liang Zhao, Jian Tang, Siyue Wang, Jiayu Li, Bo Yuan, Wujie Wen, Xue Lin
2019 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-19)  
Besides the universal approximation property, we also derive an appropriate bound for the bit length M in order to provide insights into actual neural network implementations.  ...  Hardware accelerations of deep neural networks have been extensively investigated.  ...  For multi-layer networks, universal approximation still holds as a natural extension (because the previous layers can be considered as a mapping).  ... 
doi:10.1609/aaai.v33i01.33015369 fatcat:evdcmil33jg5fiudmgpcmqqeui

Expressivity of Deep Neural Networks [article]

Ingo Gühring, Mones Raslan, Gitta Kutyniok
2020 arXiv   pre-print
Approximation rates for classical function spaces as well as benefits of deep neural networks over shallow ones for specifically structured function classes are discussed.  ...  In this review paper, we give a comprehensive overview of the large variety of approximation results for neural networks.  ...  Universality of Shallow Neural Networks The most famous types of expressivity results for neural networks state that shallow neural networks are universal approximators.  ... 
arXiv:2007.04759v1 fatcat:lpneojafcvfbrgx4qx4oo5k5na

Universal Adder Neural Networks [article]

Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Chunjing Xu, Tong Zhang
2021 arXiv   pre-print
An approximation bound for AdderNets with a single hidden layer is also presented.  ...  In this paper, we present adder networks (AdderNets) to trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce  ...  Toy Experiments of Approximation Capacity In the above subsections, we have proved that a two-layer adder neural network with a single hidden layer can be regarded as a universal approximator.  ... 
arXiv:2105.14202v5 fatcat:vrgm6swhqvhznosibccrahvqee
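The node operation being traded for is simple to state (a minimal sketch consistent with the abstract; layer shapes and signs are our choices): filter responses are negative L1 distances between input and filter, so only additions, subtractions, and absolute values appear.

```python
# Hedged sketch of the AdderNet node operation: similarity measured by a
# negative L1 distance instead of the usual multiply-accumulate dot product.
import numpy as np

def adder_layer(x, W):
    # x: (d,) input; W: (m, d) filters. Output: (m,) responses.
    return -np.abs(x[None, :] - W).sum(axis=1)

x = np.array([1.0, -2.0, 0.5])
W = np.array([[1.0, -2.0, 0.5],        # identical filter -> response 0
              [0.0,  0.0, 0.0]])
print(adder_layer(x, W))               # [-0.0, -3.5]
```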

Two-hidden-layer Feedforward Neural Networks are Universal Approximators: A Constructive Approach [article]

Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo, Eduardo Paluzo-Hidalgo
2020 arXiv   pre-print
It is well known that Artificial Neural Networks are universal approximators.  ...  The classical result proves that, given a continuous function on a compact set on an n-dimensional space, then there exists a one-hidden-layer feedforward network which approximates the function.  ...  Introduction One of the first results in the development of neural networks is the Universal Approximation Theorem [5, 13] .  ... 
arXiv:1907.11457v2 fatcat:qluc6o5tzbcanmjlosgdklbd2m
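The classical one-hidden-layer statement the paper starts from (Cybenko/Hornik, paraphrased here) reads: for every continuous f on a compact K, a sigmoidal activation sigma, and every epsilon > 0, there exist a width N and parameters with

```latex
\sup_{\mathbf{x} \in K}
\Bigl|\, f(\mathbf{x}) -
\sum_{j=1}^{N} \alpha_j \,\sigma\!\left(\mathbf{w}_j^{\top}\mathbf{x} + b_j\right)
\Bigr| < \varepsilon .
```

Later results relax sigmoidality to any non-polynomial continuous activation; the paper's contribution is a constructive two-hidden-layer counterpart.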

Why neural networks apply to scientific computing?

Shaoqiang Tang, Yang Yang
2021 Theoretical and Applied Mechanics Letters  
We see three thrusts in neural networks that contribute to the answer. First, by the universal approximation theorem, deep neural networks are capable of approximating functions.  ...  It is worth mentioning that a neural network with multiple hidden layers is usually called a deep neural network. So this FFNN is a deep one.  ... 
doi:10.1016/j.taml.2021.100242 fatcat:vecnav6yxfhovltn53vqxcusp4

A Forecasting Method Based on Online Self-Correcting Single Model RBF Neural Network

Yanzhi Wang, Guixiong Liu
2012 Procedia Engineering  
New samples are used to improve the network's approximation precision during operation.  ...  In order to enhance the stability of the RBF neural network under constantly changing conditions, we propose a new forecasting method based on a single-model structure.  ...  Acknowledgements The work is supported by the Program of New Century Excellent Talents in University (NCET-08-0211), the Guangdong higher school high-level talents project, and Guangzhou Technology Support  ... 
doi:10.1016/j.proeng.2012.01.343 fatcat:ccf53vdiizh5jiprvcjprdodhe
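A minimal sketch of the online-correction idea (a plain LMS gradient step on the output weights; the paper's self-correcting scheme and network structure are more elaborate):

```python
# Hedged sketch: an RBF network whose output weights receive an online
# correction from each new sample as it arrives (LMS gradient step).
import numpy as np

centers = np.linspace(0, 1, 10)        # fixed RBF centers (assumed)
width = 0.15
weights = np.zeros_like(centers)
lr = 0.5

def phi(x):
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2))

def predict(x):
    return phi(x) @ weights

def online_update(x, y):
    global weights
    err = y - predict(x)
    weights += lr * err * phi(x)       # correct weights per new sample

rng = np.random.default_rng(3)
for _ in range(2000):                  # stream of (x, sin(2*pi*x)) samples
    x = rng.random()
    online_update(x, np.sin(2 * np.pi * x))
print("pred at 0.25:", predict(0.25))  # should be near 1.0
```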

Page 1233 of Neural Computation Vol. 6, Issue 6 [page]

1994 Neural Computation  
Communicated by Vera Kurkova. Approximation Capability of Layered Neural Networks with Sigmoid Units on Two Layers. Yoshifusa Ito, Toyohashi University of Technology, Toyohashi, Japan. Using only an elementary  ...  known that three-layered feedforward neural networks can uniformly approximate continuous functions if they have sigmoid units on the hidden layer.  ... 

Theoretical Properties for Neural Networks with Weight Matrices of Low Displacement Rank [article]

Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, Victor Pan and Bo Yuan
2017 arXiv   pre-print
We then show that the error bounds of LDR neural networks are as efficient as those of general neural networks with both single-layer and multiple-layer structure.  ...  First, we prove the universal approximation property of LDR neural networks under a mild condition on the displacement operators.  ...  As this assumption is not true, we have the universal approximation property of LDR neural networks.  ... 
arXiv:1703.00144v4 fatcat:xf5gfazb7rcmpnxgvwggkfrgxe
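One family covered by the low-displacement-rank setting is circulant weight matrices. The sketch below (our illustration, not the paper's construction) shows why they are attractive in implementations: the matrix-vector product reduces to FFTs, replacing O(n^2) multiply-adds and storage with O(n log n) time and O(n) parameters.

```python
# Circulant weight matrix (one LDR family): its matrix-vector product is a
# circular convolution, computable via FFTs in O(n log n).
import numpy as np

n = 8
c = np.random.default_rng(4).normal(size=n)   # first column defines W
x = np.random.default_rng(5).normal(size=n)

# Full circulant for reference: W[i, j] = c[(i - j) mod n]
W = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

y_fft = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))
print("match:", np.allclose(W @ x, y_fft))    # True
```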
Showing results 1 — 15 out of 145,471 results