4,292 Hits in 8.1 sec

Do ReLU Networks Have An Edge When Approximating Compactly-Supported Functions? [article]

Anastasis Kratsios, Behnoosh Zamanlooy
2022 arXiv   pre-print
We study the problem of approximating compactly-supported integrable functions while implementing their support set using feedforward neural networks.  ...  Conversely, we show that polynomial regressors and analytic feedforward networks are not universal in this space.  ...  The authors would equally like to thank Ivan Dokmanić and Hieu Nguyen of the University of Basel for their helpful references concerning pooling layers and certain activation functions.  ... 
arXiv:2204.11231v2 fatcat:yzbiwihjyvhvfd4c4rq42rxzba
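
A quick illustration of why ReLU networks have an edge here: a three-unit ReLU combination represents a compactly supported "hat" function exactly, something no analytic activation can do. The construction below is a generic textbook one, not the paper's:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_hat(x):
    # Width-3 ReLU combination: exactly zero outside [-1, 1],
    # piecewise linear "tent" with peak 1 at x = 0.
    return relu(x + 1.0) - 2.0 * relu(x) + relu(x - 1.0)

x = np.linspace(-2, 2, 9)
print(relu_hat(x))  # zeros outside [-1, 1], tent values inside
```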

A spiking neural network architecture for nonlinear function approximation

Nicolangelo Iannella, Andrew D. Back
2001 Neural Networks  
Spiking neural networks are of interest both from a biological point of view and in terms of a method of robust signaling in particularly noisy or difficult environments.  ...  In this paper, we propose a spiking neural network architecture using both integrate-and-fire units as well as delays, that is capable of approximating a real valued function mapping to within a specified  ...  This led to a theorem that any feedforward or recurrent analog neural network, for example a multilayered perceptron, consisting of sigmoidal neurons that employ a piecewise linear gain function, can be  ... 
doi:10.1016/s0893-6080(01)00080-6 pmid:11665783 fatcat:zr3enaplc5gxlflsgpn76uue7q
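
For context on the integrate-and-fire units mentioned above, a minimal leaky integrate-and-fire simulation; the parameters and the constant input are arbitrary choices, not taken from the paper:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = (-(v - v_rest) + I) / tau."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_reset            # reset after the spike
    return spikes

current = np.full(1000, 1.5)       # constant suprathreshold drive for 1 s
print(len(simulate_lif(current)), "spikes in 1 s")
```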

Optimizing Objective Functions from Trained ReLU Neural Networks via Sampling [article]

Georgia Perakis, Asterios Tsiourvas
2022 arXiv   pre-print
This paper introduces scalable, sampling-based algorithms that optimize trained neural networks with ReLU activations.  ...  We first propose an iterative algorithm that takes advantage of the piecewise linear structure of ReLU neural networks and reduces the initial mixed-integer optimization problem (MIP) into multiple easy-to-solve  ...  Each solution of the LP often belongs to more than one hyperplane, as it is a corner point in a piecewise linear function.  ... 
arXiv:2205.14189v2 fatcat:neuftu3dnndzhi7gw7mvyajjbi
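
The snippet describes sampling-based optimization of a trained ReLU network; the sketch below shows only the naive random-search baseline over a stand-in network with random weights, not the authors' iterative LP-based algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "trained" two-layer ReLU network f(x) = w2 . relu(W1 x + b1) + b2.
W1, b1 = rng.normal(size=(16, 4)), rng.normal(size=16)
w2, b2 = rng.normal(size=16), 0.0

def f(x):
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def sample_maximize(n_samples=10000, low=-1.0, high=1.0):
    # Pure random search over the box [low, high]^4; keep the best input.
    xs = rng.uniform(low, high, size=(n_samples, 4))
    vals = np.array([f(x) for x in xs])
    best = np.argmax(vals)
    return xs[best], vals[best]

x_star, v_star = sample_maximize()
print("best sampled value:", v_star)
```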

Chaotic Dynamics in Iterated Map Neural Networks with Piecewise Linear Activation Function [article]

Sitabhra Sinha
1999 arXiv   pre-print
The paper examines the discrete-time dynamics of neuron models (of excitatory and inhibitory types) with piecewise linear activation functions, which are connected in a network.  ...  These include using the network for auto-association, pattern classification, nonlinear function approximation and periodic sequence generation.  ...  Figure 1: The piecewise linear activation function F for a single neuron (gain parameter a = 5).  ... 
arXiv:chao-dyn/9903009v1 fatcat:sfhabotfbvf2dlsezgco2is33a
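
A generic form of the saturating piecewise linear activation from the figure caption (gain a = 5), iterated as a single-neuron discrete-time map. The paper's chaotic regimes arise in coupled excitatory-inhibitory networks rather than in this single unit, and the coupling and bias below are arbitrary placeholders:

```python
import numpy as np

def F(x, a=5.0):
    # Piecewise linear activation: 0 below 0, slope a in the middle,
    # saturating at 1 for x >= 1/a (gain parameter a = 5 as in Fig. 1).
    return np.clip(a * x, 0.0, 1.0)

def iterate(x0, k=0.9, bias=-0.4, n=50):
    # Discrete-time single-neuron map x_{t+1} = F(k * x_t + bias).
    xs = [x0]
    for _ in range(n):
        xs.append(F(k * xs[-1] + bias))
    return np.array(xs)

print(iterate(0.5)[:10])
```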

Fourier Neural Networks as Function Approximators and Differential Equation Solvers [article]

Marieme Ngom, Oana Marin
2021 arXiv   pre-print
We validate this FNN on naturally periodic smooth functions and on piecewise continuous periodic functions.  ...  We present a Fourier neural network (FNN) that can be mapped directly to the Fourier decomposition.  ... 
arXiv:2005.13100v2 fatcat:cgafcijew5dqvaa6cfe3nkd4wu

Functional Variational Bayesian Neural Networks [article]

Shengyang Sun, Guodong Zhang, Jiaxin Shi, Roger Grosse
2019 arXiv   pre-print
We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions.  ...  Variational Bayesian neural networks (BNNs) perform variational inference over weights, but it is difficult to specify meaningful priors and approximate posteriors in a high-dimensional weight space.  ...  To make the task more difficult for the fBNN, we used the tanh activation function, which is not well suited for piecewise constant or piecewise linear functions. (Fig. 5: posterior predictive samples and marginals)  ... 
arXiv:1903.05779v1 fatcat:7t4wswgva5gpxlgiuhm3335eja
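
A toy check of the remark about tanh above: fitting a one-hidden-layer tanh network to a step function by plain gradient descent, where the smooth activation necessarily blurs the jump. This only illustrates that remark, not the fBNN method; width, learning rate, and iteration count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.linspace(-1, 1, 200).reshape(-1, 1)        # inputs
y = np.where(X[:, 0] > 0, 1.0, 0.0)               # piecewise constant target

m = 20                                            # hidden width (arbitrary)
W1 = rng.normal(size=(m, 1)); b1 = np.zeros(m)
w2 = rng.normal(scale=0.1, size=m); b2 = 0.0
lr = 0.05

for _ in range(5000):
    H = np.tanh(X @ W1.T + b1)                    # (n, m) hidden activations
    err = H @ w2 + b2 - y                         # (n,) residuals
    # Manual backprop for the mean-squared error loss.
    dZ = (err[:, None] * w2) * (1 - H**2) * 2 / len(X)
    W1 -= lr * (dZ.T @ X); b1 -= lr * dZ.sum(axis=0)
    w2 -= lr * (2 * H.T @ err / len(X)); b2 -= lr * 2 * err.mean()

pred = np.tanh(X @ W1.T + b1) @ w2 + b2
print("train MSE:", np.mean((pred - y) ** 2))     # the fit stays smooth across x = 0
```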

Variational Physics Informed Neural Networks: the role of quadratures and test functions [article]

Stefano Berrone, Claudio Canuto, Moreno Pintore
2022 arXiv   pre-print
a computed neural network.  ...  In this work we analyze how quadrature rules of different precisions and piecewise polynomial test functions of different degrees affect the convergence rate of Variational Physics Informed Neural Networks  ...  Similarly, in order to obtain an analytical expression of the extension u of the Dirichlet data g, one can train another neural network to (approximately) match the values of g on Γ D or use a data transfinite  ... 
arXiv:2109.02035v2 fatcat:hl3vgjaxarf77fq22uf3jadswy
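
A minimal sketch of the variational residuals being discussed: weak-form residuals of -u'' = f on (-1, 1) computed with Gauss-Legendre quadrature and Legendre-based test functions. The candidate u below is a closed-form stand-in for a trained network, and the test-function basis is one common choice, not necessarily the paper's:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Weak residuals r_k = ∫ u'(x) v_k'(x) dx - ∫ f(x) v_k(x) dx for -u'' = f on (-1, 1),
# u(±1) = 0, with modal test functions v_k = L_{k+1} - L_{k-1}
# (which vanish at ±1 and satisfy v_k' = (2k+1) L_k).

def u_prime(x):            # stand-in for the trained network's derivative
    return np.pi * np.cos(np.pi * x)

def f(x):                  # forcing for the exact solution u = sin(pi x)
    return np.pi**2 * np.sin(np.pi * x)

def legendre(k, x):        # single Legendre polynomial L_k(x)
    c = np.zeros(k + 1); c[k] = 1.0
    return legval(x, c)

def residuals(n_test=5, n_quad=8):
    x, w = leggauss(n_quad)                    # Gauss-Legendre nodes and weights
    r = []
    for k in range(1, n_test + 1):
        vk = legendre(k + 1, x) - legendre(k - 1, x)
        dvk = (2 * k + 1) * legendre(k, x)
        r.append(np.sum(w * (u_prime(x) * dvk - f(x) * vk)))
    return np.array(r)

print(residuals(n_quad=3))    # coarse quadrature: visible error
print(residuals(n_quad=16))   # finer quadrature: residuals ≈ 0
```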

Design Space Exploration of Neural Network Activation Function Circuits

Tao Yang, Yadong Wei, Zhijun Tu, Haolun Zeng, Michel A. Kinsy, Nanning Zheng, Pengju Ren
2018 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems  
Our experiments demonstrate that neural networks are generally insensitive to the precision of the activation function.  ...  effective neural network accelerators, since these functions require lots of resources.  ...  ANNs consist of neurons, which sum incoming signals and apply an activation function, and connections, which amplify or inhibit passing signals.  ... 
doi:10.1109/tcad.2018.2871198 fatcat:ntbp2md43ngthemph7wyvswyxm
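
A software-level toy version of the precision trade-off mentioned above: approximate a sigmoid with a lookup table plus fixed-point rounding and measure the worst-case error. The table step and bit widths are arbitrary, and this is not the paper's hardware design flow:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quantized_sigmoid(x, frac_bits, table_step=2**-4, x_max=8.0):
    # Piecewise-constant lookup table over [-x_max, x_max] with outputs
    # rounded to `frac_bits` fractional bits (a toy fixed-point model).
    xi = np.clip(np.round(x / table_step) * table_step, -x_max, x_max)
    y = sigmoid(xi)
    return np.round(y * 2**frac_bits) / 2**frac_bits

x = np.linspace(-8, 8, 10001)
for bits in (4, 6, 8, 10):
    err = np.max(np.abs(quantized_sigmoid(x, bits) - sigmoid(x)))
    print(f"{bits} fractional bits -> max abs error {err:.4f}")
```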

ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions

Changcun Huang
2020 Neural Computation  
For univariate function f(x), we use the composite of ReLUs to produce a line segment; all of the subnetworks of line segments comprise a ReLU network, which is a piecewise linear approximation to f(x)  ...  This letter proves that a ReLU network can approximate any continuous function with arbitrary precision by means of piecewise linear or constant approximations.  ...  A ReLU network is equivalent to a piecewise linear or constant function. Proof. Suppose that the index of the last hidden layer is k, and layer k has n neural units.  ... 
doi:10.1162/neco_a_01316 pmid:32946706 fatcat:qu63marbmfeknfafku2uro44ce
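
The letter's univariate idea can be mimicked in a few lines: a one-hidden-layer ReLU network whose units implement the line segments of a piecewise linear interpolant. The coefficient formula below is the standard one, offered as a sketch rather than the paper's exact subnetwork construction:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_interpolant(f, knots):
    """One-hidden-layer ReLU net interpolating f at the given knots
    (piecewise linear in between): fhat(x) = f(x0) + sum_i c_i relu(x - x_i)."""
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    coeffs = np.concatenate(([slopes[0]], np.diff(slopes)))
    def fhat(x):
        x = np.asarray(x, dtype=float)
        return y[0] + relu(x[..., None] - knots[:-1]) @ coeffs
    return fhat

f = np.sin
knots = np.linspace(0, np.pi, 9)
fhat = relu_interpolant(f, knots)
xs = np.linspace(0, np.pi, 101)
print(np.max(np.abs(fhat(xs) - f(xs))))  # small uniform error; exact at the knots
```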

Fourier neural networks as function approximators and differential equation solvers

Marieme Ngom, Oana Marin
2021 Statistical analysis and data mining  
We validate this FNN on naturally periodic smooth functions and on piecewise continuous periodic functions.  ...  We present a Fourier neural network (FNN) that can be mapped directly to the Fourier decomposition.  ...  We present a particular type of feedforward networks, Fourier neural networks (FNNs), which are shallow neural networks with a sinusoidal activation function.  ... 
doi:10.1002/sam.11531 fatcat:4eheerxspveepnb5zubjaj2st4
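
A hand-built example of the mapping between such an FNN and a Fourier decomposition: one hidden layer of sinusoidal units whose frequencies, phases, and output weights are set directly from a truncated Fourier series (here of a square wave). In the paper the weights are trained; this sketch only illustrates the correspondence:

```python
import numpy as np

def fourier_nn(a, b, a0):
    # Shallow network with sin activation, wired to equal the truncated series
    # a0/2 + sum_n (a_n cos(n x) + b_n sin(n x)), using cos(t) = sin(t + pi/2).
    n = len(a)
    freqs = np.concatenate([np.arange(1, n + 1), np.arange(1, n + 1)])
    phases = np.concatenate([np.full(n, np.pi / 2), np.zeros(n)])
    out_w = np.concatenate([a, b])
    def net(x):
        x = np.asarray(x, dtype=float)
        hidden = np.sin(np.outer(x, freqs) + phases)   # sinusoidal hidden layer
        return a0 / 2 + hidden @ out_w
    return net

# Target: square wave sign(sin x), whose Fourier coefficients are b_n = 4/(pi n) for odd n.
n_terms = 25
b = np.array([4 / (np.pi * k) if k % 2 == 1 else 0.0 for k in range(1, n_terms + 1)])
net = fourier_nn(np.zeros(n_terms), b, a0=0.0)
x = np.linspace(-2.5, 2.5, 6)
print(np.round(net(x), 2))        # close to the square wave away from its jumps
print(np.sign(np.sin(x)))
```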

Computing Lyapunov functions using deep neural networks

Lars Grüne, Mathematical Institute, University of Bayreuth, 95440 Bayreuth, Germany
2019 Journal of Computational Dynamics  
We propose a deep neural network architecture and associated loss functions for a training algorithm for computing approximate Lyapunov functions of systems of nonlinear ordinary differential equations  ...  Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially  ...  Here the fact that the neural network provides an explicit analytic, albeit complex, expression for W(·; θ*) may be helpful.  ... 
doi:10.3934/jcd.2021006 fatcat:w7dxyp4zgfgn3aoxfzyzksp4de
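
A rough sketch of what such a training loss can look like: penalize a positive orbital derivative ∇W·f and enforce W(0) = 0 and positivity on sampled states. The dynamics f and the quadratic candidate W below are placeholders for the paper's systems and deep network W(·; θ), and the loss terms are generic rather than the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Example dynamics: a damped nonlinear oscillator with a stable origin.
    x1, x2 = x
    return np.array([x2, -x1 - x2 - x1**3])

def W(x, P=np.array([[1.5, 0.3], [0.3, 1.0]])):
    # Candidate Lyapunov function; a trained network would replace this quadratic.
    return x @ P @ x

def orbital_derivative(x, eps=1e-5):
    # dW/dt along trajectories = ∇W(x)·f(x), via central finite differences.
    g = np.array([(W(x + eps * e) - W(x - eps * e)) / (2 * eps)
                  for e in np.eye(2)])
    return g @ f(x)

def lyapunov_loss(n=1000, margin=0.1):
    xs = rng.uniform(-2, 2, size=(n, 2))
    decrease = np.mean([max(0.0, orbital_derivative(x) + margin * x @ x) for x in xs])
    positivity = np.mean([max(0.0, -W(x) + 1e-3 * x @ x) for x in xs])
    return decrease + positivity + W(np.zeros(2))**2

print(lyapunov_loss())
```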

Complex-valued neural networks with adaptive spline activation function for digital-radio-links nonlinear equalization

A. Uncini, L. Vecci, P. Campolucci, F. Piazza
1999 IEEE Transactions on Signal Processing  
In this paper, a new complex-valued neural network based on adaptive activation functions is proposed.  ...  By varying the control points of a pair of Catmull-Rom cubic splines, which are used as an adaptable activation function, this new kind of neural network can be implemented as a very simple structure that  ...  and can be simple real-valued sigmoids or more sophisticated adaptive functions.  ... 
doi:10.1109/78.740133 fatcat:zfvwctgfwnazxh52xgcb7ea7wy
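
For reference, a uniform Catmull-Rom cubic spline evaluated from adaptable control points and used as a real-valued activation. The paper works with a complex-valued pair of such splines inside a neural network, so this is only a simplified sketch of the adaptive-activation idea:

```python
import numpy as np

def catmull_rom_activation(x, q, x_min=-2.0, x_max=2.0):
    """Adaptive spline activation: a Catmull-Rom cubic through the adaptable
    control points q (uniformly spaced on [x_min, x_max]); the q values are
    what would be learned alongside the network weights."""
    x = np.clip(np.asarray(x, float), x_min, x_max - 1e-9)
    n = len(q)
    dx = (x_max - x_min) / (n - 1)
    i = np.floor((x - x_min) / dx).astype(int)        # segment index
    u = (x - x_min) / dx - i                          # local coordinate in [0, 1)
    qp = np.concatenate(([q[0]], q, [q[-1]]))         # pad endpoints
    q0, q1, q2, q3 = qp[i], qp[i + 1], qp[i + 2], qp[i + 3]
    return 0.5 * (2 * q1 + (-q0 + q2) * u
                  + (2 * q0 - 5 * q1 + 4 * q2 - q3) * u**2
                  + (-q0 + 3 * q1 - 3 * q2 + q3) * u**3)

# Control points initialized to a sigmoid shape; adapting them reshapes the activation.
xs = np.linspace(-2, 2, 11)
q = 1.0 / (1.0 + np.exp(-3 * xs))
print(np.round(catmull_rom_activation(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]), q), 3))
```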

Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units [article]

Bo Li, Shanshan Tang, Haijun Yu
2019 arXiv   pre-print
Deep neural networks with rectified linear units (ReLU) are getting more and more popular due to their universal representation power and successful applications.  ...  Math. 1:61-80, 1993], our constructions use fewer activation functions and are numerically more stable; they can serve as good initializations of deep RePU networks and be further trained to break the limit  ...  [3] on efficient training of deep neural networks (DNNs), which stack multiple layers of units with some nonlinear activation function.  ... 
arXiv:1903.05858v4 fatcat:pg7rlq5q3fbkflit6zsbhfsh6y
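
A rectified power unit is simply relu(x) raised to a power p; the small identity below (squared ReLUs reproducing a product via polarization) hints at why RePU networks can represent polynomials with few units. This is a standard observation, not the paper's construction:

```python
import numpy as np

def repu(x, p=2):
    # Rectified power unit: relu(x)**p; p = 1 recovers ReLU, p >= 2 is smooth at 0.
    return np.maximum(x, 0.0) ** p

# With p = 2, six RePU units reproduce the product xy exactly via the
# polarization identity xy = ((x+y)^2 - x^2 - y^2) / 2.
def product(x, y):
    return 0.5 * (repu(x + y) + repu(-x - y) - repu(x) - repu(-x) - repu(y) - repu(-y))

print(product(3.0, -2.0))  # -6.0
```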

Small nonlinearities in activation functions create bad local minima in neural networks [article]

Chulhee Yun, Suvrit Sra, Ali Jadbabaie
2019 arXiv   pre-print
We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum.  ...  We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other results on this topic.  ...  Remark: "ReLU-like" activation functions. Recall the piecewise linear nonnegative homogeneous activation function h_{s_+, s_-}.  ... 
arXiv:1802.03487v4 fatcat:fuxyiuxmejem7o3tjd7xbejfb4
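
The "ReLU-like" activation h_{s_+, s_-} mentioned in the remark is piecewise linear and positively homogeneous, with slopes s_+ and s_- on the two half-lines; a minimal definition and homogeneity check:

```python
import numpy as np

def h(x, s_plus=1.0, s_minus=0.1):
    # Piecewise linear, nonnegative homogeneous activation h_{s+, s-}:
    # slope s_plus for x >= 0, slope s_minus for x < 0
    # (ReLU: s_minus = 0; leaky ReLU: small positive s_minus).
    return np.where(x >= 0, s_plus * x, s_minus * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(h(x))                              # leaky-ReLU-like
print(h(x, s_minus=0.0))                 # plain ReLU
print(np.allclose(h(3 * x), 3 * h(x)))   # positive homogeneity: h(cx) = c h(x), c > 0
```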

Neural Network Approximation of Refinable Functions [article]

Ingrid Daubechies, Ronald DeVore, Nadav Dym, Shira Faigenbaum-Golovin, Shahar Z. Kovalsky, Kung-Ching Lin, Josiah Park, Guergana Petrova, Barak Sober
2021 arXiv   pre-print
of neural networks.  ...  In the desire to quantify the success of neural networks in deep learning and other applications, there is a great interest in understanding which functions are efficiently approximated by the outputs  ...  Introduction Neural Network Approximation (NNA) is concerned with how efficiently a function, or a class of functions, is approximated by the outputs of neural networks.  ... 
arXiv:2107.13191v1 fatcat:u4rvyghcefeclclyd2bkmqkdhq
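
For background, a refinable function satisfies φ(x) = Σ_k c_k φ(2x − k); the classical cascade algorithm below generates its samples from the mask c. The hat-function mask is used as a familiar example and is not necessarily one of the paper's cases:

```python
import numpy as np

def cascade(mask, n_iter=6):
    """Cascade algorithm: repeatedly upsample and convolve with the mask to obtain
    samples of a refinable function phi satisfying phi(x) = sum_k mask[k] phi(2x - k)."""
    v = np.array([1.0])
    for _ in range(n_iter):
        up = np.zeros(2 * len(v) - 1)
        up[::2] = v                       # upsample by 2 (insert zeros)
        v = np.convolve(up, mask)
    return v                              # samples on a grid of spacing 2**-n_iter

# Mask of the piecewise linear "hat" B-spline: phi(x) = 0.5 phi(2x) + phi(2x-1) + 0.5 phi(2x-2)
hat = cascade(np.array([0.5, 1.0, 0.5]), n_iter=3)
print(np.round(hat, 3))   # tent-shaped samples rising to 1 and back down
```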
Showing results 1 — 15 out of 4,292 results