
Input Invex Neural Network [article]

Suman Sapkota, Binod Bhattarai
2021 arXiv   pre-print
Our method outperforms a linear NN and the Input Convex Neural Network (ICNN) by a large margin. We publish our code and implementation details on GitHub.  ...  We call it Input Invex Neural Networks (II-NN).  ...  The Input Convex Neural Network (ICNN) [1] provides an elegant solution for constraining a neural network to output a convex function.  ... 
arXiv:2106.08748v1 fatcat:q3lf7bohhfftbmcr56gop3ib2m

Input Convex Neural Networks [article]

Brandon Amos, Lei Xu, J. Zico Kolter
2017 arXiv   pre-print
This paper presents the input convex neural network architecture.  ...  These are scalar-valued (potentially deep) neural networks with constraints on the network parameters such that the output of the network is a convex function of (some of) the inputs.  ...  Figure 1. A fully input convex neural network (FICNN). Figure 2. A partially input convex neural network (PICNN).  ... 
arXiv:1609.07152v3 fatcat:fke2eonbyjbcbiqfvudec4ov3q
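The FICNN construction described in this entry can be sketched numerically: a hidden layer feeds the output through nonnegative weights, and because the activation is convex and nondecreasing, the result is convex in the input. The sketch below is plain NumPy, not the authors' released code; all names and the nonnegativity-via-`np.abs` trick are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ficnn(x, W0, b0, Wz, Wx, b1):
    """Scalar-valued fully input convex network with one hidden layer.

    z1 = relu(W0 x + b0) is convex in x (affine map followed by the convex,
    nondecreasing ReLU); the output stays convex because the hidden-to-output
    weights are constrained nonnegative and a direct affine term in x is added.
    """
    z1 = np.maximum(W0 @ x + b0, 0.0)
    return float(np.abs(Wz) @ z1 + Wx @ x + b1)  # np.abs enforces Wz >= 0 (sketch)

# random parameters for a 3-dimensional input and 8 hidden units
W0 = rng.normal(size=(8, 3)); b0 = rng.normal(size=8)
Wz = rng.normal(size=8);      Wx = rng.normal(size=3); b1 = 0.1

# numerical convexity check: f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)
f = lambda x: ficnn(x, W0, b0, Wz, Wx, b1)
x, y, t = rng.normal(size=3), rng.normal(size=3), 0.3
assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
```

Training such a network only requires projecting (or reparametrizing) the hidden-to-output weights onto the nonnegative orthant after each update.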

Easing non-convex optimization with neural networks

David Lopez-Paz, Levent Sagun
2018 International Conference on Learning Representations  
Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent.  ...  In this note, we use a deep neural network with D parameters to parametrize the input space of a generic d-dimensional non-convex optimization problem.  ...  Our method is inspired by the fact that deep neural networks, although non-convex, are surprisingly well-behaved when optimized by gradient descent.  ... 
dblp:conf/iclr/Lopez-PazS18 fatcat:vz2scphatndxbkxyogv4erzyz4
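The reparametrization idea in the snippet — optimizing a d-dimensional non-convex objective through a map with D >> d parameters — can be sketched as follows. This is a minimal illustration under assumed choices (a one-layer tanh map and finite-difference gradients), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # a simple non-convex objective: quadratic bowl plus oscillations
    return float(np.sum(x**2 - 2.0 * np.cos(3.0 * x)))

# reparametrize the d-dimensional input through an overparametrized map
d, D = 2, 40
A = rng.normal(size=(d, D)) / np.sqrt(D)
g = lambda theta: f(np.tanh(A @ theta))   # optimize over theta, not x

def grad_fd(fn, theta, eps=1e-5):
    # finite-difference gradient, to keep the sketch dependency-free
    base = fn(theta)
    return np.array([(fn(theta + eps * e) - base) / eps
                     for e in np.eye(theta.size)])

theta = rng.normal(size=D)
g0 = g(theta)
best = g0
for _ in range(300):
    theta -= 0.02 * grad_fd(g, theta)
    best = min(best, g(theta))
```

Gradient descent now runs in the D-dimensional parameter space, where (per the paper's observation) the landscape tends to be better behaved than the original d-dimensional one.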

CDiNN -Convex Difference Neural Networks [article]

Parameswaran Sankaranarayanan, Raghunathan Rengaswamy
2021 arXiv   pre-print
A new neural network architecture called Input Convex Neural Networks (ICNNs) learns the output as a convex function of the inputs, thereby allowing the use of efficient convex optimization methods.  ...  Using ICNNs to determine the input that minimizes the output has two major problems: learning a non-convex function as a convex mapping could result in significant function approximation error, and  ...  Input Convex Neural Network - ICNN The Input Convex Neural Network (ICNN) [1] is designed to learn a function mapping from a class of convex functions generated using a neural network architecture.  ... 
arXiv:2103.17231v2 fatcat:ulehap4jajfjxnrz7fasopd7gi
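The convex-difference view in this entry writes a non-convex function as f = g - h with g and h both convex (a "DC" function); such objectives are classically minimized with the convex-concave procedure, which linearizes h at the current point and minimizes the resulting convex surrogate. A 1-D sketch, assuming maxima-of-affine convex parts and grid minimization — not CDiNN's actual architecture:

```python
import numpy as np

# two convex pieces built as pointwise maxima of affine functions;
# their difference f = g - h is non-convex in general
a_g, b_g = np.array([1.0, -1.0, 0.5]), np.array([0.0, 0.0, 1.0])
a_h, b_h = np.array([0.8, -0.8]), np.array([0.5, 0.5])

g = lambda x: float(np.max(a_g * x + b_g))   # convex (max of affine)
h = lambda x: float(np.max(a_h * x + b_h))   # convex (max of affine)
f = lambda x: g(x) - h(x)                    # difference of convex functions

def ccp_step(x, grid=np.linspace(-5.0, 5.0, 2001)):
    """One convex-concave step: linearize h at x via a subgradient, then
    minimize the convex surrogate g(y) - slope*y (1-D grid sketch)."""
    slope = a_h[int(np.argmax(a_h * x + b_h))]   # subgradient of h at x
    vals = [g(y) - slope * y for y in grid]
    return float(grid[int(np.argmin(vals))])

x = 4.0
for _ in range(5):
    x = ccp_step(x)   # f is nonincreasing along these iterates
```

The procedure converges to a local solution of the DC problem; each step only requires solving a convex subproblem, which is the efficiency argument the snippet alludes to.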

Input Convex Neural Networks for Optimal Voltage Regulation [article]

Yize Chen, Yuanyuan Shi, Baosen Zhang
2020 arXiv   pre-print
In the training stage, the proposed input convex neural network learns the mapping between the power injections and the voltages.  ...  In the voltage regulation stage, the trained network can find the optimal reactive power injections by design. We also provide a practical distributed algorithm using the trained neural network.  ...  INPUT CONVEX NEURAL NETWORK DESIGN In this section, starting from the standard neural network architecture, we illustrate how to construct a neural network whose outputs are convex with respect to the inputs  ... 
arXiv:2002.08684v2 fatcat:odg4hl2iarcgdba2jfkqwl2rsu

An Outer-approximation Guided Optimization Approach for Constrained Neural Network Inverse Problems [article]

Myun-Seok Cheon
2020 arXiv   pre-print
Constrained neural network inverse problems refer to an optimization problem of finding the best set of input values of a given trained neural network in order to produce a predefined desired output in  ...  presence of constraints on input values.  ...  The gradient with respect to the input parameters can be computed by back-propagation through the neural network [Linden and Kindermann, 1989].  ... 
arXiv:2002.10404v1 fatcat:dl4lgqaderhrree6rzav45gdp4

Deep Online Convex Optimization by Putting Forecaster to Sleep [article]

David Balduzzi
2016 arXiv   pre-print
Methods from convex optimization such as accelerated gradient descent are widely used as building blocks for deep learning algorithms.  ...  This paper develops the first rigorous link between online convex optimization and error backpropagation on convolutional networks.  ...  The input to the network is encoded in the weights of the source units.  ... 
arXiv:1509.01851v2 fatcat:c7vzvwe66reu5pvzacvfaortnu

Maxout Filter Networks Referencing Morphological Filters

Toru Ishikawa, Kiyoaki Itoi, Kei-ichiro Kobayashi, Makoto Nakashizuka
2018 Zenodo  
To update the parameter set, we employ a stochastic gradient descent method that is widely employed for training neural networks.  ...  For convolutional neural networks (CNNs) [3] [4] , the unit input of the first layer is obtained from a sliding window over the image.  ... 
doi:10.5281/zenodo.1160236 fatcat:qbtt4g35hfciver6kv4n3i4a5y
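The morphological filters this entry references can be illustrated with grayscale dilation, a max-plus sliding-window operation that is the morphological counterpart of a maxout unit over an image window. A minimal 1-D sketch (illustrative, not the paper's filter network):

```python
import numpy as np

def dilation_1d(signal, se):
    """Grayscale dilation: at each position, take the max of the window
    shifted by the structuring element se (max-plus 'convolution')."""
    n, m = len(signal), len(se)
    pad = m // 2
    padded = np.pad(np.asarray(signal, float), pad, constant_values=-np.inf)
    return np.array([np.max(padded[i:i + m] + se) for i in range(n)])

# a single impulse is spread over the (flat) structuring element's support
out = dilation_1d([0.0, 1.0, 0.0], np.zeros(3))  # -> [1., 1., 1.]
```

Replacing the fixed structuring element `se` with learnable parameters, updated by stochastic gradient descent as the snippet describes, turns this into a trainable maxout-style filter.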

Deep Online Convex Optimization with Gated Games [article]

David Balduzzi
2016 arXiv   pre-print
This paper provides the first convergence rates for gradient descent on rectifier convnets.  ...  However, the reasons for their empirical success are unclear, since modern convolutional networks (convnets), incorporating rectifier units and max-pooling, are neither smooth nor convex.  ...  Neural networks Consider a neural network with L − 1 hidden layers. Let h 0 := x denote the input to the network.  ... 
arXiv:1604.01952v1 fatcat:oh62dtvztrbd7mendv7io5is5i

Parameter Convex Neural Networks [article]

Jingcheng Zhou, Wei Wei, Xing Li, Bowen Pang, Zhiming Zheng
2022 arXiv   pre-print
The lack of convexity of DNNs has been seen as a major disadvantage for many optimization methods, such as stochastic gradient descent, which greatly reduces the generalization of neural network applications  ...  We realize that convexity makes sense in the neural network and propose the exponential multilayer neural network (EMLP), a class of parameter convex neural network (PCNN) which is convex with regard  ...  As far as we know, this is the first time that a neural network convex in its parameters but not its inputs has been proposed.  ... 
arXiv:2206.05562v1 fatcat:y54qboqv6rg2rerjdgwdk6s36e

Why Learning of Large-Scale Neural Networks Behaves Like Convex Optimization [article]

Hui Jiang
2019 arXiv   pre-print
In this paper, we present some theoretical work to explain why simple gradient descent methods are so successful in solving non-convex optimization problems in learning large-scale neural networks (NN)  ...  If this full-rank condition holds, the learning of NNs behaves in the same way as normal convex optimization.  ...  prove that the gradient descent methods can solve this non-convex problem efficiently in the literal space in a similar way as solving normal convex optimization problems.  ... 
arXiv:1903.02140v1 fatcat:g4pekrkfebfwtenru5pspo4x5a

Gated Linear Networks [article]

Joel Veness, Tor Lattimore, David Budden, Avishkar Bhoopchand, Christopher Mattern, Agnieszka Grabska-Barwinska, Eren Sezener, Jianan Wang, Peter Toth, Simon Schmitt, Marcus Hutter
2020 arXiv   pre-print
Individual neurons can model nonlinear functions via the use of data-dependent gating in conjunction with online convex optimization.  ...  We show that this architecture gives rise to universal learning capabilities in the limit, with effective model capacity increasing as a function of network size in a manner comparable with deep ReLU networks  ...  This has led to the development of gradient-based methods for post-hoc network analysis [SVZ13] .  ... 
arXiv:1910.01526v2 fatcat:fbgnq4rfwzbspis4jmv6qeudgq
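The per-neuron mechanism the snippet describes — data-dependent gating combined with online convex optimization — can be sketched as a single gated logistic neuron: a fixed random hyperplane selects which weight vector is active, and each weight vector then faces a plain convex logistic-regression problem trained online. This is an illustrative toy only; actual GLNs also use geometric mixing and specific gating schemes not modeled here.

```python
import numpy as np

rng = np.random.default_rng(2)

v = rng.normal(size=2)          # gating hyperplane (fixed, never trained)
W = np.zeros((2, 2))            # one weight vector per gating region

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x):
    k = int(v @ x > 0)          # data-dependent gate picks the active weights
    return sigmoid(W[k] @ x), k

# online convex optimization: logistic-loss SGD on the active weights only;
# each region's subproblem is convex even though the neuron is nonlinear overall
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(float)
for x, t in zip(X, y):
    p, k = neuron(x)
    W[k] -= 0.1 * (p - t) * x   # gradient of the log loss w.r.t. W[k]
```

Because only the gated weight vector is updated per example, each region enjoys the regret guarantees of online convex optimization, which is the source of the capacity and convergence claims in the snippet.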

Piecewise convexity of artificial neural networks [article]

Blaine Rister, Daniel L Rubin
2016 arXiv   pre-print
Firstly, that the network is piecewise convex as a function of the input data.  ...  The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood.  ...  By repeatedly applying lemma 5.6, the whole network is multi-convex on a finite number of sets covering the input and parameter space.  ... 
arXiv:1607.04917v2 fatcat:zyzebi44fnhuxlipyjnrtrv734

Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks [article]

David Balduzzi, Brian McWilliams, Tony Butler-Yeoman
2018 arXiv   pre-print
Modern convolutional networks, incorporating rectifiers and max-pooling, are neither smooth nor convex; standard guarantees therefore do not apply.  ...  Nevertheless, methods from convex optimization such as gradient descent and Adam are widely used as building blocks for deep learning algorithms.  ...  To evaluate the Taylor loss, we record the input to the neuron/layer, its weights, the output of the network and the gradient tensor G l .  ... 
arXiv:1611.02345v3 fatcat:5ofkeia5mzbdjou5cypsj2z35i

Performance Analysis of Mixture Approaches and Tracking Performance of Adaptive Filter using Adaptive Neural Network

A. Vijayalakshmi, D. Spoorthi
2013 International Journal of Computer Applications  
This paper mainly concentrates on different mixture structures, which include affine and convex combinations of several parallel running adaptive filters.  ...  We describe an adaptive neural network model that updates the weights of the filter using nonlinear methods. General Terms Adaptive filter, mean square error, correlation.  ...  The second set of experiments includes training of a neural network with the input vectors calculated above as inputs to the neural network and the unknown system and the outputs obtained by the constituent  ... 
doi:10.5120/14121-2227 fatcat:kf76f24svncl7flor6eqiexfja
Showing results 1–15 out of 58,915.