26,309 Hits in 4.7 sec

Training Neural Networks with Implicit Variance [chapter]

Justin Bayer, Christian Osendorfer, Sebastian Urban, Patrick van der Smagt
2013 Lecture Notes in Computer Science  
We present a novel method to train predictive Gaussian distributions p(z|x) for regression problems with neural networks.  ...  While most approaches either ignore or explicitly model the variance as another response variable, it is trained implicitly in our case.  ...  To compare plain neural networks (NN), density networks (DN), networks trained with fast dropout (FD) and implicit variance networks (IVN), we constructed a setting which is far from tailored towards neural  ... 
doi:10.1007/978-3-642-42042-9_17 fatcat:bjggng4xgvgtvnsumdrtoto6iy
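
As context for the entry above: the abstract contrasts implicit variance training with the more common approach of modeling the variance explicitly as a second network output. A minimal sketch of that explicit-variance baseline (not the paper's implicit method; the architecture, sizes, and names are illustrative assumptions) could look as follows.

```python
# Minimal sketch of the explicit-variance baseline the abstract contrasts with:
# a regression network with mean and log-variance heads trained by Gaussian NLL.
import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, 1)
        self.log_var_head = nn.Linear(hidden, 1)  # variance modeled explicitly as another output

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, z):
    # Negative log-likelihood of z under N(mean, exp(log_var)), up to a constant
    return 0.5 * (log_var + (z - mean) ** 2 / log_var.exp()).mean()

# usage sketch
model = GaussianRegressor(in_dim=5)
x, z = torch.randn(32, 5), torch.randn(32, 1)
mean, log_var = model(x)
loss = gaussian_nll(mean, log_var, z)
loss.backward()
```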

Page 563 of Neural Computation Vol. 7, Issue 3 [page]

1995 Neural Computation  
Derivation of a class of training algorithms. IEEE Transact. Neural Networks 1, 1229-1232. Martinetz, T., and Schulten, K. 1991. A ‘neural gas’ network learns topologies. Proc. ICANN-91, 397-402.  ...  In general, the cost of an implicit coordinate depends on the ratio between its variance (over all the different bumps) and the accuracy with which it must be communicated.  ... 

Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning [article]

Sebastian Sanokowski
2020 arXiv   pre-print
neural networks.  ...  It turns out that the presence of recurrent intra-layer connections within a one-layer implicit recurrent network enhances the performance of neural networks considerably: A single-layer implicit recurrent  ...  Our work indicates that with the use of implicit recurrent neural networks, it is also possible to increase the computational power of neural networks.  ... 
arXiv:2010.10564v1 fatcat:grnryhkxajgtvnypomchbcinmm
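
The abstract above describes recurrent intra-layer connections applied to stationary inputs. A rough sketch of one way such a layer can be realized, by iterating the within-layer recurrent update to an approximate fixed point, is shown below; the update rule, sizes, and convergence criterion are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Hedged sketch: intra-layer recurrence driven to a fixed point for a stationary input x.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16
W = rng.normal(scale=0.3, size=(n_hidden, n_in))      # input weights
U = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # intra-layer recurrent weights
b = np.zeros(n_hidden)

def implicit_recurrent_layer(x, n_iters=50, tol=1e-6):
    """Iterate h <- tanh(W x + U h + b) until the state is approximately stationary."""
    h = np.zeros(n_hidden)
    for _ in range(n_iters):
        h_new = np.tanh(W @ x + U @ h + b)
        if np.linalg.norm(h_new - h) < tol:
            break
        h = h_new
    return h

h_star = implicit_recurrent_layer(rng.normal(size=n_in))
```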

Page 45 of Neural Computation Vol. 4, Issue 1 [page]

1992 Neural Computation  
,” is implicit in much of the work about neural networks.  ...  The fundamental limitations resulting from the bias-variance dilemma apply to all nonparametric inference methods, including neural networks.  ... 

Implicit Saliency in Deep Neural Networks [article]

Yutong Sun, Mohit Prabhushankar, Ghassan AlRegib
2020 arXiv   pre-print
We term this implicit saliency in deep neural networks. We calculate this implicit saliency using the expectancy-mismatch hypothesis in an unsupervised fashion.  ...  We introduce the background for the pre-trained deep neural networks in Section 2. In Section 3, we detail the proposed method to extract implicit saliency.  ...  To set expectancy, we use neural networks.  ... 
arXiv:2008.01874v1 fatcat:5d4ke26ofrculbsi54tj3257hi

Drop-Activation: Implicit Parameter Reduction and Harmonic Regularization [article]

Senwei Liang, Yuehaw Khoo, Haizhao Yang
2020 arXiv   pre-print
During testing, we use a deterministic network with a new activation function to encode the average effect of dropping activations randomly.  ...  The experimental results on CIFAR-10, CIFAR-100, SVHN, EMNIST, and ImageNet show that Drop-Activation generally improves the performance of popular neural network architectures for the image classification  ...  Supercomputing Center (NSCC) Singapore [1] and High-Performance Computing (HPC) of the National University of Singapore for providing computational resources, and the support of NVIDIA Corporation with  ... 
arXiv:1811.05850v5 fatcat:smj77ifkcjbhpigzznmas52v3m
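
The snippet above describes randomly dropping activation functions during training and using a deterministic averaged activation at test time. A rough sketch of that idea follows; the keep probability and the blending rule are assumptions drawn from the abstract, not a verified reproduction of the paper's layer.

```python
# Rough sketch of the drop-activation idea: during training each nonlinearity is
# randomly replaced by the identity; at test time a deterministic blend encodes
# the average effect of the random dropping.
import torch
import torch.nn as nn

class DropActivation(nn.Module):
    def __init__(self, p_keep=0.95):
        super().__init__()
        self.p_keep = p_keep  # probability of keeping the nonlinearity (assumption)

    def forward(self, x):
        if self.training:
            # Bernoulli mask: 1 -> apply ReLU, 0 -> pass the pre-activation through unchanged
            mask = torch.bernoulli(torch.full_like(x, self.p_keep))
            return mask * torch.relu(x) + (1 - mask) * x
        # Deterministic test-time activation: expectation over the random mask
        return self.p_keep * torch.relu(x) + (1 - self.p_keep) * x
```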

Template NeRF: Towards Modeling Dense Shape Correspondences from Category-Specific Object Images [article]

Jianfei Guo, Zhiyuan Yang, Xi Lin, Qingfu Zhang
2021 arXiv   pre-print
We present neural radiance fields (NeRF) with templates, dubbed Template-NeRF, for modeling appearance and geometry and generating dense shape correspondences simultaneously among objects of the same category  ...  We demonstrate the results and applications on both synthetic and real-world data with competitive results compared with other methods based on 3D information.  ...  Related Work Implicit Neural Representations and Rendering.  ... 
arXiv:2111.04237v1 fatcat:sulvlx4y6vfntnyi5pur5nhvxq

Variational Implicit Processes [article]

Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato
2019 arXiv   pre-print
IPs are therefore highly flexible implicit priors over functions, with examples including data simulators, Bayesian neural networks and non-linear transformations of stochastic processes.  ...  Experiments show that VIPs return better uncertainty estimates and lower errors over existing inference methods for challenging models such as Bayesian neural networks, and Gaussian processes.  ...  We also train VIP with neural sampler prior (VIP-NS), as defined in section 2. All neural networks use a 10-10-1 structure with two hidden layers of size 10.  ... 
arXiv:1806.02390v2 fatcat:t3yn25i3frff3plo7mjbgfnd4u
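
The abstract above treats Bayesian neural networks as implicit priors over functions: function draws are easy to sample even though their distribution has no closed form. A small sketch of drawing prior function samples from a 10-10-1 network (the layer sizes quoted in the snippet) is given below; the standard-normal weight prior and tanh activations are assumptions for illustration.

```python
# Hedged sketch of an implicit process prior: sample the weights of a 10-10-1
# network and evaluate the resulting random function at the inputs.
import numpy as np

def sample_function(x, rng, hidden=10):
    """Draw one random function f: R -> R from the implicit prior and evaluate it at x."""
    W1 = rng.normal(size=(hidden, 1)); b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(hidden, hidden)); b2 = rng.normal(size=hidden)
    W3 = rng.normal(size=(1, hidden)); b3 = rng.normal(size=1)
    h = np.tanh(x[:, None] @ W1.T + b1)
    h = np.tanh(h @ W2.T + b2)
    return (h @ W3.T + b3).ravel()

rng = np.random.default_rng(0)
xs = np.linspace(-3, 3, 100)
draws = np.stack([sample_function(xs, rng) for _ in range(5)])  # 5 prior function samples
```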

Differentiable Implicit Layers [article]

Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters
2020 arXiv   pre-print
We demonstrate our scheme on different applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.  ...  These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them perfectly suitable as a learnable layer in a neural network.  ...  Related Work Recurrent backpropagation (RBP) [Pineda, 1988, Almeida, 1990] is the first training method for a specific type of implicit neural network, i.e. infinitely deep recurrent neural networks  ... 
arXiv:2010.07078v2 fatcat:haa66iewtjaudb2g7nhm23otba
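
The first application listed above is a neural ODE solved with the implicit Euler method, whose update z_{t+1} = z_t + h f(z_{t+1}) defines the next state only implicitly. The sketch below shows the forward solve by simple fixed-point iteration; the dynamics network, step size, and iteration count are illustrative assumptions, and the paper's scheme for differentiating through such layers is not reproduced here.

```python
# Sketch of one implicit Euler step for a neural ODE dz/dt = f(z): solve
# z_next = z + h * f(z_next) by fixed-point iteration (forward pass only).
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))  # dynamics network (illustrative)

def implicit_euler_step(z, h=0.1, n_iters=20):
    z_next = z.clone()
    for _ in range(n_iters):
        z_next = z + h * f(z_next)  # fixed-point update for the implicit equation
    return z_next

z0 = torch.randn(1, 4)
z1 = implicit_euler_step(z0)
```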

Volatility model calibration with neural networks a comparison between direct and indirect methods [article]

Dirk Roeder, Georgi Dimitroff
2020 arXiv   pre-print
In our paper we compare these results with an alternative direct approach where the mapping from market implied volatilities to model parameters is approximated by the neural network, without the need  ...  The paper should be understood as a technical comparison of neural network techniques and not as a methodically new Ansatz.  ...  For example, for the Rough Bergomi model with piece-wise forward variance we use three hidden layers with 68, 49, and 30 neurons, which amount to 11,274 calibration parameters of the neural network.  ... 
arXiv:2007.03494v1 fatcat:m6ocn3tvxvexrbtpr6nucskrm4
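
The "direct" approach quoted above maps a market implied-volatility surface straight to model parameters with a feed-forward network, with hidden layers of 68, 49, and 30 neurons quoted for the Rough Bergomi case. A sketch of such a network is below; the input and output dimensions are illustrative assumptions, so the resulting parameter count will not match the 11,274 quoted in the snippet.

```python
# Sketch of a direct calibration network: flattened implied-vol surface in,
# model parameters out, hidden layers 68-49-30 as quoted in the snippet.
import torch.nn as nn

n_vol_points = 8 * 11   # e.g. an 8x11 grid of maturities x strikes (assumption)
n_model_params = 3      # number of model parameters to calibrate (assumption)

calibration_net = nn.Sequential(
    nn.Linear(n_vol_points, 68), nn.ELU(),
    nn.Linear(68, 49), nn.ELU(),
    nn.Linear(49, 30), nn.ELU(),
    nn.Linear(30, n_model_params),
)
```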

A Scalable Gradient-Free Method for Bayesian Experimental Design with Implicit Models [article]

Jiaxin Zhang, Sirui Bi, Guannan Zhang
2021 arXiv   pre-print
Without the necessity of pathwise gradients, our approach allows the design process to be achieved through a unified procedure with an approximate gradient for implicit models.  ...  However, the approach requires a sampling path to compute the pathwise gradient of the MI lower bound with respect to the design variables, and such a pathwise gradient is usually inaccessible for implicit  ...  Figure 1 shows the MI lower bound as a function of neural network training epochs.  ... 
arXiv:2103.08026v1 fatcat:jksb3ia75vfbdn3xakpvvqy2ma

A variational autoencoder approach for choice set generation and implicit perception of alternatives in choice modeling [article]

Rui Yao, Shlomo Bekhor
2021 arXiv   pre-print
This paper derives the generalized extreme value (GEV) model with implicit availability/perception (IAP) of alternatives and proposes a variational autoencoder (VAE) approach for choice set generation  ...  and implicit perception of alternatives.  ...  are the weights of the neural network, the variance σ is a fixed hyperparameter, and the conditional probability is bounded from below at 0.  ... 
arXiv:2106.13319v1 fatcat:hg2kcd7isngwbc73w76qhvn46u

Kernel Implicit Variational Inference [article]

Jiaxin Shi, Shengyang Sun, Jun Zhu
2018 arXiv   pre-print
As far as we know, this is the first time implicit variational inference has been successfully applied to Bayesian neural networks, showing promising results on both regression and classification tasks.  ...  However, existing methods for implicit posteriors still face challenges of noisy estimation and computational infeasibility when applied to models with high-dimensional latent variables.  ...  Thus, we can use an MMNN as g in the variational posterior for normal-size neural networks. In tasks with very small networks, we still use an MLP as g.  ... 
arXiv:1705.10119v3 fatcat:fjajtfw5urabpjbj45y3swjlnq
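
The snippet above refers to a network g inside an implicit variational posterior: posterior weight samples are produced by pushing Gaussian noise through g, so sampling is easy even though the density of q is intractable. The sketch below shows only that sampler; the kernel-based estimator the paper introduces to handle the intractable density is not reproduced, and all sizes are illustrative assumptions.

```python
# Sketch of an implicit variational posterior over Bayesian-neural-network
# weights: w = g(eps) with eps ~ N(0, I). Sampling is trivial; the density of q
# is not available in closed form.
import torch
import torch.nn as nn

n_bnn_weights = 50 * 1 + 50 + 1 * 50 + 1   # weights of a tiny 1-50-1 BNN (assumption)
noise_dim = 32

g = nn.Sequential(                          # generator defining the implicit posterior q(w)
    nn.Linear(noise_dim, 100), nn.ReLU(),
    nn.Linear(100, n_bnn_weights),
)

eps = torch.randn(16, noise_dim)            # 16 posterior weight samples
w_samples = g(eps)
```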

A Modern Take on the Bias-Variance Tradeoff in Neural Networks [article]

Brady Neal, Sarthak Mittal, Aristide Baratin, Vinayak Tantia, Matthew Scicluna, Simon Lacoste-Julien, Ioannis Mitliagkas
2019 arXiv   pre-print
This suggests that there might not be a bias-variance tradeoff in neural networks with respect to network width, unlike what was originally claimed by, e.g., Geman et al. (1992).  ...  However, recent empirical results with over-parameterized neural networks are marked by a striking absence of the classic U-shaped test error curve: test error keeps decreasing in wider networks.  ...  Visualization with regression on a sinusoid: We trained neural networks of different widths on a noisy sinusoidal distribution with 80 independent training examples.  ... 
arXiv:1810.08591v4 fatcat:w4pj2e3szrhvjpzbfmjel25lzy
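
The visualization described above fits networks of different widths to 80 noisy sinusoid samples and inspects how prediction variance behaves as width grows. A sketch of that kind of experiment follows; the noise level, optimizer, number of seeds, and training schedule are illustrative assumptions rather than the paper's settings.

```python
# Sketch: train networks of increasing width on 80 noisy sinusoid points with
# several seeds, then measure prediction variance across seeds on a test grid.
import torch
import torch.nn as nn

def make_data(n=80, noise=0.3, seed=0):
    g = torch.Generator().manual_seed(seed)
    x = torch.rand(n, 1, generator=g) * 6 - 3
    y = torch.sin(x) + noise * torch.randn(n, 1, generator=g)
    return x, y

def train_net(width, x, y, seed, epochs=2000):
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return net

x, y = make_data()
x_test = torch.linspace(-3, 3, 200).unsqueeze(1)
for width in [5, 50, 500]:
    preds = torch.stack([train_net(width, x, y, seed)(x_test).detach()
                         for seed in range(5)])
    variance = preds.var(dim=0).mean().item()   # variance across seeds, averaged over inputs
    print(f"width={width:4d}  prediction variance={variance:.4f}")
```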

Rethinking the Role of Gradient-Based Attribution Methods for Model Interpretability [article]

Suraj Srinivas, Francois Fleuret
2021 arXiv   pre-print
Current methods for the interpretability of discriminative deep neural networks commonly rely on the model's input-gradients, i.e., the gradients of the output logits w.r.t. the inputs.  ...  Our experiments show that improving the alignment of the implicit density model with the data distribution enhances gradient structure and explanatory power while reducing this alignment has the opposite  ...  Score-Matching We propose to use the score-matching objective as a regularizer in neural network training to increase the alignment of the implicit density model to the ground truth, as shown in equation  ... 
arXiv:2006.09128v2 fatcat:ghtkibd3hbclfgoimsnqr2f75i
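
The snippet above proposes score matching as a regularizer that aligns the classifier's implicit density model with the data distribution. A sketch of adding such a penalty, computed on the score of the logsumexp of the logits, to a standard cross-entropy loss is shown below; the Hutchinson trace estimator, the smooth Softplus activation, and the regularization weight are assumptions, not a verified reproduction of the paper's objective.

```python
# Sketch of a score-matching regularizer on the classifier's implicit density
# log p(x) ~ logsumexp_y f_y(x), added to the usual cross-entropy loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def score_matching_penalty(model, x):
    x = x.clone().requires_grad_(True)
    log_p = torch.logsumexp(model(x), dim=1).sum()                  # unnormalized log-density
    score = torch.autograd.grad(log_p, x, create_graph=True)[0]     # s(x) = d log p / dx
    v = torch.randn_like(x)                                         # Hutchinson probe vector
    hvp = torch.autograd.grad((score * v).sum(), x, create_graph=True)[0]
    trace_term = (hvp * v).sum(dim=tuple(range(1, x.dim()))).mean() # ~ tr(d s / dx)
    norm_term = 0.5 * score.pow(2).sum(dim=tuple(range(1, x.dim()))).mean()
    return trace_term + norm_term

# Softplus keeps the penalty twice differentiable (assumption for this sketch).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.Softplus(), nn.Linear(128, 10))
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y) + 0.1 * score_matching_penalty(model, x)
loss.backward()
```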
Showing results 1 — 15 out of 26,309 results