Neural Nets via Forward State Transformation and Backward Loss Transformation
[article]
2018
arXiv
pre-print
This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved, both forward and backward, in order to develop a semantical/logical perspective that ...
The common two-pass neural network training algorithms make this viewpoint particularly fitting. In the forward direction, neural networks act as state transformers. ...
The mask function M : n → P(k) captures connections and mutability; it works as ...
arXiv:1803.09356v1
fatcat:nu77qfcssfcsfejw6gp2keliii
Neural Nets via Forward State Transformation and Backward Loss Transformation
2019
Electronic Notes in Theoretical Computer Science
This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved, both forward and backward, in order to develop a semantic/logical perspective that is in line ...
In the forward direction, neural networks act as state transformers, using Kleisli composition for the multiset monad -for the linear parts of network layers. ...
Proposition 2.5 Forward state transformation (propagation) yields a functor
Backward loss transformations. In the theory of neural networks one uses 'loss' functions to evaluate how much the outcome of ...
doi:10.1016/j.entcs.2019.09.009
fatcat:upnn42qgdfamriubf5yty66ee4
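A minimal sketch of the two directions the snippets above mention, in plain NumPy rather than the paper's categorical (multiset-monad / Kleisli) formulation: layers compose forward as state transformers, while a loss on outputs is transformed backward into a loss on inputs by precomposition.

```python
# Sketch only: layers as forward state transformers and a loss pulled
# backward through them; not the paper's construction.
import numpy as np

def layer(W, b, activation=np.tanh):
    """A perceptron layer as a forward state transformer: state -> state."""
    return lambda x: activation(W @ x + b)

def forward(layers, x):
    """Forward state transformation is ordinary function composition."""
    for f in layers:
        x = f(x)
    return x

def pull_back(loss_on_outputs, f):
    """Backward loss transformation: a loss on outputs becomes a loss on
    inputs by precomposing with the layer."""
    return lambda x: loss_on_outputs(f(x))

rng = np.random.default_rng(0)
net = [layer(rng.standard_normal((4, 3)), np.zeros(4)),
       layer(rng.standard_normal((2, 4)), np.zeros(2))]
out_loss = lambda y: 0.5 * np.sum(y ** 2)
in_loss = pull_back(pull_back(out_loss, net[1]), net[0])   # loss on the input state
print(forward(net, np.ones(3)), in_loss(np.ones(3)))
```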
View Extrapolation of Human Body from a Single Image
[article]
2018
arXiv
pre-print
Our new pipeline is a composition of a shape estimation network and an image generation network, and at the interface a perspective transformation is applied to generate a forward flow for pixel value ...
Our design is able to factor out the space of data variation and makes learning at each step much easier. ...
Figure 3: The forward flow and backward flow. First let us formally define the forward flow and backward flow. ...
arXiv:1804.04213v1
fatcat:ejpgnr6bsfd25fadeh6lx3a5xa
View Extrapolation of Human Body from a Single Image
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Our new pipeline is a composition of a shape estimation network and an image generation network, and at the interface a perspective transformation is applied to generate a forward flow for pixel value ...
Our design is able to factor out the space of data variation and makes learning at each step much easier. ...
Figure 3: The forward flow and backward flow. First let us formally define the forward flow and backward flow. ...
doi:10.1109/cvpr.2018.00468
dblp:conf/cvpr/0004SWCY18
fatcat:lmzfymnbqfe6xjvaxumdvtx7oy
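The snippets describe applying a perspective transformation to generate a forward flow for pixel values; the sketch below is a generic flow-based warp (a hypothetical `warp` helper built on PyTorch's `grid_sample`), not the paper's pipeline.

```python
# Generic flow-based warping sketch: pull pixel values along a per-pixel flow
# field with bilinear sampling.
import torch
import torch.nn.functional as F

def warp(image, flow):
    """image: (N, C, H, W); flow: (N, 2, H, W) pixel offsets in (x, y) order."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()          # identity sampling grid
    coords = base.unsqueeze(0) + flow                    # absolute sample positions
    # Normalize to [-1, 1] as grid_sample expects.
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)         # (N, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)

warped = warp(torch.rand(1, 3, 8, 8), torch.zeros(1, 2, 8, 8))  # zero flow: identity
```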
LT-Net: Label Transfer by Learning Reversible Voxel-wise Correspondence for One-shot Medical Image Segmentation
[article]
2020
arXiv
pre-print
To overcome this difficulty, we resort to the forward-backward consistency, which is widely used in correspondence problems, and additionally learn the backward correspondences from the warped atlases ...
We demonstrate the superiority of our method over both deep learning-based one-shot segmentation methods and a classical multi-atlas segmentation method via thorough experiments. ...
The forward and backward correspondences should be cycle-consistent. We conduct an ablation study with respect to the transformation consistency loss L_trans and show the results in Table 2. ...
arXiv:2003.07072v3
fatcat:ffvcy2mq7ngivgncfvzvgyofgm
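As a rough illustration of the forward-backward consistency idea in the snippet, a consistency penalty on paired displacement fields could look like the sketch below; the function name and the first-order composition are assumptions, not the paper's exact L_trans.

```python
# Sketch of a forward-backward (cycle) consistency penalty on dense
# correspondences: composing the forward and backward displacements should
# bring every voxel back to its start.
import torch

def cycle_consistency_loss(fwd, bwd):
    """fwd, bwd: (N, 3, D, H, W) displacement fields (atlas->target, target->atlas)."""
    # First-order approximation of the composed displacement; it vanishes when
    # the two fields are consistent.
    return (fwd + bwd).pow(2).mean()

loss = cycle_consistency_loss(0.01 * torch.randn(1, 3, 4, 4, 4),
                              0.01 * torch.randn(1, 3, 4, 4, 4))
```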
Uncovering Closed-form Governing Equations of Nonlinear Dynamics from Videos
[article]
2021
arXiv
pre-print
closed-form governing equations of learned physical states and, meanwhile, serves as a constraint to the autoencoder. ...
creates a mapping between the extracted spatial/pixel coordinates and the latent physical states of the dynamics, and (3) a numerical integrator-based sparse regression module that uncovers the parsimonious ...
Then the forward/backward video frames can be reconstructed via the decoder as Î_{j+q} = ψ(T(x_p(j+q))), which leads to the forward and backward frame reconstruction loss from the temporal integration ...
arXiv:2106.04776v1
fatcat:wdwnarduazeexkwj3durc63dk4
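A hedged sketch of the forward/backward frame-reconstruction loss described in the snippet, with stand-in modules for the decoder ψ and the coordinate-to-frame transform T (all names and shapes here are assumptions, not the paper's code).

```python
# Forward (j+q) and backward (j-q) reconstructions compared to ground-truth
# frames; decoder and coord_to_frame are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def reconstruction_loss(decoder, coord_to_frame, states, frames, j, q):
    pred_fwd = decoder(coord_to_frame(states[j + q]))
    pred_bwd = decoder(coord_to_frame(states[j - q]))
    return F.mse_loss(pred_fwd, frames[j + q]) + F.mse_loss(pred_bwd, frames[j - q])

states = torch.randn(10, 4)          # latent physical states over time
frames = torch.randn(10, 64)         # flattened toy frames
coord_to_frame = nn.Linear(4, 16)    # stand-in for T
decoder = nn.Linear(16, 64)          # stand-in for psi
loss = reconstruction_loss(decoder, coord_to_frame, states, frames, j=5, q=2)
```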
Use of symmetric kernels for convolutional neural networks
[article]
2018
arXiv
pre-print
We show that usage of such kernels acts as regularizer, and improves generalization of the convolutional neural networks at the cost of more complicated training process. ...
We also study other types of symmetric kernels which lead to vertical flip invariance, and approximate rotational invariance. ...
Equations for forward and backward passes then become:
Level 0, Forward: y_i += a·x_{i−1} + b·x_i + c·x_{i+1}
Level 1, Forward: y_i += a·(x_{i−1} + x_{i+1}) + b·x_i
Level 0, Backward: δx_{i−1} += ...
arXiv:1805.09421v1
fatcat:obrpaco2tnf3fgvtkjxnfs6l4y
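The "Level 1" forward equation above corresponds to a horizontally symmetric kernel whose two outer taps share one parameter; a minimal PyTorch sketch of such weight sharing for a 1-D convolution (an illustration, not the paper's exact implementation):

```python
# Symmetric kernel [a, b, a]: the outer taps reuse one parameter, so the
# forward pass computes y_i = a*(x_{i-1} + x_{i+1}) + b*x_i and the backward
# pass accumulates both outer-tap gradients into the single parameter `a`.
import torch
import torch.nn.functional as F

a = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)
kernel = torch.stack([a, b, a], dim=-1).view(1, 1, 3)   # weight sharing across taps

x = torch.randn(1, 1, 16)
y = F.conv1d(x, kernel, padding=1)
y.sum().backward()   # a.grad receives contributions from both outer taps
```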
SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes
[article]
2021
arXiv
pre-print
We derive analytical gradients via implicit differentiation, enabling end-to-end training from 3D meshes with bone transformations. ...
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy. ...
Training Losses. Our model is trained via minimizing the binary cross entropy loss L_BCE(o(x′, p), o_gt(x′)) between the predicted occupancy of the deformed points o(x′, p) and the corresponding ground-truth ...
arXiv:2104.03953v3
fatcat:kbef7zihfngfxggls4ljgp7ufe
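A minimal sketch of the occupancy training loss quoted in the snippet, with a stand-in MLP and toy ground truth in place of SNARF's network and supervision:

```python
# Binary cross entropy between predicted occupancy at query points and
# ground-truth occupancy; the network and labels are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

occupancy_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

points = torch.randn(1024, 3)                  # deformed query points
gt_occ = (points.norm(dim=-1) < 1.0).float()   # toy labels: inside the unit ball
pred_occ = torch.sigmoid(occupancy_net(points)).squeeze(-1)
loss = F.binary_cross_entropy(pred_occ, gt_occ)
```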
Forecasting Sequential Data using Consistent Koopman Autoencoders
[article]
2020
arXiv
pre-print
In this work, we propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics. ...
Recurrent neural networks are widely used on time series data, yet such models often ignore the underlying physical structures in such sequences. ...
Brunton, Lionel Mathelin and Alejandro Queiruga for valuable ...
arXiv:2003.02236v2
fatcat:3nz7rko5ofakrbkqcqglfgudde
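One simple reading of "leveraging the forward and backward dynamics" is a pair of latent linear operators tied by a consistency penalty that pushes their composition toward the identity; the sketch below is an assumption-laden simplification, not the paper's Consistent Koopman Autoencoder.

```python
# Forward operator C advances the latent state, backward operator D steps it
# back; the penalty encourages D*C ~ I and C*D ~ I.
import torch
import torch.nn as nn

k = 8
C = nn.Parameter(0.1 * torch.randn(k, k))   # forward dynamics operator
D = nn.Parameter(0.1 * torch.randn(k, k))   # backward dynamics operator

eye = torch.eye(k)
consistency = ((D @ C - eye) ** 2).mean() + ((C @ D - eye) ** 2).mean()
```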
MaD TwinNet: Masker-Denoiser Architecture with Twin Networks for Monaural Sound Source Separation
[article]
2018
arXiv
pre-print
We build upon the recently proposed Masker-Denoiser (MaD) architecture and we enhance it with the Twin Networks, a technique to regularize a recurrent generative network using a backward running copy of ...
Current state of the art (SOTA) results in monaural singing voice separation are obtained with deep learning based methods. ...
This loss function pushes together the hidden states of the forward net and the backward net for co-temporal timesteps. ...
arXiv:1802.00300v1
fatcat:z2nxrcmpzrfbnbqqm6ai32mh24
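A sketch of the twin-network regularizer the snippet describes, with stand-in GRUs: the backward copy runs on the reversed sequence, and an L2 term pulls co-temporal hidden states together (the detach on the backward states is one common design choice, not necessarily MaD TwinNet's).

```python
# Twin-network style regularizer: forward RNN states are pushed toward the
# time-aligned states of a backward-running copy.
import torch
import torch.nn as nn
import torch.nn.functional as F

fwd_rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
bwd_rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)

x = torch.randn(4, 50, 32)                    # (batch, time, features)
h_fwd, _ = fwd_rnn(x)
h_bwd, _ = bwd_rnn(torch.flip(x, dims=[1]))   # backward-running copy
h_bwd = torch.flip(h_bwd, dims=[1])           # re-align to forward time

twin_loss = F.mse_loss(h_fwd, h_bwd.detach()) # co-temporal hidden states match
```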
ParaCNN: Visual Paragraph Generation via Adversarial Twin Contextual CNNs
[article]
2020
arXiv
pre-print
Previous research often generates the paragraph via a hierarchical Recurrent Neural Network (RNN)-like model, which has complex memorising, forgetting and coupling mechanisms. ...
We conduct extensive experiments on the Stanford Visual Paragraph dataset and achieve state-of-the-art performance. ...
In [28], they use an L2 loss to force the hidden states of the forward and backward networks to be close if they have the same ground truths. ...
arXiv:2004.10258v1
fatcat:4phnrvnfdbh6pippaezaunqomq
Orthogonal Graph Neural Networks
[article]
2021
arXiv
pre-print
Through a number of experimental observations, we argue that the main factor degrading the performance is the unstable forward normalization and backward gradient resulting from the improper design of the ...
These models rely on message passing and feature transformation functions to encode the structural and feature information from neighbors. ...
Forward and backward signaling analysis. ...
arXiv:2109.11338v2
fatcat:snwrwrbjfjdazboxqeaalnlmdq
Binary Neural Networks: A Survey
2020
Pattern Recognition
the quantization error, improving the network loss function, and reducing the gradient error. ...
We also investigate other practical aspects of binary neural networks such as the hardware-friendly design and the training tricks. ...
And based on XNOR-Net, Bulat et al. fused the activation and weight scaling factors into a single one that is learned discriminatively via backward propagation and proposed XNOR-Net++ [76]. ...
doi:10.1016/j.patcog.2020.107281
fatcat:p7ohjigozza5viejq6x7cyf6zi
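A rough sketch of the fused-scaling-factor idea mentioned above: binarize activations and weights, then multiply the binary convolution output by a single learnable scaling factor updated by ordinary backpropagation. The per-channel shape and the bare `sign` (which would need a straight-through estimator in practice, as in the sketch further below) are simplifying assumptions, not XNOR-Net++'s exact layout.

```python
# Binary convolution with one learned scaling factor per output channel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, k, k))
        self.scale = nn.Parameter(torch.ones(1, out_ch, 1, 1))  # learned via backprop

    def forward(self, x):
        xb = torch.sign(x)                 # binarized activations
        wb = torch.sign(self.weight)       # binarized weights
        return self.scale * F.conv2d(xb, wb, padding=1)

y = BinaryConv2d(3, 8, 3)(torch.randn(1, 3, 16, 16))
```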
Defending Strategies Against Adversarial Attacks in Retrieval Systems
2020
Azerbaijan Journal of High Performance Computing
The goal of this paper is to review different strategies of attacks and defenses, describe state-of-the-art methods from both sides, and show how important the development of HPC is in protecting systems ...
The system that gathers text and visual data from the internet must classify the data and store it as a set of metadata.
There are some powerful gradient-based attacks known today: Elastic-Net attacks (EAD) based on the L1 and L2 distortion, where the L1-oriented adversarial example includes the state-of-the-art ...
doi:10.32010/26166127.2020.3.1.46.53
fatcat:ny7gev2hwngezbophue5rdkbwq
Forward and Backward Information Retention for Accurate Binary Neural Networks
[article]
2020
arXiv
pre-print
Our empirical study indicates that the quantization brings information loss in both forward and backward propagation, which is the bottleneck of training accurate binary neural networks. ...
To address these issues, we propose an Information Retention Network (IR-Net) to retain the information contained in the forward activations and backward gradients. ...
Information loss caused by the forward sign function and the backward approximation for gradient greatly harms the accuracy of binary neural networks. ...
arXiv:1909.10788v4
fatcat:ze6m43jcwzcilagnmzaefyts3m
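The forward sign function and approximate backward gradient that the snippets refer to can be illustrated with the standard straight-through estimator; this is the generic STE, not IR-Net's specific estimator.

```python
# Forward: sign(x). Backward: clipped-identity gradient, since the exact
# gradient of sign is zero almost everywhere.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()   # pass gradients only where |x| <= 1

w = torch.randn(10, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()   # w.grad holds the approximate gradient
```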