26,623 Hits in 6.6 sec

Invariance, encodings, and generalization: learning identity effects with neural networks [article]

S. Brugiapaglia, M. Liu, P. Tupper
2022 arXiv   pre-print
Often in language and other areas of cognition, whether two components of an object are identical or not determines if it is well formed. We call such constraints identity effects.  ...  Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.  ...  Acknowledgments: SB acknowledges the support of NSERC through grant RGPIN-2020-06766, the Faculty of Arts and Science of Concordia University, and the CRM Applied Math Lab.  ...
arXiv:2101.08386v5 fatcat:4x6eoh22t5fv7gy2cqju35t3hm
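
A minimal sketch of the kind of experiment this abstract describes, assuming a two-letter identity task ("well formed" iff both letters match) with two letters held out of training; the scikit-learn MLP setup and all names are illustrative, not the authors' code:

```python
# Identity-effects sketch: can a network trained that "xx" is well formed
# and "xy" is not extend the rule to letters it has never seen?
import numpy as np
from sklearn.neural_network import MLPClassifier

n = 26
train_letters = range(24)                 # 'a'..'x' seen in training

def one_hot(i):
    v = np.zeros(n); v[i] = 1.0
    return v

def encode_pair(i, j):
    # One-hot (localist) encoding; substituting a distributed encoding
    # here is exactly the comparison the abstract refers to.
    return np.concatenate([one_hot(i), one_hot(j)])

X = np.array([encode_pair(i, j) for i in train_letters for j in train_letters])
y = np.array([int(i == j) for i in train_letters for j in train_letters])

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Novel inputs built from the held-out letters 'y' (24) and 'z' (25).
X_new = np.array([encode_pair(24, 24), encode_pair(25, 25),
                  encode_pair(24, 25), encode_pair(25, 24)])
print(clf.predict_proba(X_new)[:, 1])     # P(identical) for yy, zz, yz, zy
```

With purely one-hot inputs, nothing ties the held-out letters to the trained ones, so the network has no basis to rate "yy" above "yz"; richer encodings change that, which is the effect the experiments explore.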

Encoding Sensory and Motor Patterns as Time-Invariant Trajectories in Recurrent Neural Networks [article]

Vishwa Goudar, Dean Buonomano
2017 arXiv   pre-print
The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds - we show that this temporal invariance emerges because the recurrent dynamics generate neural  ...  We thank Nicholas Hardy, Omri Barak, Alexandre Rivkind and Jonathan Kadmon for helpful discussions, and Dharshan Kumaran for comments on an earlier version of this manuscript.  ...
arXiv:1701.00838v2 fatcat:6sqaethccbav3d24o5he5htgo4

Encoding Sensory and Motor Patterns as Time-Invariant Trajectories in Recurrent Neural Networks [article]

Vishwa Goudar, Dean V. Buonomano
2017 bioRxiv   pre-print
The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds — we show that this temporal invariance emerges because the recurrent dynamics generate neural  ...  We thank Nicholas Hardy, Alexandre Rivkind and Jonathan Kadmon for helpful discussions, and Dharshan Kumaran for comments on an earlier version of this manuscript.  ...
doi:10.1101/176198 fatcat:3vgzjhn4ergyhcja5ysub5k7n4

Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks

Vishwa Goudar, Dean V Buonomano
2018 eLife  
The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second  ...  We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits.  ...  We thank Nicholas Hardy, Alexandre Rivkind and Jonathan Kadmon for helpful discussions, and Dharshan Kumaran for comments on an earlier version of this manuscript.  ... 
doi:10.7554/elife.31134 pmid:29537963 pmcid:PMC5851701 fatcat:4gqngldgcjc7rnjxjpdm57nsqu
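
The temporal-invariance claim can be made concrete with a toy rate RNN driven by the same stimulus at two playback speeds; this sketch shows only the simulation and comparison setup (an untrained random network will not show strong invariance, and the paper's point is that trained recurrent dynamics produce it):

```python
# Drive a random rate RNN with a stimulus at 1x and 0.5x speed, then
# compare the time-normalized state trajectories. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 200
W = rng.normal(0, 1.2 / np.sqrt(N), (N, N))      # recurrent weights
w_in = rng.normal(0, 1.0, N)                      # input weights
stim = np.sin(np.linspace(0, 4 * np.pi, T))       # time-varying input

def run(u, dt=0.05, tau=1.0):
    x, traj = np.zeros(N), []
    for ut in u:
        x = x + (dt / tau) * (-x + np.tanh(W @ x + w_in * ut))
        traj.append(x.copy())
    return np.array(traj)

traj_fast = run(stim)                  # normal speed
traj_slow = run(np.repeat(stim, 2))    # same pattern, half speed

# Time-normalize the slow trajectory, then per-step cosine similarity.
traj_slow = traj_slow[::2]
num = np.sum(traj_fast * traj_slow, axis=1)
den = (np.linalg.norm(traj_fast, axis=1) *
       np.linalg.norm(traj_slow, axis=1) + 1e-9)
print("mean cosine similarity:", (num / den).mean())
```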

Multi-Layer Neural Network Auto Encoders Learning Method, using Regularization for Invariant Image Recognition

Skribtsov Pavel Vyacheslavovich, Kazantsev Pavel Aleksandrovich
2016 Indian Journal of Science and Technology  
... can be seen as a non-local extension of the encoder Jacobian-based family of deep neural network regularizers, embedding invariance to non-local input pattern transformations into the deep neural network  ...  Background/Objectives: This paper proposes a new type of regularization for deep learning neural networks that is capable of explicit separation of the lower dimensional hidden layer input pattern representation  ...  Acknowledgments: Part of the research was carried out with financial support from the Ministry of Education and Science under the grant agreement No. 14.576.21.0051 as of September 08, 2014 (agreement unique  ...
doi:10.17485/ijst/2016/v9i27/97704 fatcat:uzgyd4tnfjanhch72vwyolnvqa
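
The "encoder Jacobian-based family" this abstract extends is the contractive-autoencoder penalty: reconstruction loss plus the squared Frobenius norm of the encoder Jacobian. A minimal sketch assuming a single sigmoid encoder layer, where the Jacobian has a closed form; this is the base technique, not the paper's non-local regularizer:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 64), nn.Sigmoid())
dec = nn.Linear(64, 784)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def contractive_loss(x, lam=1e-3):
    h = enc(x)                                   # (batch, 64)
    mse = ((dec(h) - x) ** 2).mean()
    # For h = sigmoid(Wx + b): J = diag(h*(1-h)) @ W, so
    # ||J||_F^2 = sum_k (h_k (1 - h_k))^2 * ||W_k||^2.
    W = enc[0].weight                            # (64, 784)
    jac_fro2 = ((h * (1 - h)) ** 2 * (W ** 2).sum(dim=1)).sum(dim=1)
    return mse + lam * jac_fro2.mean()

x = torch.rand(32, 784)                          # stand-in image batch
loss = contractive_loss(x)
opt.zero_grad(); loss.backward(); opt.step()
```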

Stimulus-invariant auditory cortex threat encoding during fear conditioning with simple and complex sounds

Matthias Staib, Dominik R. Bach
2018 NeuroImage  
Here, we address how ACX encodes threat predictions during human fear conditioning using functional magnetic resonance imaging (fMRI) with multivariate pattern analysis.  ...  Overall, our findings suggest that ACX represents threat predictions, and that Heschl's gyrus contains a threat representation that is invariant across physical stimulus categories.  ...  Acknowledgements: We thank Giuseppe Castegnetti, Saurabh Khemka, Christoph Korn, and Athina Tzovara for discussions and help with data acquisition, and Jakob Heinzle for commenting on a first draft of  ...
doi:10.1016/j.neuroimage.2017.11.009 pmid:29122722 pmcid:PMC5770332 fatcat:447wifkx2jhfdjvbc4gttn2o6e
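
The invariance result rests on cross-category decoding: a pattern classifier trained to separate CS+ from CS− trials of one stimulus category is tested on the other. A synthetic-data sketch of that logic; the simulation, variable names, and linear SVM are assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
threat_axis = rng.normal(size=n_voxels)          # shared CS+/CS- direction

def simulate():
    y = rng.integers(0, 2, n_trials)             # 0 = CS-, 1 = CS+
    X = rng.normal(size=(n_trials, n_voxels))
    X += rng.normal(size=n_voxels)               # category-specific pattern
    X += np.outer(y - 0.5, threat_axis)          # threat signal, shared axis
    return X - X.mean(axis=0), y                 # mean-center per category

X_simple, y_simple = simulate()                  # "simple sounds"
X_complex, y_complex = simulate()                # "complex sounds"

clf = LinearSVC().fit(X_simple, y_simple)        # train on one category...
acc = (clf.predict(X_complex) == y_complex).mean()
print("cross-category decoding accuracy:", acc)  # ...test on the other
```

Above-chance transfer is what licenses calling a threat representation "invariant across physical stimulus categories".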

Generalizing Outside the Training Set: When Can Neural Networks Learn Identity Effects? [article]

Simone Brugiapaglia, Matthew Liu, Paul Tupper
2020 arXiv   pre-print
We then show that a broad class of algorithms, including deep neural networks with standard architecture and training with backpropagation, satisfy our criteria, depending on the encoding of inputs.  ...  Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.  ...  S.B. and M.L. also acknowledge the Faculty of Arts and Science of Concordia University for financial support.  ...
arXiv:2005.04330v1 fatcat:gm3o52wcozavpfaclf6j6i7eku
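
The encoding dependence is visible even in the simplest member of that class. With one-hot inputs and zero initialization, gradient descent never updates weights attached to letters absent from training, so a novel well-formed string and a novel ill-formed one receive identical scores. A tiny illustrative check:

```python
import numpy as np

n = 26
def enc(i, j):                            # one-hot encoding of a letter pair
    v = np.zeros(2 * n); v[i] = 1; v[n + j] = 1
    return v

X = np.array([enc(i, j) for i in range(24) for j in range(24)])
y = np.array([float(i == j) for i in range(24) for j in range(24)])

w = np.zeros(2 * n)                       # zero-initialized logistic regression
for _ in range(2000):                     # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Columns for the unseen letters 'y' and 'z' are all-zero in X, so their
# weights never move; "yy" and "yz" are therefore scored identically.
print(w[24:26], w[n + 24:n + 26])         # still zero
print(enc(24, 24) @ w, enc(24, 25) @ w)   # equal scores
```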

Emotion-Controllable Generalized Talking Face Generation [article]

Sanjana Sinha, Sandika Biswas, Ravindra Yadav, Brojeshwar Bhowmick
2022 arXiv   pre-print
We propose a graph convolutional neural network that uses speech content features, along with an independent emotion input, to generate emotion- and speech-induced motion in a facial geometry-aware landmark space.  ...  We propose a two-branch texture generation network, with motion and texture branches designed to consider the motion and texture content independently.  ...  The Audio Encoder E_A is a recurrent neural network which creates an emotion-invariant speech embedding feature f_a ∈ R^d (d = 128) from the speech audio input S.  ...
arXiv:2205.01155v1 fatcat:ca6xu446pjdvtme3kf2fd2czaq
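
A minimal sketch of an audio encoder shaped like the E_A the snippet describes: a recurrent network mapping a speech-feature sequence to a single 128-d embedding f_a. The GRU choice and input feature size are assumptions, and the emotion invariance itself would come from training losses omitted here:

```python
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    def __init__(self, n_feats=28, d=128):
        super().__init__()
        self.rnn = nn.GRU(n_feats, d, batch_first=True)
        self.proj = nn.Linear(d, d)

    def forward(self, s):                # s: (batch, frames, n_feats)
        _, h = self.rnn(s)               # final hidden state: (1, batch, d)
        return self.proj(h.squeeze(0))   # f_a: (batch, d)

f_a = AudioEncoder()(torch.randn(4, 100, 28))
print(f_a.shape)                         # torch.Size([4, 128])
```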

Which Learning Algorithms Can Generalize Identity-Based Rules to Novel Inputs? [article]

Paul Tupper, Bobak Shahriari
2016 arXiv   pre-print
We demonstrate these results computationally with a multilayer feedforward neural network.  ...  We propose a novel framework for the analysis of learning algorithms that allows us to say when such algorithms can and cannot generalize certain patterns from training data to test data.  ...  PT was supported by an NSERC Discovery Grant, a Research Accelerator Supplement, and held a Tier II Canada Research Chair. BS was supported by an NSERC Discovery Grant.  ... 
arXiv:1605.04002v1 fatcat:2uoecuvelrcc3hpoqz7nu535oi
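
The framework's core criterion can be paraphrased in one line (an informal restatement, not the paper's exact theorem): if a transformation τ maps the training set D to itself and the learner f is equivariant with respect to τ, then inputs related by τ, such as a novel well-formed string and its ill-formed counterpart, must receive the same output:

```latex
\[
  f_{\tau D}(\tau x) = f_{D}(x)\ \text{for all } x,
  \qquad \tau D = D
  \quad\Longrightarrow\quad
  f_{D}(\tau x) = f_{\tau D}(\tau x) = f_{D}(x).
\]
```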

Generative Moment Matching Networks [article]

Yujia Li, Kevin Swersky, Richard Zemel
2015 arXiv   pre-print
We further boost the performance of this approach by combining our generative network with an auto-encoder network, using MMD to learn to generate codes that can then be decoded to produce samples.  ...  We consider the problem of learning deep generative models from data.  ...  ., 2014) , and Charlie Tang for providing relevant references. We thank CIFAR, NSERC, and Google for research funding.  ... 
arXiv:1502.02761v1 fatcat:7wbdwyjfqjeqlhfjwr7wwijmdm
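
The training signal here is the maximum mean discrepancy (MMD), a kernel two-sample statistic that is differentiable and can therefore be backpropagated into a generator. A minimal sketch with a single Gaussian kernel; bandwidth and batch shapes are illustrative:

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x and y.
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

x = torch.randn(64, 10)                              # "data" batch
y = torch.randn(64, 10, requires_grad=True) + 0.5    # "generated" batch
loss = mmd2(x, y)
loss.backward()                  # gradients flow to the generator side
```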

Human Laughter Generation using Hybrid Generative Models

2021 KSII Transactions on Internet and Information Systems  
... ability of a long short-term memory RNN (LSTM) and the CNN's ability to learn invariant features.  ...  To improve the synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modelling  ...  Lately, deep neural networks based on unsupervised learning, such as the autoencoder (AE) and the variational autoencoder (VAE), have shown their effectiveness in data resolution.  ...
doi:10.3837/tiis.2021.05.001 fatcat:7735kpjyyjbydllji4wsai5nry
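
A minimal sketch of the LSTM-VAE hybrid named here: an LSTM encoder yields mu and log-variance, the reparameterized latent seeds an LSTM decoder, and the loss is reconstruction plus KL. Feature sizes are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LSTMVAE(nn.Module):
    def __init__(self, n_feats=80, hidden=256, z_dim=32):
        super().__init__()
        self.enc = nn.LSTM(n_feats, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.z_to_h = nn.Linear(z_dim, hidden)
        self.dec = nn.LSTM(n_feats, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_feats)

    def forward(self, x):                       # x: (batch, frames, n_feats)
        _, (h, _) = self.enc(x)
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        y, _ = self.dec(x, (h0, torch.zeros_like(h0)))  # teacher forcing
        return self.out(y), mu, logvar

model = LSTMVAE()
x = torch.randn(4, 50, 80)                      # stand-in spectrogram batch
recon, mu, logvar = model(x)
kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
loss = ((recon - x) ** 2).mean() + kl
```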

Facial Keypoint Sequence Generation from Audio [article]

Prateek Manocha, Prithwijit Guha
2020 arXiv   pre-print
... identity using a Pose Invariant (PIV) Encoder.  ...  Audio2Keypoint generalizes across unseen people with a different facial structure, allowing us to generate the sequence with the voice from any source or even synthetic voices.  ...  PIV Encoder: The PIV encoder is a novel model architecture trained along with the generator and discriminator that helps the generator to learn pose-invariant information while acting as a discriminator  ...
arXiv:2011.01114v1 fatcat:ru3q4xmqgrgoph3tcqlgah26bq

Generalization by design: Shortcuts to Generalization in Deep Learning [article]

Petr Taborsky, Lars Kai Hansen
2021 arXiv   pre-print
Backed up by theory, we further demonstrate that "generalization by design" is practically possible and that good generalization may be encoded into the structure of the network.  ...  We take a geometrical viewpoint and present a unifying view on supervised deep learning with the Bregman divergence loss function; this covers frequent classification and prediction tasks.  ...  As the number of nodes changes as one moves through the layers of the neural network, we effectively change the dimensionality used by the neural network to represent the data manifold.  ...
arXiv:2107.02253v1 fatcat:lg46x2dadrfbjfvcx7pa3ydmoa
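
For reference, the Bregman divergence this view is built on is defined, for a strictly convex differentiable generator φ, as:

```latex
\[
  D_{\phi}(x, y) \;=\; \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle .
\]
```

Choosing φ(u) = ||u||² recovers the squared Euclidean loss (regression), and the negative entropy φ(u) = Σᵢ uᵢ log uᵢ recovers the KL divergence (classification), which is how one loss family covers the frequent prediction tasks the abstract mentions.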

Factorized Deep Generative Models for Trajectory Generation with Spatiotemporal-Validity Constraints [article]

Liming Zhang, Liang Zhao, Dieter Pfoser
2020 arXiv   pre-print
New deep neural network architectures have been developed to implement the inference and generation models with newly-generalized latent variable priors.  ...  Inspired by the success of deep generative neural networks for images and texts, a fast-developing research topic is deep generative models for trajectory data which can learn expressively explanatory  ...  This is inspired by the success of deep generative neural networks in images and texts.  ... 
arXiv:2009.09333v1 fatcat:6kbwrtggubfyjhbq6m4bailyyy

Learning TSP Requires Rethinking Generalization [article]

Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent
2021 arXiv   pre-print
... optimization pipeline, from network layers and learning paradigms to evaluation protocols.  ...  End-to-end training of neural network solvers for combinatorial optimization problems such as the Travelling Salesman Problem is intractable and inefficient beyond a few hundred nodes.  ...  Veličković and the anonymous reviewers for helpful comments and discussions.  ...
arXiv:2006.07054v3 fatcat:fsxtrv2tzveabftlxzjuzpdura
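
The evaluation concern is concrete: solvers fitted on small instances must be scored on much larger ones. A minimal sketch of that protocol, with a nearest-neighbour heuristic standing in for a learned solver; nothing here is the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def tour_length(pts, order):
    p = pts[order]
    return np.linalg.norm(p - np.roll(p, -1, axis=0), axis=1).sum()

def nearest_neighbour_tour(pts):
    unvisited, tour = set(range(1, len(pts))), [0]
    while unvisited:
        last = pts[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(pts[i] - last))
        tour.append(nxt); unvisited.remove(nxt)
    return np.array(tour)

for n in (50, 200, 1000):                # small vs. out-of-size instances
    pts = rng.random((n, 2))
    print(n, round(tour_length(pts, nearest_neighbour_tour(pts)), 3))
```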
Showing results 1 — 15 out of 26,623 results