16,654 Hits in 7.3 sec

Learning Robust Representations by Projecting Superficial Statistics Out [article]

Haohan Wang, Zexue He, Zachary C. Lipton, Eric P. Xing
2019 arXiv   pre-print
The first method is built on the reverse gradient method, which pushes our model to learn representations from which the GLCM representation is not predictable.  ...  The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to that of the GLCM representation.  ...  then projected out.  ...
arXiv:1903.06256v1 fatcat:pvziyevmuzcm3ltd4yzqx5yxbm
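The projection idea in this abstract can be illustrated with a minimal numpy sketch (my own, not the authors' code): remove from a batch of representations h the component lying in the span of a "superficial" feature matrix G (e.g. GLCM texture statistics), leaving only the part orthogonal to it.

```python
import numpy as np

def project_out(h, G):
    """Remove from feature vectors h the component lying in the column
    space of G (e.g. GLCM texture statistics), leaving the part
    orthogonal to those superficial features.

    h: (n, d) batch of representations
    G: (d, k) basis of the superficial-statistics subspace
    """
    # Orthonormalize the superficial subspace via QR for stability.
    Q, _ = np.linalg.qr(G)          # (d, k), orthonormal columns
    return h - h @ Q @ Q.T          # subtract the projection onto span(G)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
G = rng.normal(size=(8, 2))
h_clean = project_out(h, G)
# The cleaned representation carries no component along G's columns.
print(np.abs(h_clean @ G).max())   # ≈ 0
```

Downstream layers trained on `h_clean` are then independent of the projected-out statistics by construction, which is the "independence" the snippet refers to.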

Complementary learning systems within the hippocampus: a neural network modelling approach to reconciling episodic memory with statistical learning

Anna C. Schapiro, Nicholas B. Turk-Browne, Matthew M. Botvinick, Kenneth A. Norman
2016 Philosophical Transactions of the Royal Society B: Biological Sciences  
One contribution of 13 to a theme issue 'New frontiers for statistical learning in the cognitive sciences'.  ...  Thus, in paradigms involving rapid learning, the computational trade-off between learning episodes and regularities may be handled by separate anatomical pathways within the hippocampus itself.  ...  Three hidden layers-DG, CA3, and CA1-learn representations to support this mapping, with activity flow governed by the projections indicated by the arrows.  ... 
doi:10.1098/rstb.2016.0049 pmid:27872368 pmcid:PMC5124075 fatcat:kuwbvt77xvbm3b2236ujdud3sy

Robust and Generalizable Visual Representation Learning via Random Convolutions [article]

Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, Marc Niethammer
2021 arXiv   pre-print
More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.  ...  In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-art methods by a large margin.  ...  In: Advances in Neural Information Processing Systems. pp. 10506-10518 (2019) [43] Wang, H., He, Z., Xing, E.P.: Learning robust representations by projecting superficial statistics out.  ... 
arXiv:2007.13003v3 fatcat:nl44sngodjfsbalczy3yril6pm
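The augmentation this entry describes can be sketched in a few lines of numpy. This is a single-channel toy with "same" edge padding, not the paper's implementation (which operates on RGB tensors with kernels of varying scale): each call filters the image with a freshly sampled random kernel, randomizing local texture while preserving global shape.

```python
import numpy as np

def random_conv(img, k=3, rng=None):
    """Filter a 2-D image with a freshly sampled random k x k kernel,
    preserving shape via edge padding. Randomizing local texture while
    keeping global shapes is the intuition behind this style of
    augmentation; single-channel numpy sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    kernel = rng.normal(size=(k, k))
    kernel /= np.abs(kernel).sum()            # keep output magnitudes tame
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
aug = random_conv(img, rng=np.random.default_rng(0))
print(aug.shape)  # (5, 5)
```

Because a new kernel is drawn per call, a network trained on such outputs cannot rely on any fixed local texture statistics.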

Complementary learning systems within the hippocampus: A neural network modeling approach to reconciling episodic memory with statistical learning [article]

Anna C Schapiro, Nicholas B Turk-Browne, Matthew M Botvinick, Kenneth A Norman
2016 bioRxiv   pre-print
Thus, in paradigms involving rapid learning, the computational trade-off between learning episodes and regularities may be handled by separate anatomical pathways within the hippocampus itself.  ...  We asked whether it is possible for the hippocampus to handle both statistical learning and memorization of individual episodes.  ...  Acknowledgments The authors thank Samuel Ritter, who contributed to this project during a rotation in K.A.N.'s lab, and helpful conversations with Michael Arcaro, Roy Cox, and Marc Howard.  ... 
doi:10.1101/051870 fatcat:orypi67scveybid7ipt5sk2jqa

Face Recognition: Issues, Methods and Alternative Applications [chapter]

Waldemar Wójcik, Konrad Gromaszek, Muhtar Junisbekov
2016 Face Recognition - Semisupervised Classification, Subspace Projection and Evaluation Methods  
In this chapter, we have discussed face recognition processing, including major components such as face detection, tracking, alignment and feature extraction, and pointed out the technical challenges  ...  It is carried out by finding a local representation of the facial appearance at each of the anchor points. The representation scheme depends on the approach.  ...  A brief analysis of face detection techniques using effective statistical learning methods seems crucial to practical and robust solutions.  ...
doi:10.5772/62950 fatcat:ucj2xyk2ovflfd74nouozqlrmy

Variational Information Bottleneck for Effective Low-Resource Fine-Tuning [article]

Rabeeh Karimi Mahabadi, Yonatan Belinkov, James Henderson
2021 arXiv   pre-print
Moreover, we show that our VIB model finds sentence representations that are more robust to biases in natural language inference datasets, and thereby obtains better generalization to out-of-domain datasets  ...  Moreover, it improves generalization on 13 out of 15 out-of-domain natural language inference benchmarks. Our code is publicly available in https://github.com/rabeehk/vibert.  ...  Rabeeh Karimi was supported by the Swiss National Science Foundation under the project Learning Representations of Abstraction for Opinion Summarisation (LAOS), grant number "FNS-30216".  ... 
arXiv:2106.05469v1 fatcat:4xpobihx4bfftigm2uuemwho6i
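The compression term of a variational information bottleneck is compact enough to write down directly. The sketch below is mine (assuming a diagonal-Gaussian encoder), not the authors' released code: it computes the KL regularizer that a VIB objective adds, weighted by a coefficient beta, to the ordinary task loss.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), the compression term
    a VIB objective adds to the task loss (weighted by beta)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A standard-normal encoder output incurs zero compression cost...
print(gaussian_kl(np.zeros(4), np.zeros(4)))   # 0.0
# ...while confident, shifted codes pay a penalty.
print(gaussian_kl(np.full(4, 2.0), np.full(4, -1.0)) > 0)  # True
```

Penalizing this term pushes the sentence representation toward the prior, discarding input detail that the task loss does not demand — the mechanism behind the robustness-to-bias claim in the snippet.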

Dorsolateral Striatal Selection and Frontal Cortex Inhibition for a Selective Detection Task in Mice [article]

Behzad Zareian, Angelina Lam, Zhaoran Zhang, Edward Zagha
2022 bioRxiv   pre-print
Abstract: A learned sensory-motor behavior engages multiple brain regions, including the neocortex and the basal ganglia.  ...  Overall, these data suggest distinct functions of the frontal cortex and dorsolateral striatum in this task, despite their similar neuronal representations.  ...  All licking outside the post-target response window was punished by a time-out (resetting the inter-trial interval).  ...
doi:10.1101/2022.03.03.482906 fatcat:fmh2hnar4zfbdabkpwjre5wqqy

Natural Language Generation: Recently Learned Lessons, Directions for Semantic Representation-based Approaches, and the Case of Brazilian Portuguese Language

Marco Antonio Sobrevilla Cabezudo, Thiago Pardo
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop  
We also focus on the approaches for generation from semantic representations (emphasizing the Abstract Meaning Representation formalism) as well as their advantages and limitations, including possible  ...  However, these rules did not produce a statistically significant increase in performance when compared to learned rules.  ...  ., 2017) mentioned, these methods suffer from a loss of information (by not using graphs and being restricted to trees), due to their projective nature.  ...
doi:10.18653/v1/p19-2011 dblp:conf/acl/CabezudoP19 fatcat:7y46vg4omjhvnlosxpehyu3pzm

Learning Problem-agnostic Speech Representations from Multiple Self-supervised Tasks [article]

Santiago Pascual, Mirco Ravanelli, Joan Serrà, Antonio Bonafonte, Yoshua Bengio
2019 arXiv   pre-print
Learning good representations without supervision is still an open issue in machine learning, and is particularly challenging for speech signals, which are often characterized by long sequences with a  ...  The needed consensus across different tasks naturally imposes meaningful constraints on the encoder, contributing to the discovery of general representations and minimizing the risk of learning superficial ones  ...  Acknowledgements This research was partially supported by the project TEC2015-69266-P (MINECO/FEDER, UE), Calcul Québec, and Compute Canada.  ...
arXiv:1904.03416v1 fatcat:fvzshk6hlrhhjoztajdg6d4mny

Learning Problem-Agnostic Speech Representations from Multiple Self-Supervised Tasks

Santiago Pascual, Mirco Ravanelli, Joan Serrà, Antonio Bonafonte, Yoshua Bengio
2019 Interspeech 2019  
Learning good representations without supervision is still an open issue in machine learning, and is particularly challenging for speech signals, which are often characterized by long sequences with a  ...  The needed consensus across different tasks naturally imposes meaningful constraints on the encoder, contributing to the discovery of general representations and minimizing the risk of learning superficial ones  ...  This research was partially supported by the project TEC2015-69266-P (MINECO/FEDER, UE), Calcul Québec, and Compute Canada.  ...
doi:10.21437/interspeech.2019-2605 dblp:conf/interspeech/PascualRSBB19 fatcat:b633bsfmofevzcxkruc7275hr4

Deep Predictive Learning: A Comprehensive Model of Three Visual Streams [article]

Randall C. O'Reilly, Dean R. Wyatte, John Rohrlich
2017 arXiv   pre-print
We present a comprehensive framework spanning biological, computational, and cognitive levels, with a clear theoretical continuity between levels, providing a coherent answer directly supported by extensive  ...  The pulvinar nucleus of the thalamus serves as a projection screen upon which predictions are generated, through deep-layer 6 corticothalamic inputs from multiple brain areas and levels of abstraction.  ...  It is possible that the magnified effects of the V1p to TEO projection in these earlier models may reflect its importance for more robust, fault-tolerant learning.  ... 
arXiv:1709.04654v1 fatcat:yfwcuiyjj5ggpcro7iblvt3zpi

Learning Robust Global Representations by Penalizing Local Predictive Power [article]

Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton
2019 arXiv   pre-print
This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers.  ...  Across a battery of synthetic and benchmark domain adaptation tasks, our method confers improved generalization out of the domain.  ...  to develop robust models for machine learning in healthcare.  ... 
arXiv:1905.13549v2 fatcat:ouhzy6xpkfafxg7flimmrqgbjy

Learning Through Time in the Thalamocortical Loops [article]

Randall C. O'Reilly and Dean Wyatte and John Rohrlich
2014 arXiv   pre-print
learn based on the discrepancies from our predictions (error-driven learning), then we can learn to improve our predictions by developing internal representations that capture the regularities of the environment  ...  state representation.  ...  Acknowledgments Supported by: ONR grant N00014-13-1-0067, ONR N00014-10-1-0177, ONR D00014-12-C-0638, and Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of the  ... 
arXiv:1407.3432v1 fatcat:vahfswx5jnbzlgyfrwn3x76qvy

What happens next and when "next" happens: Mechanisms of spatial and temporal prediction [article]

Dean Wyatte
2014 arXiv   pre-print
Overall, this work advances a biological architecture for sensory prediction accompanied by empirical evidence that supports learning of realistic time- and space-varying inputs.  ...  This counterintuitive pattern of results was accounted for by a neural network model that learned three-dimensional viewpoint invariance with LeabraTI's spatiotemporal prediction rule.  ...  This method of copying a contextual representation from an intermediate representation at discrete intervals was originally shown to be a robust way to leverage powerful error-driven learning to represent  ... 
arXiv:1407.5328v1 fatcat:q6x7qbx225fh7jblaju2kyrzeq

The intersection between the representation of the stimuli and the choice by neural ensembles in the primary visual cortex of the macaque [article]

Veronika Koren, Ariana R Andrei, Ming Hu, Valentin Dragoi, Klaus Obermayer
2020 bioRxiv   pre-print
The generalization of learning suggests that the representation of the stimulus class and of the behavioral choice have a non-zero intersection.  ...  Representation of stimuli by neural ensembles and the correlation of neural activity with the behavioral choice are, in principle, two different computational problems; however, it is only the intersection  ...  Acknowledgements This work was supported by Deutsche Forschungsgemeinschaft, grant GRK 1589/2. Author contributions  ...
doi:10.1101/2020.01.10.901504 fatcat:b2ndqcb5tfgiteabec2sto6pzy
Showing results 1 — 15 of 16,654