5,544 Hits in 3.1 sec

Practices and pitfalls in inferring neural representations

Vencislav Popov, Markus Ostarek, Caitlin Tenison
2018 NeuroImage  
This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods.  ...  We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations.  ...  This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.  ... 
doi:10.1016/j.neuroimage.2018.03.041 pmid:29578030 fatcat:c7gxumahdfgvtiulye2ij4dvwu
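
To make the pitfall concrete, the following is a minimal Python sketch (assuming numpy and scikit-learn) in the spirit of the simulations mentioned above, not a reproduction of them: a toy "neural" space is built as a nonlinear recoding of a stimulus-feature space, yet a cross-validated ridge model still predicts the neural patterns from the stimulus features reasonably well. All dimensions and variable names are invented for illustration.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_stimuli, n_features, n_voxels = 200, 10, 50

    # Stimulus-feature space (e.g., hypothetical semantic feature ratings).
    features = rng.normal(size=(n_stimuli, n_features))

    # "Neural" space: a nonlinear, higher-dimensional recoding of the same stimuli,
    # so its representational geometry differs from that of the feature space.
    mixing = rng.normal(size=(n_features, n_voxels))
    neural = np.tanh(features @ mixing) + 0.1 * rng.normal(size=(n_stimuli, n_voxels))

    # Cross-validated ridge regression still predicts the "neural" patterns from the
    # stimulus features to a substantial degree, even though the two spaces use
    # different representational schemes.
    r2 = cross_val_score(Ridge(alpha=1.0), features, neural, cv=5).mean()
    print(f"mean cross-validated R^2: {r2:.2f}")

Good prediction accuracy alone, in other words, does not establish that the two spaces share a representational scheme.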

Building population models for large-scale neural recordings: opportunities and pitfalls [article]

Cole Hurwitz, Nina Kudryashova, Arno Onken, Matthias H. Hennig
2021 arXiv   pre-print
This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here we provide a broad overview of recent developments in this area.  ...  We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide.  ...  CLH is supported by the Thouron Award and by the School of Informatics, University of Edinburgh.  ... 
arXiv:2102.01807v4 fatcat:teymhliyd5bq7hkulxd46f2zxm

Modelling lexical access in speech production as a ballistic process

Bradford Z. Mahon, Eduardo Navarrete
2016 Language, Cognition and Neuroscience  
Funding: This article was supported in part by NIH [grant number NS089609] to BZM, and NSF [grant number BCS-1349042] to BZM and EN.  ...  Acknowledgements: We acknowledge Alfonso Caramazza for his discussion of these issues over the years, and for initially framing the idea of lexical access as a ballistic process.  ...  Strijkers and Costa are right to criticise the form of reverse inference (Poldrack, 2006) that infers the representational stages at which effects originate from the time-point at which a deflection  ... 
doi:10.1080/23273798.2015.1129060 pmid:28580364 pmcid:PMC5455780 fatcat:kfaovueh7bg75kyqywmccn527u

The DELICES project: Indexing scientific literature through semantic expansion [article]

Florian Boudin, Béatrice Daille, Evelyne Jacquey, Jian-Yun Nie
2021 arXiv   pre-print
To this end, we will rely on the latest advances in semantic representations to both increase the relevance of keyphrases extracted from the documents, and extend indexing to new terms borrowed from semantically  ...  The goal of the DELICES project is to address this pitfall by exploiting semantic relations between scientific articles to both improve and enrich indexing.  ...  To avoid this pitfall, we will seek to identify precisely where, in the graph representation, more information is needed, and devise a fine-grained enrichment mechanism that can efficiently and reliably  ... 
arXiv:2106.14731v1 fatcat:w42wlnk63fekhoqufty3l6jwoi

The pitfalls of measuring representational similarity using representational similarity analysis [article]

Marin Dujmović, Jeffrey Bowers, Federico Adolfi, Gaurav Malhotra
2022 bioRxiv   pre-print
Here we demonstrate the pitfalls of using RSA to infer representational similarity and explain how contradictory findings arise and support false inferences when left unchecked.  ...  By comparing neural representations in primate, human and computational models, we reveal two problematic phenomena that are ubiquitous in current research: a 'mimic' effect, where confounds in stimuli  ...  Acknowledgments: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 741134)  ... 
doi:10.1101/2022.04.05.487135 fatcat:pdhslvbg7jgptof5wfzt6tn62i
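
A minimal sketch of the kind of comparison at stake, assuming numpy and scipy: two systems whose representations are unrelated except for a shared stimulus-level confound nonetheless produce highly correlated representational dissimilarity matrices (RDMs). The toy confound and all names below are illustrative, not the stimulus sets or models analysed in the paper.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_stimuli = 40

    # A shared stimulus-level confound (e.g., some low-level stimulus property).
    confound = rng.normal(size=(n_stimuli, 1))

    # Two systems with otherwise unrelated representations that both pick up the
    # confound along some direction of their feature space.
    system_a = rng.normal(size=(n_stimuli, 100)) + 3.0 * confound @ rng.normal(size=(1, 100))
    system_b = rng.normal(size=(n_stimuli, 80)) + 3.0 * confound @ rng.normal(size=(1, 80))

    # RSA: build a condensed RDM per system and correlate the two RDMs.
    rdm_a = pdist(system_a, metric="correlation")
    rdm_b = pdist(system_b, metric="correlation")
    rho, _ = spearmanr(rdm_a, rdm_b)
    print(f"RDM correlation driven largely by the confound: rho = {rho:.2f}")

Because both RDMs inherit structure from the confound, the second-order correlation is high even though the confound-free parts of the two systems have nothing in common, which is one way such a 'mimic' effect can arise.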

Inferential Pitfalls In Decoding Neural Representations [article]

Vencislav Popov, Markus Ostarek, Caitlin Tenison
2017 bioRxiv   pre-print
The same argument applies to the decoding of neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods.  ...  We argue that such inferences are not always valid, because decoding can occur even if the neural representational space and the stimulus-feature space use different representational schemes.  ...  A useful illustration of this inference in practice comes from a recent study by Fernandino et al. (2016).  ... 
doi:10.1101/141283 fatcat:ediz274thjfstmfj7wh6rijtgm

A Primer on Motion Capture with Deep Learning: Principles, Pitfalls and Perspectives [article]

Alexander Mathis and Steffen Schneider and Jessy Lauer and Mackenzie W. Mathis
2020 arXiv   pre-print
In particular, we will discuss the principles of those novel algorithms, highlight their potential as well as pitfalls for experimentalists, and provide a glimpse into the future.  ...  Recent advances in deep learning have tremendously advanced predicting posture from videos directly, which quickly impacted neuroscience and biology more broadly.  ...  Acknowledgments: We thank Yash Sharma for discussions around future directions in self-supervised learning, Erin Diel, Maxime Vidal, Claudio Michaelis, Thomas Biasi for comments on the manuscript.  ... 
arXiv:2009.00564v2 fatcat:w22iv453cbaa5fidf5hwemcxeu

Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations [article]

Arsenii Ashukha, Andrei Atanov, Dmitry Vetrov
2021 arXiv   pre-print
In this work, we look at the ensembling of representations and propose mean embeddings with test-time augmentation (MeTTA), a simple yet well-performing recipe for ensembling representations.  ...  We believe that extending the success of ensembles to the inference of higher-quality representations is an important step that will open many new applications of ensembling.  ...  In practice, though, the models are only partially invariant to these transformations.  ... 
arXiv:2106.08038v2 fatcat:qy6dgfj7hredjccktym5nkruny
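
The recipe is simple enough to sketch: embed several augmented views of each input and average the embeddings. The encoder and augmentation below are random numpy stand-ins, not the models or augmentation pipeline used in the paper.

    import numpy as np

    def mean_embedding(x, encoder, augment, n_views=8):
        """Average an encoder's embeddings over n_views random augmentations of x."""
        views = [augment(x) for _ in range(n_views)]
        return np.stack([encoder(v) for v in views], axis=0).mean(axis=0)

    # Toy stand-ins: a random linear "encoder" and additive-jitter "augmentation".
    rng = np.random.default_rng(2)
    weights = rng.normal(size=(32, 16))
    encoder = lambda v: v @ weights
    augment = lambda v: v + 0.05 * rng.normal(size=v.shape)

    embedding = mean_embedding(rng.normal(size=32), encoder, augment)
    print(embedding.shape)  # (16,)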

Reconsidering Generative Objectives For Counterfactual Reasoning

Danni Lu, Chenyang Tao, Junya Chen, Fan Li, Feng Guo, Lawrence Carin
2020 Neural Information Processing Systems  
Our procedure acknowledges the uncertainties in representation and solves a Fenchel mini-max game to resolve the representation imbalance for better counterfactual generalization, justified by new theory.  ...  However, existing solutions often fail to address issues that are unique to causal inference, such as covariate balancing and counterfactual validation.  ...  The research at Duke University was supported in part by DARPA, DOE, NIH, ONR, and NSF.  ... 
dblp:conf/nips/LuTCLGC20 fatcat:6tr7m7datnfxvgvsaml3ql3zfy

Planning from Pixels in Atari with Learned Symbolic Representations [article]

Andrea Dittadi, Frederik K. Drachmann, Thomas Bolander
2021 arXiv   pre-print
The inference model of the trained VAEs extracts boolean features from pixels, and RolloutIW plans with these features.  ...  In this paper, we leverage variational autoencoders (VAEs) to learn features directly from pixels in a principled manner, and without supervision.  ...  Both of these factors are tightly coupled with the neural architecture underlying the inference and generative networks.  ... 
arXiv:2012.09126v2 fatcat:lfpsrsuxxjh4zlgozzfe327c6m

Predictive models avoid excessive reductionism in cognitive neuroimaging

Gaël Varoquaux, Russell A Poldrack
2019 Current Opinion in Neurobiology  
Predicting behavior from neural activity can support robust reverse inference, isolating brain structures that support particular mental processes.  ...  Understanding the organization of complex behavior as it relates to the brain requires modeling the behavior, the relevant mental processes, and the corresponding neural activity.  ...  Artificial neural networks optimized for object recognition form good representations to study object recognition in the ventral stream (Yamins and DiCarlo, 2016).  ... 
doi:10.1016/j.conb.2018.11.002 pmid:30513462 fatcat:aylvzvm33rb6navyodweac23lu

CIS Publication Spotlight [Publication Spotlight]

Derong Liu, Chin-Teng Lin, Garry Greenwood, Simon Lucas, Zhengyou Zhang
2013 IEEE Computational Intelligence Magazine  
FCMs are inference networks, using cyclic digraphs, for knowledge representation and reasoning.  ...  A novel generation of JIT classifiers is presented to deal with recurrent concept drift by means of a practical formalization of the concept representation and the definition of a set of operators working  ... 
doi:10.1109/mci.2013.2264231 fatcat:adhxqtci7fc7plrtis62a2bsrm
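
For context, one common FCM update rule iterates concept activations through the weighted cyclic digraph and a squashing function; the three-concept map below is a made-up numpy example, not one taken from the spotlighted papers.

    import numpy as np

    # weights[i, j]: causal influence of concept j on concept i, in [-1, 1].
    weights = np.array([
        [ 0.0,  0.6, -0.4],
        [ 0.5,  0.0,  0.3],
        [-0.2,  0.7,  0.0],
    ])
    state = np.array([1.0, 0.0, 0.5])  # initial concept activations

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(10):                # iterate until the map settles or cycles
        state = sigmoid(weights @ state)
    print(np.round(state, 3))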

Probabilistic language models in cognitive neuroscience: Promises and pitfalls

Kristijan Armeni, Roel M. Willems, Stefan L. Frank
2017 Neuroscience and Biobehavioral Reviews  
We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.  ...  Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language  ...  Acknowledgments: The work presented here was partly funded by the Netherlands Organisation for Scientific Research (NWO) Gravitation Grant 024.001.006 to the Language in Interaction Consortium and Vidi  ... 
doi:10.1016/j.neubiorev.2017.09.001 pmid:28887227 fatcat:idc4sce5rveypgeq2wa33hiasi
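
A typical measure such models supply is word-by-word surprisal, -log2 P(word | context). The toy add-alpha-smoothed bigram model below is only a stand-in for the large n-gram or neural language models used in actual studies; the corpus and smoothing constant are invented.

    import math
    from collections import Counter

    corpus = "the dog chased the cat the cat chased the mouse".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def surprisal(prev_word, word, alpha=0.1):
        """Add-alpha smoothed bigram surprisal, in bits."""
        prob = (bigrams[(prev_word, word)] + alpha) / (unigrams[prev_word] + alpha * len(unigrams))
        return -math.log2(prob)

    for prev, word in zip(corpus, corpus[1:]):
        print(f"{prev} -> {word}: {surprisal(prev, word):.2f} bits")

In such studies, per-word values of this kind are typically used as regressors against the neural signal.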

The Devil is in the Details: On the Pitfalls of Vocabulary Selection in Neural Machine Translation [article]

Tobias Domhan, Eva Hasler, Ke Tran, Sony Trenous, Bill Byrne, Felix Hieber
2022 arXiv   pre-print
We propose a model of vocabulary selection, integrated into the neural translation model, that predicts the set of allowed output words from contextualized encoder representations.  ...  This restores the translation quality of an unconstrained system, as measured by human evaluations on WMT newstest2020 and idiomatic expressions, at an inference latency competitive with alignment-based selection  ...  A number of inference optimization speedups have been proposed and are used in practice.  ...  (EN) and their German correspondences (DE) as well as an English gloss (GL) of the German expression.  ... 
arXiv:2205.06618v1 fatcat:kv5hmugz5vaazkzmeoxsjl2bxy
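
A rough numpy sketch of vocabulary selection in this spirit: score every target word from the contextualized encoder states, keep the words above a threshold, and mask the decoder's output logits to that set at inference time. The max-pooling, the threshold, and all shapes are illustrative assumptions, not the architecture described in the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    src_len, d_model, vocab_size = 7, 64, 1000

    encoder_states = rng.normal(size=(src_len, d_model))          # contextualized encoder output
    selection_head = rng.normal(size=(d_model, vocab_size)) / np.sqrt(d_model)  # stand-in for a trained head

    # Score each target word per source position, then max-pool over positions.
    scores = 1.0 / (1.0 + np.exp(-(encoder_states @ selection_head)))  # sigmoid
    allowed = scores.max(axis=0) > 0.95                           # boolean mask over the target vocabulary

    # At decoding time, restrict the output distribution to the selected words.
    decoder_logits = rng.normal(size=vocab_size)
    masked_logits = np.where(allowed, decoder_logits, -np.inf)
    print(f"selected {allowed.sum()} of {vocab_size} target words")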

From Perception to Programs: Regularize, Overparameterize, and Amortize [article]

Hao Tang, Kevin Ellis
2022 arXiv   pre-print
We explore several techniques for relaxing the problem and jointly learning all modules end-to-end with gradient descent: multitask learning; amortized inference; overparameterization; and a differentiable  ...  representation, which is then processed by a synthesized program.  ...  In practice, we evaluate baseline models with depth D = 2, 3, 5 and report the best performances among them.  ... 
arXiv:2206.05922v1 fatcat:c5j76675lja63g32s6rmwbca2q
Showing results 1 — 15 out of 5,544 results