651 Hits in 4.0 sec

An Overview of Bayesian Methods for Neural Spike Train Analysis

Zhe Chen
2013 Computational Intelligence and Neuroscience  
Some research challenges and opportunities for neural spike train analysis are discussed.  ...  Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels.  ...  As a tradeoff, Bayesian practitioners must contend with increased computational complexity (especially when using MCMC), which may be prohibitive for large-scale spike train data sets.  ... 
doi:10.1155/2013/251905 pmid:24348527 pmcid:PMC3855941 fatcat:nkst6mt3sfcqheuxheda3wq4wq
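The snippet above contrasts closed-form Bayesian inference with the computational cost of MCMC. As a minimal, self-contained illustration (not taken from the paper), the sketch below performs a conjugate Gamma-Poisson update to estimate a single neuron's firing rate from binned spike counts; the prior hyperparameters and the simulated counts are assumptions made only for the example.

```python
import numpy as np

# Simulated spike counts in 1-s bins from a hypothetical neuron (assumed data).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.0, size=50)

# Conjugate Gamma(alpha0, beta0) prior on the firing rate (rate parametrization, assumed values).
alpha0, beta0 = 1.0, 0.1

# Posterior after observing Poisson counts: Gamma(alpha0 + sum(counts), beta0 + n_bins).
alpha_post = alpha0 + counts.sum()
beta_post = beta0 + len(counts)

posterior_mean = alpha_post / beta_post
samples = rng.gamma(alpha_post, 1.0 / beta_post, size=10000)
ci_low, ci_high = np.percentile(samples, [2.5, 97.5])

print(f"posterior mean rate: {posterior_mean:.2f} Hz, 95% CI: [{ci_low:.2f}, {ci_high:.2f}]")
```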

Multi-Scale Information, Network, Causality, and Dynamics: Mathematical Computation and Bayesian Inference to Cognitive Neuroscience and Aging [chapter]

Michelle Yongmei
2013 Functional Brain Mapping and the Endeavor to Understand the Working Brain  
Bayesian approaches can be used to analyze or decode brain signals such as spike trains and structural and functional neuroimaging data.  ...  Neuroimaging data analyses using Bayesian approaches Here I focus on Bayesian inference in fMRI data analysis, mainly for activation detection and hemodynamic response function (HRF) estimation, although  ... 
doi:10.5772/55262 fatcat:go2r6jruyzdqrp4lqcnt64j7va

Estimating Entropy Rates with Bayesian Confidence Intervals

Matthew B. Kennel, Jonathon Shlens, Henry D. I. Abarbanel, E. J. Chichilnisky
2005 Neural Computation  
We thank Pam Reinagel for valuable discussion and support; and colleagues at the Institute for Nonlinear Science (UCSD) and the Systems Neurobiology Laboratory (Salk Institute) for tremendous feedback  ...  Estimating the entropy rate from observed data like spike trains can be surprisingly difficult in practice.  ...  For example, the total cardinality is known a priori in a spike train discretized so that no more than A spikes can ever occur in a bin.  ... 
doi:10.1162/0899766053723050 pmid:15901407 fatcat:nxvll5oe55ccdpvxuepkxmv3a4
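The last fragment notes that the cardinality of a discretized spike train is known a priori when no more than A spikes can occur in a bin. Below is a minimal plug-in (maximum-likelihood) entropy estimate over binned spike words; the bin size, word length, and synthetic spike train are assumptions, and this naive estimator is exactly the kind whose finite-sample behaviour the paper's Bayesian confidence intervals are meant to quantify.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic spike train: 0/1 per 2-ms bin (assumed discretization).
spikes = (rng.random(20000) < 0.05).astype(int)

word_len = 5  # consecutive bins per "word" (assumed)
n_words = len(spikes) // word_len
words = spikes[: n_words * word_len].reshape(n_words, word_len)

# Map each binary word to an integer code and count occurrences.
codes = words.dot(1 << np.arange(word_len))
probs = np.bincount(codes, minlength=2 ** word_len) / n_words

# Plug-in (maximum-likelihood) entropy in bits per word.
nonzero = probs[probs > 0]
entropy_bits = -(nonzero * np.log2(nonzero)).sum()
print(f"plug-in entropy: {entropy_bits:.3f} bits per {word_len}-bin word")
```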

Correcting for the Sampling Bias Problem in Spike Train Information Measures

Stefano Panzeri, Riccardo Senatore, Marcelo A. Montemurro, Rasmus S. Petersen
2007 Journal of Neurophysiology  
Correcting for the sampling bias problem in spike train information measures.  ...  The main difficulty in its practical application to spike train analysis is that estimates of neuronal information from experimental data are prone to a systematic error (called "bias").  ...  Arabzadeh for sharing data, and P. E. Latham, L. Paninski, and J. D. Victor for useful discussions and insightful comments. GRANTS: Our research was supported by Pfizer Global Development to S.  ... 
doi:10.1152/jn.00559.2007 pmid:17615128 fatcat:noyziybb7vhxdmliprerk7cucm
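As a toy illustration of the bias problem described in the abstract, the sketch below computes plug-in mutual information between a stimulus label and a binned spike-count response and subtracts a Miller-Madow/Panzeri-Treves-style first-order bias term. The simulated experiment is an assumption, and this generic correction is a stand-in rather than the specific procedures reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200
# Assumed experiment: 4 stimuli, responses are spike counts in {0..5} weakly tuned to the stimulus.
stim = rng.integers(0, 4, size=n_trials)
resp = np.clip(rng.poisson(lam=1.0 + 0.5 * stim), 0, 5)

def plugin_entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

joint = stim * 6 + resp  # joint code over (stimulus, response)
mi_plugin = plugin_entropy(stim) + plugin_entropy(resp) - plugin_entropy(joint)

# First-order bias term from the numbers of occupied bins (Miller-Madow style).
r_s = np.count_nonzero(np.bincount(stim))
r_r = np.count_nonzero(np.bincount(resp))
r_sr = np.count_nonzero(np.bincount(joint))
bias = (r_sr - r_s - r_r + 1) / (2 * n_trials * np.log(2))

print(f"plug-in MI: {mi_plugin:.3f} bits, bias-corrected MI: {mi_plugin - bias:.3f} bits")
```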

Embedding optimization reveals long-lasting history dependence in neural spiking activity

Lucas Rudelt, Daniel González Marx, Michael Wibral, Viola Priesemann, Daniele Marinazzo
2021 PLoS Computational Biology  
When applying the method to experimental data, which are necessarily of limited size, a reliable estimation of mutual information is only possible for a coarse temporal binning of past spiking, a so-called  ...  To investigate these footprints, we developed a novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking  ...  Paul Spitzner, as well as Abdullah Makkeh for valuable comments and for reviewing the manuscript. Author Contributions Conceptualization: Lucas Rudelt, Michael Wibral, Viola Priesemann.  ... 
doi:10.1371/journal.pcbi.1008927 pmid:34061837 fatcat:73hy5raoarf7toclqef5wfowhq
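To make the quantity in the abstract concrete, the sketch below estimates history dependence as the plug-in mutual information between a short binary word of past spiking and the current time bin, for a synthetic spike train with a simple refractory-like dependence. The bin size, embedding depth, and data are assumptions, and the sketch omits the embedding optimization and bias control that are the paper's actual contribution.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic spike train with simple history dependence: a spike suppresses the next bin (assumption).
n = 50000
spikes = np.zeros(n, dtype=int)
for t in range(1, n):
    p = 0.02 if spikes[t - 1] else 0.08
    spikes[t] = rng.random() < p

depth = 3  # number of past bins in the embedding (assumed)
# Row i holds the `depth` bins immediately preceding bin depth + i.
past = np.stack([spikes[depth - 1 - d: n - 1 - d] for d in range(depth)], axis=1)
past_code = past.dot(1 << np.arange(depth))
current = spikes[depth:]

def entropy(x):
    p = np.bincount(x) / len(x)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

joint = past_code * 2 + current
history_dependence = entropy(past_code) + entropy(current) - entropy(joint)
print(f"MI(past; current) ≈ {history_dependence:.4f} bits per bin")
```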

Posterior Concentration for Sparse Deep Learning [article]

Nicholas Polson, Veronika Rockova
2018 arXiv   pre-print
Spike-and-Slab Deep Learning (SS-DL) is a fully Bayesian alternative to Dropout for improving generalizability of deep ReLU networks.  ...  Our results provide new theoretical justifications for deep ReLU networks from a Bayesian point of view.  ...  Posterior Concentration for Deep Learning: Reconstruction of a function $f_0$ from the training data $(Y_i, x_i)_{i=1}^n$ can be achieved using a Bayesian posterior.  ... 
arXiv:1803.09138v1 fatcat:i2puhkwrwnc2pb3zy2dihihtwa
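As a small, generic illustration of the prior family named in the title (not the paper's construction or its theory), the sketch below evaluates the log density of a continuous spike-and-slab mixture, in which each weight is drawn from either a narrow "spike" Gaussian or a wide "slab" Gaussian; the inclusion probability and the two scales are assumed values.

```python
import numpy as np
from scipy.stats import norm

def spike_and_slab_logpdf(w, theta=0.1, sigma_spike=1e-3, sigma_slab=1.0):
    """Log density of a continuous spike-and-slab mixture prior on a weight vector w.

    theta        -- prior inclusion probability of the slab (assumed value)
    sigma_spike  -- scale of the narrow 'spike' component approximating a point mass
    sigma_slab   -- scale of the wide 'slab' component for active weights
    """
    log_spike = np.log1p(-theta) + norm.logpdf(w, scale=sigma_spike)
    log_slab = np.log(theta) + norm.logpdf(w, scale=sigma_slab)
    return np.logaddexp(log_spike, log_slab).sum()

w = np.array([0.0, 0.001, 0.8, -1.2])
print(spike_and_slab_logpdf(w))
```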

Informative Bayesian Neural Network Priors for Weak Signals

Tianyu Cui, Aki Havulinna, Pekka Marttinen, Samuel Kaski
2021 Bayesian Analysis  
We show how to encode both types of domain knowledge into the widely used Gaussian scale mixture priors with Automatic Relevance Determination.  ...  Encoding domain knowledge into the prior over the high-dimensional weight space of a neural network is challenging but essential in applications with limited data and weak signals.  ...  We use 50% of the data for training and 50% for testing, and we repeat this 50 times for each of the six experiments (i.e., for each gene), to account for the variability due to BNN training.  ... 
doi:10.1214/21-ba1291 fatcat:hos346xv45gdpjxsyze7ppooda
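The abstract refers to Gaussian scale mixture priors with Automatic Relevance Determination (ARD). The sketch below samples first-layer weights from one such prior, with a half-Cauchy scale per input feature; the layer sizes, the half-Cauchy choice, and the global scale are assumptions used only to show the structure, not the paper's informative prior.

```python
import numpy as np

rng = np.random.default_rng(4)
n_inputs, n_hidden = 10, 32  # assumed layer sizes
global_scale = 0.5           # assumed global shrinkage scale

# ARD: one scale per input feature; a half-Cauchy scale mixture of Gaussians
# gives heavy-tailed marginals that let relevant inputs escape shrinkage.
feature_scales = np.abs(global_scale * rng.standard_cauchy(n_inputs))

# First-layer weights: each row (one input feature) shares its own scale.
W = rng.normal(0.0, 1.0, size=(n_inputs, n_hidden)) * feature_scales[:, None]

print("per-feature prior scales:", np.round(feature_scales, 3))
print("weight matrix shape:", W.shape)
```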

A statistical description of neural ensemble dynamics

John D. Long II
2011 Frontiers in Computational Neuroscience  
We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space  ...  Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions.  ...  For comparison against a comparable estimate of ensemble firing rate, the ensemble data was binned as both binary activations and spike counts.  ... 
doi:10.3389/fncom.2011.00052 pmid:22319486 pmcid:PMC3226070 fatcat:7glrbivatjfmheu2ddsh5vjl24
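The last fragment mentions binning the ensemble data both as binary activations and as spike counts. A minimal sketch of that preprocessing step is below; the spike times, bin width, and number of neurons are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, duration, bin_width = 4, 10.0, 0.05  # seconds (assumed)

# Hypothetical spike times per neuron (homogeneous Poisson, ~8 Hz).
spike_times = [np.sort(rng.uniform(0, duration, rng.poisson(8 * duration)))
               for _ in range(n_neurons)]

edges = np.arange(0.0, duration + bin_width, bin_width)
counts = np.stack([np.histogram(st, bins=edges)[0] for st in spike_times])  # spike counts per bin
binary = (counts > 0).astype(int)                                           # binary activations

print("count matrix:", counts.shape, "binary matrix:", binary.shape)
```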

Spatio-temporal correlations and visual signalling in a complete neuronal population

Jonathan W. Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M. Litke, E. J. Chichilnisky, Eero P. Simoncelli
2008 Nature  
A penalty on coupling filters was used to obtain a minimally sufficient set of coupling filters, which yields an estimate of the network's functional connectivity [19, 20].  ...  We fit the model to data recorded in vitro from a population of 27 ON and OFF parasol ganglion cells (RGCs) in a small patch of isolated macaque monkey retina, stimulated with 120-Hz spatio-temporal binary  ...  Correspondence and requests for materials should be addressed to J.W.P. (pillow@gatsby.ucl.ac.uk).  ... 
doi:10.1038/nature07140 pmid:18650810 pmcid:PMC2684455 fatcat:nkcrgao23fetpkyis6k5rtcewy
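To make the penalized coupling-filter fit mentioned in the abstract concrete, here is a schematic (not the authors' code) of a penalized Poisson log-likelihood for one cell in a coupled GLM, with an L1 penalty applied only to the coupling weights; the design-matrix layout, penalty weight, and time step are placeholders.

```python
import numpy as np

def penalized_negloglik(params, X_stim, X_couple, spikes, lam=1.0, dt=0.001):
    """Negative Poisson log-likelihood of one cell's spike counts under a coupled GLM,
    with an L1 penalty on the coupling-filter weights only.

    params   -- concatenated [stimulus-filter weights, coupling-filter weights, bias]
    X_stim   -- (T, d_stim) stimulus design matrix
    X_couple -- (T, d_couple) lagged spike histories of the other cells
    spikes   -- (T,) spike counts of the modeled cell
    """
    d_stim = X_stim.shape[1]
    k_stim = params[:d_stim]
    k_couple = params[d_stim:-1]
    bias = params[-1]

    log_rate = X_stim @ k_stim + X_couple @ k_couple + bias
    rate = np.exp(log_rate)
    negloglik = np.sum(rate * dt) - np.sum(spikes * log_rate)
    return negloglik + lam * np.sum(np.abs(k_couple))
```

In practice this objective would be handed to a generic optimizer such as scipy.optimize.minimize; the paper's basis-function parametrization of the filters and its fitting details are not reproduced here.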

Tight Data-Robust Bounds to Mutual Information Combining Shuffling and Model Selection Techniques

M. A. Montemurro, R. Senatore, S. Panzeri
2007 Neural Computation  
Second, we use a nonparametric test to determine whether all the information encoded by the spike train can be decoded assuming a low-dimensional response model.  ...  The estimation of the information carried by spike times is crucial for a quantitative understanding of brain function, but it is difficult because of an upward bias due to limited experimental sampling  ...  Arabzadeh for many useful discussions and for kindly making available to us the example data used in Figure 9. Gianni Pola contributed to the early stages of this work.  ... 
doi:10.1162/neco.2007.19.11.2913 pmid:17883346 fatcat:7utyxj3zofe7voq5msko46wbe4
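The abstract combines shuffling with model selection to bound mutual information. The sketch below illustrates only the shuffling idea, in a generic way: plug-in information computed after permuting responses across trials destroys the stimulus-response relationship and exposes the residual estimation bias. The simulated data and the plain plug-in estimator are assumptions, not the paper's bound construction.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials = 100
stim = rng.integers(0, 2, size=n_trials)
resp = rng.poisson(2.0 + stim)  # weakly informative spike-count responses (assumed)

def plugin_mi(x, y):
    def h(z):
        p = np.bincount(z) / len(z)
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return h(x) + h(y) - h(x * (y.max() + 1) + y)

mi_raw = plugin_mi(stim, resp)
# Shuffling responses across trials should give zero information; what remains is bias.
mi_shuffled = np.mean([plugin_mi(stim, rng.permutation(resp)) for _ in range(200)])
print(f"plug-in MI: {mi_raw:.3f} bits; shuffled (bias floor): {mi_shuffled:.3f} bits")
```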

A primer on information theory, with applications to neuroscience [article]

Felix Effenberger
2013 arXiv   pre-print
As these quantities cannot be computed exactly from measured data in practice, estimation techniques for information-theoretic quantities will be presented.  ...  for information-theoretic analyses of neural data.  ...  Acknowledgements The author would like to thank Nihat Ay, Yuri Campbell, Jörg Lehnert, Timm Lochmann, Wiktor M lynarski and Carolin Stier for their useful comments on the manuscript.  ... 
arXiv:1304.2333v2 fatcat:b22z46tqnfem3f7y2viu5saaty

Inference, Prediction, and Entropy-Rate Estimation of Continuous-time, Discrete-event Processes [article]

S. E. Marzen, J. P. Crutchfield
2020 arXiv   pre-print
Based on experiments with complex synthetic data, the methods are competitive with the state-of-the-art for prediction and entropy-rate estimation.  ...  Here, we provide new methods for inferring, predicting, and estimating them.  ...  (Right) Mean-squared error between the estimated density and the true density as we use more training data for three different estimation techniques.  ... 
arXiv:2005.03750v1 fatcat:fwxzmon3zzep7ojzm3bcjmztsm
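The figure-caption fragment compares an estimated density with the true density as training data grow. A minimal version of that comparison for a continuous-time, discrete-event process, using a histogram estimate of the inter-event interval density against a known exponential, is sketched below; the rate, bin grid, and sample size are assumptions, and this is not one of the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(7)
rate = 5.0                                   # true event rate (assumed)
intervals = rng.exponential(1.0 / rate, size=2000)

edges = np.linspace(0.0, 2.0, 81)
centers = 0.5 * (edges[:-1] + edges[1:])
est_density, _ = np.histogram(intervals, bins=edges, density=True)
true_density = rate * np.exp(-rate * centers)  # exponential interval density

mse = np.mean((est_density - true_density) ** 2)
print(f"MSE between estimated and true interval density: {mse:.4f}")
```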

Inferring neural information flow from spiking data

Adrià Tauste Campo
2020 Computational and Structural Biotechnology Journal  
Finally, we discuss directions for future research, including the development of theoretical information flow models and the use of dimensionality reduction techniques to extract relevant interactions  ...  Therefore, estimating interactions between neural activity at the cellular scale has significant implications in understanding how neuronal circuits encode and communicate information across brain areas  ...  Vila-Vidal for the critical reading of the manuscript, and to A. Hyafil, Il Memming Park and Jacob Yates for fruitful discussions about the generalized linear model.  ... 
doi:10.1016/j.csbj.2020.09.007 pmid:33101608 pmcid:PMC7548302 fatcat:4btvoyi3svbqnchosnrhppukzm

Online neural connectivity estimation with ensemble stimulation [article]

Anne Draelos, Eva A. Naumann, John M. Pearson
2020 arXiv   pre-print
Many previous approaches have attempted to estimate functional connectivity between neurons using statistical modeling of observational data, but these approaches rely heavily on parametric assumptions  ...  Moreover, we prove that our approach, which reduces to an efficiently solvable convex optimization problem, can be related to Variational Bayesian inference on the binary connection weights, and we derive  ...  We would like to thank Robert Calderbank for useful discussions.  ... 
arXiv:2007.13911v2 fatcat:7anuzkndcjfebh3opsnn6c5osm

Least Square Variational Bayesian Autoencoder with Regularization [article]

Gautam Ramachandra
2017 arXiv   pre-print
This paper describes the scenario in which we wish to find a point estimate of the parameters θ of some parametric model in which we generate each observation by first sampling a local latent variable  ...  Here we use a least-squares loss function with regularization for the reconstruction of the image; the least-squares loss was found to give better reconstructed images and faster training  ...  which for the discrete case is the binary cross entropy.  ... 
arXiv:1707.03134v1 fatcat:5ijokt4wsnblblpfkvd6hoasne
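The abstract describes replacing the usual reconstruction term with a least-squares loss plus regularization. Below is a generic sketch of such an objective for a diagonal-Gaussian encoder, pairing mean-squared reconstruction error with the closed-form KL term; the KL weight and tensor shapes are assumptions, and this is only one common reading of the paper's loss.

```python
import numpy as np

def vae_least_squares_loss(x, x_recon, mu, log_var, kl_weight=1.0):
    """Least-squares VAE objective: mean-squared reconstruction error plus the
    closed-form KL divergence between N(mu, diag(exp(log_var))) and N(0, I)."""
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    kl = np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))
    return recon + kl_weight * kl

# Toy shapes: batch of 8 images flattened to 784 pixels, 16-dim latent (assumed).
rng = np.random.default_rng(8)
x = rng.random((8, 784))
x_recon = rng.random((8, 784))
mu, log_var = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(vae_least_squares_loss(x, x_recon, mu, log_var))
```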
Showing results 1 — 15 out of 651 results