1,209,153 Hits in 5.3 sec

Reverse inference is not a fallacy per se: Cognitive processes can be inferred from functional imaging data

Florian Hutzler
2014 NeuroImage  
When inferring the presence of a specific cognitive process from observed brain activation, a kind of reasoning is applied that is called reverse inference.  ...  Poldrack (2006) rightly criticized the careless use of reverse inference.  ...  It is important to note that the probabilities used for the estimation of reverse inference are estimated on the level of comparisons, not on the level of individual activation peaks.  ... 
doi:10.1016/j.neuroimage.2012.12.075 pmid:23313571 fatcat:hclfxonw7vhivccpr3vrph7fdy

Probabilistic inference and ranking of gene regulatory pathways as a shortest-path problem

James D Jensen, Daniel M Jensen, Mark J Clement, Quinn O Snell
2013 BMC Bioinformatics  
Preliminary exploration of the use of joint edge probabilities to rank paths is largely inconclusive. Suggestions for a better framework for such comparisons are discussed.  ...  A method is proposed for achieving this by ranking paths according to the joint probability of directness of each path's edges.  ...  There is at least one valid objection to the assumption of independent edge probabilities: they are based on triplet comparisons, and any two adjacent edges are jointly involved in one triplet comparison  ... 
doi:10.1186/1471-2105-14-s13-s5 pmid:24266986 pmcid:PMC3849606 fatcat:v33xqdzymffqxoizrs3w6qvwzi
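The path-ranking idea in the entry above can be sketched concretely. Under the stated assumption of independent edge probabilities, the path that maximizes the joint probability of its edges is the shortest path under weights −log p(e). The graph encoding and function below are an illustrative sketch, not the authors' implementation.

```python
import heapq
import math

def most_probable_path(graph, source, target):
    """graph: {node: [(neighbor, edge_probability), ...]} with probabilities in (0, 1].
    Returns (joint_probability, path) for the highest-joint-probability path."""
    # Dijkstra on -log(p) weights: minimizing the sum of -log p(e)
    # maximizes the product of the edge probabilities.
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, p in graph.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return 0.0, []
    # Reconstruct the path by walking the predecessor links backwards.
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    path.reverse()
    return math.exp(-dist[target]), path
```

For example, with edges A→B (0.9), A→C (0.5), B→D (0.8), C→D (0.99), the path A–B–D wins with joint probability 0.72 over A–C–D at 0.495. Ranking the top-k paths, as the paper discusses, would additionally require a k-shortest-paths routine rather than plain Dijkstra.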

An Implementation of Empirical Bayesian Inference and Non-Null Bootstrapping for Threshold Selection and Power Estimation in Multiple and Single Statistical Testing [article]

Bahman Nasseroleslami
2018 bioRxiv   pre-print
However, statistical inferences commonly rely on p-values rather than on more expressive measures such as posterior probabilities, false discovery rates (FDR) and statistical power (1 − β).  ...  Curve or Spearman's rank correlation) and Gaussian Mixture Model estimation of the probability density function of the original and bootstrapped data.  ...  Such informed selection of a statistical threshold is also challenging in complex statistical inferences (e.g. with non-normal data distributions) involving single or only a few comparisons or inferences  ... 
doi:10.1101/342964 fatcat:25qu34fw55g7ldgwaik5bkdwzq

Nonparametric predictive comparison of proportions

F.P.A. Coolen, P. Coolen-Schrijner
2007 Journal of Statistical Planning and Inference  
We consider both pairwise and multiple comparisons.  ...  These inferences are in terms of lower and upper probabilities that the number of successes in m future trials from one group exceeds the number of successes in m future trials from another group, or such  ...  Our method can be generalized to similar predictive lower and upper probabilities for more general multiple comparisons inferences, e.g. subset selection, where one can both study the lower and upper probabilities  ... 
doi:10.1016/j.jspi.2005.11.008 fatcat:3a3dbt76zvcfvnpl72sgmcgipa

Page 415 of Neural Computation Vol. 4, Issue 3 [page]

1992 Neural Computation  
The second level of inference is the task of model comparison.  ...  In this paper, the Bayesian approach to regularization and model comparison is demonstrated by studying the inference problem of interpolating noisy data.  ... 

The Power of Comparisons for Actively Learning Linear Classifiers [article]

Max Hopkins, Daniel M. Kane, Shachar Lovett
2020 arXiv   pre-print
Further, we show that these results hold as well for a stronger model of learning called Reliable and Probably Useful (RPU) learning.  ...  While previous negative results showed this model to have intractably large sample complexity for label queries, we show that comparison queries make RPU-learning at worst logarithmically more expensive  ...  Q(S_h − {x}) infers x. Thus the probability that we cannot infer  ...  Plugging this result into Corollary 4.7 gives our desired guarantee on Comparison-Pool-RPU learning query complexity.  ... 
arXiv:1907.03816v2 fatcat:6htuqvv5hbgklnoytirmiuavhe

Noise-tolerant, Reliable Active Classification with Comparison Queries [article]

Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
2020 arXiv   pre-print
only labels - returning a classifier that makes no errors with high probability.  ...  By introducing comparisons, an additional type of query comparing two points, we provide the first time and query efficient algorithms for learning non-homogeneous linear separators robust to bounded (  ...  Average inference dimension gives a high probability bound on the inference dimension of a finite sample.  ... 
arXiv:2001.05497v1 fatcat:yhf2tm4zobegpevbpsxwbkxeb4

Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception

Luigi Acerbi, Kalpana Dokka, Dora E. Angelaki, Wei Ji Ma, Samuel J. Gershman
2018 PLoS Computational Biology  
We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers  ...  Thus, the goal of this work is two-fold.  ... 
doi:10.1371/journal.pcbi.1006110 pmid:30052625 pmcid:PMC6063401 fatcat:i524qrrvfvclzjscky2gqlzbce

Commentary on Gronau and Wagenmakers

Suyog H. Chandramouli, Richard M. Shiffrin
2018 Computational Brain & Behavior  
We discuss how the various inference procedures fare when the data grow large.  ...  , and give our recommendation for the simplest approach that matches statistical inference to the needs of science.  ...  posterior probabilities for the instances in the class, and use those class posteriors to make class inferences. 6) Carry out model class comparisons for model classes that do not overlap.  ... 
doi:10.1007/s42113-018-0017-1 fatcat:lvtu5vcfmzdz7lt3orrivgnyd4

Point pattern representation using imprecise, incomplete, nonmetric information

S.H. Levine, J.G. Kreifeldt, Ming-Chuen Chuang
1994 IEEE Transactions on Systems, Man and Cybernetics  
Ideally each comparison should determine a longer and shorter distance, and a set of comparisons should include all possible pairs.  ...  This information consists solely of a rank ordered list of interpoint distances determined from pairwise comparisons.  ...  The probability of resolving a comparison in a complete data set either by measurement or by inference. These results are based on simulations.  ... 
doi:10.1109/21.281422 fatcat:yjmsbxg7azc4vogvyzukmqhnfi

Match likelihood ratio for uncertain genotypes

M. W. Perlin, J. B. Kadane, R. W. Cotton
2009 Law, Probability and Risk  
Genetic data are not necessarily fully informative, leading to uncertainty in an inferred genotype.  ...  The posterior genotype probability distribution incorporates the identification information present in the data.  ...  Bayesian inference permits the probabilistic inference of forensic types, and MLR enables their comparison to ascertain match rarity. Table 1.  ... 
doi:10.1093/lpr/mgp024 fatcat:iv3xty533beenet3ujezyhorgq

Match Likelihood Ratio for Uncertain Genotypes

Mark W. Perlin, Joseph B. Kadane, Robin W. Cotton
2009 Social Science Research Network  
Genetic data are not necessarily fully informative, leading to uncertainty in an inferred genotype.  ...  The posterior genotype probability distribution incorporates the identification information present in the data.  ...  Bayesian inference permits the probabilistic inference of forensic types, and MLR enables their comparison to ascertain match rarity. Table 1.  ... 
doi:10.2139/ssrn.1509435 fatcat:ogt7fybvejbd5iwta76ratqj6i

Bayesian inference over model-spaces increases the accuracy of model comparison and allows formal testing of hypotheses about model distributions in experimental populations [article]

Thomas HB FitzGerald, Dorothea Hammerer, Thomas D Sambrook, Will D Penny
2019 arXiv   pre-print
Determining the best model or models for a particular data set, a process known as Bayesian model comparison, is a critical part of probabilistic inference.  ...  dilution, resulting in posterior probability estimates that are, on average, more accurate than those produced when using a fixed model-space.  ...  Discussion In this paper, we consider Bayesian inference over model-spaces, with a specific focus on model comparison in the context of multi-subject studies.  ... 
arXiv:1901.01916v1 fatcat:xclunza2xbftzdmbx7sjogvbtu

Page 2832 of Mathematical Reviews Vol. , Issue 84g [page]

1984 Mathematical Reviews  
In particular, he analyzes Hume's conception of induction in connection with the notion of inductive probability logics and statistical induction, i.e., statistical inference.  ...  Now, Jeffrey's rule for revising a probability P on Ω to a new probability P* on Ω, based on new probabilities P*(E_i) on a partition {E_1, …, E_n} of Ω, is P*(A) = Σ_{i=1}^{n} P(A|E_i) P*(E_i).  ... 
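Jeffrey's rule of probability kinematics, P*(A) = Σᵢ P(A|Eᵢ) P*(Eᵢ) for a partition {Eᵢ} of Ω, can be illustrated with a short sketch. The prior, partition, and revised marginals below are made-up examples, not taken from the review:

```python
def jeffrey_update(prior, partition, new_marginals, event):
    """prior: {outcome: probability} over a finite sample space;
    partition: list of disjoint sets of outcomes covering the space;
    new_marginals: revised probabilities P*(E_i), one per partition cell;
    event: set of outcomes A. Returns P*(A) by Jeffrey's rule."""
    total = 0.0
    for cell, p_star in zip(partition, new_marginals):
        p_cell = sum(prior[w] for w in cell)
        if p_cell > 0:
            # Conditional probabilities P(A | E_i) are kept from the prior;
            # only the cell marginals are revised to P*(E_i).
            p_a_given_cell = sum(prior[w] for w in event & cell) / p_cell
            total += p_a_given_cell * p_star
    return total
```

For instance, with a uniform prior on {1, 2, 3, 4}, partition {1,2} / {3,4}, and revised marginals 0.8 / 0.2, the event {1} gets P*({1}) = 0.5 × 0.8 = 0.4; ordinary conditioning is recovered when one cell's new marginal is 1.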

Page 381 of Clinical and Experimental Pharmacology & Physiology Vol. 18, Issue 6 [page]

1991 Clinical and Experimental Pharmacology & Physiology  
For the first comparison, the nominal and actual Type I error rates are identical: the actual probability of Type I error equals the nominal probability of, say, P = 0.05.  ...  A 'worst case' estimate of the Type I error rate associated with n multiple comparisons is given by the formula P′ = 1 − (1 − P)^n, where P′ is the actual probability of Type I error and P is the nominal probability  ... 
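The 'worst case' familywise estimate quoted in this entry takes a few lines to sketch; it assumes n independent comparisons, each tested at nominal level P:

```python
def familywise_error(nominal_p, n_comparisons):
    """Worst-case familywise Type I error rate P' = 1 - (1 - P)**n
    for n independent comparisons at nominal level P."""
    return 1.0 - (1.0 - nominal_p) ** n_comparisons
```

At P = 0.05, a single comparison gives P′ = 0.05 as the snippet states, while ten comparisons already give P′ ≈ 0.40, which is why multiple-comparison corrections (e.g. Bonferroni's P/n) are applied.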