2,424 Hits in 8.7 sec

Maximum Likelihood Estimation [chapter]

2009 Statistics: A Series of Textbooks and Monographs  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1201/9781420064254.ch3 fatcat:bfsusm6pdjgjpendfxw4c7ab6y
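The snippet above describes learning a Gaussian's mean and variance by maximum likelihood. As a minimal sketch, not drawn from the chapter itself: for i.i.d. data the ML estimates are the sample mean and the biased (divide-by-N) sample variance. The Python example below computes both and checks them against the log-likelihood directly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)   # synthetic i.i.d. Gaussian data

# Closed-form ML estimates: sample mean and *biased* sample variance (divide by N).
mu_ml = x.mean()
var_ml = ((x - mu_ml) ** 2).mean()

def gaussian_loglik(mu, var, x):
    """Log-likelihood of i.i.d. data under N(mu, var)."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

print(mu_ml, var_ml)
# The ML estimates should score at least as high as nearby parameter values.
assert gaussian_loglik(mu_ml, var_ml, x) >= gaussian_loglik(mu_ml + 0.1, var_ml, x)
assert gaussian_loglik(mu_ml, var_ml, x) >= gaussian_loglik(mu_ml, var_ml * 1.1, x)
```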

Maximum Likelihood Estimation [chapter]

1991 Order Statistics and Inference  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1016/b978-0-12-076948-3.50013-4 fatcat:2w7lohk4tbba5j57jyj4lkb2ta

Maximum Likelihood Estimation [chapter]

2003 Handbook of Statistical Analyses Using Stata, Fourth Edition  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1201/noe1584884040.ch13 fatcat:odwdw3pq5jb5daey5kipyf275u

Maximum Likelihood Estimation [chapter]

2008 Studying Human Populations  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1007/978-0-387-73251-0_2 fatcat:cgdl2f4qk5furgmq56hi4jt3sm

Maximum Likelihood Estimation [chapter]

2012 Essential Mathematics for Market Risk Management  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1002/9781118467213.ch16 fatcat:dsyxxsptl5fzzmh6fepwktzrhi

Maximum Likelihood Estimation [chapter]

2013 Methods of Statistical Model Estimation  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1201/b14932-6 fatcat:zzmwk3me7be2tffxw4wst4wxla

Maximum Likelihood Estimation [chapter]

2000 Statistical Methods for Categorical Data Analysis  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1016/b978-012563736-7/50009-4 fatcat:hg7m25nkwvf6dahagr5smrayr4

Maximum Likelihood Estimation [chapter]

2000 Handbook of Statistical Analyses Using Stata, Fourth Edition  
Chapter 1, General Linear Models I: Maximum Likelihood Estimation. We can learn the mean and variance of a Gaussian distribution using the Maximum Likelihood (ML) framework as follows.  ...  Estimation in a Bayesian GLM is therefore equivalent to Maximum Likelihood estimation (i.e., for IID covariances this is the same as Weighted Least Squares) with augmented data.  ...  A two-layer MLP is given by ... where D is the dimension of the input x, H is the number of 'hidden units' in the 'first layer', and z_h is the output of the hth unit.  ...
doi:10.1201/9781584888574.ch13 fatcat:iwrcnydvjnfkpitrdbg3fu2jem

Bhattacharyya and Expected Likelihood Kernels [chapter]

Tony Jebara, Risi Kondor
2003 Lecture Notes in Computer Science  
It satisfies Mercer's condition and can be computed in closed form for a large class of models, including exponential family models, mixtures, hidden Markov models and Bayesian networks.  ...  The kernel is then computed by integrating the product of the two generative models corresponding to two data points.  ...  Acknowledgments Thanks to A. Jagota and R. Lyngsoe for profile HMM comparison code, C. Leslie and R. Kuang for SCOP data and the referees for important corrections.  ... 
doi:10.1007/978-3-540-45167-9_6 fatcat:uxa7odqhlfbrtnrnfyoiw4lkn4
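As a hedged illustration of the closed-form computation the abstract mentions: for two univariate Gaussians the Bhattacharyya kernel K(p, q) = ∫ sqrt(p(x) q(x)) dx has a simple closed form, which the sketch below evaluates and cross-checks by numerical quadrature. The distributions and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def bhattacharyya_kernel_gauss(m1, s1, m2, s2):
    """Closed-form Bhattacharyya kernel between N(m1, s1^2) and N(m2, s2^2)."""
    v = s1**2 + s2**2
    return np.sqrt(2.0 * s1 * s2 / v) * np.exp(-((m1 - m2) ** 2) / (4.0 * v))

# Two generative models fitted to two (hypothetical) data points.
m1, s1 = 0.0, 1.0
m2, s2 = 1.5, 2.0

closed_form = bhattacharyya_kernel_gauss(m1, s1, m2, s2)

# Direct numerical evaluation of the defining integral for comparison.
integrand = lambda x: np.sqrt(norm.pdf(x, m1, s1) * norm.pdf(x, m2, s2))
numeric, _ = quad(integrand, -50, 50)

print(closed_form, numeric)   # the two values should agree closely
```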

Lossless, Scalable Implicit Likelihood Inference for Cosmological Fields [article]

T. Lucas Makinen, Tom Charnock, Justin Alsing, Benjamin D. Wandelt
2021 arXiv   pre-print
We present a comparison of simulation-based inference to full, field-based analytical inference in cosmological data analysis.  ...  the lognormal cases, b) simulation-based inference using these maximally informative nonlinear summaries recovers nearly losslessly the exact posteriors of field-level inference, bypassing the need to  ...
arXiv:2107.07405v2 fatcat:ttd4ktkj35doxestt4q6jvlsfi

Spectral likelihood expansions for Bayesian inference

Joseph B. Nagel, Bruno Sudret
2016 Journal of Computational Physics  
Both the model evidence and the posterior moments are related to the expansion coefficients.  ...  A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density.  ...  In addition to the SLE approximations, the prior density π(µ) = N(µ | µ_0, σ_0²) and the exact solution π(µ|y) = N(µ | µ_N, σ_N²) from a conjugate analysis based on Eq. (57) are shown.  ...
doi:10.1016/j.jcp.2015.12.047 fatcat:zylmuewh3jaq5khx5uq66qvgmu
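The snippet's exact posterior N(µ | µ_N, σ_N²) comes from a conjugate analysis whose Eq. (57) is not visible here, so the sketch below assumes the standard conjugate update for a Gaussian mean with known noise variance as a stand-in, just to show how µ_N and σ_N² arise from the prior N(µ | µ_0, σ_0²) and the data.

```python
import numpy as np

def conjugate_normal_posterior(y, sigma2, mu0, sigma0_2):
    """Posterior N(mu_N, sigma_N^2) for a Gaussian mean with known noise variance sigma2,
    given prior N(mu0, sigma0_2) and observations y (standard conjugate update)."""
    n = len(y)
    sigma_N2 = 1.0 / (1.0 / sigma0_2 + n / sigma2)
    mu_N = sigma_N2 * (mu0 / sigma0_2 + y.sum() / sigma2)
    return mu_N, sigma_N2

rng = np.random.default_rng(1)
y = rng.normal(loc=0.7, scale=0.5, size=20)            # hypothetical observations
mu_N, sigma_N2 = conjugate_normal_posterior(y, sigma2=0.25, mu0=0.0, sigma0_2=1.0)
print(mu_N, sigma_N2)
```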

Maximum Likelihood Estimation of Latent Affine Processes

David S. Bates
2006 The Review of financial studies  
This article develops a direct filtration-based maximum likelihood methodology for estimating the parameters and realizations of latent affine processes.  ...  An application to daily stock returns over 1953-96 reveals substantial divergences from EMM-based estimates; in particular, more substantial and time-varying jump risk.  ...  The overall estimation procedure is consequently termed approximate maximum likelihood (AML), with potentially some loss of estimation efficiency relative to an exact maximum likelihood procedure  ...
doi:10.1093/rfs/hhj022 fatcat:vfiiisrqkfbpbpc6n27ok6ms3i

Maximum-likelihood determination of anomalous substructures

Randy J. Read, Airlie J. McCoy
2018 Acta Crystallographica Section D: Structural Biology  
This method is based on the maximum-likelihood SAD phasing function, which accounts for measurement errors and for correlations between the observed and calculated Bijvoet mates.  ...  A fast Fourier transform (FFT) method is described for determining the substructure of anomalously scattering atoms in macromolecular crystals that allows successful structure determination by X-ray single-wavelength  ...  Terwilliger (1994) showed that a Bayesian analysis of the MAD data, applying prior probabilities to the F_A estimates based on the expected scattering, improved estimates of the F_A in the presence of  ...
doi:10.1107/s2059798317013468 pmid:29533235 pmcid:PMC5947773 fatcat:rgjwg7mfvvgrxpf6twi5tq52ki

Bayesian Inference for Discretely Sampled Markov Processes with Closed-Form Likelihood Expansions

O. Stramer, M. Bognar, P. Schneider
2010 Journal of Financial Econometrics  
Our approach is based on the closed-form (CF) likelihood approximations of Aït-Sahalia.  ...  The CF likelihood approximation does not integrate to one; it is very close to one when near the MLE, but can markedly  ...  The efficacy of our approach is demonstrated in a simulation study of the Cox-Ingersoll-Ross (CIR) and Heston models, and is applied to two well-known real-world datasets.  ...  We perform three Bayesian analyses using the exact likelihood, the Euler likelihood, and the normalized closed-form (CF) likelihood.  ...
doi:10.1093/jjfinec/nbp027 fatcat:jss22acczvhjfaemyxk6yld6zy
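The abstract compares exact, Euler, and closed-form (CF) likelihoods for the CIR model. As a small sketch of the simplest of the three, the code below writes the Euler-approximation log-likelihood of a discretely sampled CIR path dX = κ(θ − X) dt + σ√X dW; the parameter values and the simulation helper are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def cir_euler_loglik(x, dt, kappa, theta, sigma):
    """Euler-approximation log-likelihood of a discretely sampled CIR path x.
    Each increment is treated as Gaussian with mean kappa*(theta - x_t)*dt
    and variance sigma^2 * x_t * dt (the Euler scheme's conditional moments)."""
    x_t, x_next = x[:-1], x[1:]
    mean = x_t + kappa * (theta - x_t) * dt
    var = sigma**2 * x_t * dt
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x_next - mean) ** 2 / var)

# Simulate a short CIR path with the same Euler scheme (illustrative parameters).
rng = np.random.default_rng(2)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.2, 1.0 / 252, 2000
x = np.empty(n)
x[0] = theta
for i in range(n - 1):
    # abs() keeps the simulated path positive despite discretization noise.
    x[i + 1] = abs(x[i] + kappa * (theta - x[i]) * dt
                   + sigma * np.sqrt(x[i] * dt) * rng.standard_normal())

print(cir_euler_loglik(x, dt, kappa, theta, sigma))
```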

Using a likelihood perspective to sharpen econometric discourse: Three examples

Christopher A. Sims
2000 Journal of Econometrics  
Two of the applied areas are related and have in common that they involve nonstationarity: macroeconomic time series modeling, and analysis of panel data in the presence of potential nonstationarity.  ...  The conclusion is that in these areas a likelihood perspective leads to more useful, honest and objective reporting of results and characterization of uncertainty.  ...  Therefore maximum likelihood estimation based on the distribution of the differenced data is consistent under these assumptions.  ... 
doi:10.1016/s0304-4076(99)00046-9 fatcat:42rjcv6i3vgutarknpus6ethrq
Showing results 1 — 15 out of 2,424 results