11,009 Hits in 4.8 sec

swyft: Truncated Marginal Neural Ratio Estimation in Python

Benjamin Kurt Miller, Alex Cole, Christoph Weniger, Francesco Nattino, Ou Ku, Meiert W. Grootes
2022 Journal of Open Source Software  
Our package swyft implements a specific, simulation-efficient SBI method called Truncated Marginal Neural Ratio Estimation (TMNRE) (Miller et al., 2021); it estimates the likelihood-to-evidence ratio  ...  Description of software swyft implements Marginal Neural Ratio Estimation (MNRE), a method which trains an amortized likelihood-to-evidence ratio estimator for any marginal posterior of interest. swyft  ... 
doi:10.21105/joss.04205 fatcat:d6zo4jsyvzc7rplw4dhnmeduma
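The MNRE idea summarized in this entry — training a classifier whose logit converges to the log likelihood-to-evidence ratio — can be sketched in a few lines. The sketch below is a hypothetical toy illustration (a 1-D Gaussian simulator, with logistic regression standing in for the neural network); it is not swyft's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    # Toy simulator: x ~ N(theta, 1)
    return theta + rng.normal(size=theta.shape)

# Joint samples (theta, x) vs. "marginal" pairs (theta, x') with x' shuffled.
n = 4000
theta = rng.uniform(-3, 3, size=n)
x = simulate(theta)
x_marg = rng.permutation(x)

# Hand-crafted features for a logistic-regression classifier
# (a stand-in for the neural ratio estimator).
def feats(t, xx):
    return np.stack([t, xx, t * xx, t**2, xx**2, np.ones_like(t)], axis=1)

X = np.vstack([feats(theta, x), feats(theta, x_marg)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Plain full-batch gradient descent on the logistic loss.
w = np.zeros(X.shape[1])
for _ in range(4000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - y) / len(y)

def log_ratio(t, xx):
    # At the optimum the logit approximates log p(x|theta)/p(x) = log r(x|theta).
    return (feats(np.atleast_1d(t), np.atleast_1d(xx)) @ w)[0]
```

After training, `log_ratio` should be larger for pairs where `x` is consistent with `theta` than for mismatched pairs, which is exactly what makes the ratio usable as an (unnormalized) marginal posterior surrogate.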

Estimating the warm dark matter mass from strong lensing images with truncated marginal neural ratio estimation [article]

Noemi Anau Montel, Adam Coogan, Camila Correa, Konstantin Karchev, Christoph Weniger
2022 arXiv   pre-print
In this work we present the first analysis pipeline to combine parametric lensing models and a recently developed neural simulation-based inference technique called truncated marginal neural ratio estimation  ...  Through a proof-of-concept application to simulated data, we show that our approach enables empirically testable inference of the dark matter cutoff mass through marginalization over a large population  ...  For the statistical analysis we employ truncated marginal neural ratio estimation (TMNRE).  ... 
arXiv:2205.09126v1 fatcat:5qmslgktrrdzpjhwqcdq7medqe

Pseudo-marginal Markov Chain Monte Carlo for Nonnegative Matrix Factorization

Junfu Du, Mingjun Zhong
2016 Neural Processing Letters  
A pseudo-marginal Markov chain Monte Carlo (PMCMC) method is proposed for nonnegative matrix factorization (NMF).  ...  Interestingly, when the marginal likelihood is not known, [2] and [1] have proposed to substitute the unknown marginal likelihood by an estimated one to compute the MH acceptance ratio, and it has  ...  -Use the importance sampling to estimate the marginal likelihood p(X|M ) and denote the estimated marginal likelihood by Z M .  ... 
doi:10.1007/s11063-016-9542-x fatcat:e2s7u4u7tzaoxavmdwpopaxkka
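The mechanism quoted in this snippet — substituting an unbiased Monte Carlo estimate of the intractable marginal likelihood into the Metropolis–Hastings acceptance ratio — can be illustrated on a toy latent-variable model (not the paper's NMF setting). All names and the model below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_pdf(v, mu, sig):
    return np.exp(-0.5 * ((v - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

x_obs = 1.5  # single observation; model: z ~ N(theta, 1), x | z ~ N(z, 1)

def marginal_estimate(theta, n_particles=64):
    # Unbiased importance-sampling estimate of p(x|theta) = ∫ N(x; z, 1) N(z; theta, 1) dz
    z = theta + rng.normal(size=n_particles)
    return normal_pdf(x_obs, z, 1.0).mean()

# Pseudo-marginal Metropolis–Hastings: accept with min(1, phat'/phat).
# Crucially, the estimate for the current state is recycled, never refreshed.
theta, phat = 0.0, marginal_estimate(0.0)
samples = []
for _ in range(5000):
    theta_prop = theta + 0.5 * rng.normal()
    phat_prop = marginal_estimate(theta_prop)
    if rng.uniform() < phat_prop / phat:  # flat prior on theta
        theta, phat = theta_prop, phat_prop
    samples.append(theta)

# With a flat prior the chain targets p(theta|x) = N(x_obs, sqrt(2)) exactly,
# despite never evaluating the marginal likelihood in closed form.
post_mean = np.mean(samples[1000:])
```

The remarkable property of the pseudo-marginal construction is that the chain's stationary distribution is the *exact* posterior, not an approximation, as long as the plugged-in estimator is unbiased.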

Parametric manufacturing yield modeling of GaAs/AlGaAs multiple quantum well avalanche photodiodes

Ilgu Yun, G.S. May
1999 IEEE transactions on semiconductor manufacturing  
Since they have demonstrated the capability of highly accurate function approximation and mapping of complex, nonlinear data sets, neural networks are proposed as the preferred tool for generating the  ...  The impact ionization rate ratio is defined as the ratio of the electron to hole ionization rate.  ...  As this figure shows, the marginal distribution of each input parameter is well-matched with the neural network predictions.  ... 
doi:10.1109/66.762882 fatcat:v3kvgcwx6nfbhanpzsqsyd7qvy

Bias-Free Scalable Gaussian Processes via Randomized Truncations [article]

Andres Potapczynski, Luhuan Wu, Dan Biderman, Geoff Pleiss, John P. Cunningham
2021 arXiv   pre-print
We address these issues using randomized truncation estimators that eliminate bias in exchange for increased variance.  ...  This paper analyzes two common techniques: early truncated conjugate gradients (CG) and random Fourier features (RFF).  ...  Unbiased Randomized Truncation We will now briefly introduce Randomized Truncation Estimators, which are the primary tool we use to unbias the CG and RFF log marginal likelihood estimates.  ... 
arXiv:2102.06695v2 fatcat:2h66rr34tfb4vmrcy7uuqdfyvm
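The randomized truncation estimators mentioned in this snippet trade bias for variance: truncate an infinite series at a random depth and reweight each term by its survival probability, so the estimate is unbiased in expectation. A minimal Russian-roulette sketch, illustrated on a simple geometric series (not the paper's CG/RFF log marginal likelihood):

```python
import numpy as np

rng = np.random.default_rng(2)

def term(k):
    # Terms of a series with a known limit: sum_{k>=1} 2^{-k} = 1
    return 0.5 ** k

def rr_estimate(p_stop=0.1):
    # Russian-roulette randomized truncation: stop after each term with
    # probability p_stop, and reweight term k by 1 / P(N >= k) so that the
    # truncated sum is unbiased for the infinite sum.
    total, k, survive = 0.0, 1, 1.0  # survive = P(N >= k)
    while True:
        total += term(k) / survive
        if rng.uniform() < p_stop:
            return total
        survive *= 1.0 - p_stop
        k += 1

est = np.mean([rr_estimate() for _ in range(20000)])
```

The catch, as the paper's title suggests, is the bias/variance exchange: the reweighting `1/P(N >= k)` grows with depth, so the stopping distribution must be matched to how fast the series' terms decay or the variance blows up.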

TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise [article]

Amirmasoud Ghiassi, Taraneh Younesian, Robert Birke, Lydia Y. Chen
2020 arXiv   pre-print
Considering the asymmetric truncated normal noise, the difference is smaller and decreasing with increasing noise ratio. At 60% noise SCL is only marginally better by, on average, 2.9%.  ...  Methods Symmetric Bimodal Asymmetric Truncated Normal Asymmetric and 27.2% for 40%, 50%, and 60% noise ratios, respectively.  ... 
arXiv:2007.06324v1 fatcat:c6w53frvondilgmfuopkk5vnfi

TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise

Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen
2021 2021 IEEE/ACM 8th International Conference on Big Data Computing, Applications and Technologies (BDCAT '21)  
CCS CONCEPTS • Computing methodologies → Machine learning; • Machine learning approaches → Neural networks.  ...  Considering the asymmetric truncated normal noise, the difference is smaller and decreasing with increasing noise ratio. At 60% noise SCL is only marginally better by, on average, 2.9%.  ...  Estimating Noise Transition Matrix Here we briefly describe LABELNET which is a framework that consists of two neural networks: Amateur and Expert.  ... 
doi:10.1145/3492324.3494166 fatcat:m754v5wjibball2gjhcsrus644

Inhibition of Morphogenetic Movement during Xenopus Gastrulation by Injected Sulfatase: Implications for Anteroposterior and Dorsoventral Axis Formation

John B. Wallingford, Amy K. Sater, J. Akif Uzman, Michael V. Danilchik
1997 Developmental Biology  
Injection of hydrolytic sulfatase into the blastocoels of gastrula stage embryos resulted in severe anteroposterior truncation, without a corresponding truncation of the dorsoventral axis.  ...  revealed that gastrulation movements are severely disrupted by sulfatase; in addition, sulfatase dramatically inhibited chordomesodermal cell elongation and convergent extension movements in planar dorsal marginal  ...  Expression of an engrailed-related protein is induced in anterior neural ecto-  ...  The authors thank B. Brown, J. Christian  ... 
doi:10.1006/dbio.1997.8571 pmid:9242419 fatcat:rswmdxatqnbyfg63wbb3hr555a

Mixed Vine Copulas As Joint Models Of Spike Counts And Local Field Potentials

Arno Onken, Stefano Panzeri
2016 Zenodo  
Here we introduce such techniques in a framework based on vine copulas with mixed margins to construct multivariate stochastic models.  ...  We propose efficient methods for likelihood calculation, inference, sampling and mutual information estimation within this framework.  ...  If the number of copula parameters is too big to be estimated in a single joint optimization, then the complexity of the copula model can be reduced by truncating the vine tree of the C-vine (truncated  ... 
doi:10.5281/zenodo.584120 fatcat:i35sgateqvbmbbro7jho72qame

A Probabilistic Framework for Nonlinearities in Stochastic Neural Networks [article]

Qinliang Su, Xuejun Liao, Lawrence Carin
2017 arXiv   pre-print
in neural networks.  ...  We present a probabilistic framework for nonlinearities, based on doubly truncated Gaussian distributions.  ...  A neural unit represented by the proposed framework is named a truncated Gaussian (TruG) unit because the framework is built upon truncated Gaussian distributions.  ... 
arXiv:1709.06123v1 fatcat:sasdksoq5vct3gzjsp7k6iqzvu

Page 6769 of Mathematical Reviews Vol. , Issue 2002I [page]

2002 Mathematical Reviews  
When the survival time and the censoring time are depen- dent, the product-limit estimator is an inconsistent estimator of the marginal survival function.  ...  This leads to the formation of likelihood-ratio tests.  ... 

On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: An application of flexible sampling methods using neural networks

Lennart F. Hoogerheide, Johan F. Kaashoek, Herman K. van Dijk
2007 Journal of Econometrics  
The results indicate the feasibility of the neural network approach.  ...  For this purpose we introduce neural networks which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from.  ...  The RNE is the ratio between the IS estimator's estimated variance and (an estimate of) the variance that an estimator based on direct sampling would have (with the same number of drawings).  ... 
doi:10.1016/j.jeconom.2006.06.009 fatcat:ozousyjz2ze6bfytgggq7ojx7i
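The RNE (relative numerical efficiency) described in this snippet compares the variance of an importance-sampling estimator against what direct sampling would achieve. In one common convention it is computed directly from the normalized importance weights; the densities below are illustrative toys, not the paper's posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: standard normal. Candidate (importance) density: N(0, 2^2),
# a wide density of the kind a neural candidate would mimic.
# We estimate E[theta^2] = 1 under the target.
n = 50000
draws = 2.0 * rng.normal(size=n)

# Log importance weights log p(x) - log q(x), up to an additive constant
# that cancels under self-normalization.
log_w = (-0.5 * draws**2) - (-0.5 * (draws / 2.0) ** 2 - np.log(2.0))
w = np.exp(log_w)
w /= w.sum()

est = np.sum(w * draws**2)

# RNE from the normalized weights: 1 / (n * sum w_i^2).
# It equals 1 for direct (equal-weight) sampling and drops toward 0
# as the weights become uneven, i.e. as the candidate fits poorly.
rne = 1.0 / (n * np.sum(w**2))
```

An RNE far below 1 is the usual diagnostic that the candidate density misses mass of the posterior — precisely the failure mode the paper's neural-network candidates are designed to avoid.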

On the Shape of Posterior Densities and Credible Sets in Instrumental Variable Regression Models with Reduced Rank: An Application of Flexible Sampling Methods using Neural Networks

Lennart F. Hoogerheide, Johan F. Kaashoek, H. K. van Dijk
2005 Social Science Research Network  
The results indicate the feasibility of the neural network approach.  ...  For this purpose we introduce neural networks which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from.  ...  The RNE is the ratio between the IS estimator's estimated variance and (an estimate of) the variance that an estimator based on direct sampling would have (with the same number of drawings).  ... 
doi:10.2139/ssrn.878266 fatcat:k72zlzjanbgrhhrcnvvd6rzm7q

Sample Efficient Actor-Critic with Experience Replay [article]

Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, Nando de Freitas
2017 arXiv   pre-print
To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization  ...  In continuous control, ACER outperforms the A3C and truncated importance sampling baselines by a very significant margin.  ...  ACER uses a single deep neural network to estimate the policy π θ (a t |x t ) and the value function V π θv (x t ).  ... 
arXiv:1611.01224v2 fatcat:75dlcrlranfvxabny2g2wkg6um
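The "truncated importance sampling with bias correction" named in this snippet caps the importance weight at a constant c and adds a correction term sampled from the target policy, so the combined estimator stays unbiased while the per-sample weight stays bounded. A minimal single-step sketch on a toy discrete action space (the policies, values, and c here are made up for illustration, not ACER's actual setup):

```python
import numpy as np

rng = np.random.default_rng(4)

c = 2.0        # truncation threshold for the importance weight
n = 100000

# Toy action space with behavior policy mu and target policy pi.
actions = np.array([0, 1, 2])
mu = np.array([0.5, 0.3, 0.2])
pi = np.array([0.1, 0.3, 0.6])
f = np.array([1.0, 2.0, 3.0])    # arbitrary per-action values

a_mu = rng.choice(actions, size=n, p=mu)   # off-policy samples
a_pi = rng.choice(actions, size=n, p=pi)   # samples for the correction term

# Truncated term: weights rho = pi/mu capped at c, evaluated on mu-samples.
rho = pi[a_mu] / mu[a_mu]
trunc_term = np.minimum(rho, c) * f[a_mu]

# Bias-correction term: weight max(0, (rho - c)/rho), evaluated on pi-samples.
rho_pi = pi[a_pi] / mu[a_pi]
corr_term = np.maximum(0.0, (rho_pi - c) / rho_pi) * f[a_pi]

est = trunc_term.mean() + corr_term.mean()
true_val = np.sum(pi * f)   # exact E_pi[f(a)] = 2.5
```

The identity behind it: min(pi, c*mu) + max(0, pi - c*mu) = pi pointwise, so the two terms together recover the exact target expectation while no single sample carries a weight larger than c.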

Learning Bayes' theorem with a neural network for gravitational-wave inference [article]

Alvin J. K. Chua, Michele Vallisneri
2019 arXiv   pre-print
Our scheme has broad relevance to gravitational-wave applications such as low-latency parameter estimation and characterizing the science returns of future experiments.  ...  We rely on a compact representation of the data based on reduced-order modeling, which we generate efficiently using a separate neural-network waveform interpolant [A. J. K. Chua, C. R. Galley M.  ...  to be a poorer fit than the (truncated) network estimate.  ... 
arXiv:1909.05966v2 fatcat:ftmsfe53ezdcvnzw5nrgue47bi
Showing results 1 — 15 out of 11,009 results