217,140 Hits in 5.4 sec

Selectivity Estimation with Deep Likelihood Models [article]

Zongheng Yang, Eric Liang, Amog Kamsetty, Chenggang Wu, Yan Duan, Xi Chen, Pieter Abbeel, Joseph M. Hellerstein, Sanjay Krishnan, Ion Stoica
2019 arXiv   pre-print
To capture the rich multivariate distributions of relational tables, we propose the use of a new type of high-capacity statistical model: deep likelihood models.  ...  To make a truly usable estimator, we develop a Monte Carlo integration scheme on top of likelihood models that can efficiently handle range queries with dozens of filters or more.  ...  In this paper, we show how selectivity estimation can be done with high accuracy by using deep likelihood models.  ... 
arXiv:1905.04278v1 fatcat:ls6n36rjyrge3jqhs4jofwvnrq
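The Monte Carlo integration scheme this abstract describes can be illustrated with a minimal sketch. A toy 2-D Gaussian density stands in for the learned deep likelihood model, and a range query's selectivity is estimated by sampling uniformly inside the query box; all names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned likelihood model: a 2-D Gaussian density.
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
cov_inv = np.linalg.inv(cov)
norm_const = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))

def density(x):
    """Evaluate the model density at each row of x."""
    d = x - mean
    quad = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return norm_const * np.exp(-0.5 * quad)

def selectivity(lo, hi, n=100_000):
    """Estimate P(lo <= X <= hi) by Monte Carlo integration:
    sample uniformly in the query box, average the density,
    and multiply by the box volume."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    samples = rng.uniform(lo, hi, size=(n, lo.size))
    return np.prod(hi - lo) * density(samples).mean()

est = selectivity([-1.0, -1.0], [1.0, 1.0])  # fraction of rows in the box
```

The same estimator applies unchanged to any density the model can evaluate pointwise, which is what makes range predicates with many filters tractable.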

Self-Paced Deep Regression Forests for Facial Age Estimation [article]

Shijie Ai, Lili Pan, Yazhou Ren
2020 arXiv   pre-print
To this end, we propose self-paced deep regression forests (SP-DRFs) -- a gradual-learning DNN framework for age estimation.  ...  Facial age estimation is an important and challenging problem in computer vision.  ...  In the last pace, if capped likelihoods are not involved, all the samples are selected to retrain DRFs, and we observe that the worst cases are face images with obvious occlusions (e.g. whiskers).  ...
arXiv:1910.03244v5 fatcat:vij5b6t4wbdtbng53sg5i6epba

Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [article]

Alexander Immer, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, Mohammad Emtiyaz Khan
2021 arXiv   pre-print
Marginal-likelihood-based model selection, even though promising, is rarely used in deep learning due to estimation difficulties.  ...  In this work, we present a scalable marginal-likelihood estimation method to select both hyperparameters and network architectures, based on the training data alone.  ...  Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning (Appendix) A. Derivations and additional details A.1.  ...
arXiv:2104.04975v3 fatcat:ubxs5ffo6vbkxfn6fevs3exvdi
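The closed-form evidence of a Bayesian linear model gives a small, concrete picture of marginal-likelihood-based selection from training data alone (the deep-learning setting in the paper requires scalable approximations; the synthetic data, prior, and hyperparameter grid below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data (assumed for this sketch).
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

def log_marginal_likelihood(X, y, alpha, sigma2):
    """log p(y | X) for Bayesian linear regression with prior
    w ~ N(0, alpha^-1 I) and noise variance sigma2; marginally
    y ~ N(0, sigma2 I + X X^T / alpha)."""
    n = len(y)
    K = sigma2 * np.eye(n) + X @ X.T / alpha
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(K, y))

# Model selection: pick the prior precision with the highest evidence.
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
best = max(alphas, key=lambda a: log_marginal_likelihood(X, y, a, 0.01))
```

The evidence trades data fit against an Occam penalty, so the winning prior precision matches the actual scale of the weights rather than the tightest possible fit.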

A Bayesian Perspective on Training Speed and Model Selection [article]

Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk
2020 arXiv   pre-print
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.  ...  This provides two major insights: first, that a measure of a model's training speed can be used to estimate its marginal likelihood.  ...  Instead of computing the full posterior, it is common to select the model with the highest marginal likelihood.  ... 
arXiv:2010.14499v1 fatcat:san6haektjedvfqv6vjqh4zfvq

Deep Learning Models for Predicting Wildfires from Historical Remote-Sensing Data [article]

Fantine Huot, R. Lily Hu, Matthias Ihme, Qing Wang, John Burge, Tianjian Lu, Jason Hickey, Yi-Fan Chen, John Anderson
2021 arXiv   pre-print
Results are compared and analyzed for four different deep learning models to estimate wildfire likelihood.  ...  The results demonstrate that deep learning models can successfully identify areas of high fire likelihood using aggregated data about vegetation, weather, and topography with an AUC of 83%.  ...  The segmentation results with daily fire labels (Figure 4a) demonstrate that deep learning models show real potential for estimating the fire likelihood.  ...
arXiv:2010.07445v3 fatcat:zu2l57iuwffdti4lnocwevm5qa

Self-Paced Deep Regression Forests with Consideration on Ranking Fairness [article]

Lili Pan, Mingming Meng, Yazhou Ren, Yali Zheng, Zenglin Xu
2022 arXiv   pre-print
Deep discriminative models (DDMs), such as deep regression forests and deep neural decision forests, have been extensively studied recently for problems like facial age estimation, head pose estimation  ...  To this end, this paper proposes a new self-paced paradigm for deep discriminative models, which distinguishes noisy and underrepresented examples according to the output likelihood and entropy associated  ...  That is, a sample with a high likelihood value or entropy may be selected.  ...
arXiv:2112.06455v6 fatcat:43r7b6kifbadzjevbkrlmeid5m

Last Layer Marginal Likelihood for Invariance Learning [article]

Pola Schwöbel, Martin Jørgensen, Sebastian W. Ober, Mark van der Wilk
2022 arXiv   pre-print
The Bayesian paradigm for model selection provides a path towards end-to-end learning of invariances using only the training data, by optimising the marginal likelihood.  ...  Data augmentation is often used to incorporate inductive biases into models. Traditionally, these are hand-crafted and tuned with cross validation.  ...  Deep Kernel Learning (DKL; Hinton and Salakhutdinov, 2007; Calandra et al., 2016; Bradshaw et al., 2017) replaces the last layer of a neural network with a GP, where marginal likelihood estimation is  ... 
arXiv:2106.07512v2 fatcat:5ojyoa62kra6jlw4hoaqnwqema

Power of deep, all-exon resequencing for discovery of human trait genes

G. V. Kryukov, A. Shpunt, J. A. Stamatoyannopoulos, S. R. Sunyaev
2009 Proceedings of the National Academy of Sciences of the United States of America  
To estimate parameters of the demographic model, we computed a likelihood function for the observed site-frequency spectrum of synonymous and noncoding SNPs by using a diffusion approximation of the Wright-Fisher model.  ...  We modeled a distribution of selection coefficients by a gamma distribution with parameters estimated by maximum likelihood.  ...
doi:10.1073/pnas.0812824106 pmid:19202052 pmcid:PMC2656172 fatcat:qykj6jjzr5hphfops6vzpegw2i

phydms: Software for phylogenetic analyses informed by deep mutational scanning [article]

Sarah K Hilton, Michael B Doud, Jesse D Bloom
2017 bioRxiv   pre-print
We describe software that efficiently performs phylogenetic analyses with substitution models informed by deep mutational scanning.  ...  It can be used to compare the results of deep mutational scanning experiments to the selection on genes in nature.  ...  Tamuri, A.U., Goldman, N., dos Reis, M.: A penalized likelihood method for estimating the distribution of selection coefficients from phylogenetic data.  ... 
doi:10.1101/121830 fatcat:d3dtrk2r5fbzrnwsknaey2wgcq

Neural Networks for Parameter Estimation in Intractable Models [article]

Amanda Lenzi, Julie Bessac, Johann Rudi, Michael L. Stein
2021 arXiv   pre-print
We propose to use deep learning to estimate parameters in statistical models when standard likelihood estimation methods are computationally infeasible.  ...  We use data from model simulations as input and train deep neural networks to learn statistical parameters.  ...  We show that the deep NN can estimate parameters of max-stable models for spatial extremes with higher accuracy than traditional (approximate) likelihood approaches can, with a considerable speed-up in  ... 
arXiv:2107.14346v1 fatcat:3ksmufxckrdyriqojcnpcih4qy

Deep Sub-Ensembles for Fast Uncertainty Estimation in Image Classification [article]

Matias Valdenegro-Toro
2019 arXiv   pre-print
Fast estimates of model uncertainty are required for many robust robotics applications.  ...  With ResNet-20 on the CIFAR10 dataset, we obtain a 1.5-2.5× speedup over a Deep Ensemble, with a small increase in error and NLL, and similarly up to a 5-15× speedup with a VGG-like network on the SVHN dataset  ...  We will also extend this evaluation to regression and estimate computation times on CPU and GPU. Figure 1: Conceptual comparison of Deep Ensembles and Deep Sub-Ensembles with n ensemble members.  ...
arXiv:1910.08168v2 fatcat:il74o3n2lzhxjoxnugr2g75wia
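The sub-ensemble idea (share one expensive trunk computation, ensemble only cheap last-layer heads) can be sketched in plain NumPy; the random weights below stand in for trained ones, and all sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical tiny network: one shared "trunk" feature extractor and
# several independently initialized "head" classifiers.
trunk_W = rng.normal(size=(8, 16))                    # input dim 8 -> features 16
heads = [rng.normal(size=(16, 3)) for _ in range(4)]  # 4 heads, 3 classes

def sub_ensemble_predict(x):
    """Run the trunk once, then average the heads' softmax outputs.
    A full deep ensemble would rerun the entire network per member;
    the sub-ensemble shares the expensive trunk forward pass."""
    features = np.tanh(x @ trunk_W)  # single shared forward pass
    probs = [softmax(features @ H) for H in heads]
    return np.mean(probs, axis=0)

x = rng.normal(size=(5, 8))
p = sub_ensemble_predict(x)
```

Disagreement among the heads' averaged predictions is what supplies the uncertainty estimate, at a fraction of a full ensemble's cost.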

A Survey on Uncertainty Toolkits for Deep Learning [article]

Maximilian Pintz, Joachim Sicking, Maximilian Poretschkin, Maram Akila
2022 arXiv   pre-print
To this end, we present the first survey on toolkits for uncertainty estimation (UE) in DL, as UE forms a cornerstone in assessing model reliability.  ...  We investigate 11 toolkits with respect to modeling and evaluation capabilities, providing an in-depth comparison for the three most promising ones, namely Pyro, TensorFlow Probability, and Uncertainty  ...  VI-BNN or deep ensembles) have pre-written training procedures or even architectures, which limits the ability to extend a given deep learning model with uncertainty estimates.  ...
arXiv:2205.01040v1 fatcat:7qqykwuwfbci7bi7ehsacdr5my

Global, Parameterwise and Joint Shrinkage Factor Estimation

Daniela Dunkler, Willi Sauerbrei, Georg Heinze
2016 Journal of Statistical Software  
Various types of shrinkage factors can also be estimated after a maximum likelihood fit has been obtained: while global shrinkage modifies all regression coefficients by the same factor, parameterwise  ...  The latter ones have been proposed especially in the context of variable selection.  ...  We are grateful to Drs Eichinger and Kyrle, Medical University of Vienna, for providing the data of the deep vein thrombosis study.  ... 
doi:10.18637/jss.v069.i08 fatcat:2ugzpkfvzvf4hekvis2hjhg4zm

Modeling Gene Expression Evolution with an Extended Ornstein–Uhlenbeck Process Accounting for Within-Species Variation

Rori V. Rohlfs, Patrick Harrigan, Rasmus Nielsen
2013 Molecular biology and evolution  
Ornstein-Uhlenbeck (OU) processes have been proposed to model gene expression evolution as they model both random drift and stabilizing selection and can be extended to model changes in selection regimes  ...  Through simulations, we explore the reliability of parameter estimates and the extent to which different selective regimes can be distinguished using phylogenies of varying size using both the typical  ...  FIG. 1. The species-mean model log-likelihood function, for data simulated under the nonevolutionary species-variance model with within-species variance 5, is computed with the estimated drift variance  ...
doi:10.1093/molbev/mst190 pmid:24113538 pmcid:PMC3879452 fatcat:j3wboruznvcxlkbfqvpb4sjlce
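An OU process of the kind used here, combining random drift with stabilizing selection toward an optimum, can be simulated with its exact transition density; the parameter values below are arbitrary illustrations, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ou(theta, mu, sigma, x0, dt, n_steps):
    """Exact discretization of dX = theta*(mu - X) dt + sigma dW:
    X_{t+dt} | X_t ~ N(mu + (X_t - mu) e^{-theta dt},
                       sigma^2 (1 - e^{-2 theta dt}) / (2 theta))."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    a = np.exp(-theta * dt)                       # mean-reversion factor
    sd = sigma * np.sqrt((1 - a**2) / (2 * theta))  # one-step noise scale
    for t in range(n_steps):
        x[t + 1] = mu + (x[t] - mu) * a + sd * rng.normal()
    return x

# Expression level pulled toward optimum mu=5 under selection strength theta=2.
path = simulate_ou(theta=2.0, mu=5.0, sigma=1.0, x0=0.0, dt=0.01, n_steps=5000)
```

After mixing, the path fluctuates around the optimum with stationary variance sigma^2 / (2 theta), which is what distinguishes stabilizing selection from pure Brownian drift.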

Long prereproductive selection and divergence by depth in a Caribbean candelabrum coral

C. Prada, M. E. Hellberg
2013 Proceedings of the National Academy of Sciences of the United States of America  
fitting; Δi, difference in AIC score with respect to the best model; model likelihoods, relative likelihood of the model given the data; wi, model probabilities; evidence ratio, fold difference in model  ...  Fig. 3. Migration rate estimates between Shallow and Deep obtained by fitting the IMa model to all four loci. Estimates are scaled by the neutral mutation rate.  ...
doi:10.1073/pnas.1208931110 pmid:23359716 pmcid:PMC3593850 fatcat:6u6ttqfthrcwfjofjabddyrhmy
Showing results 1 — 15 out of 217,140 results