
On Maximum-a-Posteriori estimation with Plug & Play priors and stochastic gradient descent [article]

Rémi Laumont and Valentin de Bortoli and Andrés Almansa and Julie Delon and Alain Durmus and Marcelo Pereyra
2022 arXiv   pre-print
This paper studies maximum-a-posteriori (MAP) estimation for Bayesian models with PnP priors.  ...  We first consider questions related to existence, stability and well-posedness, and then present a convergence proof for MAP computation by PnP stochastic gradient descent (PnP-SGD) under realistic assumptions  ...  Computer experiments for this work ran on a Titan Xp GPU donated by NVIDIA, as well as on HPC resources from GENCI-IDRIS (Grant 2020-AD011011641).  ... 
arXiv:2201.06133v1 fatcat:jrbpall6m5enzpjcrbpxdreo3u
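To make the PnP-SGD idea above concrete, here is a minimal sketch, assuming a quadratic data-fidelity term and using the denoiser residual (x - D(x)) / eps as a surrogate for the prior's gradient; the smoothing "denoiser", step sizes, and injected noise below are illustrative stand-ins, not the authors' actual setup.

    import numpy as np

    def pnp_sgd(y, A, denoiser, sigma=1.0, eps=0.1, step=1e-3, n_iter=500, rng=None):
        """MAP estimation for x ~ argmin ||Ax - y||^2 / (2 sigma^2) + prior,
        with the prior gradient approximated by (x - D(x)) / eps."""
        rng = np.random.default_rng() if rng is None else rng
        x = A.T @ y                                        # crude initialisation
        for _ in range(n_iter):
            grad_f = A.T @ (A @ x - y) / sigma**2          # data-fidelity gradient
            grad_g = (x - denoiser(x)) / eps               # PnP prior gradient surrogate
            noise = rng.standard_normal(x.shape)           # stochastic perturbation
            x = x - step * (grad_f + grad_g + 0.01 * noise)
        return x

    # Toy usage with a box-filter "denoiser" on a 1D signal.
    A = np.eye(64) * 0.8
    y = A @ np.sin(np.linspace(0, 4, 64)) + 0.05 * np.random.randn(64)
    smooth = lambda x: np.convolve(x, np.ones(5) / 5, mode="same")
    x_map = pnp_sgd(y, A, smooth)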

Solving Inverse Problems by Joint Posterior Maximization with Autoencoding Prior [article]

Mario González, Andrés Almansa, Pauline Tan
2022 arXiv   pre-print
to the use of a stochastic encoder to accelerate computations.  ...  Finally we show how our joint MAP methodology relates to more common MAP approaches, and we propose a continuation scheme that makes use of our JPMAP algorithm to provide more robust MAP estimates.  ...  We would like to sincerely thank Mauricio Delbracio, José Lezama and Pablo Musé for their help, their insightful comments, and their continuous support throughout this project.  ... 
arXiv:2103.01648v4 fatcat:imk753t6mfhjfbjmpmazk5d2wi
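A hedged sketch of the joint-MAP idea: alternate a closed-form quadratic update in x with an approximate latent update z = encode(x). The linear encoder/decoder pair below is a hypothetical stand-in for the trained autoencoder used in the paper, and the coupling weight lam is invented for illustration.

    import numpy as np

    def jpmap(y, A, encode, decode, sigma=0.1, lam=1.0, n_iter=20):
        """Alternate a quadratic x-update with an (approximate) z-update
        z = encode(x), jointly maximizing a posterior over (x, z)."""
        n = A.shape[1]
        x = A.T @ y
        H = A.T @ A / sigma**2 + lam * np.eye(n)    # fixed quadratic Hessian
        for _ in range(n_iter):
            z = encode(x)                           # approximate argmin over z
            x = np.linalg.solve(H, A.T @ y / sigma**2 + lam * decode(z))
        return x, z

    # Toy stand-in for a trained autoencoder: project onto 4 random directions.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 32)) / np.sqrt(32)
    encode, decode = (lambda x: W @ x), (lambda z: W.T @ z)
    A = np.eye(32)
    y = decode(rng.standard_normal(4)) + 0.05 * rng.standard_normal(32)
    x_hat, z_hat = jpmap(y, A, encode, decode)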

Muesli: Combining Improvements in Policy Optimization [article]

Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, Hado van Hasselt
2022 arXiv   pre-print
Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines.  ...  The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.  ...  Also we thank Dan Horgan, Alaa Saade, Nat McAleese and Charlie Beattie for their excellent help with reinforcement learning environments.  ... 
arXiv:2104.06159v2 fatcat:4jafvxdd55f4tdj2vgt647gsxe

Solving Inverse Problems with Hybrid Deep Image Priors: the challenge of preventing overfitting [article]

Zhaodong Sun
2021 arXiv   pre-print
We also study the relation between the dynamics of gradient descent and the overfitting phenomenon. The numerical results show that the hybrid priors play an important role in preventing overfitting.  ...  The hybrid priors combine DIP with an explicit prior such as total variation or with an implicit prior such as a denoising algorithm.  ...  maximum a posteriori  ... 
arXiv:2011.01748v2 fatcat:o23wwxmed5genl5gdkpd6k5qaa
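A minimal sketch of a hybrid DIP prior of the explicit kind: fit a generator network to the noisy observation while penalizing the total variation of its output. The tiny network, regularization strength, and iteration budget below are placeholders, not the paper's configuration.

    import torch

    # Tiny stand-in for a DIP generator; the real DIP uses a deep CNN.
    net = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                              torch.nn.Conv2d(16, 1, 3, padding=1))

    def tv(u):  # anisotropic total variation of an image batch (N, C, H, W)
        return (u[..., 1:, :] - u[..., :-1, :]).abs().sum() + \
               (u[..., :, 1:] - u[..., :, :-1]).abs().sum()

    z = torch.randn(1, 1, 32, 32)                  # fixed random input code
    y = torch.rand(1, 1, 32, 32)                   # noisy observation (toy)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for it in range(200):                          # early stopping also curbs overfitting
        opt.zero_grad()
        x = net(z)
        loss = ((x - y) ** 2).sum() + 0.05 * tv(x)  # fidelity + explicit TV prior
        loss.backward()
        opt.step()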

An Introduction to Variational Autoencoders

Diederik P. Kingma, Max Welling
2019 Foundations and Trends® in Machine Learning  
Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models.  ...  In this work, we provide an introduction to variational autoencoders and some important extensions.  ...  Acknowledgements We are grateful for the help of Tim Salimans, Alec Radford, Rif A. Saurous and others who have given us valuable feedback at various stages of writing.  ... 
doi:10.1561/2200000056 fatcat:t3x7k3dt65a5rlviyiixdnj3yi
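For readers who want the core computation, here is a minimal single-sample ELBO sketch with a diagonal-Gaussian encoder and the reparameterization trick; the linear encoder/decoder stand-ins and dimensions are assumptions for illustration.

    import torch

    def elbo(x, enc, dec):
        """Single-sample Monte Carlo ELBO: Bernoulli decoder likelihood minus
        KL(q(z|x) || N(0, I)), with z = mu + sigma * eps (reparameterization)."""
        mu, logvar = enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        rec = -torch.nn.functional.binary_cross_entropy_with_logits(
            dec(z), x, reduction="sum")                        # E_q[log p(x|z)]
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec - kl

    # Toy usage with linear stand-ins (latent dim 8, data dim 784).
    enc = torch.nn.Linear(784, 16)   # outputs [mu, logvar]
    dec = torch.nn.Linear(8, 784)
    x = torch.rand(32, 784)
    loss = -elbo(x, enc, dec)        # maximize ELBO = minimize its negative
    loss.backward()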

Review Article: Model Meets Deep Learning in Image Inverse Problems

Na Wang & Jian Sun
2020 CSIAM Transactions on Applied Mathematics  
In this paper, we review a new trend of methods for image inverse problems that combine the imaging/degradation model with a deep learning approach.  ...  But these methods require a well-designed image prior or regularizer, which is hard to hand-craft.  ...  Acknowledgments This work was supported by NSFC (11971373, 11690011, U1811461, 61721002) and National Key R&D Program 2018AAA0102201.  ... 
doi:10.4208/csiam-am.2020-0016 fatcat:peaina2vorg23ow5seswsi7pzu

Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

Johannes Burge, Priyank Jaini, Konrad P. Kording
2017 PLoS Computational Biology  
stochastic gradient descent (AMA-SGD) routine for filter learning.  ...  Here, we first contribute two technical advances that significantly reduce AMA's compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a  ...  Geisler for suggesting the stochastic gradient descent approach. Author Contributions Conceived and designed the experiments: JB. Performed the experiments: JB PJ. Analyzed the data: JB PJ.  ... 
doi:10.1371/journal.pcbi.1005281 pmid:28178266 pmcid:PMC5298250 fatcat:5uw53jdxdjek5j62zo6pst5da4
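The AMA cost itself is not reproduced here, but the minibatch SGD structure the authors exploit for filter learning looks roughly as follows; the squared-error cost, learning rate, and toy data are placeholders for AMA's actual objective and stimuli.

    import numpy as np

    def sgd_filter_learning(stimuli, labels, dim, lr=1e-2, epochs=50, batch=16, rng=None):
        """Learn a single linear filter by minibatch stochastic gradient descent
        on a squared-error cost (AMA's cost differs; this shows the SGD loop)."""
        rng = np.random.default_rng() if rng is None else rng
        f = rng.standard_normal(dim) * 0.01
        n = len(stimuli)
        for _ in range(epochs):
            idx = rng.permutation(n)
            for start in range(0, n, batch):
                b = idx[start:start + batch]
                r = stimuli[b] @ f                           # filter responses
                f -= lr * stimuli[b].T @ (r - labels[b]) / len(b)  # SGD step
        return f

    rng = np.random.default_rng(1)
    X = rng.standard_normal((256, 20))
    y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(256)
    f_hat = sgd_filter_learning(X, y, dim=20, rng=rng)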

Joint learning of variational representations and solvers for inverse problems with partially-observed data [article]

Ronan Fablet, Lucas Drumetz, Francois Rousseau
2020 arXiv   pre-print
Recently, learning-based strategies have proven very efficient for solving inverse problems, by learning direct inversion schemes or plug-and-play regularizers from available pairs of true states  ...  The variational cost and the gradient-based solver are both stated as neural networks, with automatic differentiation used for the latter.  ...  Acknowledgements This work was supported by CNES (grant OSTST-MANATEE), Microsoft (AI EU Ocean awards) and ANR Projects Melody and OceaniX, and exploited HPC resources from GENCI-IDRIS (Grant 2020-101030
arXiv:2006.03653v1 fatcat:cbuu5ca5brfwtbw5a3s3qjxlne
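A hedged sketch of the two ingredients named in the abstract, a variational cost stated as a neural network and a gradient-based solver whose descent direction comes from automatic differentiation; the MLP cost, step count, and learning rate are invented, and end-to-end training of the unrolled solver is omitted.

    import torch

    # Stand-in learned variational cost U_phi(x; y): a small MLP scores the pair.
    cost_net = torch.nn.Sequential(torch.nn.Linear(2 * 16, 32), torch.nn.Tanh(),
                                   torch.nn.Linear(32, 1))

    def solve(y, n_steps=50, lr=0.1):
        """Gradient-based solver: descend the learned cost in x, obtaining the
        gradient by autodiff. For joint end-to-end training one would keep the
        graph (create_graph=True) instead of detaching each step."""
        x = torch.zeros_like(y, requires_grad=True)
        for _ in range(n_steps):
            u = cost_net(torch.cat([x, y], dim=-1)).sum()
            g, = torch.autograd.grad(u, x)                 # autodiff gradient
            x = (x - lr * g).detach().requires_grad_(True)  # plain GD step
        return x

    y = torch.randn(4, 16)
    x_hat = solve(y)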

Semantics, Representations and Grammars for Deep Learning [article]

David Balduzzi
2015 arXiv   pre-print
The backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient computation distributed over a neural network.  ...  protocols equipped with first-order convergence guarantees).  ...  I am grateful to Marcus Frean, JP Lewis and Brian McWilliams for useful comments and discussions.  ... 
arXiv:1509.08627v1 fatcat:u6tbkcdsafcbhbxlxvyvgtltt4
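A minimal illustration of backpropagation as a gradient computation distributed over a network: each module applies its local Jacobian to the error message passed back to it. The two-layer toy network is an assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((1, 8))
    x, target = rng.standard_normal(4), np.array([1.0])

    h = np.tanh(W1 @ x)                   # forward through module 1
    y = W2 @ h                            # forward through module 2
    loss = 0.5 * np.sum((y - target) ** 2)

    d_y = y - target                      # dloss/dy at the output
    d_W2 = np.outer(d_y, h)               # local gradient for module 2
    d_h = W2.T @ d_y                      # message passed back to module 1
    d_W1 = np.outer(d_h * (1 - h**2), x)  # chain rule through tanh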

Grammars for Games: A Gradient-Based, Game-Theoretic Framework for Optimization in Deep Learning

David Balduzzi
2016 Frontiers in Robotics and AI  
The backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient computation distributed over a neural network.  ...  protocols equipped with first-order convergence guarantees).  ...  Lewis, and Brian McWilliams for useful comments and discussions.  ... 
doi:10.3389/frobt.2015.00039 fatcat:re4eibywmbb7xkcxti47coi5te

prDeep: Robust Phase Retrieval with a Flexible Deep Network [article]

Christopher A. Metzler, Philip Schniter, Ashok Veeraraghavan, Richard G. Baraniuk
2018 arXiv   pre-print
Progress has been made recently on more robust algorithms using signal priors, but at the expense of limiting the range of supported measurement models (e.g., to Gaussian or coded diffraction patterns)  ...  We test and validate prDeep in simulation to demonstrate that it is robust to noise and can handle a variety of system models.  ...  Acknowledgements Phil Schniter was supported by NSF grants CCF-1527162 and CCF-1716388.  ... 
arXiv:1803.00212v2 fatcat:h6lshb7lvncafeoscgwgth7xqu
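A toy real-valued analogue of regularization-by-denoising for amplitude measurements y ~ |Ax|; prDeep itself uses a trained DnCNN denoiser and a different solver, so treat the smoothing "denoiser", step size, and gradient scheme below as illustrative only.

    import numpy as np

    def red_phase_retrieval(y, A, denoiser, lam=0.5, step=1e-2, n_iter=300):
        """Gradient scheme for the amplitude loss 0.5 || |Ax| - y ||^2 plus a
        RED-style regularizer whose gradient is lam * (x - D(x))."""
        x = A.T @ y
        for _ in range(n_iter):
            Ax = A @ x
            grad_f = A.T @ ((np.abs(Ax) - y) * np.sign(Ax))  # amplitude-loss gradient
            grad_r = lam * (x - denoiser(x))                 # RED gradient
            x = x - step * (grad_f + grad_r)
        return x

    rng = np.random.default_rng(2)
    A = rng.standard_normal((128, 32)) / np.sqrt(32)
    x_true = rng.standard_normal(32)
    y = np.abs(A @ x_true)                                   # phaseless measurements
    smooth = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")
    x_hat = red_phase_retrieval(y, A, smooth)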

Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms [article]

Matthew Holden, Marcelo Pereyra, Konstantinos C. Zygalakis
2021 arXiv   pre-print
Following the manifold hypothesis and adopting a generative modelling approach, we construct a data-driven prior that is supported on a sub-manifold of the ambient space, which we can learn from the training  ...  In addition to point estimators and uncertainty quantification analyses, we derive a model misspecification test to automatically detect situations where the data-driven prior is unreliable, and explain  ...  Acknowledgments The authors are grateful for useful discussions with Andrés Almansa.  ... 
arXiv:2103.10182v1 fatcat:cyleliu7prfotnnf5tlns4ky24
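One hedged reading of a prior supported on a sub-manifold: parameterize x = g(z) with a generative decoder and do MAP estimation in the latent space under a standard-normal prior. The linear decoder, its Jacobian, and all constants below are stand-ins, not the paper's construction.

    import numpy as np

    def latent_map(y, A, decode, jac, sigma=1.0, lr=0.1, n_iter=200, z_dim=4):
        """MAP over the latent z of x = g(z): gradient descent on the data
        fidelity plus the N(0, I) prior on z; jac(z) is the decoder Jacobian."""
        z = np.zeros(z_dim)
        for _ in range(n_iter):
            r = A @ decode(z) - y                                # data residual
            z -= lr * (jac(z).T @ (A.T @ r) / sigma**2 + z)      # fidelity + prior
        return decode(z)

    rng = np.random.default_rng(3)
    G = rng.standard_normal((32, 4)) / np.sqrt(32)               # toy linear "decoder"
    decode, jac = (lambda z: G @ z), (lambda z: G)
    A = np.eye(32)
    y = decode(rng.standard_normal(4)) + 0.05 * rng.standard_normal(32)
    x_map = latent_map(y, A, decode, jac)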

Image Denoising Using Nonlocal Regularized Deep Image Prior

Zhonghua Xie, Lingjun Liu, Zhongliang Luo, Jianfeng Huang
2021 Symmetry  
step and a plug-and-play proximal denoising step.  ...  Specifically, we propose a deep-learning-based method based on the deep image prior (DIP) method, which only requires a noisy image as training data, without any clean data.  ...  Acknowledgments: We gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.  ... 
doi:10.3390/sym13112114 fatcat:bdvu6fpeszb2vhquc7clpggx3i
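A minimal sketch of the splitting structure mentioned above, with a closed-form quadratic update standing in for the paper's DIP-based step and a generic smoother playing the plug-and-play proximal denoiser; rho and the iteration count are illustrative.

    import numpy as np

    def pnp_splitting(y, denoiser, rho=1.0, n_iter=30):
        """Half-quadratic splitting for denoising: alternate a quadratic
        data-fidelity step with a plug-and-play proximal denoising step."""
        x, v = y.copy(), y.copy()
        for _ in range(n_iter):
            x = (y + rho * v) / (1.0 + rho)   # argmin ||x - y||^2 + rho ||x - v||^2
            v = denoiser(x)                   # proximal denoising step (PnP)
        return x

    y = np.sin(np.linspace(0, 6, 100)) + 0.2 * np.random.randn(100)
    box = lambda x: np.convolve(x, np.ones(7) / 7, mode="same")
    x_hat = pnp_splitting(y, box)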

SUD: Supervision by Denoising for Medical Image Segmentation [article]

Sean I. Young, Adrian V. Dalca, Enzo Ferrante, Polina Golland, Bruce Fischl, Juan Eugenio Iglesias
2022 arXiv   pre-print
We validate SUD on three tasks (kidney and tumor segmentation in 3D, brain segmentation in 3D, and cortical parcellation in 2D), demonstrating a significant improvement in the Dice overlap and the Hausdorff distance  ...  Training a fully convolutional network for semantic segmentation typically requires a large, labeled dataset with little label noise if good generalization is to be guaranteed.  ...  ACKNOWLEDGMENTS SI Young thanks F Isensee for clarification on nnU-Net [37] and B Billot, Y Balbastre, M Reuter, K Van Leemput, S Ghosh and S Plis for feedback during various stages of this project.  ... 
arXiv:2202.02952v1 fatcat:zuxk2jcnnnat3jrbqxv5wutftu

Learned reconstruction methods with convergence guarantees [article]

Subhadip Mukherjee, Andreas Hauptmann, Ozan Öktem, Marcelo Pereyra, Carola-Bibiane Schönlieb
2022 arXiv   pre-print
by placing some of the existing empirical practices on a solid mathematical foundation.  ...  In this article, we will specify relevant notions of convergence for data-driven image reconstruction, which will form the basis of a survey of learned methods with mathematically rigorous reconstruction  ...  gradient-based algorithms such as the unadjusted Langevin algorithm (ULA) and stochastic gradient descent (SGD) [59].  ... 
arXiv:2206.05431v3 fatcat:djzhsths5jf5zf33zakkfhjlqi
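Since the survey points to ULA, here is a minimal sketch of the unadjusted Langevin algorithm for a generic potential U; the step size, sample count, and toy Gaussian target are assumptions for illustration.

    import numpy as np

    def ula(grad_U, x0, step=1e-3, n_samples=1000, rng=None):
        """Unadjusted Langevin algorithm: x_{k+1} = x_k - step * grad U(x_k)
        + sqrt(2 * step) * xi_k with xi_k ~ N(0, I); returns the chain."""
        rng = np.random.default_rng() if rng is None else rng
        x, chain = x0.copy(), []
        for _ in range(n_samples):
            x = x - step * grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
            chain.append(x.copy())
        return np.array(chain)

    # Toy usage: sample a standard Gaussian, U(x) = ||x||^2 / 2, grad U(x) = x.
    samples = ula(lambda x: x, np.zeros(2), step=1e-2, n_samples=5000)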