Depth Uncertainty in Neural Networks [article]

Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato
2020 arXiv pre-print
Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited. To solve this, we perform probabilistic reasoning over the depth of neural networks. Different depths correspond to subnetworks which share weights and whose predictions are combined via marginalisation, yielding model uncertainty. By exploiting the sequential structure of feed-forward networks, we are able to both evaluate our training objective and make predictions with a single forward pass. We validate our approach on real-world regression and image classification tasks. Our approach provides uncertainty calibration, robustness to dataset shift, and accuracies competitive with more computationally expensive baselines.
arXiv:2006.08437v3
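A minimal sketch of the idea described in the abstract, not the authors' reference implementation: attach an output head to the activation at every candidate depth, learn a categorical distribution over depth, and marginalise the per-depth predictions. Because each block's output feeds the next block, the predictions at all depths come out of a single forward pass. Class and parameter names (DepthUncertaintyNet, depth_logits, etc.) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthUncertaintyNet(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int, max_depth: int):
        super().__init__()
        self.input_layer = nn.Linear(in_dim, hidden_dim)
        # One hidden block per candidate depth; a subnetwork of depth d reuses
        # the first d blocks, so all subnetworks share weights.
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
            for _ in range(max_depth)
        )
        # A single output head applied to the activation at every depth.
        self.head = nn.Linear(hidden_dim, out_dim)
        # Parameters of the (variational) categorical distribution over depth.
        self.depth_logits = nn.Parameter(torch.zeros(max_depth + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Return per-depth logits with shape (depths, batch, out_dim)."""
        h = F.relu(self.input_layer(x))
        per_depth = [self.head(h)]      # depth 0: input layer only
        for block in self.blocks:
            h = block(h)                # reusing h means one pass covers all depths
            per_depth.append(self.head(h))
        return torch.stack(per_depth)

    def predict(self, x: torch.Tensor) -> torch.Tensor:
        """Marginalise class probabilities over the learned depth distribution."""
        per_depth_logits = self.forward(x)                 # (D, B, C)
        q_depth = F.softmax(self.depth_logits, dim=0)      # (D,)
        probs = F.softmax(per_depth_logits, dim=-1)        # (D, B, C)
        return torch.einsum("d,dbc->bc", q_depth, probs)   # (B, C)
```

Under these assumptions, the training objective can likewise be formed from the same single forward pass by weighting each depth's per-example likelihood with the corresponding depth probability, which is what makes the approach cheap relative to multi-pass uncertainty baselines.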