Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks
[article] · 2020 · arXiv pre-print
The Predictive Normalized Maximum Likelihood (pNML) scheme has been recently suggested for universal learning in the individual setting, where both the training and test samples are individual data. ...
In this work we examine the pNML and its associated learnability measure for the Deep Neural Network (DNN) model class. ...
Applications: This section describes the applications of the pNML in deep neural networks. First, we show how the pNML can improve upon the ERM learner in both the log-loss and accuracy sense. ...
arXiv:1904.12286v2
fatcat:jgcm4iruhvdnla3hwj5r2c5omi
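As a concrete illustration of the pNML idea shared by these results (not code from any of the listed papers), here is a minimal sketch for the simplest possible model class, a multinomial over labels: for each candidate test label, a "genie" refits the maximum-likelihood frequencies on the training set plus that label, assigns the label its refit probability, and normalizing over labels yields both the pNML prediction and its log normalizer, the pointwise regret. The function name is hypothetical.

```python
import numpy as np

def pnml_multinomial(train_labels, num_classes):
    """pNML sketch for the multinomial model class: for each candidate
    test label y, a 'genie' refits the ML class frequencies on
    train + {y} and assigns y its refit probability; normalizing
    over y gives the pNML distribution and the regret."""
    n = len(train_labels)
    counts = np.bincount(train_labels, minlength=num_classes)
    genies = (counts + 1) / (n + 1)   # genie probability for each candidate label
    probs = genies / genies.sum()     # pNML prediction
    regret = np.log(genies.sum())     # log normalizer = pointwise learnability
    return probs, regret
```

With three training labels `[0, 0, 1]` and two classes, the genies are 3/4 and 2/4, so the pNML predicts (0.6, 0.4) with regret log(1.25) ≈ 0.223; a larger normalizer signals a harder (less learnable) test point.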
Universal Supervised Learning for Individual Data
[article] · 2018 · arXiv pre-print
In particular, it is demonstrated that the main proposed scheme, termed Predictive Normalized Maximum Likelihood (pNML), is a robust learning solution that outperforms the current leading approach based ...
Furthermore, the pNML construction provides a pointwise indication for the learnability of the specific test challenge with the given training examples ...
Acknowledgments Koby Bibas is acknowledged for discussions and for implementing and analyzing the pNML in various problems, from linear regression to deep neural networks. ...
arXiv:1812.09520v1
fatcat:4aezx32m25dxnio6wcpfvtwiue
Efficient Data-Dependent Learnability
[article] · 2020 · arXiv pre-print
The predictive normalized maximum likelihood (pNML) approach has recently been proposed as the min-max optimal solution to the batch learning problem where both the training set and the test data feature ...
Combining both theoretical analysis and experiments, we show that when applied to neural networks, this approximation can detect out-of-distribution examples effectively. ...
arXiv:2011.10334v1
fatcat:bpwx3dh7tfhp5gmcj6ptiuqajm
Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection
[article] · 2021 · arXiv pre-print
Instead, we utilize the predictive normalized maximum likelihood (pNML) learner, in which no assumptions are made on the tested input. ...
We derive an explicit expression of the pNML and its generalization error, denoted as the regret, for a single layer neural network (NN). ...
Deep neural network adaptation In previous sections, we derived the pNML for a single layer NN. ...
arXiv:2110.09246v1
fatcat:uypwdn7idvftxezpb3zv3dgu3i
A New Look at an Old Problem: A Universal Learning Approach to Linear Regression
[article] · 2019 · arXiv pre-print
The Predictive Normalized Maximum Likelihood (pNML) solution for universal learning of individual data can be expressed analytically in this case, as well as its associated learnability measure. ...
[7], [8], with a different motivation as one of the possible variations of the Normalized Maximum Likelihood (NML) method of [6] for universal prediction. ...
This phenomenon may explain why other over-parameterized models such as deep neural networks are successful for "learnable" data. The paper outline is as follows. ...
arXiv:1905.04708v1
fatcat:li5a5s74lrahdiwupir6tp3jsm
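For the linear-regression setting this entry describes, the pNML normalizer can also be approximated numerically rather than via the papers' closed form. The sketch below (the function name and grid bounds are illustrative assumptions) refits least squares on the training set plus each candidate label, evaluates the refit Gaussian density at that label, and integrates over candidates; the log of the result is the regret.

```python
import numpy as np

def pnml_regret_linreg(X, y, x_test, sigma=1.0, y_grid=None):
    """Numeric sketch of the pNML regret for least-squares regression:
    for each candidate label y_c, refit theta on train + (x_test, y_c),
    evaluate the refit Gaussian density at y_c, and integrate over y_c.
    The log of that normalizer is the regret (learnability measure)."""
    if y_grid is None:
        y_grid = np.linspace(-15.0, 15.0, 3001)
    Xa = np.vstack([X, x_test])
    dens = np.empty_like(y_grid)
    for i, yc in enumerate(y_grid):
        ya = np.append(y, yc)
        theta, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
        mu = x_test @ theta
        dens[i] = np.exp(-(yc - mu) ** 2 / (2 * sigma ** 2)) \
            / np.sqrt(2 * np.pi * sigma ** 2)
    # trapezoid rule; the genie beats any fixed model, so K >= 1
    K = np.sum((dens[1:] + dens[:-1]) / 2 * np.diff(y_grid))
    return float(np.log(K))
```

Because the genie's density at each candidate is at least the density of the model fit on the training set alone (which integrates to one), the normalizer K is at least 1 and the regret is nonnegative, growing as the test input becomes harder to explain from the training data.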
Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation
[article] · 2021 · arXiv pre-print
While deep neural networks provide good performance for a range of challenging tasks, calibration and uncertainty estimation remain major challenges, especially under distribution shift. ...
Worst-case bounds for Gaussian process models. K. Bibas, Y. Fogel, and M. Feder. Deep pNML: Predictive normalized maximum likelihood for deep neural networks. ...
arXiv:2011.02696v2
fatcat:k25bnenzifb6fga7pa37nsvq2u
Distribution Free Uncertainty for the Minimum Norm Solution of Over-parameterized Linear Regression
[article] · 2021 · arXiv pre-print
We utilize the recently proposed predictive normalized maximum likelihood (pNML) learner which is the min-max regret solution for the distribution-free setting. ...
We derive an upper bound of this min-max regret which is associated with the prediction uncertainty. ...
It follows the normalized maximum likelihood method (Shtarkov, 1987). ...
arXiv:2102.07181v2
fatcat:2yzkmnl4kbesphmxfghbcrgtgq
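A rough intuition behind this entry's uncertainty measure for the minimum-norm solution can be sketched in a few lines (a heuristic proxy, not the paper's exact bound; the function name is hypothetical): the min-norm least-squares solution carries no information about directions of the test input outside the row space spanned by the training inputs, so the residual energy off that subspace is where prediction uncertainty, and the pNML regret, grows.

```python
import numpy as np

def off_subspace_energy(X, x_test):
    """Heuristic uncertainty proxy for min-norm regression: project
    x_test onto the row space of the training matrix X; the energy of
    the residual outside that subspace is where the min-norm solution
    is uninformed."""
    P = np.linalg.pinv(X) @ X        # orthogonal projection onto row space of X
    resid = x_test - P @ x_test
    return float(resid @ resid)
```

A test point inside the span of the training inputs scores zero; one orthogonal to all of them scores its full squared norm.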
Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
[article] · 2021 · arXiv pre-print
We propose to tackle this problem by leveraging the normalized maximum-likelihood (NML) estimator, which provides a principled approach to handling uncertainty and out-of-distribution inputs. ...
While in the standard formulation NML is intractable, we propose a tractable approximation that allows us to scale our method to high-capacity neural network models. ...
Bibas et al. (2019) apply this framework for prediction using deep neural networks, but require an expensive finetuning process for every input. ...
arXiv:2102.07970v1
fatcat:zyryacfnsngz5f5eujiew47a5u
Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness
[article] · 2021 · arXiv pre-print
We propose a novel solution by adopting the recently suggested Predictive Normalized Maximum Likelihood. ...
Adversarial attacks have been shown to be highly effective at degrading the performance of deep neural networks (DNNs). ...
This makes the Adversarial pNML prediction more robust to adversarial attacks. ...
arXiv:2109.01945v1
fatcat:n4q567hubnginjzczx5z4bjpi4
MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning
[article] · 2021 · arXiv pre-print
We propose a novel mechanism for obtaining these calibrated, uncertainty-aware classifiers based on an amortized technique for computing the normalized maximum likelihood (NML) distribution. ...
To make this tractable, we propose a novel method for computing the NML distribution by using meta-learning. ...
The authors would like to thank Aviral Kumar, Justin Fu, Sham Kakade, Aldo Pacchiano, Pieter Abbeel, Avi Singh, Benjamin Eysenbach, Ignasi Clavera for valuable discussion and feedback on early drafts of ...
arXiv:2107.07184v2
fatcat:gxbaq7he5ffa3ior5tlpmmvsii
To Encourage or to Restrict: the Label Dependency in Multi-Label Learning
2022
To further verify the positive role of LD in multi-label classification and address previous limitations, we originally propose an approach named Prototypical Networks for Multi-Label Learning (PNML). ...
PNML achieves the State-Of-The-Art (SOTA) classification performance on clean data. ...
We instantiate the study empirically with linear Support Vector Machine (SVM) and Deep Neural Nets (DNN) based multi-label classifiers. ...
doi:10.25781/kaust-zr8lm
fatcat:qvjpulbhpjeahkimphqlls637a