A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2016; you can also visit the original URL.
The file type is application/pdf.
Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
2006
Advances in Computational Mathematics
Solutions of learning problems by Empirical Risk Minimization (ERM), and almost-ERM when the minimizer does not exist, need to be consistent, so that they may be predictive. They also need to be well-posed in the sense of being stable, so that they might be used robustly. We propose a statistical form of stability, defined as leave-one-out (LOO) stability. We prove that for bounded loss classes LOO stability is (a) sufficient for generalization, that is, convergence in probability of the […]
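The abstract's notion of leave-one-out stability can be illustrated empirically: remove each training point in turn, retrain, and measure how much the loss at that point changes. The sketch below is not the paper's formal definition; it uses ridge regression as a stand-in ERM rule, with all names and constants chosen here for illustration.

```python
# Illustrative sketch (not the paper's formal LOO-stability definition):
# estimate how much the loss at each point z_i changes when z_i is removed
# from the training set, for a simple regularized ERM rule (ridge regression).
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ERM for squared loss with an L2 penalty."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loo_loss_changes(X, y, lam=1.0):
    """Per-point |loss(f_S, z_i) - loss(f_{S minus i}, z_i)| values."""
    n = X.shape[0]
    w_full = ridge_fit(X, y, lam)
    diffs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # leave point i out
        w_loo = ridge_fit(X[mask], y[mask], lam)
        loss_full = (X[i] @ w_full - y[i]) ** 2
        loss_loo = (X[i] @ w_loo - y[i]) ** 2
        diffs[i] = abs(loss_full - loss_loo)
    return diffs

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
print(loo_loss_changes(X, y).max())  # small maximum suggests empirical LOO stability
```

Small values across all points suggest the rule is insensitive to removing any single example, which is the intuition behind the statistical LOO stability the paper formalizes for bounded loss classes.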
doi:10.1007/s10444-004-7634-z
fatcat:64426vw54fd5vn2thn6apemeui