A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2007; you can also visit the original URL.
Within a Bayesian framework, by generalizing inequalities known from statistical mechanics, we calculate general upper and lower bounds for a cumulative entropic error, which measures the success in the supervised learning of an unknown rule from examples. Both bounds match asymptotically when the number m of observed data grows large. We find that the information gain from observing a new example decreases universally like d/m. Here d is a dimension that is defined from the scaling of small

doi:10.1103/physrevlett.75.3772 pmid:10059723 fatcat:g7yofh4tfbadfj6dxbn6m2xday
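The asymptotic claim above can be summarized in a short derivation. This is a hedged sketch, not taken from the paper itself: the symbols E(m) for the cumulative entropic error and epsilon_k for the per-example information gain are illustrative names, assuming the cumulative error is the sum of the per-example gains.

```latex
% Sketch (assumed notation): if the information gain from the m-th
% example decays like d/m, summing over examples gives a cumulative
% entropic error that grows logarithmically in m.
\[
  \varepsilon_m \sim \frac{d}{m}
  \quad\Longrightarrow\quad
  E(m) \;=\; \sum_{k=1}^{m} \varepsilon_k \;\sim\; d \ln m
  \qquad (m \to \infty).
\]
```

Since the upper and lower bounds on E(m) match asymptotically, both must share this d ln m leading behavior, which is what makes the d/m decay of the information gain universal.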