Bayesian artificial intelligence. ChoiceReviews, 2004.
We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes when learning monotone disjunctions over N variables with at most k literals. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes for the same problem. Both algorithms use thresholded linear functions as their hypotheses. However, Winnow performs multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm. In general, we call an algorithm
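The contrast described above can be sketched in code. This is a minimal illustration, not the paper's own presentation: the learning rate eta, promotion factor alpha, and threshold theta are illustrative choices (alpha = 2 and theta = N are the textbook settings for Winnow on monotone disjunctions).

```python
def predict(w, x, theta):
    """Thresholded linear hypothesis: output 1 iff w . x >= theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def perceptron_update(w, x, y_true, y_pred, eta=1.0):
    """Additive update: on a mistake, shift weights by +/- eta * x."""
    if y_pred != y_true:
        sign = 1 if y_true == 1 else -1
        w = [wi + sign * eta * xi for wi, xi in zip(w, x)]
    return w

def winnow_update(w, x, y_true, y_pred, alpha=2.0):
    """Multiplicative update: on a mistake, scale the weights of the
    active (x_i = 1) inputs up (false negative) or down (false positive)."""
    if y_pred != y_true:
        factor = alpha if y_true == 1 else 1.0 / alpha
        w = [wi * factor if xi == 1 else wi for wi, xi in zip(w, x)]
    return w
```

With weights initialized to 1 and theta = N, Winnow's multiplicative promotions drive the weights of the k relevant variables up exponentially fast while demotions shrink the irrelevant ones, which is the intuition behind its O(k log N) mistake bound.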
doi:10.5860/choice.41-5948
fatcat:paczxy24nnd4dcumetg5ycjh64