Hardness of Proper Learning (1988; Pitt, Valiant) [chapter]

Vitaly Feldman
2014 Encyclopedia of Algorithms  
The work of Pitt and Valiant [16] deals with learning Boolean functions in the Probably Approximately Correct (PAC) learning model introduced by Valiant [17]. A learning algorithm in Valiant's original model is given random examples of a function f : {0,1}^n → {0,1} from a representation class F and produces a hypothesis h ∈ F that closely approximates f. Here a representation class is a set of functions together with a language for describing the functions in the set. The authors give examples of several representation classes that are NP-hard to learn in this model but that can be learned if the learning algorithm is allowed to produce hypotheses from a richer representation class H. Such an algorithm is said to learn F by H; learning F by F is called proper learning. The results of Pitt and Valiant were the first to demonstrate that the choice of representation of hypotheses can have a dramatic impact on the computational complexity of a learning problem. Their specific reductions from NP-hard problems are the basis of several other follow-up works on the hardness of proper learning [1, 3, 6].
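The canonical example from Pitt and Valiant's work is the class of k-term DNF formulas: for k ≥ 2 it is NP-hard to learn them properly, yet every k-term DNF can be rewritten, by distributivity, as a k-CNF, and k-CNF formulas are properly learnable by Valiant's elimination algorithm. The Python sketch below is a minimal illustration of this "learning F by H" phenomenon on a toy 2-term DNF target; the function names and the exhaustive sample are illustrative choices, not part of the original entry.

```python
from itertools import combinations, product

def all_clauses(n, k):
    """Every disjunction of at most k literals over x_0..x_{n-1};
    a literal is a pair (variable index, polarity)."""
    lits = [(i, b) for i in range(n) for b in (False, True)]
    for size in range(1, k + 1):
        for clause in combinations(lits, size):
            # keep only clauses over distinct variables
            if len({i for i, _ in clause}) == size:
                yield clause

def clause_value(clause, x):
    return any(x[i] == b for i, b in clause)

def learn_k_cnf(examples, n, k):
    """Elimination algorithm: start from the conjunction of all clauses
    of size <= k and delete every clause falsified by some positive
    example. Clauses of the k-CNF equivalent of the target are never
    deleted, so the hypothesis makes no error on negative examples."""
    hypothesis = list(all_clauses(n, k))
    for x, label in examples:
        if label:
            hypothesis = [c for c in hypothesis if clause_value(c, x)]
    return hypothesis

def eval_k_cnf(hypothesis, x):
    return all(clause_value(c, x) for c in hypothesis)

# Toy target: the 2-term DNF (x0 AND x1) OR (x2 AND x3).
def target(x):
    return (x[0] and x[1]) or (x[2] and x[3])

n, k = 4, 2
sample = [(x, target(x)) for x in product((False, True), repeat=n)]
hypothesis = learn_k_cnf(sample, n, k)
assert all(eval_k_cnf(hypothesis, x) == y for x, y in sample)
```

Note that the learned hypothesis is a 2-CNF rather than a 2-term DNF: the richer representation class H is exactly what makes the problem tractable, while finding a consistent 2-term DNF itself is NP-hard.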
doi:10.1007/978-3-642-27848-8_177-2