Article

The strength of weak learnability

Robert E. Schapire.
Machine Learning, 5 (2): 197--227 (Jun 1, 1990)
DOI: 10.1007/BF00116037

Abstract

This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability is able to output an hypothesis that is correct on all but an arbitrarily small fraction of the instances. The concept class is weakly learnable if the learner can produce an hypothesis that performs only slightly better than random guessing. In this paper, it is shown that these two notions of learnability are equivalent.
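As a rough guide to the definitions in the abstract, the following is a sketch of the standard PAC-style formalization. The symbols c, h, D, epsilon, delta, and gamma are introduced here for illustration and are not taken from the abstract; the exact quantifier structure in the paper itself may differ slightly.

A concept class is strongly learnable if, for every target concept c, every distribution D over the instances, and every epsilon, delta in (0, 1), the learner outputs a hypothesis h satisfying

    \Pr\Bigl[\ \Pr_{x \sim D}\bigl[h(x) \neq c(x)\bigr] \le \epsilon\ \Bigr] \ge 1 - \delta .

It is weakly learnable if the same confidence guarantee holds with the arbitrary accuracy epsilon replaced by a fixed (inverse-polynomial) edge gamma > 0 over random guessing:

    \Pr\Bigl[\ \Pr_{x \sim D}\bigl[h(x) \neq c(x)\bigr] \le \tfrac{1}{2} - \gamma\ \Bigr] \ge 1 - \delta .

The paper's result is that these two requirements are equivalent: any learner meeting the weaker guarantee can be converted into one meeting the stronger guarantee.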
