Abstract

Recent work in text classification has used two different first-order probabilistic models for classification, both of which make the naive Bayes assumption. Some use a multi-variate Bernoulli model, that is, a Bayesian network with no dependencies between words and binary word features (e.g., Larkey and Croft 1996; Koller and Sahami 1997). Others use a multinomial model, that is, a unigram language model with integer word counts (e.g., Lewis and Gale 1994; Mitchell 1997). This paper aims to clarify the confusion by describing the differences and details of these two models, and by empirically comparing their classification performance on five text corpora. We find that the multi-variate Bernoulli model performs well with small vocabulary sizes, but that the multinomial model usually performs even better at larger vocabulary sizes, providing on average a 27% reduction in error over the multi-variate Bernoulli model at any vocabulary size.
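To make the distinction concrete, here is a minimal sketch of the two event models in Python using scikit-learn's BernoulliNB and MultinomialNB; the library choice, the toy corpus, and the class labels are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch contrasting the two naive Bayes event models.
# The toy corpus and labels below are hypothetical, for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

docs = [
    "the quarterback threw a long pass",
    "the team won the game in overtime",
    "the stock price rose after earnings",
    "investors sold shares amid market fears",
]
labels = ["sports", "sports", "finance", "finance"]

# Multi-variate Bernoulli model: each vocabulary word is a binary
# feature (present/absent), so counts are thresholded to 0/1.
bern_vec = CountVectorizer(binary=True)
bern_clf = BernoulliNB().fit(bern_vec.fit_transform(docs), labels)

# Multinomial model: a unigram language model, so features are
# integer word counts.
multi_vec = CountVectorizer()
multi_clf = MultinomialNB().fit(multi_vec.fit_transform(docs), labels)

test = ["the game price rose"]
print(bern_clf.predict(bern_vec.transform(test)))
print(multi_clf.predict(multi_vec.transform(test)))
```

Note the key modeling difference: the Bernoulli model also conditions on the words that are absent from a document, whereas the multinomial model scores only the words that occur, weighted by how often they occur.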

Description

A Comparison of Event Models for Naive Bayes Text Classification
