Article

Using Maximum Entropy for Text Classification

Kamal Nigam, John Lafferty, and Andrew McCallum.
(1999)

Abstract

This paper proposes the use of maximum entropy techniques for text classification. Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. The underlying principle of maximum entropy is that without external knowledge, one should prefer distributions that are uniform. Constraints on the distribution, derived from labeled training data, inform maximum entropy where to be minimally non-uniform. The maximum entropy formulation has a unique solution which can be found by the improved iterative scaling algorithm. In text classification tasks, maximum entropy is used to estimate the conditional distribution of the class variable given the document. Experiments on several text datasets show that maximum entropy performance is sometimes significantly better, but also sometimes worse, than naive Bayes text classification. Much future work remains, though the ...
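The abstract describes estimating the conditional distribution of the class given the document, where the model takes an exponential (log-linear) form over weighted features. As a rough illustration of that inference step only (not the paper's training procedure), the following sketch scores classes with hypothetical word/class feature weights and normalizes into a distribution; all weights and names here are invented for the example:

```python
import math

def maxent_predict(words, classes, weights):
    """Return P(c | d) for each class c under a maximum-entropy model.

    words:   the document, as a list of tokens
    weights: dict mapping (word, class) -> lambda value; absent pairs weigh 0
    """
    scores = {}
    for c in classes:
        # Sum of lambda_i * f_i(d, c), where each feature f_i fires once per
        # occurrence of its word when the candidate class matches c.
        scores[c] = sum(weights.get((w, c), 0.0) for w in words)
    # Softmax normalization: exponentiate and divide by the partition function
    # so the class probabilities sum to one.
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}

# Toy example with made-up weights: "ball" is indicative of the sports class.
weights = {("ball", "sports"): 1.2, ("vote", "politics"): 1.5}
probs = maxent_predict(["ball", "game"], ["sports", "politics"], weights)
```

Fitting the lambda weights to match the feature expectations observed in labeled training data is where the improved iterative scaling algorithm mentioned in the abstract comes in; this sketch only covers prediction with already-estimated weights.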
