
Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms

Michael Collins. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics, 2002.
DOI: 10.3115/1118693.1118694

Abstract

We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.
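The core idea in the abstract — decode each training example with Viterbi, then apply a simple additive update toward the gold tag sequence and away from the predicted one — can be sketched as follows. This is a minimal illustration of a structured perceptron, not the paper's implementation; the tag set, feature templates, and toy data are invented for demonstration.

```python
from collections import defaultdict

TAGS = ["D", "N", "V"]  # toy tag set for illustration

def features(words, i, prev_tag, tag):
    """Local features at position i: word/tag (emission) and tag-bigram (transition)."""
    return [("w-t", words[i], tag), ("t-t", prev_tag, tag)]

def viterbi(words, weights):
    """Highest-scoring tag sequence under the current weight vector."""
    n = len(words)
    pi = [{} for _ in range(n)]  # pi[i][t]: best score ending in tag t at i
    bp = [{} for _ in range(n)]  # back-pointers
    for t in TAGS:
        pi[0][t] = sum(weights[f] for f in features(words, 0, "<s>", t))
    for i in range(1, n):
        for t in TAGS:
            best_prev, best_score = None, float("-inf")
            for p in TAGS:
                s = pi[i - 1][p] + sum(weights[f] for f in features(words, i, p, t))
                if s > best_score:
                    best_prev, best_score = p, s
            pi[i][t], bp[i][t] = best_score, best_prev
    last = max(TAGS, key=lambda t: pi[n - 1][t])
    tags = [last]
    for i in range(n - 1, 0, -1):
        tags.append(bp[i][tags[-1]])
    return tags[::-1]

def train(data, epochs=5):
    """Perceptron training: Viterbi-decode, then additively update on mistakes."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(words, weights)
            if pred != gold:
                for i in range(len(words)):
                    prev_g = gold[i - 1] if i else "<s>"
                    prev_p = pred[i - 1] if i else "<s>"
                    for f in features(words, i, prev_g, gold[i]):
                        weights[f] += 1.0  # reward gold-sequence features
                    for f in features(words, i, prev_p, pred[i]):
                        weights[f] -= 1.0  # penalize predicted-sequence features
    return weights

# Invented toy training data
data = [
    (["the", "dog", "barks"], ["D", "N", "V"]),
    (["a", "cat", "sleeps"], ["D", "N", "V"]),
]
w = train(data)
print(viterbi(["the", "cat", "barks"], w))  # → ['D', 'N', 'V']
```

The paper additionally averages the weight vectors accumulated over all updates ("averaged perceptron"), which gives the reported gains over the maximum-entropy baselines; that refinement is omitted here for brevity.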

