Abstract

Zipf’s law of word frequencies for language discourses is established with statistical rigor. Data show a departure from Zipf’s power law term at low frequencies, which is accounted for by a modifying exponential term. Both arise naturally in a model for word frequencies based on Information Theory, algorithmic coding of a text that preserves the symbol sequence, concepts from quantum statistical physics and computer science, and extremum principles. The Optimum Meaning Preserving Code (OMPC) of the discourse is realized when word frequencies follow the Modified Power Law (MPL). The model predicts a variant of the MPL for the relative frequencies of a small fixed set of symbols such as letters, phonemes, and grammatical words. The OMPC can be viewed as containing orderly and random parts. This leads to a quantitative definition of the complexity of a string (C) that tends to 0 at the extremes of ‘all order’ and ‘all random’ but reaches a maximum (C = 1) for a mixture of both (Gell-Mann). It is found that natural languages have maximum complexity. The uniqueness of Zipf’s power law index (γ = 2) is shown to arise in four different ways, one of which depends on the scale invariance characteristic of fractal structures. It is argued that random text models are unsuitable for natural languages. It is speculated that a drastic change in symbol frequency distribution, starting at the level of phrases, is related to the emergence of meaning and coherence in a discourse.
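As an informal illustration of the rank–frequency view of Zipf’s law summarized above, the sketch below (not taken from the paper) counts word frequencies in a plain-text discourse and fits a single power law on log–log axes. The file name discourse.txt, the tokenizer, and the least-squares fit are assumptions for illustration only; the MPL’s exponential correction at low frequencies is not modeled here.

```python
import re
from collections import Counter

import numpy as np


def rank_frequency(text):
    """Return (ranks, counts) for the words of a discourse, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = np.array(sorted(Counter(words).values(), reverse=True))
    ranks = np.arange(1, len(counts) + 1)
    return ranks, counts


def fit_power_law(ranks, counts):
    """Least-squares fit of log(count) against log(rank).

    A rank exponent s ~ 1 corresponds to a frequency-spectrum index
    gamma = 1 + 1/s ~ 2, the value singled out in the abstract.
    """
    slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope, np.exp(intercept)


if __name__ == "__main__":
    # Any sufficiently long plain-text discourse (hypothetical file name).
    sample = open("discourse.txt", encoding="utf-8").read()
    ranks, counts = rank_frequency(sample)
    s, amplitude = fit_power_law(ranks, counts)
    print(f"rank exponent s = {s:.2f}; implied frequency-spectrum index gamma = {1 + 1 / s:.2f}")
```

On real corpora such a single power law typically overestimates the number of rare words, which is the low-frequency departure the paper addresses with its modifying exponential term.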

Description

Algorithmic information, complexity and Zipf's law
