Zipf's law
Originally the term Zipf's law meant the observation of the Harvard linguist George Kingsley Zipf that the frequency of use of the nth-most-frequently-used word in any natural language is inversely proportional to n.
Mathematically, that is impossible if there are infinitely many words in a language, since (letting c denote the constant of proportionality that would make the sum of all relative frequencies equal to 1) we have

c/1 + c/2 + c/3 + ⋯ = c(1 + 1/2 + 1/3 + ⋯) = ∞,

because the harmonic series diverges; no choice of c can make the relative frequencies sum to 1.
Empirical studies have found that in English, the frequencies of the approximately 1000 most-frequently-used words are approximately proportional to 1/n^(1+ε), where ε is just slightly more than zero. After about the 1000th word, the frequencies drop off faster.
As long as the exponent 1 + ε exceeds 1, it is possible for such a law to hold with infinitely many words, since if s > 1 then

1/1^s + 1/2^s + 1/3^s + ⋯ < ∞.
The value of this sum is ζ(s), where ζ is Riemann's zeta function.
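As a quick numeric sketch (not part of the original article), the partial sums below contrast the two cases: the s = 1 sum keeps growing without bound, while the s = 2 sum settles toward ζ(2) = π²/6.

```python
# Partial sums of 1/n^s: the s = 1 case grows roughly like ln(N),
# while any s > 1 approaches a finite limit, zeta(s).
import math

def partial_sum(s, terms):
    """Sum of 1/n**s for n = 1..terms."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

for n_terms in (10**3, 10**6):
    print(f"N = {n_terms:>7}:  sum 1/n = {partial_sum(1, n_terms):8.4f}"
          f"   sum 1/n^2 = {partial_sum(2, n_terms):.6f}")

print("zeta(2) = pi^2/6 =", math.pi ** 2 / 6)  # the s = 2 limit
```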
The term Zipf's law has consequently come to be used to refer to frequency distributions of "rank data" in which the relative frequency of the nth-ranked item is given by the zeta distribution, 1/(n^s·ζ(s)), where s > 1 is a parameter indexing this family of probability distributions. Indeed, the term Zipf's law sometimes simply means the zeta distribution, since probability distributions are sometimes called "laws".
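SciPy happens to implement this zeta distribution under the name scipy.stats.zipf, which makes the definition easy to check numerically; the choice s = 2 below is arbitrary.

```python
import numpy as np
from scipy.stats import zipf   # SciPy's name for the zeta distribution

s = 2.0                        # must exceed 1; the value 2 is arbitrary
dist = zipf(s)
ranks = np.arange(1, 6)
print(dist.pmf(ranks))         # equals 1/(n**s * zeta(s)) for n = 1..5
print(dist.pmf(np.arange(1, 10**6)).sum())   # total probability, very close to 1
```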
A more general law proposed by Benoit Mandelbrot has frequencies

f(n) ∝ 1/(n + q)^s.

This is the Zipf-Mandelbrot law. The "constant" in this case is the reciprocal of the Hurwitz zeta function evaluated at s (specifically, 1/ζ(s, 1 + q) when ranks start at n = 1).
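A minimal sketch of that normalization, assuming ranks start at n = 1 and using the illustrative parameter values s = 1.5 and q = 2; scipy.special.zeta(s, q) computes the Hurwitz zeta function.

```python
import numpy as np
from scipy.special import zeta   # zeta(s, q) is the Hurwitz zeta function

def zipf_mandelbrot_pmf(n, s=1.5, q=2.0):
    """Relative frequency of rank n = 1, 2, ... under the Zipf-Mandelbrot law.

    The normalizer zeta(s, 1 + q) equals the sum over k >= 1 of 1/(k + q)**s,
    so the frequencies sum to 1.  s and q are illustrative choices.
    """
    return 1.0 / ((n + q) ** s * zeta(s, 1.0 + q))

ranks = np.arange(1, 6)
print(zipf_mandelbrot_pmf(ranks))
print(zipf_mandelbrot_pmf(np.arange(1, 10**6)).sum())  # close to 1
```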
Zipf's law is an experimental law, not a theoretical one. The causes of Zipfian distributions in real life are a matter of some controversy. However, Zipfian distributions are commonly observed in many kinds of phenomena.
For example, if f₁ is the frequency (in percent) of the most common English word, f₂ is the frequency of the second most common English word, and so on, then there exist two positive numbers a and b such that for all n ≥ 1:

fₙ ≈ a/n^b.
Note that the frequencies fₙ have to add up to 100%, so if this relationship were strictly true for all n ≥ 1, and we had infinitely many words, then b would have to be greater than one and a would have to equal 100/ζ(b), where ζ is the Riemann zeta function.
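A quick numeric check of that claim, with the arbitrary choice b = 1.5:

```python
from scipy.special import zeta   # zeta(b, 1) is the Riemann zeta function

b = 1.5                          # any b > 1 works; 1.5 is arbitrary
a = 100.0 / zeta(b, 1)           # so the percentages f_n = a/n**b total 100%
print(a)                                          # about 38.28
print(sum(a / n ** b for n in range(1, 10**6)))   # close to 100
```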
Zipf's law is often demonstrated by scatterplotting the data, with the axes being log(rank order) and log(frequency). If the points lie close to a single straight line, the distribution approximately follows Zipf's law, and the slope of the line estimates −b.
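The sketch below runs this procedure on synthetic data drawn from the zeta distribution with s = 2 (so the true slope is −2); a real demonstration would use word counts from a corpus, and numpy.polyfit is just one simple way to fit the line.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.zipf(2.0, size=100_000)    # synthetic draws, zeta distribution s = 2

# Empirical frequencies, ranked from most to least common
_, counts = np.unique(samples, return_counts=True)
freqs = np.sort(counts)[::-1] / counts.sum()
ranks = np.arange(1, len(freqs) + 1)

# Fit a line to (log rank, log frequency) over the well-sampled top ranks;
# the slope should come out near -2, i.e. -s.
slope, intercept = np.polyfit(np.log(ranks[:100]), np.log(freqs[:100]), 1)
print(slope)
```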
Examples of collections approximately obeying Zipf's law:
- frequency of accesses to web pages
  - in particular the access counts on the Wikipedia most popular page, with b approximately equal to 0.3
  - page access counts on the Polish Wikipedia (data for late July 2003), which approximately obey Zipf's law with b about 0.5
- words in the English language
  - for instance, in Shakespeare's play Hamlet, with b approximately 0.5; see Shakespeare word frequency lists
- sizes of settlements
- income distribution amongst individuals
- size of earthquakes
It has been pointed out that Zipfian distributions can also be regarded as Pareto distributions with an exchange of variables: asking how frequency falls off with rank (Zipf) amounts to asking how many items exceed a given frequency (Pareto).
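A sketch of the standard change-of-variables argument behind that remark:

```latex
% Zipf's rank-frequency law and the Pareto tail law are the same power law
% with the axes exchanged.
\begin{align*}
  f(r) &\propto r^{-s}         && \text{(Zipf: frequency of the $r$-th ranked item)} \\
  r(x) &\propto x^{-1/s}       && \text{(inverted: rank of an item with frequency $x$)} \\
  \Pr(F > x) &\propto x^{-1/s} && \text{(rank counts the items with frequency above $x$: a Pareto tail)}
\end{align*}
```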
See also
- Benford's law
- Bradford's law
- harmonic number
- law (principle)
- Mathematical economics
- Pareto distribution
- Pareto principle
- power law
- Zipf-Mandelbrot law
Further reading
- Zipf, George K., Human Behavior and the Principle of Least Effort, Addison-Wesley, Cambridge, MA, 1949.
- W. Li, "Random texts exhibit Zipf's-law-like word frequency distribution", IEEE Transactions on Information Theory, 38(6), pp. 1842-1845, 1992.