
Information theory

From Wikipedia, the free encyclopedia


Information theory is a discipline in applied mathematics involving the quantification of data with the goal of enabling as much data as possible to be reliably stored on a medium and/or communicated over a channel. The measure of data, known as information entropy, is usually expressed by the average number of bits needed for storage or communication. For example, if a daily weather description has 3 bits of entropy, then, over enough days, we can describe daily weather with an average of approximately 3 bits per day (each bit being a 0 or a 1).
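As a minimal sketch of the weather example, assuming eight equally likely daily weather descriptions (so the entropy comes out to exactly log2 8 = 3 bits), the following Python snippet computes the entropy directly; the eight categories are illustrative assumptions, not part of the article.

```python
# A minimal illustration of the weather example: entropy measured in bits.
# The eight equally likely weather descriptions are an assumed toy model.
import math

weather = {desc: 1 / 8 for desc in (
    "sunny", "cloudy", "rain", "snow", "fog", "wind", "storm", "hail")}

entropy_bits = -sum(p * math.log2(p) for p in weather.values() if p > 0)
print(entropy_bits)  # 3.0 -> about 3 bits per day suffice on average
```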

Applications of fundamental topics of information theory include ZIP files (lossless data compression), MP3s (lossy data compression), and DSL (channel coding). The field is at the crossroads of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to success of the Voyager missions to deep space, the invention of the CD, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields.

Overview

The main concepts of information theory can be grasped by considering the most widespread means of human communication: language. Two important aspects of a good language are as follows: First, the most common words (e.g., "a," "the," "I") should be shorter than less common words (e.g., "benefit," "generation," "mediocre"), so that sentences will not be too long. Such a tradeoff in word length is analogous to data compression and is the essential aspect of source coding. Second, if part of a sentence is unheard or misheard due to noise — e.g., a passing car — the listener should still be able to glean the meaning of the underlying message. Such robustness is as essential for an electronic communication system as it is for a language; properly building such robustness into communications is done by channel coding. Source coding and channel coding are the fundamental concerns of information theory.

Note that these concerns have nothing to do with the importance of messages. For example, a platitude such as "Thank you; come again" takes about as long to say or write as the urgent plea, "Call an ambulance!" while clearly the latter is more important and more meaningful. Information theory, however, does not involve message importance or meaning, as these are matters of the quality of data rather than the quantity of data, the latter of which is determined solely by probabilities.

Information theory is generally considered to have been founded in 1948 by Claude Shannon in his seminal work, "A Mathematical Theory of Communication." The central paradigm of classic information theory is the engineering problem of the transmission of information over a noisy channel. The most fundamental results of this theory are Shannon's source coding theorem, which establishes that, on average, the number of bits needed to represent the result of an uncertain event is given by its entropy; and Shannon's noisy-channel coding theorem, which states that reliable communication is possible over noisy channels provided that the rate of communication is below a certain threshold called the channel capacity. The channel capacity can be approached by using appropriate encoding and decoding systems.

Information theory is closely associated with a collection of pure and applied disciplines that have been investigated and reduced to engineering practice under a variety of rubrics throughout the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory.

Coding theory is concerned with finding explicit methods, called codes, of increasing the efficiency and reducing the net error rate of data communication over a noisy channel to near the limit that Shannon proved is the maximum possible for that channel. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. See the article deciban for a historical application.

Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition.

Historical background

See main article: History of information theory.

The decisive event which established the discipline of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October of 1948.

Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation $W = K \log m$, where $W$ is the speed of transmission of intelligence, $m$ is the number of different voltage levels to choose from at each time step, and $K$ is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as $H = \log S^n = n \log S$, where $S$ was the number of possible symbols, and $n$ the number of symbols in a transmission. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers.

Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory.

In his revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that

"The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."

With it came the ideas of:

  • the information entropy and redundancy of a source, and its relevance through the source coding theorem;
  • the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem;
  • the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; and of course
  • the bit, a new way of seeing the most fundamental unit of information.

Mathematical theory of information

See main article: Quantities of information.

The mathematical theory of information is based on probability theory and statistics. The most important quantities of information are entropy, the information in a random variable, and mutual information, the amount of information in common between two random variables. The former quantity indicates how easily message data can be compressed while the latter can be used to find the communication rate across a channel.

The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information currently in use is the bit, based on the binary logarithm, so the base is usually assumed to be 2. In addition, due to its limit behavior, the usually undefined quantity $0 \log 0$ is taken to be 0.

Entropy

[Figure: Entropy of a Bernoulli trial as a function of success probability, often called the binary entropy function, $H_b(p)$. The entropy is maximized at 1 bit per trial when the two possible outcomes are equally probable, as in an unbiased coin toss.]

The entropy, $H(X)$, of a discrete random variable $X$ is a measure of the amount of uncertainty one has about the value of $X$. It is here that the definition of bit used is crucial. For example, suppose one transmits 1000 bits in the conventional sense (0s and 1s). If these bits are known ahead of transmission (to be a certain value with absolute probability), logic dictates that no information has been transmitted. If, however, each is equally and independently likely to be 0 or 1, 1000 bits (in the information theoretic sense) have been transmitted. Between these two extremes, information can be quantified as follows: If $M$ is the set of all messages that $X$ could be, and $p(x) = \Pr(X = x)$, then $X$ has

$$H(X) = -\sum_{x \in M} p(x) \log p(x)$$

bits of entropy. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable — i.e., most unpredictable — in which case $H(X) = \log |M|$.

Sometimes the function $H$ is expressed in terms of the probabilities of the distribution:

$$H(p_1, p_2, \ldots, p_n) = -\sum_{i=1}^{n} p_i \log p_i,$$

where $p_i \ge 0$ and $\sum_{i=1}^{n} p_i = 1$.

An important special case of this is the binary entropy function:

$$H_b(p) = H(p, 1-p) = -p \log p - (1-p) \log (1-p).$$
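The formulas above translate directly into code. The sketch below is a minimal Python illustration of $H$ and $H_b$; the example distributions are assumed for illustration only.

```python
# A minimal sketch of the entropy formulas above; example distributions are assumed.
import math

def entropy(probs):
    """H = -sum p log2 p, with the 0 * log 0 = 0 convention (skip zero terms)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def binary_entropy(p):
    """H_b(p) = H(p, 1 - p), the entropy of a Bernoulli trial."""
    return entropy([p, 1 - p])

print(entropy([0.25] * 4))   # 2.0 bits: four equiprobable messages, log2(4)
print(binary_entropy(0.5))   # 1.0 bit: a fair coin maximizes H_b
print(binary_entropy(0.1))   # ~0.469 bits: a biased coin is more predictable
```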

The joint entropy of two discrete (not necessarily independent) random variables $X$ and $Y$ is merely the entropy of their pairing, $(X, Y)$. For example, if $(X, Y)$ represents the position of a chess piece, with $X$ the row and $Y$ the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. Mathematically,

$$H(X, Y) = -\sum_{x, y} p(x, y) \log p(x, y).$$

If $X$ and $Y$ are independent, then the joint entropy is simply the sum of their individual entropies. (Note: Joint entropy should not be confused with cross entropy, despite similar notations.)

The conditional entropy of $X$ given the value $Y = y$ is the entropy $X$ would have if it were known that $Y = y$. In the aforementioned example, it would be the entropy of chess piece row placements for a given column. Therefore the conditional entropy of $X$ given $Y = y$ is:

$$H(X \mid Y = y) = -\sum_{x} p(x \mid y) \log p(x \mid y),$$

where $p(x \mid y)$ is the conditional probability of $x$ given $y$.

The conditional entropy of $X$ given the random variable $Y$ (also called the equivocation of $X$ about $Y$) is the average conditional entropy over $Y$:

$$H(X \mid Y) = \sum_{y} p(y)\, H(X \mid Y = y) = -\sum_{x, y} p(x, y) \log p(x \mid y).$$

Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that:

$$H(X \mid Y) = H(X, Y) - H(Y).$$
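A small numerical check can make this identity concrete. The sketch below uses an assumed toy joint distribution $p(x, y)$ (not from the article) to compute $H(X,Y)$, $H(Y)$, and $H(X \mid Y)$, and confirms that the last equals the difference of the first two.

```python
# A minimal sketch of joint and conditional entropy for a small assumed
# joint distribution p(x, y); it checks the identity H(X|Y) = H(X,Y) - H(Y).
import math

# assumed joint distribution over (x, y) pairs; probabilities sum to 1
p_xy = {("a", 0): 0.25, ("a", 1): 0.25, ("b", 0): 0.4, ("b", 1): 0.1}

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_XY = H(p_xy.values())                     # joint entropy H(X, Y)

p_y = {}                                    # marginal distribution of Y
for (x, y), p in p_xy.items():
    p_y[y] = p_y.get(y, 0) + p
H_Y = H(p_y.values())                       # marginal entropy H(Y)

# conditional entropy H(X|Y) = -sum p(x,y) log2 p(x|y)
H_X_given_Y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

print(H_X_given_Y, H_XY - H_Y)              # the two values agree
```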

Mutual information and other information measures

One last important measure of information is the mutual information (sometimes called transinformation). This is a measure of how much information can be obtained about one random variable by observing another. This is particularly important in communication, as a sent and a received signal, although not always identical, should be able to transmit an adequate amount of information. The mutual information of $X$ relative to $Y$ (which represents conceptually the average amount of information about $X$ that can be gained by observing $Y$) is given by:

$$I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x)\, p(y)}.$$

A basic property of the mutual information is that:

$$I(X; Y) = H(X) - H(X \mid Y).$$

That is, knowing $Y$, we can save an average of $I(X; Y)$ bits in encoding $X$ compared to not knowing $Y$. Mutual information is symmetric:

$$I(X; Y) = I(Y; X) = H(X) + H(Y) - H(X, Y).$$
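The same kind of assumed toy joint distribution used above can be used to check these identities numerically; the sketch below is illustrative only.

```python
# A minimal sketch of mutual information for an assumed joint distribution,
# checking I(X;Y) = H(X) + H(Y) - H(X,Y).
import math

p_xy = {("a", 0): 0.25, ("a", 1): 0.25, ("b", 0): 0.4, ("b", 1): 0.1}

def marginal(joint, index):
    """Sum the joint distribution over the other coordinate."""
    out = {}
    for key, p in joint.items():
        out[key[index]] = out.get(key[index], 0) + p
    return out

p_x, p_y = marginal(p_xy, 0), marginal(p_xy, 1)

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) )
I = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items() if p > 0)

print(I)
print(H(p_x.values()) + H(p_y.values()) - H(p_xy.values()))  # same value
```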

Related quantities like self-information, Pointwise Mutual Information (PMI), Kullback-Leibler divergence (information gain), and differential entropy also play a crucial role in information theory.

Channel capacity

See main article: Noisy channel coding theorem.

Communication over a channel, such as an ethernet wire, is the primary motivation of information theory. As anyone who has ever used a telephone (mobile or landline) knows, however, such channels often fail to produce an exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. How much information can one hope to communicate over a noisy (or otherwise imperfect) channel?

Consider the communications process over a discrete channel. A simple model of the process is shown below:

[Figure: model of a general communication system]

Here $X$ represents the space of messages transmitted, and $Y$ the space of messages received during a unit time over our channel. Let $p(y \mid x)$ be the conditional probability distribution function of $Y$ given $X$. We will consider $p(y \mid x)$ to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of $X$ and $Y$ is completely determined by our channel and by our choice of $f(x)$, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the amount of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by:

$$C = \max_{f} I(X; Y).$$

This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.

Channel capacity of particular model channels

  • A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of $1 - H_b(p)$ bits per channel use, where $H_b$ is the binary entropy function.
  • A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is $1 - p$ bits per channel use.
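Both capacities are simple closed-form expressions, so a short sketch suffices; the probability values passed in below are arbitrary illustrations.

```python
# A minimal sketch of the two model-channel capacities, in bits per channel use.
import math

def binary_entropy(p):
    """H_b(p) = -p log2 p - (1-p) log2 (1-p), with the endpoints taken as 0."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Binary symmetric channel with crossover probability p: C = 1 - H_b(p)."""
    return 1 - binary_entropy(p)

def bec_capacity(p):
    """Binary erasure channel with erasure probability p: C = 1 - p."""
    return 1 - p

print(bsc_capacity(0.11))  # ~0.5 bits/use: roughly half the raw bit rate survives
print(bec_capacity(0.25))  # 0.75 bits/use
```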

Source theory

Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically-distributed random variable, whereas the properties of ergodicity and stationarity impose more general constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Rate

Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the most general case, it is

$$r = \lim_{n \to \infty} H(X_n \mid X_{n-1}, X_{n-2}, X_{n-3}, \ldots).$$

Precisely speaking, this is the expected conditional entropy per message (i.e. per unit time) given all the previous messages generated. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a memoryless source is simply $H(X)$, since by definition there is no interdependence of the successive messages of a memoryless source. The rate of a source of information is related to its redundancy and how well it can be compressed.
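As a minimal illustration, the sketch below estimates the rate of an assumed sample string under the memoryless (unigram) model; it overestimates the true rate of English prose, whose successive letters are strongly dependent.

```python
# A minimal sketch of the memoryless rate r = H(X): the per-symbol entropy of a
# sample text under a unigram model. The sample string is an assumed example.
import math
from collections import Counter

text = "information theory studies the quantification of information"
counts = Counter(text)
n = len(text)

rate = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"{rate:.3f} bits per symbol under the memoryless assumption")
```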

Applications

Coding theory

See main article: Coding theory.

[Figure: scratches on the readable surface of a CD-R. Music and data CDs are coded using error-correcting codes and thus can still be read even if they have minor scratches, using error detection and correction.]

Coding theory is the most important and direct application of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source.

  • Data compression (source coding): There are two formulations for the compression problem:
  1. lossless data compression: the data must be reconstructed exactly (a minimal coding sketch follows this list);
  2. lossy data compression: allocates the bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory.
  • Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
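As an illustration of lossless source coding, the following sketch builds a Huffman code for an assumed toy string and compares its average codeword length with the source entropy; the helper name huffman_code and the example string are illustrative assumptions, not taken from the article.

```python
# A minimal Huffman-coding sketch (lossless source coding) for a toy string;
# the average codeword length comes out within one bit of the source entropy.
import heapq
import math
from collections import Counter

def huffman_code(freqs):
    """Build a prefix code from symbol frequencies using Huffman's algorithm."""
    # Each heap entry: (weight, unique tie-breaker, {symbol: codeword-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol source
        return {sym: "0" for sym in freqs}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

text = "this is an example of a huffman tree"   # assumed example string
freqs = Counter(text)
code = huffman_code(freqs)

encoded = "".join(code[ch] for ch in text)
avg_len = len(encoded) / len(text)
n = len(text)
entropy = -sum((c / n) * math.log2(c / n) for c in freqs.values())
print(f"entropy: {entropy:.3f} bits/symbol, Huffman average: {avg_len:.3f}")
```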

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models.

Intelligence uses and secrecy applications

Information theoretic concepts are widely used in making and breaking cryptographic systems. For an interesting historical example, see the article on deciban. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability.

Shannon's theory of information is extremely important in intelligence work, much more so than its use in cryptography indicates. Intelligence agencies use information theory to keep classified information secret, and to discover as much information as possible about an adversary, in a future-proof secure way. The fundamental theorem leads us to believe it is much more difficult to keep secrets than it might first appear. In general it is not possible to stop the leakage of classified information, only to slow it. Furthermore, the more people who have access to the information, and the more those people have to work with and review that information, the greater the redundancy that information acquires. It is extremely hard to contain the flow of information that has high redundancy. This inevitable leakage of classified information is due to the psychological fact that what people know does somewhat influence their behavior, however subtle that influence might be.

Pseudorandom number generation

A good example of the application of information theory to covert signaling is the design of the Global Positioning System signal encoding. The system uses a pseudorandom encoding that places the radio signal below the noise floor. Thus, an unsuspecting radio listener would not even be aware that there was a signal present, as it would be drowned out by assorted noise sources (e.g., atmospheric and antenna noise). However, if one integrates the signal over long periods of time, using the "secret" (but known to the listener) pseudorandom sequence, one can eventually detect a signal, and then discern modulations of that signal. In the GPS system, the C/A signal has been publicly disclosed to be a 1023-bit sequence, but the pseudorandom sequence used in the P(Y) signal remains a secret. The same technique can be used to transmit and receive covert intelligence from short-range, extremely low power systems, without an enemy even being aware of the existence of a radio signal. This is analogous to steganography. See also spread spectrum communications.

Miscellaneous applications

Information theory also has applications in gambling and investing, black holes, bioinformatics, and music.

References

The classic paper

  • Claude E. Shannon. "A Mathematical Theory of Communication". Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, July and October 1948.


Textbooks on information theory

  • Claude E. Shannon, Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949. ISBN 0-252-72548-4
  • Robert Gallager. Information Theory and Reliable Communication. New York: John Wiley and Sons, 1968. ISBN 0-471-29048-3
  • Robert B. Ash. Information Theory. New York: Interscience, 1965. ISBN 0-470-03445-9; New York: Dover, 1990. ISBN 0-486-66521-6
  • Thomas M. Cover, Joy A. Thomas. Elements of Information Theory, 1st Edition. New York: Wiley-Interscience, 1991. ISBN 0-471-06259-6; 2nd Edition. New York: Wiley-Interscience, 2006. ISBN 0-471-24195-4
  • Stanford Goldman. Information Theory. New York: Prentice Hall, 1953; New York: Dover, 1968. ISBN 0-486-62209-6; 2005 reprint ISBN 0-486-44271-3
  • Fazlollah M. Reza. An Introduction to Information Theory. New York: McGraw-Hill, 1961; New York: Dover, 1994. ISBN 0-486-68210-2
  • Raymond W. Yeung. A First Course in Information Theory. Kluwer Academic/Plenum Publishers, 2002. ISBN 0-306-46791-7
  • David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1
  • Masud Mansuripur. Introduction to Information Theory. New York: Prentice Hall, 1987. ISBN 0-13-484668-0

Other books

  • James Bamford, The Puzzle Palace, Penguin Books, 1983. ISBN 0-14-006748-5
  • Leon Brillouin, Science and Information Theory, Mineola, N.Y.: Dover, [1956, 1962] 2004. ISBN 0-486-43918-6
  • A. I. Khinchin, Mathematical Foundations of Information Theory, New York: Dover, 1957. ISBN 0-486-60434-9
  • H. S. Leff and A. F. Rex, Editors, Maxwell's Demon: Entropy, Information, Computing, Princeton University Press, Princeton, NJ (1990). ISBN 0-691-08727-X
  • Tom Siegfried, The Bit and the Pendulum, Wiley, 2000. ISBN 0-471-32174-5
  • Charles Seife, Decoding The Universe, Viking, 2006. ISBN 0-670-03441-X
