Channel capacity

From Wikipedia, the free encyclopedia

In information theory, channel capacity is the amount of discrete information that can be reliably transmitted over a channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting rate of information transfer (in units of information per unit time) that can be achieved with arbitrarily small error probability.

Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which one can compute the maximal amount of information that can be carried by a channel. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.

Formal definition

[Figure: schematic of a communication system, with input X passing through a noisy channel to produce output Y]

Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let

p_{Y|X}(y|x)

be the conditional distribution function of Y given X. We will consider p_{Y|X}(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of the channel). Then the joint distribution

p_{X,Y}(x,y)

of X and Y is completely determined by our channel and by our choice of

p_X(x)=\int_y p_{X,Y}(x,y)\,dy

the marginal distribution of messages we choose to send over the channel. The joint distribution can be recovered by using the identity

p_{X,Y}(x,y)=p_{Y|X}(y|x)\,p_X(x)
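As a concrete sketch of these relationships (the binary symmetric channel and the crossover probability 0.1 are illustrative assumptions, not part of the article), the joint distribution can be formed from the channel's conditional distribution and a chosen input marginal, and the marginal recovered by summing out y:

```python
import numpy as np

# Hypothetical example: a binary symmetric channel with crossover
# probability eps, written as a matrix p_y_given_x[x, y] = p_{Y|X}(y|x).
eps = 0.1
p_y_given_x = np.array([[1 - eps, eps],
                        [eps, 1 - eps]])   # rows indexed by x, columns by y
p_x = np.array([0.5, 0.5])                 # input distribution we choose

# Joint distribution via the identity p_{X,Y}(x,y) = p_{Y|X}(y|x) p_X(x).
p_xy = p_y_given_x * p_x[:, None]

# Summing the joint over y recovers the input marginal p_X.
assert np.allclose(p_xy.sum(axis=1), p_x)
```

Here the channel matrix is the "inherent fixed property", while `p_x` is the free choice over which capacity will later be maximized.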

Under these constraints, we would like to maximize the amount of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information I(X;Y), and this maximum mutual information is called the channel capacity and is given by

C = \sup_{p_X} I(X;Y)\,
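For a discrete memoryless channel, this maximization over input distributions can be carried out numerically with the Blahut-Arimoto alternating-maximization algorithm. The sketch below is an illustrative implementation (the channel, iteration count, and tolerance constants are assumptions, not from the article); it is checked against the closed-form capacity of a binary symmetric channel, C = 1 - H2(eps):

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Approximate the capacity (in bits) of a discrete memoryless channel
    given as a matrix W[x, y] = p(y|x), by alternating maximization over
    the input distribution (Blahut-Arimoto)."""
    nx = W.shape[0]
    r = np.full(nx, 1.0 / nx)              # current guess for p_X
    for _ in range(iters):
        q = r[:, None] * W                 # unnormalized posterior p(x|y)
        q /= q.sum(axis=0, keepdims=True)  # normalize over x
        # Update r(x) proportional to exp(sum_y W(y|x) log q(x|y)).
        log_r = np.sum(W * np.log(q + 1e-300), axis=1)
        r = np.exp(log_r - log_r.max())
        r /= r.sum()
    # Mutual information I(X;Y) at the final input distribution, in bits.
    q = r[:, None] * W
    q /= q.sum(axis=0, keepdims=True)
    return np.sum(r[:, None] * W * np.log2((q + 1e-300) / r[:, None]))

# Sanity check: binary symmetric channel with crossover probability 0.1,
# whose capacity has the closed form C = 1 - H2(eps).
eps = 0.1
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
H2 = -eps * np.log2(eps) - (1 - eps) * np.log2(1 - eps)
print(blahut_arimoto(W), 1 - H2)  # both approximately 0.531 bits per use
```

Because this channel is symmetric, the maximizing input distribution is uniform, and the iteration converges immediately; for asymmetric channels the optimal p_X is generally non-uniform and must be found iteratively.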

Noisy-channel coding theorem

The noisy-channel coding theorem states that for any ε > 0 and for any rate R less than the channel capacity C, there is an encoding and decoding scheme that ensures the probability of block error is less than ε, provided the block length is sufficiently large. Conversely, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.
