Channel capacity

In electrical engineering, computer science and information theory, channel capacity is the tightest upper bound on the amount of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[1][2]

Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which it can be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.[3]

Formal definition

Let X represent the space of signals that can be transmitted, and Y the space of signals received, during a block of time over the channel. Let

p_{Y|X}(y|x)

be the conditional distribution function of Y given X. Treating the channel as a known statistical system, p_{Y|X}(y|x) is an inherent fixed property of the communications channel (representing the nature of the noise in it). Then the joint distribution

p_{X,Y}(x,y)

of X and Y is completely determined by the channel and by the choice of

p_X(x) = \int_y p_{X,Y}(x,y)\,dy

the marginal distribution of signals we choose to send over the channel. The joint distribution can be recovered by using the identity

p_{X,Y}(x,y) = p_{Y|X}(y|x)\,p_X(x)

Under these constraints, we next maximize the amount of information that can be communicated over the channel. The appropriate measure for this is the mutual information I(X;Y), and the maximum of the mutual information over all input distributions is called the channel capacity; it is given by

C = \sup_{p_X} I(X;Y)
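
The following is a minimal numerical sketch of this definition, not taken from the article: for a binary symmetric channel with an assumed crossover probability eps, it builds the joint distribution p_{X,Y}(x,y) = p_{Y|X}(y|x) p_X(x), computes the mutual information I(X;Y) for each candidate input distribution on a grid, and takes the maximum as an estimate of the capacity. The channel, the grid resolution, and the variable names are illustrative choices.

    # Capacity of a binary symmetric channel by direct maximization of I(X;Y).
    # Illustrative sketch; eps and the grid are arbitrary choices.
    import numpy as np

    eps = 0.1                                     # assumed crossover probability
    p_y_given_x = np.array([[1 - eps, eps],       # rows indexed by x, columns by y
                            [eps, 1 - eps]])

    def mutual_information(p_x):
        """I(X;Y) in bits for a given input distribution p_x."""
        p_xy = p_y_given_x * p_x[:, None]         # p_{X,Y}(x,y) = p_{Y|X}(y|x) p_X(x)
        p_y = p_xy.sum(axis=0)                    # marginal distribution of Y
        mask = p_xy > 0
        return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x[:, None] * p_y)[mask]))

    # Sweep p_X(0) over a grid and keep the largest mutual information.
    grid = np.linspace(0.001, 0.999, 999)
    capacity = max(mutual_information(np.array([a, 1 - a])) for a in grid)
    closed_form = 1 + eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps)
    print(f"numerical C ~ {capacity:.4f} bits/use, closed form {closed_form:.4f}")

For this channel the maximizing input distribution is uniform, and the numerical estimate agrees with the closed form C = 1 - H(eps) for the binary symmetric channel.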

Noisy-channel coding theorem

The noisy-channel coding theorem states that for any ε > 0 and for any rate R less than the channel capacity C, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε for a sufficiently large block length. Conversely, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.
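
As a rough illustration of the achievability half of this theorem, and not part of the article, the sketch below simulates random codes at a fixed rate R below the capacity of a binary symmetric channel, decodes by minimum Hamming distance, and estimates the block error probability for a few block lengths. All parameters (crossover probability 0.05, rate 0.4, the block lengths, the number of trials) are arbitrary small-scale choices, so the error rate only tends to fall with block length rather than vanish.

    # Random coding on a binary symmetric channel: block error rate versus block length.
    # Illustrative sketch only; all parameters are arbitrary small-scale choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def bsc_capacity(p):
        """Capacity of a BSC with crossover probability p, in bits per channel use."""
        return 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)

    def block_error_rate(n, R, p, trials=2000):
        """Estimate block error probability of random codes of rate R and length n."""
        M = 2 ** int(np.floor(n * R))                    # number of codewords
        errors = 0
        for _ in range(trials):
            code = rng.integers(0, 2, size=(M, n))       # fresh random codebook
            sent = rng.integers(0, M)                    # message index
            received = code[sent] ^ (rng.random(n) < p)  # BSC bit flips
            dists = np.count_nonzero(code ^ received, axis=1)
            if np.argmin(dists) != sent:                 # minimum-distance decoding
                errors += 1
        return errors / trials

    p = 0.05
    print(f"BSC(0.05) capacity ~ {bsc_capacity(p):.3f} bits per channel use")
    for n in (8, 16, 24):                                # rate R = 0.4 < capacity
        print(f"n = {n:2d}: estimated block error rate ~ {block_error_rate(n, 0.4, p):.3f}")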

Example application

An application of the channel capacity concept to an additive white Gaussian noise channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:

C = B \log \left( 1 + \frac{S}{N} \right)

C is measured in bits per second if the logarithm is taken in base 2, or in nats per second if the natural logarithm is used, assuming B is in hertz. The signal and noise powers S and N are measured in watts or volts², so the signal-to-noise ratio here is expressed as a power ratio, not in decibels (dB); since figures are often cited in dB, a conversion may be needed. For example, a signal-to-noise ratio of 30 dB corresponds to a power ratio of 10^{30/10} = 10^3 = 1000.
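
A small worked example of this formula, with illustrative numbers not taken from the article (the helper name awgn_capacity_bps is ours), including the dB-to-power-ratio conversion just described:

    # Shannon-Hartley capacity of an AWGN channel, with the SNR given in dB.
    import math

    def awgn_capacity_bps(bandwidth_hz, snr_db):
        """Capacity in bits per second from bandwidth in hertz and SNR in dB."""
        snr_linear = 10 ** (snr_db / 10)               # e.g. 30 dB -> a power ratio of 1000
        return bandwidth_hz * math.log2(1 + snr_linear)

    # An assumed 3 kHz channel at 30 dB SNR, roughly a classic telephone-line scenario:
    print(f"{awgn_capacity_bps(3000, 30):.0f} bit/s")  # about 29.9 kbit/s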

References

  1. ^ Saleem Bhatti. "Channel capacity." Lecture notes for M.Sc. Data Communication Networks and Distributed Systems D51 – Basic Communications and Networks.
  2. ^ Jim Lesurf. "Signals look like noise!" Information and Measurement, 2nd ed.
  3. ^ Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 2006.