Shannon–Hartley theorem

In information theory, the Shannon–Hartley theorem is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link: a bound on the maximum amount of error-free digital data (that is, information) that can be transmitted with a specified bandwidth in the presence of noise interference, under the assumption that the signal power is bounded and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.

Statement of the theorem

Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem states that the channel capacity C, meaning the theoretical maximum rate at which error-free (that is, arbitrarily low bit-error-rate) data can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is:

C = B \log_2 \left( 1 + \frac{S}{N} \right)

where

C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz;
S is the total signal power over the bandwidth;
N is the total noise power over the bandwidth; and
S/N is the signal-to-noise ratio of the communication signal to the Gaussian noise interference, expressed as a linear power ratio (not in decibels).
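
As a quick numerical illustration, the capacity formula can be evaluated directly. The following Python sketch is illustrative only; the function name shannon_capacity and the example values are assumptions, not part of the theorem:

    import math

    def shannon_capacity(bandwidth_hz, signal_power, noise_power):
        # C = B * log2(1 + S/N), in bits per second.
        # Signal and noise powers must be in the same linear units
        # (e.g. watts), not in decibels.
        return bandwidth_hz * math.log2(1 + signal_power / noise_power)

    # A 3 kHz channel with a linear SNR of 1000 (i.e. 30 dB):
    print(shannon_capacity(3000, 1000, 1))  # about 29902 bit/s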

Historical development

During the late 1920s, Harry Nyquist and Ralph Hartley developed a handful of fundamental ideas related to the transmission of information, particularly in the context of the telegraph as a communications system. At the time, these concepts were powerful breakthroughs individually, but they were not part of a comprehensive theory. In the 1940s, Claude Shannon developed the concept of channel capacity, based in part on the ideas of Nyquist and Hartley, and then formulated a complete theory of information and its transmission.

Nyquist rate

In 1927, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the bandwidth of the channel. In symbols,

f_p \le 2B \,

where fp is the pulse frequency (in pulses per second) and B is the bandwidth (in hertz). The quantity 2B later came to be called the Nyquist rate, and transmitting at this limiting pulse rate of 2B pulses per second came to be known as signalling at the Nyquist rate. Nyquist published his results in 1928 as part of his paper "Certain Topics in Telegraph Transmission Theory."
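
For example, under this limit a telegraph channel with a bandwidth of 3000 Hz can carry at most 2 × 3000 = 6000 independent pulses per second.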

Hartley's law

During that same year, Hartley formulated a way to quantify information and its rate of transmission across a communications channel. This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity.

Hartley argued that the maximum number of distinct pulses that can be transmitted and received reliably over a communications channel is limited by the dynamic range of the signal amplitude and the precision with which the receiver can distinguish amplitude levels. Specifically, if the amplitude of the transmitted signal is restricted to the range of [−A ... +A] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct pulses M is given by

M = 1 + \frac{A}{\Delta V}
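
For example, if the signal amplitude is restricted to the range of ±1 volt and the receiver can distinguish levels to a precision of ΔV = 0.25 volt, then Hartley's formula gives M = 1 + 1/0.25 = 5 distinguishable pulse levels.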

By taking information to be the logarithm of the number of distinct messages that could be sent, Hartley then constructed a measure of information proportional to both the bandwidth of a channel and to the duration of its use. Hartley's law is sometimes quoted as just that proportionality.

Hartley then combined Nyquist's observation that the number of independent pulses that could be put through a channel of bandwidth B hertz was 2B pulses per second with his own quantification of the quality or noise of a channel, in terms of the number of pulse levels M that could be reliably distinguished, to arrive at a quantitative measure for achievable information rate.

Hartley's law is often quoted in this more quantitative form, as an achievable information rate of R bits per second:

R = 2B \log_2(M) \,

Hartley did not work out exactly how the number M should depend on the noise statistics of the channel, or how the communication could be made reliable even when individual symbol pulses could not be reliably distinguished to M levels; with Gaussian noise statistics, system designers had to choose a very conservative value of M to achieve a low error rate.
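
As a numerical sketch of Hartley's rate (the function name hartley_rate and the parameter values are illustrative assumptions):

    import math

    def hartley_rate(bandwidth_hz, num_levels):
        # Hartley's achievable rate R = 2B * log2(M), in bits per second,
        # where 2B is the Nyquist pulse rate and M the number of levels.
        return 2 * bandwidth_hz * math.log2(num_levels)

    # A 3 kHz channel with M = 5 reliably distinguishable levels:
    print(hartley_rate(3000, 5))  # about 13932 bit/s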

The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's observations about a logarithmic measure of information and Nyquist's observations about the effect of bandwidth limitations.

Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of 2B symbols per second. Some authors refer to it as a capacity. But such an errorless channel is an idealization, and if M is chosen small enough to make the noisy channel nearly errorless, the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth B, which is the Hartley–Shannon result that followed later.

Noisy channel coding theorem and capacity

Claude Shannon's development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The proof of the theorem shows that a randomly constructed error correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes.

Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity C and information transmitted at a rate R, then if

R < C \,

there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below the limit of C bits per second.

The converse is also important. If

R > C \,

the probability of error at the receiver increases without bound as the rate is increased. So no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.

Shannon–Hartley theorem

The Shannon–Hartley theorem establishes what that channel capacity is, for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in Hartley's information rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels.

If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time. Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise.

So how do bandwidth and noise affect the rate at which information can be transmitted over an analog channel?

Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate. This is because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. If we combine both noise and bandwidth limitations, however, we do find there is a limit to the amount of information that can be transferred by a signal of a bounded power, even when clever multi-level encoding techniques are used.

In the channel considered by the Shannon–Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon–Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power.

Such a channel is called the additive white Gaussian noise (AWGN) channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, the analysis is conveniently simplified if one assumes that such error sources are also Gaussian and independent.
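
As a minimal sketch of this channel model (all parameter values are arbitrary choices for illustration), the following Python fragment adds white Gaussian noise of a known power to a two-level signal:

    import numpy as np

    rng = np.random.default_rng(0)

    noise_power = 1.0                        # N: variance of the Gaussian noise
    x = rng.choice([-2.0, 2.0], size=10000)  # transmitted signal, power S = 4
    noise = rng.normal(0.0, np.sqrt(noise_power), size=10000)
    y = x + noise                            # what the receiver measures

    print(np.mean(x**2), np.var(noise))      # empirical S and N, near 4.0 and 1.0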

Implications of the theorem

Comparison of Shannon's capacity to Hartley's law

Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels M:

2B \log_2(M) = B \log_2 \left( 1+\frac{S}{N} \right)
M = \sqrt{1+\frac{S}{N}}

The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of rms signal amplitude to noise standard deviation.

This similarity in form between Shannon's capacity and Hartley's law should not be interpreted to mean that M pulse levels can literally be sent without any confusion; more levels are needed to allow for redundant coding and error correction, but the net data rate that can be approached with coding is equivalent to using that M in Hartley's law.
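
A quick numerical check of this correspondence, assuming an arbitrary 20 dB SNR over a 4 kHz channel:

    import math

    b, snr = 4000, 100                 # bandwidth in Hz; linear S/N (20 dB)
    m = math.sqrt(1 + snr)             # effective number of distinguishable levels
    print(m)                           # about 10.05

    # Hartley's law with this M reproduces the Shannon capacity:
    print(2 * b * math.log2(m))        # about 26633 bit/s
    print(b * math.log2(1 + snr))      # same value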

Alternative forms

Frequency-dependent (colored noise) case

In the simple version above, the signal and noise are fully uncorrelated, and in that case S + N is the total power of the received signal and noise together. A generalization of the above equation for the case where the additive noise is not white (or the S/N is not constant with frequency over the bandwidth) is obtained by treating the channel as many narrow, independent Gaussian channels in parallel:

C = \int_{0}^B  \log_2 \left( 1+\frac{S(f)}{N(f)} \right) df

where

C is the channel capacity in bits per second;
B is the bandwidth of the channel in Hz;
S(f) is the signal power spectrum;
N(f) is the noise power spectrum; and
f is frequency in Hz.
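
As a sketch of how this integral can be evaluated numerically, the following Python fragment approximates C with a Riemann sum; the falling S(f)/N(f) profile is an arbitrary assumption chosen for illustration:

    import numpy as np

    b = 4000.0                            # bandwidth in Hz
    f = np.linspace(0.0, b, 100000)       # frequency grid over [0, B]
    snr = 100.0 / (1.0 + f / 1000.0)      # hypothetical frequency-dependent S(f)/N(f)

    # C = integral of log2(1 + S(f)/N(f)) df, approximated by a Riemann sum
    df = f[1] - f[0]
    print(np.sum(np.log2(1.0 + snr)) * df)   # capacity in bit/s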

Note: the theorem applies only to noise that is a stationary Gaussian process. This formula's way of introducing frequency-dependent noise cannot describe all continuous-time noise processes. For example, consider a noise process that consists of adding a random wave whose amplitude is 1 or −1 at any point in time, and a channel that adds such a wave to the source signal. Such a wave's frequency components are highly dependent. Though such noise may have a high power, it is fairly easy to transmit a continuous signal with much less power than one would need if the underlying noise were a sum of independent noises in each frequency band.

Approximations

For signal-to-noise ratios that are either very large or very small and constant over the band, the capacity formula can be approximated as follows (both cases are compared numerically in the sketch after the list):

  • If S/N >> 1, then
C \approx 0.332 \cdot B \cdot \mathrm{SNR\ (in\ dB)}
where
\mathrm{SNR\ (in\ dB)} = 10 \log_{10} \frac{S}{N}
  • Similarly, if S/N << 1, then
C \approx 1.44 \cdot B \cdot \frac{S}{N}
In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, with spectral density N_0 watts per hertz, in which case the total noise power is B \cdot N_0.
C \approx 1.44 \cdot \frac{S}{N_0}
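
The following Python sketch, with arbitrarily chosen SNR values, compares the exact formula against both approximations:

    import math

    b = 1e6                                  # bandwidth in Hz

    snr = 1000.0                             # high-SNR case: 30 dB
    print(b * math.log2(1 + snr))            # exact: about 9.97e6 bit/s
    print(0.332 * b * 10 * math.log10(snr))  # approximation: about 9.96e6 bit/s

    snr = 0.01                               # low-SNR case: -20 dB
    print(b * math.log2(1 + snr))            # exact: about 14356 bit/s
    print(1.44 * b * snr)                    # approximation: 14400 bit/s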

Examples

  1. If the SNR is 20 dB, and the bandwidth available is 4 kHz, which is appropriate for telephone communications, then C = 4000 · log2(1 + 100) = 4000 · log2(101) ≈ 26.63 kbit/s. Note that the value of S/N = 100 is equivalent to an SNR of 20 dB.
  2. If it is required to transmit at 50 kbit/s, and a bandwidth of 1 MHz is used, then the minimum S/N required is given by 50 = 1000 · log2(1 + S/N), so S/N = 2^{C/B} − 1 ≈ 0.035, corresponding to an SNR of −14.5 dB. This shows that it is possible to transmit using signals which are actually much weaker than the background noise level, as in spread-spectrum communications (see the numerical check below).
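
The inversion in the second example can be verified in a few lines of Python:

    import math

    c, b = 50_000.0, 1_000_000.0     # target rate (bit/s) and bandwidth (Hz)
    snr = 2 ** (c / b) - 1           # invert C = B * log2(1 + S/N)
    print(snr)                       # about 0.0353
    print(10 * math.log10(snr))      # about -14.5 dB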

References

  • R. V. L. Hartley, "Transmission of Information", Bell System Technical Journal, July 1928.
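  • H. Nyquist, "Certain Topics in Telegraph Transmission Theory", Transactions of the AIEE, vol. 47, pp. 617–644, April 1928.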
  • C. E. Shannon, The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press, 1949 (reprinted 1998).
  • C. E. Shannon, "Communication in the presence of noise", Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, Jan. 1949.
  • Herbert Taub, Donald L. Schilling (1986). Principles of Communication Systems. McGraw-Hill.
  • John M. Wozencraft and Irwin Mark Jacobs (1965). Principles of Communications Engineering. New York: John Wiley & Sons.
