Talk:Channel capacity

From Wikipedia, the free encyclopedia

WP:TEL This article is within the scope of WikiProject Telecommunications, an attempt to build a comprehensive and detailed guide to telecommunications on Wikipedia. If you would like to participate, you can edit the article attached to this page, or visit the project page, where you can join the project as a "full time member" and/or contribute to the discussion.


deleted information -- worth rewriting?

The following information was deleted from this article (see [1] diff):

Channel capacity, shown often as "C" in communication formulas, is the amount of discrete information bits that a defined area or segment in a communications medium can hold. Thus, a telephone wire may be considered a channel in this sense. Breaking up the frequency bandwidth into smaller sub-segments, and using each of them to carry communications results in a reduction in the number of bits of information that each segment can carry. The total number of bits of information that the entire wire may carry is not expanded by breaking it into smaller sub-segments.
In reality, this sub-segmentation reduces the total amount of information that the wire can carry due to the additional overhead of information that is required to distinguish the sub-segments from each other.

However, no reason for this deletion was given. Is this information faulty? Should it be rewritten?

WpZurp 16:46, 29 July 2005 (UTC)

The above information is not very relevant to the article. However, the article could definitely use some rewriting, as I have added some information to it in rather rough form. -- 130.94.162.61 02:34, 22 February 2006 (UTC)

Figure-text agreement

The statement of the noisy-channel coding theorem does not agree well with the figure. I will try to fix it. 130.94.162.64 19:10, 22 May 2006 (UTC)


X given Y or Y given X

The article currently reads "Let p(y | x) be the conditional probability distribution function of X given Y". Should this not be "Let p(y | x) be the conditional probability distribution function of Y given X"?

Yes, you are right. Bob.v.R 14:24, 17 August 2006 (UTC)
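To make the corrected convention concrete, here is a small illustrative sketch (the function name and variable names are mine, not from the article) in which `p_y_given_x[x][y]` is read as p(y | x), the distribution of Y given X. With a binary symmetric channel and the uniform input, the mutual information I(X;Y) equals the well-known capacity 1 - H(eps):

```python
import math

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete memoryless channel.

    p_x[x] is the input distribution; p_y_given_x[x][y] = p(y | x),
    i.e. the conditional distribution of Y given X."""
    n_y = len(p_y_given_x[0])
    # Output marginal p(y) = sum_x p(x) p(y | x)
    p_y = [sum(p_x[x] * p_y_given_x[x][y] for x in range(len(p_x)))
           for y in range(n_y)]
    info = 0.0
    for x, px in enumerate(p_x):
        for y, pyx in enumerate(p_y_given_x[x]):
            if px > 0 and pyx > 0:
                info += px * pyx * math.log2(pyx / p_y[y])
    return info

# Binary symmetric channel with crossover probability eps = 0.1.
# The uniform input achieves capacity C = 1 - H(0.1).
eps = 0.1
bsc = [[1 - eps, eps], [eps, 1 - eps]]
print(mutual_information([0.5, 0.5], bsc))  # ≈ 0.531 bits per channel use
```

Note that the rows of `bsc` are indexed by x and the columns by y, which is exactly the "Y given X" reading the correction asks for; transposing the matrix would give the same number here only because the BSC happens to be symmetric.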

Prepositions

I'd like to make a few comments on the following wording. "Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel."

First, I wonder who "we" are, and whether that includes or excludes me.

It is evident from the illustration that the channel receives messages from the transmitter and transmits them to the receiver. One might suspect that "messages received...over our channel" are those which go around it somehow, or those that are received by it. The usual prepositions used with transmission and reception, namely "from," "to," "through," and "by" don't appear in the sentence. "Over" is certainly used in this way, but is less precise than "through." I find it troubling. Also, the concept of "space" enters the wording in a way which further complicates comprehension. Is it "the space of messages" which is transmitted?

I'm not familiar enough with the material to edit the sentence myself, but would like to suggest amending it to take advantage of the more universal prepositions and avoid ambiguity.

Perhaps one of the following, which avoid the hazards, may express the intended message.

"X represents the space of messages entering, and Y the space of messages leaving, the channel in one unit of time."

"X is the number of messages per unit time entering the channel from the transmitter, and Y, those it sends to the receiver."

"X is the flow of messages into, and Y out of, the channel." D021317c 02:18, 24 March 2007 (UTC)

Mange01's new lead that I reverted

In digital communication, channel capacity is a theoretical upper bound for the amount of non-redundant information, in bit per second, that can be transmitted without bit errors over a point-to-point channel.

When calculating the channel capacity of a noisy channel, an ideal channel coding is assumed, i.e. an optimal combination of forward error correction, modulation and filtering. In practice such an ideal code does not exist, meaning the channel capacity should be considered as a theoretical upper bound for the information entropy per unit time, i.e the maximum possible net bit rate exclusive of redundant forward error correction that can be achieved. The Shannon-Hartley noisy-channel coding theorem defines the channel capacity of a given channel characterized by additive white gaussian noise and a certain bandwidth and signal-to-noise ratio.

In case of a noiseless channel, forward error correction is not required, meaning that the channel capacity is equal to the maximum data signalling rate, i.e. the maximum gross bit rate. The maximum data signalling rate for a base band communication system using a line coding scheme, i.e. pulse amplitude modulation, with a certain number of alternative signal levels, is given by Hartley's law, which could be described as an application of the Nyquist sampling theorem to data transmission.

The channel capacity can be considered as the maximum throughput of a point-to-point physical communication link.

I reverted that because pretty much every sentence of it is either imprecise or not true. Please let me know if I need to enumerate the errors. Dicklyon 00:14, 21 June 2007 (UTC)
Dear Dicklyon, I would be grateful if you did that. I would be even more happy if you tried to improve my text instead of just reverting it. I have written similar formulations on other wikipedia pages and in my own course material, and I don't understand what is wrong. Mange01 10:41, 21 June 2007 (UTC)
OK, here are some comments
  • "non-redundant information" is redundant, and therefore misleading, as it seems to rely on a different definition of information than the rest of this topic does.
  • "in bit per second" is unnecessarily narrow, therefore misleading the reader about what channel capacity really is
  • "transmitted without bit errors" is misleading; the probability of error can be made low, but the concept of channel capacity does not allow "without error" in general; and bit is too narrow again
  • "over a point-to-point channel" is perhaps too narrow, too; what about broadcast and multi-user channel capacity?
  • "When calculating the channel capacity of a noisy channel, an ideal channel coding is assumed" is simply not true. The calculation of capacity is independent of any code. And even the proof of the channel-code theorem assumes a random code, not an ideal code.
  • "information entropy per unit time, i.e the maximum possible net bit rate exclusive of redundant forward error correction that can be achieved" is a mish-mash of terminological confusion, and the sentence structure obscures the important point that the part after the "i.e." is part of what capacity is an upper bound on; it tempts the reader to think you're saying capacity can be achieved.
  • The stuff about line coding and Hartley's law is a bit off topic, or should be brought in later as a relationship, not mixed into the lead. And "Hartley's law, which could be described as an application of the Nyquist sampling theorem to data transmission" implies that Nyquist said something about sampling, which he did not; in fact he wrote about line coding, and the sampling result can be considered an offshoot of that.
  • And the final sentence, equating capacity to "maximum throughput" is just nuts.
Dicklyon 03:31, 22 June 2007 (UTC)
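For concreteness, the two quantities being distinguished above can be computed side by side. This is only an illustrative sketch (the function names and the 3 kHz / 30 dB telephone-channel figures are mine, not from the article): Shannon-Hartley gives the AWGN capacity as a bound, while Hartley's law gives the signalling rate of a noiseless M-level line code.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: AWGN channel capacity in bits per second,
    C = B * log2(1 + S/N), with S/N as a linear (not dB) ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def hartley_rate(bandwidth_hz, levels):
    """Hartley's law: maximum signalling rate in bits per second for a
    noiseless channel using M distinguishable levels, R = 2B * log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

# A 3 kHz channel at 30 dB SNR (snr_linear = 10**(30/10) = 1000):
print(shannon_capacity(3000, 1000))  # ≈ 29901.7 bit/s
# The same bandwidth with 4 noiseless signal levels:
print(hartley_rate(3000, 4))         # 12000.0 bit/s
```

The first number is an upper bound that no code achieves exactly; the second is an actual gross signalling rate, attainable only because noise is assumed away. Conflating the two is precisely the objection raised above.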

I understand most of your comments. Now I am more happy. Actually I have made thousands of revisions to Wikipedia, and they have been reverted only twice, both times by you. It is about to become a bad habit of yours. ;)

Anyway, I would like to invite you to become a member of the WP:TEL WikiProject. Mange01 11:18, 26 June 2007 (UTC)

2 Channel capacity pages

There is a subsection of information theory that covers the same topic. The two should be merged. —Preceding unsigned comment added by Besap (talk • contribs) 10:22, 7 November 2007 (UTC)

Capacity of images

I added this link, which was then removed. That page is what originally brought me to this article. It seems like a particularly intuitive application of channel capacity that could make for some interesting examples. Particularly, the application of channel capacity to image quality seems like a good way to cut through shortcomings of other quality metrics such as megapixels, bit depth, sensor noise, and lens MTF. Can anyone comment on this application? —Ben FrantzDale (talk) 14:39, 9 January 2008 (UTC)

That's one of many approximate hypothetical applications of the capacity concept, but otherwise doesn't contribute to understanding the concept or where it is known to really apply. Dicklyon (talk) 19:28, 9 January 2008 (UTC)
Fair enough. It's all interesting stuff. —Ben FrantzDale (talk) 16:24, 15 January 2008 (UTC)