Talk:NTSC


NTSC is just a colour encoding system!

This seems to be lost on a lot of people, and this article doesn't do much to explain it. NTSC is purely related to how colour is encoded over luminance. NTSC-M is the North American broadcast standard, and it's the M rather than the NTSC that designates number of lines, frames per second, etc. The third paragraph of the PAL article makes this clear, but it is only briefly mentioned in this NTSC article under the "Refresh rate" section: "The NTSC format—or more correctly the M format; see broadcast television systems—consists of 29.97 interlaced frames".

Given that it's the M format that determines this, and not NTSC, I propose that the section on refresh rate be moved to Broadcast television system and that only colour-related technical details be left in this article. I also propose that the third paragraph of the PAL article be adapted and included in this NTSC article. The opening paragraph of this article should also be modified, as it doesn't mention colour at all! Balfa 20:09, 21 June 2006 (UTC)

I'm sorry, but you're wrong. The National Television System Committee had (has?) control over all aspects of the format, not just the colour subcarrier.
Atlant 22:06, 21 June 2006 (UTC)
Oh. That's unfortunate, I was liking the idea of separation :) So then the line "The NTSC format—or more correctly the M format" is incorrect? Balfa 13:24, 22 June 2006 (UTC)

The NTSC standard does include line and field rates, but to use PAL as synonymous with the 625 line system is wrong - PAL is just a colour coding system, not a line standard. Not all PAL systems are 625, and not all 625 systems are PAL. The author should have known that they are not interchangeable terms.

NTSC Standard

The following was formerly in NTSC standard which, being redundant, I have redirected to NTSC:

NTSC standard: Abbreviation for National Television Standards Committee standard. The North American standard (525-line interlaced raster-scanned video) for the generation, transmission, and reception of television signals.

Exact frame rate

I think it should be made more clear that the actual frame rate of NTSC is 30/1.001 Hz. 29.97 Hz is an inexact approximation. That 29.97 is the actual rate is a common misconception which I would prefer not to propagate.

Dmaas 16:39, 7 January 2005
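
A minimal illustration of the distinction, as a Python sketch using exact rational arithmetic:

    from fractions import Fraction

    # The exact NTSC frame rate is 30/1.001 Hz, i.e. 30000/1001 Hz.
    exact = Fraction(30000, 1001)
    print(float(exact))                  # 29.97002997... (non-terminating)
    print(exact == Fraction(2997, 100))  # False: 29.97 Hz is only an approximation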

Unsharp

Why is NTSC so unsharp and blurry compared to PAL? --Abdull 21:17, 17 Jan 2005 (UTC)

Because in PAL both the vertical line count and the bandwidth available for horizontal resolution are higher (effectively about 338x483 for NTSC compared to 403x576 for PAL, according to one source). The PAL image has nearly half-again as many "pixels". There is less motion information (lower frame rate) and colour information, but the sharpness you perceive depends mostly on the luminance info in individual fields.
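To put numbers on the "half-again" claim, here is a quick sketch (the effective resolutions are the one source's estimates quoted above):

    # Effective resolutions quoted above (one source's estimate).
    ntsc = 338 * 483   # about 163,000 effective "pixels"
    pal = 403 * 576    # about 232,000 effective "pixels"
    print(ntsc, pal, pal / ntsc)   # ratio of about 1.42, i.e. nearly half-again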
Now that you mention it, there is a general POV cast to the article that seems subtly defensive of NTSC and critical of PAL. Hopefully it can be gradually rewritten to strain that out (a more accurate assessment might be that both systems suck ;). - toh 21:00, 2005 Mar 5 (UTC)
Be a little cautious. The video bandwidth of the various PAL implementations varies from a low of 4.2MHz (the same as NTSC) up to a much-nicer 6.0MHz. It's only the systems with video bandwidths greater than NTSC that have horizontal resolutions better than NTSC. (Admittedly, I think this constitutes the majority of PAL systems in the world, although the less-great 5MHz video bandwidth seems most popular).
There's a pretty nice description (and table) at Broadcast television system.
In the final analysis, I think you got it right: both systems suck ;).
Atlant 22:18, 5 Mar 2005 (UTC)
Yes I've just read this and I definitely get the feeling that NTSC is being painted in a better light than PAL. I'm not sure what it is, something subtle in the way it's written. BTW the PAL entry is very poor compared to this and the SECAM one!
--GeorgeShaw 17:49, 2005 May 11 (UTC)
Well come on, some PAL people! The gauntlet is cast, step up to the plate, and all those other sportsy metaphors! Let's see some activity over there on the PAL article! After all, we'll soon all be swept away by digital TV systems. :-)
Atlant 18:11, 11 May 2005 (UTC)

The question should really be "why are 525 line systems blurry compared to 625 line ones?" 625 line systems use 20 - 50% more horizontal and 20% more vertical resolution than 525 systems. This is a function of the line number and system bandwidth rather than the colour system used. 625 NTSC, which I have seen, should be sharper than 625 PAL, because the luminance and chrominance do not overlap as much, so can be separated more efficiently.

Map

Note that the map is cropped such that a good half of New Zealand is missing. Does anyone have the original image? Or is a completely new one necessary to fix this? --abhi 17:18, 2005 Apr 15 (UTC)

Consumer-grade equipment

Reading magazine reviews of both HDTV and SDTV sets, it seems to me that many TV manufacturers deliberately deviate from the NTSC specification; apparently it's normal to have some amount of "wide angle color demodulation" and "red boost" inherent in NTSC color decoders; this does not seem to be a common manufacturing practice with PAL devices. That of course contributes to the "Never the same color" stigma as well; maybe someone with a more in-depth technical understanding of this could write about this in the article. NewRisingSun 16:10, 16 Jun 2005 (UTC)

IIRC, this was introduced in something like the '70s, when the decoders were "rigged" so that a wide range of colors in the hue range around caucasian flesh tones rendered closer to some appropriate value. This was done, of course, because NTSC sets have a hue (phase) control and folks were always mis-setting that control, making people's faces green or red instead of whatever color caucasians nominally are. I had assumed that this sort of thing was phased out as more consistently accurate decoders became available; certainly most NTSC sets bury the hue control somewhere in some menu these days and I don't think I've touched one for years.
Atlant 16:49, 16 Jun 2005 (UTC)
From what I've read, it actually has gotten worse with all those "improvement" features. One TV reviewer wrote that they use nonstandard color decoders that produce "warmer" colors because they use "colder" phosphors to make the picture look brighter. It also seems that these nonstandard algorithms are different for Japanese and American TVs, probably because Asian fleshtones are different than caucasian ones. Compare this data sheet for some TV chip thing: http://www.sony.co.jp/~semicon/english/img/sonyde01/a6801857.pdf
On page 17, it reads:
AXIS (1) : R-Y, G-Y axis selector switch
0 = Japan axis R-Y: 95° × 0.78, G-Y: 240° × 0.3
1 = US axis R-Y: 112° × 0.83, G-Y: 252° × 0.3 (B-Y: 0° × 1)
I would assume that a TV set using this chip could not correctly reproduce colors at all (no matter what the "Hue" setting is), as R-Y should be at exactly 90° (don't know what that fraction number means, maybe "gain" or something). Again, I wonder if and how to best write this up for the main article. NewRisingSun 19:32, 16 Jun 2005 (UTC)
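A rough sketch of what those axis settings do, using the angles and gains quoted from the data sheet (the cosine-projection model of synchronous demodulation is standard; reading the second number as a gain is my assumption):

    import math

    def demodulate(amp, phase_deg, axis_deg, gain):
        # Synchronous demodulation: project the chroma phasor onto the chosen axis.
        return gain * amp * math.cos(math.radians(phase_deg - axis_deg))

    # Test colour: a chroma phasor lying exactly on the nominal R-Y axis (90 deg).
    print(demodulate(1.0, 90, 90, 1.0))    # standard axis: recovers 1.0
    print(demodulate(1.0, 90, 112, 0.83))  # "US axis" (112 deg, gain 0.83): ~0.77
    print(demodulate(1.0, 90, 95, 0.78))   # "Japan axis" (95 deg, gain 0.78): ~0.78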

Variants, NTSC-J

Um, isn't the stereo audio system also different on the Japanese system? The North American version uses a similar approach to FM stereo (suppressed-carrier AM difference signal added to audio before medium-width FM modulation), but moves the frequencies down so that the stereo pilot tone is the same as the TV line frequency; the Japanese stereo implementation predates this and IIRC inserts a second FM subcarrier onto the audio before the main FM sound modulation step. --carlb 01:01, 22 August 2005 (UTC)

I really don't see how NTSC-J could be completely compatible with NTSC-M, when some of the studios in Japan that produce video in NTSC-J use 50 Hz AC power, while North America is exclusively 60 Hz. Wouldn't that result in horizontal sync failure when attempting to view such an NTSC-J signal on an NTSC-M receiver without first converting the signal? -- Denelson83 04:33, 6 August 2006 (UTC)
It has been a long, long time since television equipment used the power line for a frequency standard. They use crystals now.
Bryan Henderson 21:57, 14 October 2006 (UTC)

[edit] "Pixels" in NTSC

Many readers are familiar with the pixel dimensions of their computer monitors. If the above estimate of 338x483 is accurate, it would be educational to mention it in the article. (I'm insufficiently knowledgeable to confidently do this.)

The graph at the bottom currently claims boldly that VGA and NTSC are the same, 640 x 480. NTSC "pixels" of course are about a billion times blurrier than any 640x480 VGA monitor. Tempshill 00:04, 30 July 2005 (UTC)

A VGA monitor is 640x480, RGB, non-interlaced. That means it displays all 480 lines each 1/60th of a second (instead of showing every second line, then going back for the rest next time, NTSC-style). It also means that the three individual colours are kept separate throughout the system.
NTSC uses a black-and-white signal (nominally at least 640x480 interlaced visible image resolution) plus two colour difference signals (sent at a significantly lower horizontal resolution). Some old TV's did a poor job of interlacing the two fields (240 lines each, vertical resolution) back together to get 480 visible lines and made a mess of separating the colour information from the main picture; those built before the widespread use of comb filters gave significantly below-VGA results virtually always.
Some claim NTSC to be capable of up to about 720x480 for the monochrome portion of the image (these numbers, and variants, were used in the DVD standards) although broadcast sources vary widely. (Dish Network's DVB video is typically running at 544x480; on some DBS systems there's even significant variation between different TV channels/same provider.) MPEG2 compression of broadcast signals introduces further loss of detail; analogue broadcast introduces snow and noise which affects the image most adversely at the high frequencies used for fine image detail. As for "a billion times blurrier", let me see, 640 x 480 divided by a billion - no that's not even one pixel; NTSC isn't quite that bad yet although I'm sure they're working on it. ;) --carlb 01:01, 22 August 2005 (UTC)
You might want to also discuss a related concept called TV lines per picture height. -- Denelson83 04:37, 6 August 2006 (UTC)

List of Countries

Diego Garcia isn't in the Pacific. I have no idea which continent it is associated with. --MGS 13:37, 4 August 2005 (UTC)

Diego Garcia is in the Indian Ocean.

Explanation of Standard

In the Standard section, voodoo is used to justify the number of lines in an NTSC frame. It should explain why the line frequency is 15750Hz. --66.66.33.245 23:59, 21 August 2005 (UTC)

The number 15750 is easy enough to calculate; take 30 (the number of complete frames sent per second) and multiply by 525 (the total number of lines in one complete frame, including any hidden beyond the screen edge). 30 x 525 = 15750 :) --carlb 01:01, 22 August 2005 (UTC)
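
The same arithmetic in a couple of lines, together with the colour-era 1000/1001 shift discussed further down the page (a sketch):

    frames_per_second = 30
    lines_per_frame = 525
    line_rate = frames_per_second * lines_per_frame
    print(line_rate)                # 15750 Hz (black-and-white standard)
    print(line_rate * 1000 / 1001)  # ~15734.27 Hz (after the colour-era shift)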

Vertical Interval Reference (VIR)

Would it be worth mentioning the "patches" applied to attempt to fix/salvage NTSC by inserting extra data such as VIR (according to SMPTE Historical Note JUNE 1981, "The vertical interval reference (VIR) system constrains the receiver settings of chroma and luminance to follow the values set at the studio")?

It would seem like, much in the same way that DNA has supplanted dental records to identify accident victims defaced beyond recognition, VIR, comb filters (and a largely-unused GCR [ghost canceling reference] signal intended to replace VIR) could allow viewers to identify images defaced beyond recognition by NTSC? --carlb 01:47, 22 August 2005 (UTC)

In my opinion, yes, you should discuss this. So be bold!

Atlant 12:17, 22 August 2005 (UTC)

Cleanup

The "Standard" section really needs to be cleaned up. It makes reference to too many magic (arbitrary numbers). The section should either give an overview of how the various important frequencies were choosen, or explain where the magic numbers come from: get rid of "then one multiplies xyz MHz by 2 and then divides by yzx before adding zxy to obtain the frequency" by explaining where xyz, yzx and zxy come from. And explain WHY one multiplies and divides by these numbers; otherwise you are just doing magicmath.

The section also claims that NIST operates a 5MHz reference broadcast for this purpose. Is this really true? If it is, then "not a coincidence" needs to be removed (and a reference cited, perhaps?). My gut says that it isn't true because there would be all sorts of problems with the phase (for various technical reasons) of the NIST broadcast at the NTSC transmitter vs. the NTSC receiver. --66.66.33.245 23:59, 21 August 2005 (UTC)

NIST would be the owners of the US shortwave atomic-clock radio stations WWV (Colorado) and WWVH (Hawaii), which are on exactly 2.5, 5, 10 and 15MHz on the radio dial IIRC. These weren't created solely for use by television stations; they're available to anyone with a shortwave receiver. Their existence would make it easier for TV stations to adjust various frequencies at the transmitting site to match a known standard; they would not be being used directly within individual TV receivers though. A quartz crystal of 3.579545MHz (for the colour subcarrier) and a plain ol' variable resistor or two (to set vertical/horizontal hold manually) were the standard equipment in TV sets, with possibly more crystals being added to the mix later to control the chips behind the newer digital tuners. --carlb 01:07, 22 August 2005 (UTC)

Note 1: In the NTSC standard, picture information is transmitted in vestigial-sideband AM and sound information is transmitted in FM.

Note 2: In addition to North America, the NTSC standard is used in Central America, a number of South American countries, and some Asian countries, including Japan. Contrast with PAL, PAL-M, SECAM.

Source: from Federal Standard 1037C

standard (moved here from article)

I removed the section labelled 'standard'. It is too technical for most people, not informative enough, the tone is too 'chatty', and is unnecessary. (Of course, all that is strictly IMHO.) Some of this may want to be moved back so I place it here intact.


525 is the number of lines in the NTSC television standard. This choice was not an accident. The reasons for this are apparent upon examination of the technical properties of analog television, as well as the prime factors of 525; 3, 5, 5, and 7. Using 1940s technology, it was not technically feasible to electronically multiply or divide the frequency of an oscillator by any arbitrary real number.

So if one started with a 60 hertz reference oscillator, (such as the power line frequency in the U.S.) and sought to multiply that frequency to a suitable line rate, which in the case of black and white transmission was set at 15750 hertz, then one would need to have such a means of multiplying or dividing the frequency of an oscillator with a minimum of circuitry. In fact, the field rate for NTSC television has to be multiplied to twice the line rate to obtain a frequency of 31500 hertz, i.e. for black and white transmission synchronized to power line rate.

One means of doing this is of course to use harmonic generators and tuned circuits, i.e. if using the direct frequency multiplication route. With the conversion of U.S. television to color, beginning in the 1950's the frequencies were changed slightly, so that a 5 MHz oscillator could be used as a reference. The National Institute of Standards and Technology (NIST) transmits a 5 MHz signal from its standard time and frequency station WWV which may be useful for this purpose. The 5MHz signal may be multiplied by a rational number to obtain the vertical frequency.

Interestingly enough, when one analyzes how people get the 59.94 vertical field rate, one realizes that it is just 60 hertz multiplied by 1000/1001. Now 1001 in turn has prime factors of 7, 11, and 13, so that when cascading simple flip flop based circuitry it is possible to take a 60 kilohertz reference source and divide it by 1001 exactly to obtain the vertical field rate. It is not a coincidence that NIST operates a radio station, WWVB, that broadcasts a time and frequency standard synchronized to an atomic clock on this frequency, that is, 60 kHz.


If 5MHz is "multiplied" by a rational number greater than one, as the text implies, you get a very high number of megahertz - not the low frequency vertical field rate. Nor can 5MHz be divided by any integer to give the vertical rate (the exact ratio, 250250/3, is not a whole number). It seems that as well as the world's worst television system, the USA has the world's worst mathematicians!
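For what it's worth, the divide-by-1001 route from a 60 kHz reference does check out arithmetically; 1001 = 7 × 11 × 13 is what makes a cascaded integer divider chain practical. A sketch:

    from fractions import Fraction

    # 1001 factors as 7 * 11 * 13, so three cascaded integer dividers suffice.
    assert 7 * 11 * 13 == 1001
    field_rate = Fraction(60_000, 1001)   # 60 kHz reference divided by 1001
    print(float(field_rate))              # 59.94005994... Hz
    assert field_rate == Fraction(60) * Fraction(1000, 1001)  # = 60 Hz x 1000/1001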

Australia used NTSC?

The list of (former) NTSC countries now includes Australia. AFAIK, Australia only experimented with NTSC before choosing a 625-line system (one possible factor was that the country uses 50-Hz electricity). If anyone can give more background on this, it would be helpful. ProhibitOnions 21:40, 12 January 2006 (UTC)

Looks like my assumption was right. Thanks for fixing this. ProhibitOnions 23:47, 26 February 2006 (UTC)

The UK also experimented with 625 NTSC (but never 405 NTSC, contrary to what the article states). Before the UK went colour, the BBC wanted to use 625 line NTSC and ITV wanted 405 line PAL. I have seen pictures from this experiment in 625 line NTSC. One of the arguments was that for the same line standard NTSC gives sharper pictures because the luminance / chrominance interleaving is more perfect, giving less overlap so allowing better use of bandwidth.

Shift of field rate

Corrected explanation of the shifted field rate. The reason is to minimize interference between

  • luma <=> chroma

AND

  • chroma <=> audio

Clarification on the "beat" paragraph?

After reading this article, the first explanation for why there is now a 59.94059... field rate is still unclear. The text that I feel needs improvement reads:

"When NTSC is broadcast, a radio frequency carrier is amplitude modulated by the NTSC signal just described, while an audio signal is transmitted by frequency modulating a carrier 4.5 MHz higher. If the signal is affected by non-linear distortion, the 3.58 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen. The original 60 Hz field rate was adjusted down by the factor of 1000/1001, to 59.94059... fields per second, so that the resulting pattern would be less noticeable."

It's clear that the 4.5MHz audio carrier and the 3.58MHz color carrier will beat and produce a difference frequency of 920 kHz. But why does this generate interference? Interference with what? How does adjusting the field rate down prevent this interference? I don't know the answers, but I'm sure somebody does.

The 920 KHz beat frequency that may be produced is well within the video frequency range, so if the beat frequency occurs at all, it is displayed on the picture as a pattern of bright and dark dots with roughly 920/15.75=58 dots per line. If this dot pattern were not synchronized with the video frame, its constant motion would be very distracting. By choosing the frequencies as they did, the dot pattern, if it occurs, is:
  1. synchronized with the rest of the video so it "holds still", and
  2. reverses phase each line, helping to hide its presence.
Clear now?
Atlant 19:28, 11 May 2006 (UTC)


Well... Thank you for the response, but I'm still confused. Here's what I'm confused by:

(1) Is the intent to hold the dot pattern still, or to make the dot pattern move? An earlier entry in this NTSC article says: "In the color system the refresh frequency was shifted slightly downward to 59.94 Hz to eliminate stationary dot patterns..."

(2) How does shifting the line frequency down slightly cause the dot pattern to change? 920KHz/15.75KHz = 58.412. 920KHz/[(1000/1001)*15.75] = 58.354. So the number of dots doesn't change...why are the dots stationary in one case, and move in the other?

This is my first foray into wikipedia editing, and I realize that asking lots of techie questions is perhaps best left for tech forums, rather than wiki discussion pages. On the other hand, one of the reasons I love wikipedia is because it does delve into the details of complex problems. If this can be clarified so that an amateur can understand it, I think that would be beneficial. Thanks!

The difference frequency is 58.5 times the line frequency. This means that the dot pattern with an unmodulated carrier is stationary, but out of phase between successive lines. This means that the "bright" dots on one line line up with the "dark" dots on the next, so they cancel each other out when viewed from a reasonable distance. If the frequency had not been shifted, that number would have been 58.21, resulting in a moving dot pattern without the cancellation. However, since the sound carrier is FM, this makes nonsense of all this: the frequency is varying anyway, so the dot pattern would be moving except during periods of perfect silence, which are very rare.
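Both figures are easy to check from the carrier spacings (a sketch; note the 4.5 MHz sound offset is exactly 286 times the shifted line rate):

    sound_offset = 4_500_000.0                   # sound carrier, Hz above vision carrier
    line_shifted = 15_750 * 1000 / 1001          # 15734.2657... Hz
    line_unshifted = 15_750.0
    colour_shifted = 455 / 2 * line_shifted      # 3.5795454... MHz subcarrier
    colour_unshifted = 455 / 2 * line_unshifted  # 3.583125 MHz (hypothetical, unshifted)

    print((sound_offset - colour_shifted) / line_shifted)      # ~58.5: phase alternates line to line
    print((sound_offset - colour_unshifted) / line_unshifted)  # ~58.21: the pattern would crawl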

Better explanation of 30/1.001?

I found this information on why the change from 30 to 30/1.001 was needed. It's an excellent explanation, and I think this is likely to be accurate also: (credit: Bob Myers) http://groups.google.com/group/sci.engr.advanced-tv/msg/108815e3089c4d53

I recently received some mail asking where the NTSC 59.94 Hz field rate came from in the first place. Thinking that this might be a topic of general interest, I've decided to post a short discussion of this here - hope no one minds!

Before the NTSC color encoding system was added to the U.S. TV standard, television WAS at 60.00 Hz; it was set at this rate to match the power line frequency, since this would make interference from local AC sources less objectionable (the "hum bars" would be stable in the displayed image, or - if the TV rate wasn't exactly locked to the line - at least would move very slowly). Actually, in some early systems, the TV vertical rate WAS locked to the AC mains!

A problem came up, though, when trying to add the color information. The FCC had already determined that it wanted a color standard which was fully compatible with the earlier black-and-white standard (there were already a lot of TV sets in use, and the FCC didn't want to obsolete these and anger a lot of consumers just to add color!) Several schemes were proposed, but what was finally selected was a modification of a pixel-sequential system proposed by RCA. In this new "NTSC" (National Television Standards Committee) proposal, the existing black-and-white video signal would continue to provide "luminance" information, and two new signals would be added so that the red, green, and blue color signals could be derived from these and the luminance. (Luminance can be considered the weighted sum of R, G, and B, so only two more signals are needed to provide sufficient information to recover full color.) Unfortunately, there was not enough bandwidth in the 6 MHz TV channels (which were already allocated) to add in this new information and keep it completely separate from the existing audio and video signals. The possibility of interference with the audio was the biggest problem; the video signal already took up the lion's share of the channel, and it was clear that the new signal would be placed closer to the upper end of the channel (the luminance signal is a vestigial-sideband AM signal, with the low-frequency information located close to the bottom of the channel; the audio is FM, with the audio carrier 4.5 MHz up).

Due to the way amplitude modulation works, both the luminance and the color ("chrominance") signals tend to appear, in the frequency domain (what you see on a spectrum analyzer), as a sort of "picket fence" pattern. The pickets are located at multiples of the line rate up and down from the carrier for these signals. This meant that, if the carrier frequencies were chosen properly, it would be possible to interleave the pickets so that the luminance and chrominance signals would not interfere with one another (or at least, not much; they could be separated by using a "comb filter", which is simply a filter whose characteristic is also a "picket fence" frequency spectrum). To do this, the color subcarrier needed to be at an odd multiple of one-half the video line rate. So far, none of this required a change in the vertical rate. But it was also clearly desirable to minimize interference between the new chroma signal and the audio (which, as mentioned, is an FM signal with a carrier at 4.5 MHz and 25 kHz deviation). FM signals also have sidebands (which is what made the "picket fence" pattern in the video signals), but the mathematical representation isn't nearly as clean as it is for AM. Suffice it to say that it was determined that to minimize chroma/audio mutual interference, the NTSC line and frame rates could either be dropped by a factor of 1000/1001, or the frequency of the audio carrier could be moved UP a like amount. There's been (and was then) a lot of debate about which was the better choice, but we're stuck with the decision made at the time - to move the line and field/frame rates. This was believed to have less impact on existing receivers than a change in the audio carrier would.

So, now we can do the math.

We want a 525 line interlaced system with a 60 Hz field rate.

525/2 = 262.5 lines/field. 262.5 x 60 Hz = 15,750 Hz line rate.

This is the rate of the original U.S. black-and-white standard.

We want to place the color subcarrier at an odd multiple of 1/2 the line rate. For technical reasons, we also want this multiple to be a number which is fairly easy to generate from some lower multiples. 455 was selected, and

       15,750 x 455/2 = 3.58313 MHz

This would've been the color subcarrier frequency, but now we apply the 1000/1001 correction to avoid interference with the audio:

       60 x 1000/1001 = 59.94005994005994.......

The above relationships still apply, though:

       262.5 x 59.94... = 15,734.265+ Hz
       15,734.265+ x 455/2 = 3.579545+ MHz

And so we now have derived all of the rates used in the current standard.
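
The whole chain can be verified exactly with rational arithmetic; the final check below is the well-known statement of the subcarrier as 5 MHz × 63/88 (a sketch):

    from fractions import Fraction

    field = Fraction(60) * Fraction(1000, 1001)  # 59.94005994... Hz
    line = Fraction(525, 2) * field              # 262.5 lines per field
    colour = line * Fraction(455, 2)             # odd multiple of half the line rate

    print(float(field))    # 59.94005994005994 Hz
    print(float(line))     # 15734.265734... Hz
    print(float(colour))   # 3579545.4545... Hz, the familiar 3.579545 MHz
    assert colour == Fraction(5_000_000) * Fraction(63, 88)   # the 5 MHz relation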

Spelling

Standardisation is spelt with an S, not a z. Stop abusing our language! —The preceding unsigned comment was added by Zoanthrope (talk • contribs) .

Don't be stupid. In both American English and British English it's spelt with a Z, see [1]. MrTroy 20:11, 3 August 2006 (UTC)
That's funny; your reference suggests that in British English, it can be spelled either way.
Atlant 23:22, 3 August 2006 (UTC)
It CAN be spelled either way, yes. That still means Zoanthrope was wrong in saying it can't be spelt with a Z, because it's completely valid. May I remind you, by the way, that it's against WP policy to remove other people's comments from the talk page, even if you think it's a personal attack. Which it isn't, where I live "don't be stupid" is hardly offensive, it just means "calm down". MrTroy 08:33, 4 August 2006 (UTC)
Fine. Put your comment back, and hope an administrator doesn't cite you for violating WP:NPA.
Atlant 12:56, 4 August 2006 (UTC)

I apoligise. My mistake. S S S Zoanthrope 18:27, 21 August 2006 (UTC)

Talking of spelling, that "apoligise" doesn't look right. Maybe that's cos I don't use it much! Zoanthrope 18:29, 21 August 2006 (UTC)

It's spelt apologize. Incidentally, it's in the Oxford list of commonly misspelled words :-) -- MrTroy 09:27, 22 August 2006 (UTC)
Or apologise. The page you linked has both versions. The OED prefers -ize, but many other British dictionaries prefer -ise. -- anonymous 29 Oct 2006

difference between looking like an infomercial and looking like a feature film

I thought NTSC had something to do with this. Can someone please explain this to me?

What you are referring to could be the difference between a feature film recorded at 24fps which is played using 3:2 pulldown on NTSC, and a television program which might be recorded fully interlaced at 60 different fields per second. Another factor could be the difference in contrast, colour saturation and dynamic range between film and television cameras. Ozhiker 19:58, 12 October 2006 (UTC)

I was searching for a copy of EIA RS170a and from what I can tell it has been superseded by SMPTE 170M-2004 "Composite Analog Video Signal - NTSC for Studio Applications", which is available for US$36 from the SMPTE store as a PDF file.

Citations

The first reference in the article is an explanation that the source for the information in the article has to be purchased through ITU. It should instead be a proper citation as per the manual of style for citations. Whether or not the information has to be purchased through ITU doesn't matter, it should be cited regardless. Also, if specific page numbers or sections can be cited, then cite them. This way it's easier for those who want to verify the information through the sources. Ceros 06:01, 9 December 2006 (UTC)

Indonesia / Hong Kong / Singapore

Indonesia, Hong Kong, and Singapore use PAL for broadcasting. However, they use NTSC-J for gaming. Maybe it's worth mentioning --w_tanoto 00:41, 24 January 2007 (UTC)

100 Hz Scanning not Common

The article claims that all 50 Hz receivers are scanned at 100Hz. This is just not true. Only a very tiny minority of sets used this. The motion artefacts caused by scan doubling were worse than the flicker, so the system never really took off. This is a serious piece of misinformation in the article. —The preceding unsigned comment was added by 82.40.211.149 (talk) 21:52, 17 February 2007 (UTC).

I agree with that. It's not common and was not as good as the original 50Hz - also he's right about the number of TVs which use this. Moooitic 22:18, 2 April 2007 (UTC)

CVBS Error

The article says "One odd thing about NTSC is the Cvbs (Composite vertical blanking signal) is something called "setup." This is a voltage offset between the "black" and "blanking" levels. Cvbs is unique to NTSC."

CVBS stands for Composite Video Burst and Syncs, not "Composite vertical blanking signal". It is also used to refer to a PAL signal, so is not unique to NTSC. It has absolutely no connection with the "set-up" or "pedestal" referred to here (although obviously a CVBS signal can have this). The more accurate "Composite Video" article says "Composite video is the format of an analog television (picture only) signal before it is combined with a sound signal and modulated onto an RF carrier. It is usually in a standard format such as NTSC, PAL, or SECAM. Composite video is often designated by the CVBS acronym, meaning either 'Color, Video, Blank and Sync', 'Composite Video Baseband Signal', 'Composite Video Burst Signal', or 'Composite Video with Burst and Sync'." —The preceding unsigned comment was added by 82.40.211.149 (talk) 22:06, 17 February 2007 (UTC).

Color encoding and Luminance Derivation

The article says "Luminance (derived mathematically from the composite color signal) takes the place of the original monochrome signal." Luminance of course is NOT derived mathematically from the composite color signal but from a weighted average of the red, green and blue gamma corrected signals. It should also be mentioned that the chrominace signals are supressed subcarrier as well as quadrature modulated - this is an important part of the system. —The preceding unsigned comment was added by 82.40.211.149 (talk) 00:15, 18 February 2007 (UTC).

29.97 was an engineering error

This article by Paul Lehrman of Mix magazine

http://web.archive.org/web/20020108053619/http://www.paul-lehrman.com/insider/2001/08insider.html

asserts that the whole 29.97 thing was more the result of faulty engineering assumptions and poor testing than any genuine need to correct "beating" or "interference". If true, and he makes a good case, then all the confident articles about why the frame rate for NTSC had to be lowered to accommodate color are in error. It really ought to be incorporated in this article, but I've found "frame rate" is a topic video people have very dogmatic notions about, so I'll leave it to someone braver than I.

480i 640x480 and 576i 720x576

Hello. Well, I am not a pro in all that stuff here (just came by to read a bit about the PAL conversion issue) but the article says that 480i has 640x480 and 576i has 720x576. Now that makes 4:3 on 480i and 3.75:3 on 576i, which makes sense (about the PAL issue - black at the upper and lower screen ends) BUT the picture here says something different, namely that NTSC (480i) is 720x480 - now I think the picture is wrong, but anyway I just wanted to know / say something about that. If I got something wrong here, please tell me :-D

http://en.wikipedia.org/wiki/Image:Common_Video_Resolutions.svg is the image I refer to - it's the picture at the bottom. SECAM, PAL and NTSC share that picture (as well as some other articles).

Moooitic 22:14, 2 April 2007 (UTC)
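
Much of the confusion here is about pixel shape: the digital 720x480 and 720x576 (Rec. 601) rasters both display at roughly 4:3 because their pixels are not square. A sketch, using the commonly quoted Rec. 601 pixel aspect ratios (treat the exact fractions as assumptions):

    from fractions import Fraction

    # Commonly quoted Rec. 601 pixel aspect ratios (width/height of one pixel).
    par_ntsc = Fraction(10, 11)   # 480i, 720x480 sampling
    par_pal = Fraction(59, 54)    # 576i, 720x576 sampling

    def display_aspect(width, height, par):
        return float(Fraction(width, height) * par)

    print(display_aspect(704, 480, par_ntsc))  # 1.333... = 4:3 (704 active samples)
    print(display_aspect(704, 576, par_pal))   # ~1.336, i.e. about 4:3
    print(4 / 3)                               # 1.333... for comparison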