Talk:Kell factor


Vs. Nyquist

This sounds like it's related to the Nyquist frequency - am I right in this assumption? Peter S. 14:51, 18 November 2005 (UTC)

Kell is built on top of Nyquist, but they're separate ideas --Dtcdthingy 04:56, 28 May 2006 (UTC)
It's my impression that they're different measures of the same issue, except Kell doesn't involve nearly as much math (and seems, to me, vaguely-defined by comparison). Kell factor is applied only to image processing - in audio and radio you cannot ignore the frequency domain and thus cannot avoid the math. (comment by 216.191.144.135).
Thank you :-) Peter S. 23:20, 16 August 2006 (UTC)

Both the Article and this Discussion Are Ambiguous and Conflicting

Before writing this discussion I read numerous articles posted on the Internet including the subject Wikipedia article. What I found is a lot of ambiguity and conflicting information.

Let’s begin with the definition as stated in the subject article. The definition states that the Kell Factor is related to the resolution of a “discrete display device.” What might be very significant to this definition is that it does not include the camera or scanning device and/or the relationship between the camera or scanning device and the discrete display. Presently, I don’t know whether this is true or not, but this distinction will become more relevant as this discussion continues.

Next, and probably worth noting up front, the definition implies that the effective resolution of a display is possibly less than that suggested by the resolution inherent in the design. Specifically, in the case of a television display, the implication is that the effective resolution will be less than the vertical line resolution of the television display.

Next, the defining discussion includes no explanation for what causes this factor to have a value other than precisely one. Of course, the possibility does exist that the cause was never investigated or determined by Raymond Kell or anyone else, although that would seem doubtful.

Next, in the opening discussion of the subject article the writer claims that the Kell Factor has no fixed value. This suggests that the value is dependent upon (a) a variable parameter, (b) several parameters that are independent of each other, or (c) a relationship between parameters that is other than a simple quotient of two variables. For example, the resolution of a telescope based on the Rayleigh Criterion is a ratio of the wavelength of the light to the diameter of the optics (lens or mirror). Although the resolution determined by this criterion is not fixed, the relationship expressing the criterion is fixed. It is also based on a theoretical proof.
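
For reference, the usual statement of the Rayleigh Criterion for a circular aperture is

 \theta_{\min} \approx 1.22\,\frac{\lambda}{D}

where \theta_{\min} is the smallest resolvable angular separation, \lambda the wavelength, and D the aperture diameter. I quote it only to make the "fixed relationship, variable result" point concrete.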

Next, the discussion implies that displayed resolution is dependent upon the spot size and/or the Gaussian intensity distribution of the electron beam used in the display. Regarding the spot size, this would appear to make more sense than the prospect of the resolution being dependent on simply the vertical-line resolution. For example, if two dots that are produced by two adjacent horizontal sweeps are positioned in vertical alignment, then increasing the size of the two electron-beam spots would cause increasing overlapping of the dots being displayed. Also, with a sufficient increase in the dot size the adjacent dots would eventually become inseparable and indistinguishable. This is similar in concept to the common example used to illustrate the Rayleigh Criterion.
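
To make the spot-size argument concrete, here is a rough numerical sketch of my own. It assumes an idealized Gaussian line profile purely for illustration (an assumption I question below): two line profiles one scan-line pitch apart keep a visible dip between them while the spots are small, and the dip vanishes as the spots grow.

 import numpy as np
 
 # Rough sketch only: two idealized line profiles (Gaussian purely as an
 # assumption for illustration) separated by one scan-line pitch.  As the
 # spot width grows, the dip between them disappears and the two lines
 # become indistinguishable, as described above.
 pitch = 1.0                           # separation of the two spot centres
 y = np.linspace(-3.0, 4.0, 2001)      # vertical position across the screen
 
 for sigma in (0.2, 0.4, 0.6, 0.8):    # spot size, in scan-line pitches
     profile = (np.exp(-0.5 * (y / sigma) ** 2)
                + np.exp(-0.5 * ((y - pitch) / sigma) ** 2))
     midpoint = profile[np.argmin(np.abs(y - pitch / 2))]
     dip = 1.0 - midpoint / profile.max()   # 0 means the spots have merged
     print(f"sigma = {sigma:.1f} pitch: dip between the lines = {dip:.2f}")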

Regarding the electron beam spot having a Gaussian intensity distribution, I’m not sure that is true. The spot is formed by a magnetic lens that has an apparent diameter and focal length similar in principle to those of an optical lens. In an optical model, if light of uniform intensity falls entirely on the light-collecting optics, the focused spot will have uniform intensity across its area, so this would probably be expected to hold for the electron beam as well. Also, although there is such a thing as Gaussian beam divergence related to beam propagation, this phenomenon (a) relates to divergence typically noted in lasers having very narrow beams, (b) is relevant to very narrow beam widths that are within orders of magnitude of the propagation wavelength, and (c) does not address irregularities in beam intensity over the spot area.

Next, regarding the first example that discusses the image containing black and white stripes placed in front of the camera, I found several problems. First, this example is more related to the problems encountered in scanning by the camera than to the problems of the “discrete display” (as I noted earlier). Note, however, that the third sentence uses the term “effective resolution of the TV system,” which is inconsistent with the initial definition that the Kell factor applied only to the discrete display.

Second, the explanation in the first example appears to establish or relate the cause for achieving a lower resolution in the camera scan to be related to the difficulties of aligning the stripes on the card (image) to the scan lines of the camera rather than to something else. This rationale is strongly noted by the words “since it is unlikely the stripes will line up perfectly with the lines on the camera’s sensor,” as if to suggest that it is because of the camera operator’s inability to align the stripes that this causes the Kell Factor.

Regarding the discussions on this “talk” page, there are conflicting statements about the relationship of the Kell Factor to the Nyquist Criterion. In the opening discussion both Peter S. and Dtcdthingy suggest that the Kell Factor is simply another term or application of the Nyquist Criterion. The writer of the section titled “Interlace” states that “the Kell factor, while determined empirically, is still a manifestation of the Nyquist effect.” However, under the section titled “CCD Display” Jluff states “Kell is empirically determined and not derived. It is not related to Nyquist.” It could hardly be more conflicting than this.

Regarding what I have read in other articles on the Internet, some writers suggest that Raymond Kell was simply attempting to express the relationship between what he and others had observed in assessing the resolution of television sets in the early days of television. None of the articles described the details or any rationale explaining the cause. One textbook with two references to documents naming Raymond Kell as a contributing author states that Kell’s reasoning was that an image simply had to be over-sampled to be reliably represented on the display. More specifically, it states that if the scanning resolution were not greater than the resolution contained in a pattern, then the effects of phasing could dominate the overall representation seen on the display. Accordingly, it is not until the scanning resolution is around 1.5 times the pattern resolution that the pattern is sufficiently relieved of the noted effects of phasing.

For those unfamiliar with the term phasing, phasing is simply a single word label for the effects caused by moving the scanning device with respect to the pattern. In one case, if the resolutions are identical, then shifting the alignment of a scanner over a black and white striped pattern can provide a range of representations from alternating stripes to a uniform gray solid. The writer also made the comment that Kell had been working with interlaced displays at the time, so his ideas became associated with problems in interlacing, when in fact they were not. The bottom line on this article is that it basically attributes the Kell Factor to scanning problems directly related to the Nyquist Criterion and not directly to the display as suggested in the subject Wikipedia article.
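
To put a number on the phasing effect just described, here is a toy sketch of my own (nothing from Kell’s papers): a stripe pattern whose stripe width equals the scan-line pitch, scanned by lines that simply average over their own height. Depending only on how the lines happen to sit over the stripes, the result runs from full-contrast stripes to uniform gray.

 import numpy as np
 
 # Toy sketch (my own illustration): a black-and-white stripe pattern whose
 # stripe width equals the scan-line pitch, "scanned" by averaging the
 # pattern over each line's aperture.  The displayed contrast depends only
 # on the alignment (phase) of the lines relative to the stripes.
 def scan_contrast(phase):
     fine = 1000                                   # sub-samples per stripe
     n_stripes = 40
     pattern = np.tile(np.repeat([0.0, 1.0], fine), n_stripes // 2)
     shift = int(phase * fine)                     # alignment offset, in stripes
     lines = pattern[shift:shift + (n_stripes - 2) * fine]
     lines = lines.reshape(-1, fine).mean(axis=1)  # one averaged value per line
     return lines.max() - lines.min()              # displayed contrast
 
 for phase in (0.0, 0.25, 0.5):
     print(f"offset = {phase:.2f} stripe: contrast = {scan_contrast(phase):.2f}")
 # offset 0.00 gives full stripes (1.00); offset 0.50 gives uniform gray (0.00)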

Regarding any additional comments that I may have gleaned from my attempt to gain an understanding of the Kell Factor, I have two: The first relates to the fact that pretty much every discussion that attempts to explain the Kell Factor does so in terms of scanning a striped pattern without success, then increasing the scan resolution to about 140 percent of the pattern resolution (so that the pattern amounts to about 70 percent of the scan resolution) and thereby achieving success. This phenomenon is simply related to the Nyquist Criterion, and although the Nyquist Criterion only requires a minimum of two samples per period of a sine wave, other wave shapes and patterns are certain to require additional samples.

Accordingly, even for a repeating striped pattern (which is analogous to a square wave), two samples per period can be insufficient when the effects of phasing become dominant and/or when the number of periods or repeats is sufficiently low as to not allow a recurring cycle in the resulting scanned pattern. For example, two samples taken from one period of a sine wave may provide little or no information, while 199 or 201 samples taken from 100 periods are likely to provide full amplitude, frequency, and phase information (if the waveform is known to be a sinusoid). And, although a Nyquist approach can be used to analyze scanning patterns, it may not necessarily be the best or the only approach to actually measuring resolution. And, in fact, using the Nyquist Criterion may be completely misleading simply because of the effects of phasing. As a rule, it is always best to understand all the details when applying theories, models, standards, and criteria to observations.
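
To put a rough number on the sine-wave point, here is a small sketch of my own (2.02 samples per period over 100 periods is in the spirit of the 201-samples example above): at exactly two point samples per period the recovered swing depends entirely on phase, running from the full amplitude down to nothing, while sampling even slightly faster than that over enough periods always catches the full swing.

 import numpy as np
 
 # Rough sketch: at exactly two point samples per period the recovered
 # peak-to-peak swing of a sine depends entirely on phase; slightly more
 # than two samples per period, taken over many periods, always catches it.
 def recovered_swing(samples_per_period, phase, periods=100):
     n = int(samples_per_period * periods)
     t = np.arange(n) / samples_per_period        # time, in periods
     x = np.sin(2 * np.pi * t + phase)
     return x.max() - x.min()                     # peak-to-peak actually seen
 
 for phase in (0.0, np.pi / 4, np.pi / 2):
     print(f"2.00 samples/period, phase {phase:.2f}: "
           f"swing = {recovered_swing(2.00, phase):.2f}")
 print(f"2.02 samples/period, phase 0.00: "
       f"swing = {recovered_swing(2.02, 0.0):.2f}")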

However, and although this is not my field of expertise, if I were to devise a method to evaluate the resolution of only a display, I would probably alternate lines and simply verify that the alternating lines displayed as alternating lines. Next, if I were to devise a method to evaluate the resolution of a system that included a scanner or camera and a display, I would probably apply either two adjacent lines or two adjacent dots of various widths and vary the spacing between them. Because of the effects caused by the alignment of the position of the scan lines over the pattern, I would probably provide both a best-case and worst-case resolution, where the best case would correspond to a scan line falling on center between the two pattern lines or dots and the worst case would correspond to two scan lines falling symmetrically on both sides of the center line between the two pattern lines or dots. It may be helpful to draw a diagram of this to see why this is true.
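
In place of a diagram, here is a rough numerical sketch of my own toy model: a dark card with two bright lines, scanned by lines that average over their own pitch, once in the best-case alignment and once in the worst-case alignment just described.

 import numpy as np
 
 # Toy sketch (my own): a dark card with two bright lines, one scan-line
 # pitch wide and one pitch apart, scanned by lines that average over
 # their own pitch.  Best case: a scan line sits centred in the gap and
 # reads dark, so the two lines are cleanly separated.  Worst case: two
 # scan lines straddle the gap symmetrically, each mixing line and gap,
 # so the pair merges into a single wider band.
 fine = 100                                   # sub-samples per scan-line pitch
 card = np.concatenate([np.zeros(3 * fine), np.ones(fine),
                        np.zeros(fine), np.ones(fine), np.zeros(3 * fine)])
 
 def scanned(offset):                         # offset in sub-samples
     usable = (len(card) - offset) // fine * fine
     return card[offset:offset + usable].reshape(-1, fine).mean(axis=1)
 
 print("best case :", np.round(scanned(0), 2))          # lines aligned with scan
 print("worst case:", np.round(scanned(fine // 2), 2))  # scan lines straddle them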

My second comment relates to black and white television displays. First, black and white television sets do not contain a mask, so that a full electron beam produces full illumination by striking the phosphors on the viewing screen. Second, because the subject electron beam is typically focused to provide a round spot, and because sweeping a round spot across a screen provides a line with non-uniform intensity across its width, black and white televisions would not be expected to provide sweep lines having uniform intensity. Third, to provide for uniform intensity and minimal banding, it would appear that scan lines would be required to overlap by around 13 percent (one minus the square root of three over two). Accordingly, it would be expected that this increase in line width would thereby decrease the resolution relative to that suggested by counting only the number of lines per unit width.
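
For what it’s worth, here is one way to arrive at the quoted 13 percent figure; this is my own reconstruction, not necessarily how the number was originally obtained. Assume a uniformly bright circular spot of diameter d swept horizontally at constant speed, so that the exposure at a vertical offset y from a line centre is proportional to the chord length 2\sqrt{(d/2)^2 - y^2}. With adjacent line centres spaced s apart, the combined exposure midway between two lines (offset s/2 from each) matches the exposure on a line centre (where the neighbouring lines are too far away to contribute) when

 2\sqrt{d^2 - s^2} = d \quad\Rightarrow\quad s = \frac{\sqrt{3}}{2}\,d ,

so the overlap is d - s = (1 - \sqrt{3}/2)\,d \approx 0.13\,d, roughly the 13 percent quoted above.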

Also, a similar effect, analogous to that described above, could be attributed to altering the width of the scan lines as well. Tighter or narrower scan lines would provide scanned values closer to the values in the center of the path or band being scanned, while wider scan lines would provide scan values that are affected by the overlapping of scans and by the values of the adjacent bands.

All in all, it is certainly interesting that this phenomenon is so poorly understood and explained by so many who are willing to state what is apparently based more on opinion than on fact. Possibly we need to follow the references to Raymond Kell and his writings and actually read what he wrote.

BillinSanDiego (talk) 02:33, 26 March 2008 (UTC)

Interlace

Note that the presence or absence of interlacing is irrelevant to Kell factor.

Is this correct? I thought that interlace was the reason that Kell Factor on a TV was .7, while a CCD scan viewed on a computer could be up to .9. Algr 08:42, 30 March 2006 (UTC)

The number for Kell factor is a matter of perception. I suppose you might come up with a higher number if interlacing is absent (because the picture in general looks better), but there's no direct relationship --Dtcdthingy 04:56, 28 May 2006 (UTC)

Interlacing is important. Images must be filtered in the vertical dimension to avoid temporal aliasing, and thus interlaced images have lower resolution. This is a contentious issue, so someone from the television industry may be inserting erroneous comments into articles.

Yes, I'm a shill for the TV industry. You got me. No, that statement was in there because I've heard a few people (including in reference works) claim Kell factor exists to account for interlacing, which I hope you agree is inaccurate. I stand corrected on it being irrelevant. --Dtcdthingy 23:47, 24 October 2006 (UTC)

The Kell factor, while determined empirically, is still a manifestation of the Nyquist effect. In any sampled system the maximum frequency is determined by the sin(x)/x loss. In a CRT the vertical scan is sampled, whereas the horizontal is continuous. Therefore the maximum horizontal resolution can be reduced to compensate for vertical sampling losses. Since interlace increases apparent vertical resolution but does not double it, the effective vertical resolution upon which the Kell factor is calculated depends on whether the scan is interlaced or not. Since modern LCD screens are progressive scan (even when fed with an interlaced signal) and use sampling in both horizontal and vertical directions, and do not use scanning but simultaneous clocking, they do not have a Kell factor (or the Kell factor is 1), since the sampling losses are the same in each direction. This is the reason why no Kell factor is used for modern HD television signals. The reason there is so much disagreement about the Kell Factor is that all explanations are true - they are all different ways of seeing the same effect, but are still all equally valid. BM 82.40.211.149 19:59, 11 February 2007 (UTC)
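
For what it's worth, the sin(x)/x point above can be given a number. A sampled-and-held signal (an aperture of one sample pitch) has a frequency response proportional to

 \left| \frac{\sin(\pi f / f_s)}{\pi f / f_s} \right| ,

which at the half-sampling-rate limit f = f_s/2 comes to \sin(\pi/2)/(\pi/2) = 2/\pi \approx 0.64. I am not suggesting this is how the Kell factor is defined, only that the aperture loss at the sampling limit happens to land in the same general range as the commonly quoted values.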

ClearType?

This sounds related to ClearType as well. —Ben FrantzDale 03:25, 28 May 2006 (UTC)

Nope, nothing to do with it. --Dtcdthingy 04:56, 28 May 2006 (UTC)
I may not understand Kell factor entirely, but I'm surprised you say this. I guess font hinting is actually more related to this than ClearType. As I understand it, font hinting takes advantage of the fact that the resolution of a display device goes up when the signal you want to display happens to be in phase with the pixels on the screen (the extreme being if I want to draw a one-pixel square at (2,2), an LCD display can do this exactly whereas the same square cannot be drawn accurately centered at (2.5,2.5)). This sounds like Kell factor to me. Am I mistaken? —Ben FrantzDale 12:04, 30 May 2006 (UTC)
Possibly yes. They're both concerned with the problem of picture details not lining up with the pixel grid. Kell is simply a method of estimating its effects, whereas font hinting is a way to minimize it. --Dtcdthingy 15:41, 30 May 2006 (UTC)
I just found a fascinating paper that relates Kell factor with subpixel (ClearType) rendering: "Subpixel Image Scaling for Color Matrix Displays" by Michiel A. Klompenhouwer and Gerard de Haan. —Ben FrantzDale 19:12, 30 May 2006 (UTC)
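
As a footnote to the (2,2) versus (2.5,2.5) point above, here is a toy sketch (my own, with an arbitrary coordinate convention, and nothing to do with any real hinting or ClearType code): box-filtering a one-pixel square onto the grid puts all of its coverage into a single pixel when it is grid-aligned, and smears it over four pixels at 25 percent each when it is shifted by half a pixel.

 import numpy as np
 
 # Toy sketch: pixel (i, j) covers the unit cell [i, i+1) x [j, j+1).
 # A 1x1 "ink" square is box-filtered onto the grid by computing its
 # overlap area with each pixel.  Grid-aligned it fills one pixel;
 # half-pixel-shifted it spreads over four pixels at 25% each.
 def coverage(x0, y0, size=1.0, grid=6):
     img = np.zeros((grid, grid))
     for j in range(grid):
         for i in range(grid):
             ox = max(0.0, min(x0 + size, i + 1) - max(x0, i))
             oy = max(0.0, min(y0 + size, j + 1) - max(y0, j))
             img[j, i] = ox * oy            # overlap area with pixel (i, j)
     return img
 
 print(coverage(2.0, 2.0))   # one pixel at 1.0, everything else 0
 print(coverage(2.5, 2.5))   # four pixels at 0.25 each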

CCD display

The article makes two references to a "CCD display". What is this meant to refer to? --Dtcdthingy 00:05, 24 October 2006 (UTC)

It means the author needs to be beaten with the cluestick. 87.11.30.175 22:16, 22 January 2007 (UTC)

Text by 87.11.30.175:

This makes no sense whatsoever, since CCD is a sensor technology, not a display technology. From the explanation given in the second paragraph, it's clear that the Kell factor is related to the acquisition process, not the display process. Thus it makes no difference whether you're talking about a cheap crappy old TV or the brand-spanking-new HDTV you just bought. You should be comparing the cameras used to acquire video, and the format it is encoded in.

This was moved from the article. —Ben FrantzDale 22:30, 22 January 2007 (UTC)

Kell is empirically determined and not derived. It is not related to Nyquist. Kell determined the value in 1934 with expert and non-expert viewers viewing a PROGRESSIVELY scanned image, thus it is unrelated to interlace, but rather was an attempt to come up with a way to relate horizontal and vertical resolution. The value used is affected by the type of scanning and the MTF of the total system. The Kell factor was first determined when cameras used tubes with Gaussian scanning beams, and CRT displays with similar Gaussian beam distribution. Modern CCD cameras and digital displays act differently due to the physics of the process of acquiring and displaying the image. The best way to relate resolution is using MTF (Modulation Transfer Function), which is discussed in the Gary Tonge citation added to the article. Jluff 02:06, 28 January 2007 (UTC)

I changed the article to more clearly state that the fixed-pixel nature of the image sensor and display are the issue. But can anyone jump in with a citation for the 0.90 figure? --Damian Yerrick (talk | stalk) 18:44, 11 March 2007 (UTC)

Cleanup/disputed

I've removed these two tags until someone makes an actual complaint about something that needs changing --Dtcdthingy 21:39, 12 February 2007 (UTC)

Answers?

I've followed the links around and, without digging up Kell's original paper, I can't find any description of how to measure the Kell factor of a given display. I assume you display a resolution test pattern of increasing spatial frequency and have an observer pick the perceived cutoff frequency, then go on to compute the Kell factor accordingly. Is that right? Isn't that essentially finding the frequency for which the MTF is some given percent, at least if you measure MTF as the minimum MTF over all phases of input signals?

Relating this to the Nyquist limit: The Nyquist–Shannon sampling theorem says that the sampling rate must be strictly greater than twice the maximum signal frequency. That seems to imply that the Kell factor cannot be 1, at least not without phase aliasing of up to 90° in the highest-frequency component. —Ben FrantzDale (talk) 02:16, 3 April 2008 (UTC)
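
For what it's worth, here is a sketch of the measurement as I read it (my own construction, not anything from Kell's paper): sample a sinusoidal test pattern with scan lines that average over their own pitch, take the minimum modulation over all phases at each spatial frequency, and look for where that worst case drops below some visibility threshold. Under this idealized model the modulation rolls off roughly like sin(x)/x and only collapses right at the half-sampling limit, which suggests the commonly quoted 0.7-ish figures really are perceptual judgements rather than something a simulation like this reproduces on its own.

 import numpy as np
 
 # Sketch only (my construction): a sinusoidal test pattern is sampled by
 # scan lines that average over their own pitch; for each spatial
 # frequency we take the minimum modulation over all alignments (phases)
 # of the pattern relative to the line grid.  Frequency is in cycles per
 # line pitch, so the sampling limit is 0.5.
 def worst_case_modulation(freq, n_lines=50, fine=200, n_phases=60):
     worst = np.inf
     y = np.arange(n_lines * fine) / fine              # position, in line pitches
     for phase in np.linspace(0, 2 * np.pi, n_phases, endpoint=False):
         pattern = 0.5 + 0.5 * np.sin(2 * np.pi * freq * y + phase)
         lines = pattern.reshape(n_lines, fine).mean(axis=1)
         worst = min(worst, lines.max() - lines.min())
     return worst
 
 for freq in (0.20, 0.35, 0.45, 0.50):
     print(f"{freq:.2f} cycles/line: worst-case modulation = "
           f"{worst_case_modulation(freq):.2f}")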