Talk:Subpixel rendering

Perhaps not the best place to ask, but does anyone know why subpixel rendering is not used for CRTs? The geometry of the screen is known perfectly well, the pixels are fairly distinct, and they keep their position relative to each other. So it should be possible to do the same thing ClearType does, but apparently this isn't done. Why?

Probably the best man to ask would be [1]. He explains it nicely in the FAQ section. --Josh Lee 01:42, Jan 28, 2005 (UTC)
The software does not have enough info about the electron-beam convergence error and the alignment of pixels to the aperture grille. Worse yet, this varies across the screen and even in response to magnetic fields. AlbertCahalan 18:08, 25 May 2005 (UTC)

Subpixel rendering on the Apple II

The article states "Whereas subpixel rendering sacrifices color to gain resolution, the Apple II sacrificed resolution to gain color." Steve Wozniak, the designer of the Apple II, would disagree with this claim. See [2] where he is quoted as saying: "more than twenty years ago, Apple II graphics programmers were using this 'sub-pixel' technology to effectively increase the horizontal resolution of their Apple II displays." (emphasis added). Drew3D 18:14, 1 March 2006 (UTC)

I've rewritten the section to explain the Apple II graphics mode in hopefully enough detail (perhaps too much!) to explain how it both is and is not subpixel rendering depending on how you look at it. Gibson is right if you think of the screen as having 140 color pixels horizontally, which is not unreasonable if you're doing color software, but when programming it you wouldn't plot purple and green pixels, you'd just plot white pixels (which would show up purple or green or white, depending on what was next to them and whether they were at even or odd coordinates). The Apple II did have a true subpixel rendering feature (a half-pixel shift when painting with certain colors) that was exploited by some software, but it bears no relation to LCD subpixel rendering as described in this article. Jerry Kindall 16:23, 2 March 2006 (UTC)

Apple II graphics programmers used sub-pixel technology to increase the horizontal resolution of their Apple II displays. That is an absolutely unambiguous statement by Steve Wozniak, the designer of the Apple II. The paragraphs you added do nothing to disprove his statement, or even argue against it. Your claim that this "bears no relation to LCD subpixel rendering as described in this article" is false. There are differences in how exactly a pixel is addressed programmatically because one is an LCD display and one is an NTSC display, but the basic concept is exactly the same: instead of addressing whole pixels, you address sub-pixels, in order to achieve a higher apparent resolution. That is hardly "bearing no relation". That is the exact same concept. Drew3D 22:30, 3 March 2006 (UTC)
There are no sub-pixels on an Apple II hi-res display because there are not actually any colors. The colors are completely artifacts. As Gibson describes it, programmers would alternate purple pixels and green pixels to smooth out diagonals, when in fact if you're drawing in white you can't help doing that because the even pixels are always green and the odd pixels are always purple! (Or vice versa, I can't remember right now.) Whether it's subpixel rendering or not depends entirely on whether you think of the graphics screen as being 280x192 monochrome or 140x192 color, but these are just ways of thinking about the screen and not actually separate modes! At the lowest level you can only turn individual monochrome pixels on and off, and if you wanted to be sure you got white you had to draw two pixels wide. The half-pixel shift with colors 4-7 is sort of like subpixel rendering, but it doesn't actually plot a fractional pixel but a full pixel shifted half a pixel's width to the right. I won't go so far as to claim Wozniak doesn't know how his own computer works, which would be stupid, but I wouldn't say he is necessarily above claiming credit for Apple II "innovations" that only become apparent in hindsight if you tilt your head a certain way. If you read Wozniak's original literature on the Apple II, he is clearly intending to trade off resolution for color, not the other way around. No Apple II graphics programmer (and I was one, co-authoring a CAD package called AccuDraw) thought they were plotting "subpixels" when plotting lines as Gibson describes, they were drawing white lines with a width of two pixels. The way the Apple II did color, while a clever hardware hack, was a complete pain for programmers and had to be constantly worked around in various ways. The typical question was not "how do I get more resolution from this color screen?" but "how do I get rid of these #$#! green and purple fringes?" Jerry Kindall 17:15, 4 March 2006 (UTC)

The Apple II section is incorrect, no matter what Woz said. The Apple II is not capable of subpixel rendering because every pixel is the same size - no matter what color. If purple or green pixels on the screen were half the size of white pixels, then you'd have subpixel rendering. If you disagree, if you believe that the Apple II is capable of subpixel rendering, then please show me a screenshot of a color Apple II display on which a colored pixel is physically smaller than a white pixel. Otherwise, the "Subpixel Rendering and the Apple II" section needs to be edited down to remove the irrelevant discussion of why 1-bit hi-res bitmaps suddenly take on fringes of color when they're interpreted as color bitmaps. - Brian Kendig 18:10, 19 March 2006 (UTC)

Well, my entire point was that because of the position-based trick for generating color, you can't have a single, horizontally-isolated white pixel. All solitary pixels on a scanline appear to be color, and dots that appear white must be at least two pixels wide. This can be very easily seen on any emulator if you don't have an actual Apple II handy. The discussion of Apple II hi-res color is admittedly too long and contains too much detail for casual readers interested only in subpixel rendering, but some of that background is necessary to understand why Gibson is sort of right and sort of wrong, depending on how you think about color on the Apple II. I just wasn't sure what to cut, maybe I'll take another whack at it after thinking about it for a bit. Jerry Kindall 01:20, 25 March 2006 (UTC)
I've been trying to understand this, but I'm still unclear on the concept! Are you saying that, for example, on a color Apple II display: a bit pattern of '11' represents a white pixel, but a bit pattern of '01' represents a pixel of which half is black and half is a color? So, in other words, you can't specify the color of an individual pixel; setting a color of '01' or '10' will make one pixel black and the pixel beside it a color, but setting a color of '11' will make two adjacent pixels white? - Brian Kendig 21:52, 25 March 2006 (UTC)
You've nearly got it. A single pixel (i.e. the surrounding pixels on either side are black) is ALWAYS a color. Even pixels are purple (or blue) and odd pixels are green (or orange). Two pixels next to each other are always white. (Blue and orange are complications which can be ignored for now because they are analogous to purple and green. Each byte in screen memory holds 7 pixels, plus a flag bit that chooses a color set. Each horizontal group of 7 pixels can thus contain either purple/green/white pixels OR blue/orange/white pixels, but never both.)
If you have an Apple II emulator, try this: HGR:HCOLOR=3:HPLOT 0,0. Note that HCOLOR=3 is described as "white" in Apple II programming manuals, but you get a purple pixel. Then try: HGR:HCOLOR=3:HPLOT 1,0. Again, you're still plotting in "white," but you get green! Finally, try HGR:HCOLOR=3:HPLOT 0,0:HPLOT 1,0. Now you have two pixels and both are white. (If you want to see blue and orange instead of purple and green, try HCOLOR=7 instead of HCOLOR=3. HCOLOR=7 is also documented as being white, but it sets the high-bit of the bytes in the graphics memory, inducing a half-pixel shift that results in the alternate colors.) Also note that if you do something like this: HGR:HCOLOR=2:HPLOT 0,0 TO 279,0 you get what looks like a solid purple line in color mode, but if you switch the emulator to monochrome mode (most allow this) you'll see that the computer has simply left every other dot black!
Now try this: HGR:HCOLOR=1:HPLOT 0,0. The screen's still blank even though you've told the computer to draw a green pixel. Why? You have told the computer to draw in green (HCOLOR=1) but have told it to plot 0,0, a pixel which cannot be green because it has an even horizontal coordinate. So the computer makes that pixel black! In short, when you tell the computer to plot in a color besides black or white, it actually draws "through" a mask in which every other horizontal pixel is forced off.
One last illustration. HGR:HCOLOR=3:HPLOT 1,0:HPLOT 2,0. You get two pixels that both look white. So we see that any pair of adjacent pixels comes out white. You don't need an even pixel and then an odd one; an odd one followed by an even one works just as well.
Now, if you think of the screen in terms of pixel pairs, which does simplify things from the perspective of programming color graphics (you can say 00 means black, 01 means green, 10 means purple, 11 means white, as long as you don't mind halving the horizontal resolution), then you have two "subpixels" and these can indeed be used to smooth diagonal lines as Gibson explains. However, most Apple II programmers wouldn't think of it that way. Since the colors are artifacts of positioning to begin with, what Gibson's talking about doing is exactly, bit-for-bit identical to plotting a white diagonal line two pixels wide using the full horizontal resolution. You're not really turning on a green pixel next to a purple pixel. You can't make a pixel green or purple; pixels are colored because of their positioning. You can only turn pixels on and off, and you get purple or green depending on whether they are at even or odd horizontal coordinates, and two next to each other are always white. (This all has to do with the phase of the NTSC color subcarrier at particular positions on a scanline.)
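(To make the even/odd rule above concrete, here is a rough Python sketch of the simplified model being described. It ignores the high-bit half-pixel shift and the blue/orange color set, and the function name and color rules are just for illustration, not real Apple II firmware behaviour.)
 # Rough sketch: one hi-res scanline as a list of 0/1 bits. A lone lit pixel at an
 # even x reads as purple, a lone lit pixel at an odd x reads as green, and two or
 # more adjacent lit pixels read as white. Half-pixel shift and blue/orange ignored.
 def apparent_colors(bits):
     colors = []
     for x, bit in enumerate(bits):
         if bit == 0:
             colors.append("black")
         elif (x > 0 and bits[x - 1]) or (x + 1 < len(bits) and bits[x + 1]):
             colors.append("white")      # adjacent lit pixels appear white
         elif x % 2 == 0:
             colors.append("purple")     # lone lit pixel at even x
         else:
             colors.append("green")      # lone lit pixel at odd x
     return colors
 # HPLOT 0,0 alone -> purple; HPLOT 1,0 alone -> green; both lit -> white
 print(apparent_colors([1, 0, 0]))   # ['purple', 'black', 'black']
 print(apparent_colors([0, 1, 0]))   # ['black', 'green', 'black']
 print(apparent_colors([1, 1, 0]))   # ['white', 'white', 'black']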
This is way harder to explain than it should be. But then, it was way harder to understand back in 1982 (when I learned it) than it should have been, too. Apple II color is a total hack designed to get color into a machine at a price point that wouldn't allow Woz to include real color generation circuitry. Anyhow, perhaps it's time for a separate article explaining Apple II graphics modes? That could get linked from here and from Apple II articles and that way we could shorten this one up. Jerry Kindall 23:13, 27 March 2006 (UTC)
With a little distance I was able to rewrite this section to be a bit shorter. Jerry Kindall 16:11, 6 April 2006 (UTC)

Some samples would be nice

Could someone add some samples at 1:1 resolution for common LCD colour orders? It'd be nice to be able to compare different techniques' output. --njh 08:47, 31 May 2006 (UTC)

Only LCD?

so crt's don't benefit from sub-pixel rendering? What is the term then for rendering sub-pixels to more accurately determine a pixel's color?

The sub-pixels (red, green, and blue) inherently determine the pixel's color. It's already being done as accurately as it can be done given the hardware. I'm not sure what you're asking. Jerry Kindall 22:12, 6 June 2006 (UTC)
I figured out the term I was thinking of, which is "oversampling" (I think). And that shows benefits on a CRT screen too. I guess you could think of oversampling as the internal resolution you're using to figure out the final shade: on a fractal, for example, located entirely within a pixel, if the lines are black and the background is light, then one pixel's worth of resolution would imply that the pixel should be very very very light grey - but if you quad oversample, and calculate the fractal on a 16x16 grid, and then average all of the greyscale values into the single pixel, it will be a slightly darker shade of grey, and therefore more accurate. As you oversample more and more, you approach closer and closer to the color the pixel would have if you actually zoomed in to that pixel at 1600x1200 resolution, actually rendered all that, and then took the average value of all 1,920,000 pixels. Of course, this isn't substantially better than just taking the time to render 16 pixels (4x4) in the place of that single value. However, I believe it's better to do this (render 16 sub-pixels to more accurately determine the color of a pixel) than to just render the single pixel without figuring out exact sub-values. I think the article should mention that "sub-pixel rendering" is specifically about using the three separate sub-pixels that each LCD (and only LCD) monitor has for each pixel, rather than "rendering" sub-pixels in order to more accurately determine a pixel's color. The article isn't clear enough that sub-pixel rendering is a "hack" based solely on the fact that LCD monitors don't have solid pixels.
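(For illustration, here is a minimal Python sketch of that kind of grid oversampling; shade(x, y) is a made-up stand-in for whatever function gives the exact greyscale value at a point, such as a fractal membership test, and n is the number of samples along each axis.)
 # Minimal sketch of grid oversampling: average n*n point samples inside one pixel
 # instead of sampling only its centre. shade(x, y) is a hypothetical function
 # returning a greyscale value in [0, 1] at an exact point.
 def oversampled_pixel(shade, px, py, n=4):
     total = 0.0
     for i in range(n):
         for j in range(n):
             # sample the centre of each of the n*n sub-cells of the pixel
             x = px + (i + 0.5) / n
             y = py + (j + 0.5) / n
             total += shade(x, y)
     return total / (n * n)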
What you refer to is called antialiasing, and is somewhat orthogonal to what is covered in this article. FreeType (and derived renderers such as libart, Inkscape (livarot) and AGG) already computes exact coverage for each pixel (and over-sampling any more than 16x16 is pointless with only 256 levels). Over-sampling is but one method for achieving antialiased output. How would you improve the article to prevent confusion in the future? Please sign your posts. --njh 13:02, 7 June 2006 (UTC)
Subpixel rendering != antialiasing. How is subpixel rendering a "hack"? —Keenan Pepper 20:49, 7 June 2006 (UTC)
I think the misunderstanding was not realizing that subpixel rendering generally throws away the need to portray colours. It is usually applied as a form of anti-aliasing (though yeah, it does have some other uses), and it's a "hack" because the hardware wasn't really originally intended to be used this way. I don't think we should use the word "hack" though, given its possible derogatory connotation. I've added "black-and-white" to the lead description, which might help this problem. (And in answer to the question about CRT versus LCD, it's that CRTs don't have as clear a separation of the subpixels. CRTs are hitting glowing phosphorus (which "bleeds" a bit) with an electron gun from a distance, whereas LCDs have a direct electrical connection with the subpixel, which is much more accurate.) - Rainwarrior 00:18, 8 June 2006 (UTC)
Phosphor != phosphorus. And colored text looks fine with subpixel rendering on my laptop's LCD. —Keenan Pepper 01:15, 8 June 2006 (UTC)
Thanks for straightening me out on Phosphor. I'm not much of a chemist, as I guess is evident. As for colored text looking fine, it may, but the ability to anti-alias with this method given a colour context diminishes. You can still do a bit, but there's less material to work with; it really is a black-and-white technique. (If the text is coloured, a few missing sub-pixels isn't really going to make it look bad, I suppose, but this is a subtle effect to begin with.) - Rainwarrior 06:18, 8 June 2006 (UTC)
Though I might add to my above description: CRTs, unlike LCDs, don't have a 1-to-1 mapping from pixels to RGB locations on their screens (which is part of the design that compensates for the bleeding effect); any given pixel might be covering, say, five different RGB segments on the actual screen. (Furthermore, even if they were lined up this way, not all CRTs arrange them in vertical strips.) - Rainwarrior 06:56, 8 June 2006 (UTC)

Further clarification

I just realized that you misinterpreted me with:

so crt's don't benefit from sub-pixel rendering? What is the term then for rendering sub-pixels to more accurately determine a pixel's color?
The sub-pixels (red, green, and blue) inherently determine the pixel's color. It's already being done as accurately as it can be done given the hardware. I'm not sure what you're asking. Jerry Kindall 22:12, 6 June 2006 (UTC)

I didn't mean sub-pixels as in red, green, blue; I meant sub-pixels as in greater internal resolution. Antialiasing, which the topic above meandered off to, is specifically about JAGGINESS, e.g. within a diagonal line. But I don't mean compensating for jagginess. I mean, if you render this figure (the Mandelbrot image thumbnail inserted here) entirely confined to a pixel, then if your internal resolution matches the physical resolution, you might well end up simply deciding on a very different color than if you decide to render it internally at a full 1600x1200 and then average all those pixels. For example, the thumbnail I inserted above has a DIFFERENT AVERAGE PIXEL VALUE than the full picture, which is at 1024x768. However, because the picture is a mathematical abstraction, we can render it even more deeply (approaching closer and closer to the full fractal). For example, I am certain that the full 1024x768 picture linked above was not merely rendered by solving the equation at each point at the center of a pixel (i.e. the equation was not solved only 786,432 times). Rather, I bet the software that rendered that picture rendered probably a dozen points for each pixel, maybe more, to better approximate the final fractal. However, and here's the rub, some black-and-white fractals end up completely black if you render them to infinity, and others disappear (for example the fractal defined thus: a line such that the middle third is removed and the operation is repeated on the left and the right third, ad infinitum -- if you render this "fully" you get no line to see at all). So I'm most certainly not talking about rendering green, red, blue pixels. I'm talking about rendering pixels that are at a greater granularity than your target, which I think is called oversampling. Of course, this applies especially to cases where a mathematical abstraction certainly lets us get exact sub-values. You can't apply it to a photograph at a given resolution to get more accurate pixels.

A Special Misunderstanding

and over-sampling any more than 16x16 is pointless with only 256 levels

I don't think this is true, for a few different reasons. First of all, when we're talking about full colors, the kind of rendering I'm thinking of is appropriate at 16 or 32 bits of colorspace. And secondly, the reason it still helps to have greater internal resolution is that when your view shifts slightly over time (as with a 3D game), the granularity of your final pixels reveals, over motion, finer detail. It's the same as a fence you can't see through because it leaves only little slivers open -- the slivers are like the final pixels -- but behind that, there's a whole lot more resolution you're just being shown a slice of, and as you move around, you can make out everything behind that fence, just from that one sliver. Or try this: make a pinhole in a piece of paper and look at it from far enough away that you can only make out a tiny bit of detail behind it (so, only a few pixels' worth of resolution). Now jitter it around over time, and you can see that because of how the slice changes you get a lot of the "internal" resolution hidden behind those few pixels. Or notice how a window screen (graininess) disappears if you move your head around quickly and you see the full picture behind it.

Sorry for the rambling. Is "oversampling" the only term for the things I've described? - 11:03, 10 June 2006, 87.97.10.68

Oversampling is one method of accomplishing anti-aliasing (the most common). Often the chosen locations for oversampling are jittered randomly to avoid secondary aliasing (a sort of aliasing of the aliasing which sometimes occurs when the oversampling is too regular); this is sometimes called supersampling. An alternative method to oversampling is convolution with, say, a small Gaussian blur, but this is usually more computationally expensive and is used in more specialized applications.
As for your claims about accuracy, yes, technically you should render each pixel in infinite detail, not just screen-sized, to precisely determine its colour. In practice this is not possible, and remember that an actual pixel at 24-bit colour only has 256 levels for any colour component; there is a finite precision to the colour you can choose. Doing a lot of extra calculation for an effect that is generally subtle, and perhaps usually too subtle to represent once calculated, even with an infinitely detailed image such as the Mandelbrot, is generally considered "useless", yes. I would actually doubt that your Mandelbrot image uses any better than 4x oversampling, which would already quadruple the calculation time. (By the way, the image of the Mandelbrot is not really an "equation", it is an iteration of a function given a starting location, and whether that point goes to infinity when iterated determines its membership in the Mandelbrot set; this is why it takes more effort to compute than most images. Every pixel in that picture, I would assume, required about 500 iterations of this function (if oversampling, multiply that by the number of oversamples).) It's not worthwhile to take a many-thousand-times speed hit by doing a full-screen oversample of each pixel when the expected error of taking even just four samples is rather small. The difference between 4x and 9x oversampling is subtle, the difference between 9x and 16x becomes more subtle, and eventually it becomes pointless. Yes, there is a chance of error given any amount of oversampling (it is easy to come up with pathological examples, though with jittering even "hard" cases tend to work out pretty well), but on average it is very small.
Sub-pixel rendering is often implemented as oversampling at the sub-pixel level (3x oversampling). The term is not used to refer to oversampling in general, but rather to this particular application, which hijacks the RGB sub-pixels to accomplish it. - Rainwarrior 15:56, 10 June 2006 (UTC)
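(A very rough sketch of that idea, assuming an RGB-striped panel and black-on-white rendering: sample coverage at three times the horizontal resolution and route the three samples to the R, G and B channels of each output pixel. coverage(x, y) is a hypothetical function giving glyph coverage at a point; real implementations such as ClearType or FreeType's LCD mode also filter across neighbouring sub-pixels to tame colour fringes, which is omitted here.)
 # Rough sketch of sub-pixel rendering as 3x horizontal oversampling on an
 # RGB-striped LCD, rendering black shapes on a white background. coverage(x, y)
 # is a hypothetical function returning coverage in [0, 1] at an exact point.
 def subpixel_row(coverage, width, y):
     row = []
     for px in range(width):
         r = 1.0 - coverage(px + 1.0 / 6.0, y)   # left third of the pixel  -> red
         g = 1.0 - coverage(px + 3.0 / 6.0, y)   # middle third             -> green
         b = 1.0 - coverage(px + 5.0 / 6.0, y)   # right third              -> blue
         row.append((r, g, b))
     return row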
AWESOME REPLIES! The replies contain a wealth of information, and the second one confirms my initial suspicion way above, that "oversampling" is the general term of art to which I referred. As for the first response (unsigned though, above Rainwarrior's): wow, very good. Awesome. The only thing I'm left wondering about is whether it's fair to call, say, "800x600 physical resolution rendered with 9x oversampling" by the term "2400x1800 internal resolution 'scaled' to 800x600 physical representation".
The difference seems to be that "2400x1800 internal resolution" implies that there's no "Often the chosen locations for oversampling are jittered randomly" as you state. Secondly, you don't mention whether I'm right that the internal resolution can be fully "revealed" through motion -- i.e. not just a "prettier picture" or "more accurate picture", but that through subtle motion the full 2400x1800 internal resolution can be revealed. A still picture can't represent 2400x1800 in 800x600 at the same color depth, since an average of nine distinct pictures at the larger resolution would have to map to the same rendering at the lower resolution. However, if you add a component of time/motion, the full 2400x1800 internal resolution can be revealed in a distinct way. No two will look the same, since the internal resolution is fully revealed. However, we have no article on internal_resolution, and your mentioning "jittered randomly" leaves me with the uneasy thought that this is just a way of getting a nice average. (The random jittering would also lead to a nice dithering effect if I understand you correctly, as long as it really is random enough -- if it's not random enough the jitter would lead to an interference pattern.) I can give you an example if this is wrong, to show my thinking. I think all the articles in this subject can be improved to facilitate understanding, including especially anti-aliasing, interference_pattern, internal_resolution, jittering, oversampling, etc etc etc.
Finally, it seems that my original comment "...was not merely rendered by solving the equation at each point at the center of a pixel" implies that I have the correct understanding, which you reiterated with "By the way, the image of the Mandelbrot is not really an 'equation', it is an iteration of a function given a starting location, and whether that point goes to infinity when iterated determines its membership in the Mandelbrot set". I think you'll agree I didn't call it an equation; I called it solving an equation for the color value of each point, and the definition you mention ("whether that point goes to infinity when iterated determines its membership in the Mandelbrot set") is certainly the result of a function. Also, the coloring is a metric of "how fast" the point goes to infinity (black spots don't), but either way each point solved gives an answer to a question, and that question can be phrased as an equation. (If you read mandelbrot_set, can you agree that your simplification applies to black-and-white pictures, and that color pictures represent how steeply the point goes to infinity? I have no idea what scale they calculate steepness by! But it gives some purdy colors.)
It was just one reply; I broke it into paragraphs. Anyhow, an "internal resolution" implies things that may not be desired; first, it suggests that more memory might actually be required, but, for instance, in the case of 4x oversampling, the four pixels can often be calculated all at once without requiring the much larger image to be stored; second, it disallows jittering (which is generally the cause). As with most things, how to do anti-aliasing is application dependent, e.g. jittering is not used in graphics cards because of the extra calculation it involves (this may have changed recently). Generating a large image and then down-sampling (usually an average) is one method of accomplishing anti-aliasing, but as I said, usually this is avoided so that the memory can be conserved (having more texture memory, for instance, might be important).
Jittering does produce a kind of dithering. Its purpose is to reduce the effect that regular patterns below the Nyquist frequency can have on aliasing. It doesn't reduce the error, but it distributes that error across the image, making it much more tolerable. Motion can indeed reveal the finer details of an underlying image (which jittering would destroy somewhat), but in most real-time applications where you're going to see motion, it's far more important to do the calculations quickly than completely accurately (want to watch that motion at 1 frame per second, or 30?). Whoever made the earlier comment about 16x16 being the maximum detail for 24-bit colour is, in an academic sense, wrong about that, not only if there is motion (you could simply multiply his number by the number of frames that particular pixel covers the same area and the idea would be the same), but because resolving finer detail may indeed change the value significantly; the spirit of his point was correct, though: the probability that it will change significantly drops off rather quickly (excepting some special cases) as you increase the oversampling size.
As for the Mandelbrot, its original definition was: the set of points that would produce a connected Julia set, as opposed to a disconnected one. (I don't know if you're familiar with Julia sets.) So, technically, the Mandelbrot set is black and white; it would just take an infinite amount of calculation to resolve all of its finer details. The coloured images are usually produced by calculating Mandelbrots at different levels of calculation. To call it "solving an equation" isn't very accurate because the solution is already known; the image of the Mandelbrot is a numerical calculation. I.e. the solution for the Mandelbrot after 4 iterations is z₄ = ((c² + c)² + c)² + c, and I think you can see that there are similar solutions for further iterations, but what determines whether or not a point c is in the set is whether or not this z₄ is large enough that it cannot possibly converge to zero after further iterations. If a point is determined to still be under a certain magnitude (a sort of "escape" radius) after 4 iterations, it is guaranteed to have been under that escape magnitude after 3, 2, and 1 iterations, thus each level of detail exists inside the set of points calculated by the previous one, and when a smooth gradation of colour is used across these levels of detail, the terraced effect you have seen occurs. - Rainwarrior 17:55, 10 June 2006 (UTC)
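(For anyone following the numerical side, a bare-bones escape-time sketch in Python; the escape radius of 2 and the 500-iteration cap follow the description above, and the returned iteration count is what gets mapped to the colour bands.)
 # Bare-bones escape-time calculation for one point c: iterate z -> z*z + c and
 # return how many iterations it took |z| to exceed the escape radius 2, or
 # max_iter if it never did (the point is then treated as inside the set).
 def escape_time(c, max_iter=500):
     z = 0j
     for n in range(max_iter):
         if abs(z) > 2.0:
             return n
         z = z * z + c
     return max_iter
 # e.g. escape_time(0) == 500 (inside the set); escape_time(1 + 1j) is small (escapes quickly)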
Thank you, this is more excellent clarification. I'm still reading it, but wonder about the phrase "second, it disallows jittering (which is generally the cause)"; as I look at the context above I can't determine what it is the "cause" of, or if you mean it's generally the case. Can you clarify the preceding phrase? I'm still reading though. (Also, am I correct that the 1 fps vs 30 is just rhetorical, and it's really more like 15 instead of 30, or 7 instead of 30? Unless the 4-fold (etc.) texture size increase exceeds your available memory? Computationally it's just linear, right?)
I'm not sure whether I meant "case" or not; it might have been an unfinished sentence. What I should say is that jittering is used in a lot of applications (e.g. fractals, raytracing), but not often in graphics-card anti-aliasing because of the increased computation. The 1 vs 30 wasn't referring to a specific case, but if you were comparing, say, 4x oversampling to 128x oversampling this would be a realistic figure. My point was that greater oversampling incurs (except in special circumstances) a proportional speed hit. - Rainwarrior 22:55, 10 June 2006 (UTC)
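(For what it's worth, a tiny sketch of what jittered, stratified sample positions look like in code, as opposed to a regular grid; stratified "one random offset per grid cell" jitter is just one common choice, and the function name is made up for illustration.)
 import random

 # Sketch of stratified jittered sample positions inside one pixel: one sample per
 # grid cell, offset randomly within that cell, instead of always sampling the cell
 # centre. This breaks up the regular pattern that can cause secondary aliasing.
 def jittered_offsets(n=4):
     offsets = []
     for i in range(n):
         for j in range(n):
             x = (i + random.random()) / n
             y = (j + random.random()) / n
             offsets.append((x, y))
     return offsets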

Re: the Mandelbrot set pictures (more of which at User:Evercat), I wrote the program and yes, it did indeed use more than 1024x768 points (actually a lot more; I don't recall the precise number, but it was probably at least 4 and maybe 9 or 16 subsamples per actual pixel), with the final colour for a pixel being the average of the subsamples. Evercat 21:29, 12 July 2006 (UTC)

Image (re-)size query

Just wondering if it wouldn't make more sense to resize the first picture. At full size it shows pixel images blown up by a factor of 6. I think it would make more sense to reduce the thumbnail image also by a factor of 6, to render the blown-up pixels at 1:1 and prevent strange artifacts being introduced. —The preceding unsigned comment was added by 82.10.133.130 (talk • contribs) 09:33, 17 August 2006 (UTC)

I think that particular image needs to be introduced as a thumbnail. It is a much better demonstration at its full size. Perhaps it would be better broken into three small (1:1) images and used later in the document, but what I think we should definitely do is choose a new picture to go at the top. The first image in the document should be a simple picture of subpixel rendering, not an explanation of it. Rainwarrior 18:50, 17 August 2006 (UTC)

Open Source Subpixel Rendering

I have just decided that ClearType is one of my favorite things about Windows, and I am wondering to what extent it is used and available in Linux.

Non-rectangular sub-pixels

All the examples assume that LCD sub-pixels are rectangular, making a square pixel. But this isn't always the case - I've just noticed that my two seemingly identical monitors are different - one has the standard pattern, the other has chevron-shaped subpixels (a bit like this: >>>). I don't think there's much effect on sub-pixel rendering as they're both based on a square, but it does explain why one always looks slightly more focused than the other. Elliot100 13:34, 14 September 2006 (UTC)