Demonstration of subpixels. © 2004 David Remahl.
This is an illustration of en:subpixel rendering. The first column displays the original text at 100% size. Part of the text has been magnified 600% (each pixel in the magnification is 6×6 pixels) in an ordinary image-editing program. The upper image uses anti-aliasing but not subpixel rendering; it is entirely grayscale. The lower image uses subpixel rendering, and noticeable colour deviations appear at the edges of the letter strokes. At normal size, the subpixel-rendered text should appear significantly sharper than the regularly rendered text, but only on a TFT display with RGB subpixels in exactly that order.
The second column displays the pixels as they would look under a magnified view of the monitor itself. The white pixels do not appear white, since the display elements are red, green and blue. In the regular rendition, the red, green and blue subpixels are controlled only in triplets, i.e. all three subpixels of a pixel receive the same colour value. There is no such restriction in the subpixel-rendered version below.
The third column shows, enlarged, how the text is perceived once the light from the red, green and blue subpixels mixes to form various shades of gray.
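To make the distinction concrete, here is a minimal sketch of the two approaches (not Quartz's actual algorithm, which also applies colour filtering to reduce fringing; all names and values are illustrative). Both start from glyph coverage sampled at three times the horizontal pixel resolution, one sample per subpixel; plain anti-aliasing averages each triplet into a single gray value, while subpixel rendering drives the red, green and blue channels independently:

def grayscale_aa(coverage):
    """Whole-pixel anti-aliasing: a triplet of subpixels shares one value."""
    pixels = []
    for i in range(0, len(coverage), 3):
        level = sum(coverage[i:i+3]) / 3.0       # average the three samples
        gray = int(round(255 * (1.0 - level)))   # black text on a white background
        pixels.append((gray, gray, gray))        # R = G = B: pure gray
    return pixels

def subpixel_aa(coverage):
    """Subpixel rendering: red, green and blue are set independently."""
    pixels = []
    for i in range(0, len(coverage), 3):
        r, g, b = [int(round(255 * (1.0 - c))) for c in coverage[i:i+3]]
        pixels.append((r, g, b))                 # colour fringes at stroke edges
    return pixels

# A stroke edge that falls between two pixels:
edge = [0.0, 0.0, 0.75, 0.5, 0.75, 1.0]
print(grayscale_aa(edge))   # [(191, 191, 191), (64, 64, 64)] -- pure grays
print(subpixel_aa(edge))    # [(255, 255, 64), (128, 64, 0)]  -- coloured edges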
The text was generated by the Quartz engine used by Mac OS X. Microsoft's en:ClearType subpixel rendering technology would have produced slightly different results, but the principle is the same. The font used for the example is en:Optima. en:Helvetica Neue is used for the labels.
How the image was created
First, Apple's text editor TextEdit was used to draw the text at 12 and 18 pt (72 dpi), and a screenshot of it was captured. The screenshots were imported into Photoshop and positioned. The two images were then duplicated and scaled up 600% with nearest-neighbour sampling. The following Python script was then used to split the colour components of the source image into three vertical strips per pixel:
#!/usr/bin/env python
# Requires PyObjC; written for Python 2, where slicing the bitmap data
# yields byte strings and writing them to stdout emits raw bytes.
from AppKit import *
from Foundation import *
import sys

# Load the screenshot and get at its raw bitmap data (interleaved RGB).
image = NSImage.alloc().initWithContentsOfFile_(sys.argv[1])
imageRep = image.representations()[0]
bmpData = imageRep.bitmapData()
numPixels = len(bmpData) / 3

if sys.argv[2] == "1":
    # Mode 1: replace each pixel with three gray pixels, one per component,
    # showing the intensity each subpixel contributes (third column).
    for p in range(0, numPixels):
        (r, g, b) = bmpData[p*3:(p+1)*3]
        sys.stdout.write(3*r)
        sys.stdout.write(3*g)
        sys.stdout.write(3*b)
elif sys.argv[2] == "2":
    # Mode 2: replace each pixel with a pure red, a pure green and a pure
    # blue pixel, mimicking the subpixel stripes of a TFT panel (second column).
    nul = chr(0)
    for p in range(0, numPixels):
        (r, g, b) = bmpData[p*3:(p+1)*3]
        sys.stdout.write(r + nul + nul)
        sys.stdout.write(nul + g + nul)
        sys.stdout.write(nul + nul + b)
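The script writes raw interleaved RGB bytes to standard output, so each run would have been redirected to a file, along these lines (file names hypothetical):

python split.py text-regular.png 1 > text-regular-gray.raw
python split.py text-regular.png 2 > text-regular-subpixel.raw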
The above script was used twice per text image (the raw screenshot), producing four distorted images, each three times as wide as the original. They were imported into Photoshop using the RAW format import. Each of the four images was then scaled as follows, with nearest-neighbour sampling:
- Scale height 300%.
- Scale width and height 200%.
The resulting images were thus at the same scale as the original text enlarged 6 times. The enlarged images were cropped to approximate squares and positioned as they appear in the final image.
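Since the split script triples the width, scaling the height 300% and then both dimensions 200% yields a uniform 6× enlargement. Here is a minimal sketch of reading the raw output back and performing the two scaling steps, using Pillow rather than Photoshop (the tool actually used); the file names and dimensions are hypothetical:

from PIL import Image

w, h = 128, 32  # size of the original screenshot (hypothetical)

# The split script emits raw interleaved RGB bytes, 3*w pixels per row.
with open("text-regular-subpixel.raw", "rb") as f:
    img = Image.frombytes("RGB", (3 * w, h), f.read())

img = img.resize((3 * w, 3 * h), Image.NEAREST)  # scale height 300%
img = img.resize((6 * w, 6 * h), Image.NEAREST)  # scale width and height 200%
img.save("text-regular-subpixel-6x.png")         # same scale as the 600% crops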
The Photoshop file, with intact layers, can be obtained by e-mailing the copyright holder. I plan to upload it to the wiki in the future, but Photoshop files are unfortunately not yet supported by MediaWiki.