Pixel
From Wikipedia, the free encyclopedia
In digital imaging, a pixel (picture element[1]) is the smallest piece of information in an image. Pixels are normally arranged in a regular two-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel typically has three or four components, such as red, green, and blue, or cyan, magenta, yellow, and black.
The word pixel is based on the abbreviation "pix" for "pictures"; similar coinages include voxel, luxel, and texel.
Technical
A pixel is generally thought of as the smallest single component of an image. The definition is highly context sensitive; for example, we can speak of printed pixels in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive, and depending on context there are several terms that are synonymous in particular contexts, e.g. pel, sample, byte, bit, dot, spot, etc. We can also speak of pixels in the abstract, or as a unit of measure, in particular when using pixels as a measure of resolution, e.g. 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart.
The measures dots per inch (dpi) and pixels per inch (ppi) are sometimes used interchangeably, but have distinct meanings especially in the printer field, where dpi is a measure of the printer's resolution of dot printing (e.g. ink droplet density). For example, a high-quality inkjet image may be printed with 200 ppi on a 720 dpi printer.
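The relationship between the two measures can be made concrete with a little arithmetic. The helper name below is illustrative, not from any standard library; it is a minimal sketch using the figures quoted above.

```python
# Sketch: relating image resolution (ppi) to printer resolution (dpi).

def dots_per_pixel(printer_dpi: int, image_ppi: int) -> float:
    """Linear printer dots available per image pixel."""
    return printer_dpi / image_ppi

# A 200 ppi image on a 720 dpi printer: each pixel spans 3.6 dots
# per axis, i.e. roughly 13 addressable dots to render one pixel's tone.
ratio = dots_per_pixel(720, 200)
print(ratio)            # 3.6
print(round(ratio**2))  # 13
```

The surplus of printer dots per image pixel is what lets an inkjet approximate continuous tones by halftoning.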
The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display), and therefore has a total number of 640 × 480 = 307,200 pixels or 0.3 megapixels.
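The pixel-count arithmetic above can be sketched as follows (the helper name is illustrative):

```python
# Sketch: expressing a pixel count both as width x height and in megapixels.

def megapixels(width: int, height: int) -> float:
    """Total pixel count in millions."""
    return width * height / 1_000_000

print(640 * 480)             # 307200
print(megapixels(640, 480))  # 0.3072, quoted as "0.3 megapixels"
```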
The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image.
In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques.
Sampling patterns
For convenience, pixels are normally arranged in a regular two-dimensional grid. With this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently.
Other arrangements of pixels are also possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image.
For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another.
For example:
- LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. ClearType is a technology that takes advantage of these differences to improve the rendering of text on LCD screens.
- Some digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid.
- A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy.
- Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space.[2]
- The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit.[3]
- Pixels on computer monitors are normally square, but some digital video formats have non-square aspect ratios, such as the anamorphic widescreen formats of the CCIR 601 digital video standard.
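The last point can be illustrated with a small sketch. Non-square pixels are described by a pixel aspect ratio (PAR); the displayed aspect ratio is the storage aspect ratio multiplied by the PAR. The 64:45 value below is a commonly quoted approximation for PAL anamorphic widescreen and is an assumption of this sketch, not a figure from the text above.

```python
# Sketch: display aspect ratio = storage aspect ratio x pixel aspect ratio.
from fractions import Fraction

def display_aspect(width: int, height: int, par: Fraction) -> Fraction:
    """Aspect ratio of the displayed image, given the pixel aspect ratio."""
    return Fraction(width, height) * par

# Square pixels: a 640x480 image displays at 4:3.
print(display_aspect(640, 480, Fraction(1, 1)))    # 4/3

# Anamorphic widescreen (assumed PAR 64:45): 720x576 displays at 16:9.
print(display_aspect(720, 576, Fraction(64, 45)))  # 16/9
```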
Display resolution vs. native resolution in computer monitors
Modern computer monitors use pixels to display images, often representing an abstract image such as a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer.
Modern computer monitors also have a native resolution. In the case of an LCD monitor, each pixel is made up of individual triads, and the number of these triads will determine the native resolution. On some CRT monitors, the beam sweep rate may be fixed, resulting in a fixed native resolution.
To produce the sharpest images possible, the user must ensure the display resolution of the computer matches the native resolution of the monitor.
If these resolutions are different, the image may appear squashed or stretched, or the monitor may resample the image, resulting in a blurry or jagged appearance.
Bits per pixel
The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:
- 1 bpp, 2¹ = 2 colors (monochrome)
- 2 bpp, 2² = 4 colors
- 3 bpp, 2³ = 8 colors
...
- 8 bpp, 2⁸ = 256 colors
- 15 bpp, 2¹⁵ ≈ 32 thousand colors ("Highcolor")
- 24 bpp, 2²⁴ ≈ 16.7 million colors ("Truecolor")
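The doubling rule behind this list is simply exponentiation (the function name is illustrative):

```python
# Sketch: the number of distinct colors representable at a given bit depth.

def color_count(bpp: int) -> int:
    """Distinct values a pixel of the given bit depth can take."""
    return 2 ** bpp

print(color_count(1))   # 2
print(color_count(8))   # 256
print(color_count(24))  # 16777216 (~16.7 million, "Truecolor")
```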
For color depths of 15 bpp (Highcolor) and larger, the depth is normally the sum of the bits allocated to each of the three RGB (red, green, and blue) components. 16 bpp normally has five bits each for red and blue and six bits for green, as the human eye is more sensitive to green than to the other two primary colors. For applications involving transparency, the 16 bits may instead be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: each 24-bit pixel then has an extra 8 bits describing its opacity (for purposes of combining with another image).
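The 5-6-5 split described above can be sketched with bit operations. The function names are illustrative; the layout (red in the top five bits, green in the middle six, blue in the low five) is one common convention.

```python
# Sketch: packing and unpacking a 16 bpp pixel with a 5-6-5 bit split
# (5 bits red, 6 bits green, 5 bits blue).

def pack565(r: int, g: int, b: int) -> int:
    """Pack 5-bit red, 6-bit green, 5-bit blue into one 16-bit value."""
    return (r << 11) | (g << 5) | b

def unpack565(pixel: int) -> tuple[int, int, int]:
    """Recover the (r, g, b) components from a packed 16-bit pixel."""
    return (pixel >> 11) & 0x1F, (pixel >> 5) & 0x3F, pixel & 0x1F

p = pack565(31, 63, 31)  # maximum components -> white
print(hex(p))            # 0xffff
print(unpack565(p))      # (31, 63, 31)
```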
Images with 256 colors or fewer (8 bpp and less) are stored in the computer's video memory in either packed pixel (chunky) format or planar format. In an indexed image, each pixel's value is an index into a list of colors called a palette. Changing the colors in the palette produces a type of animation effect, well-known examples being the startup logos of Windows 95 and Windows 98.
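The indexed-color scheme can be sketched in a few lines. The data values here are illustrative; the point is that recoloring the palette changes the rendered image without touching the pixel indices, which is the basis of palette animation.

```python
# Sketch: an indexed (paletted) image. Each pixel stores an index into a
# color table; rendering looks each index up in the palette.

palette = [(0, 0, 0), (255, 0, 0), (0, 0, 255)]  # index -> (R, G, B)
pixels = [0, 1, 1, 2]                            # one palette index per pixel

def render(pixels, palette):
    """Resolve each pixel's palette index to its RGB color."""
    return [palette[i] for i in pixels]

print(render(pixels, palette))
# [(0, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255)]

# "Animate" by rotating the palette; the pixel data is unchanged.
palette = palette[1:] + palette[:1]
print(render(pixels, palette))
```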
Subpixels
Many display and image-acquisition systems are, for various reasons, not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance.
In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels. For example, LCDs typically divide each pixel horizontally into three subpixels.
Most digital camera image sensors also use single-color sensor regions, for example using the Bayer filter pattern, but in the case of cameras these are known as pixels, not subpixels.
For systems with subpixels, two different approaches can be taken:
- The subpixels can be ignored, with full-color pixels being treated as the smallest addressable imaging element; or
- The subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases.
The latter approach has been used to increase the apparent resolution of color displays. The technique, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately and produce a better displayed image.
While CRT displays also use red-green-blue masked phosphor areas, dictated by a mesh grid called the shadow mask, these cannot be aligned with the displayed pixel raster and therefore cannot be used for subpixel rendering.
Megapixel
A megapixel is 1 million pixels, and is a term used not only for the number of pixels in an image, but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera with an array of 2048×1536 sensor elements is commonly said to have "3.1 megapixels" (2048 × 1536 = 3,145,728). The neologism sensel is sometimes used to describe the elements of a digital camera's sensor, since these are picture-detecting rather than picture-producing elements.[4]
Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement, so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they record only one channel (red, green, or blue) of the final color image. Two of the three color channels for each sensor element must therefore be interpolated, and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. Thus, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement).
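The Bayer arrangement described above can be sketched as a repeating 2×2 tile. The RGGB orientation used here is one common variant (actual sensors differ in how the tile is oriented), and the function name is illustrative.

```python
# Sketch: which color a sensor element records in an RGGB Bayer layout,
# a repeating 2x2 tile with two green sites per tile.

def bayer_color(row: int, col: int) -> str:
    """Color filter over the sensor element at (row, col)."""
    tile = [["R", "G"],
            ["G", "B"]]
    return tile[row % 2][col % 2]

# Over any region, green occupies half the sites, red and blue a quarter each.
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[bayer_color(r, c)] += 1
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```

The two missing channels at each site are the ones demosaicing must interpolate from neighboring elements.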
In contrast to conventional image sensors, the Foveon X3 sensor uses three layers of sensor elements, so that it detects red, green, and blue intensity at each array location. This structure eliminates the need for de-mosaicing and eliminates the associated image artifacts, such as color blurring around sharp edges. Citing the precedent established by mosaic sensors, Foveon counts each single-color sensor element as a pixel, even though the native output file size has only one pixel per three camera pixels.[1] With this method of counting, an N-megapixel Foveon X3 sensor therefore captures the same amount of information as an N-megapixel Bayer-mosaic sensor, though it packs the information into fewer image pixels, without any interpolation.
Standard display resolutions
Selected standard display resolutions include:
Name | Resolution (megapixels) | Width × Height |
---|---|---|
CGA | 0.064 | 320×200 |
EGA | 0.224 | 640×350 |
VGA | 0.3 | 640×480 |
SVGA | 0.5 | 800×600 |
XGA | 0.8 | 1024×768 |
SXGA | 1.3 | 1280×1024 |
UXGA | 1.9 | 1600×1200 |
Similar concepts
Several other types of objects derived from the idea of the pixel, such as the voxel (volume element), texel (texture element) and surfel (surface element), have been created for other computer graphics and image processing uses.
Etymology
The word pixel was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of video images from space probes to the Moon and Mars. Billingsley did not coin the term himself; he got it from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who does not know where it originated but said it was "in use at the time" (circa 1963).
The word is a combination of picture and element, via pix. Pix was first coined in 1932 in a Variety Magazine headline, as an abbreviation for the word pictures, in reference to movies; by 1938 pix was being used in reference to still pictures by photojournalists.
The concept of a picture element dates to the earliest days of television, for example as Bildpunkt (the German word for pixel, literally picture point) in the 1888 German patent of Paul Nipkow. According to various etymologies, the earliest publication of the term picture element itself was in Wireless World magazine in 1927,[5] though it had been used earlier in various U.S. patents filed as early as 1911.[2]
Some authors explain pixel as picture cell, as early as 1972.[6]
A detailed history of pixel and picture element, with references, is linked below.
References
- ^ Rudolf F. Graf (1999). Modern Dictionary of Electronics. Oxford: Newnes. p. 569. ISBN 0-7506-4331-5.
- ^ Image registration of blurred satellite images. Retrieved on 2008-05-09.
- ^ Image representation by a new optimal non-uniform morphological sampling. Pattern Recognition. Retrieved on 2008-05-09.
- ^ Michael Goesele (2004). New Acquisition Techniques for Real Objects and Light Sources in Computer Graphics. Books on Demand. ISBN 3833414898.
- ^ "ON LANGUAGE; Modem, I'm Odem", The New York Times, April 2, 1995. Accessed April 7, 2008.
- ^ Robert L. Lillestrand (1972). "Techniques for Change Detection". IEEE Trans. Computers C-21 (7).
External links
- A Pixel Is Not A Little Square: Microsoft Memo by computer graphics pioneer Alvy Ray Smith.
- A Brief History of 'Pixel': More than you need to know about the history of pixel, pel, and picture element.
- Pixels and Me: Video of a history talk at the Computer History Museum.
- Square and non-Square Pixels: Technical info on pixel aspect ratios of modern video standards (480i,576i,1080i,720p), plus software implications.