Commodity computing
Commodity computing is computing done on commodity computers, as opposed to supermicrocomputers or boutique computers. Commodity computers are computer systems manufactured by multiple vendors and incorporating components based on open standards. Such systems are said to be based on commodity components, since the standardization process promotes lower costs and less differentiation among vendors' products.
History
The Mid-1960s to Early 1980s
The first computers were large, expensive, complex and proprietary. The move towards commodity computing began when DEC introduced the PDP-8 in 1965. This was a computer that was relatively small and inexpensive enough that a department could purchase one without convening a meeting of the board of directors. An entire minicomputer industry sprang up to supply the demand for 'small' computers like the PDP-8. Unfortunately, each of the many different brands of minicomputers had to stand on its own, because there was no software compatibility and very little hardware compatibility between them.
When the first general-purpose microprocessor was introduced in 1974, it immediately began chipping away at the low end of the computer market, replacing embedded minicomputers in many industrial devices.
This process accelerated in 1977 with the introduction of the first commodity-like microcomputer, the Apple II. With the development of the VisiCalc application in 1979, microcomputers broke out of the factory and began entering offices in large quantities, but still through the back door.
The 1980s to Mid-1990s
The IBM PC was introduced in 1981 and immediately began displacing Apple IIs in the corporate world, but commodity computing as we know it today truly began when Compaq developed the first true IBM PC compatible. More and more PC-compatible microcomputers began coming into big companies through the front door, and commodity computing was well established.
During the 1980s, microcomputers began displacing "real" computers in a serious way. At first, price was the key justification, but by the mid-1980s semiconductor technology had evolved to the point where microprocessor performance began to eclipse that of discrete logic designs. These traditional designs were limited by speed-of-light delay issues inherent in any CPU larger than a single chip, and performance alone began driving the success of microprocessor-based systems.
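The speed-of-light limit is easy to quantify with a back-of-envelope calculation. The Python snippet below is an illustrative sketch: the half-metre signal path and the clock rates are assumed figures chosen to show the scale of the problem, not measurements from any particular machine.

# Illustrative back-of-envelope figures (assumptions, not historical data).
c = 3.0e8            # speed of light in m/s (real signals travel somewhat slower)
signal_path_m = 0.5  # assumed signal path across a multi-board CPU, in metres
clock_hz = 100e6     # assumed discrete-logic clock rate: 100 MHz

propagation_ns = signal_path_m / c * 1e9  # one-way signal delay in ns
cycle_ns = 1e9 / clock_hz                 # time available per clock cycle in ns

print(f"propagation delay: {propagation_ns:.2f} ns")  # ~1.67 ns
print(f"clock cycle:       {cycle_ns:.2f} ns")        # 10.00 ns
# At a 1 GHz clock the cycle time (1 ns) falls below the propagation delay,
# so a CPU spread across half a metre of logic simply cannot keep up,
# while a single-chip CPU a few centimetres across can.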
The old processor architectures began to fall: first minis, then superminis, and finally mainframes. By the mid-1990s, every computer made was based on a microprocessor, and most were microcomputers compatible with the IBM PC. Although there was a time when every traditional computer manufacturer had its own proprietary micro-based designs, there are only a few manufacturers of non-commodity computer systems today. However, supermicrocomputers (large-scale computer systems based on one or more microprocessors, like those of the IBM p, i, and z series) still own the high end of the market.
Commodity Computing in the Present Day
As the power of microprocessors continues to increase, there are fewer and fewer business computing needs that cannot be met with off-the-shelf commodity computers. It is likely that the low end of the supermicrocomputer genre will continue to be pushed upward by increasingly powerful commodity microcomputers. Fewer non-commodity systems will be sold each year, leaving fewer and fewer dollars available for non-commodity R&D and continually narrowing the performance gap between commodity microcomputers and proprietary supermicros.
As the speed of Ethernet increases to 10 gigabits per second, the differences between multiprocessor systems based on loosely coupled commodity microcomputers and those based on tightly coupled proprietary supermicro designs (like the IBM p-series) will continue to narrow and will eventually disappear.
When 10-gigabit Ethernet becomes standard equipment in commodity microcomputer servers, multiprocessor cluster or grid systems built from off-the-shelf commodity microcomputers and Ethernet switches will take over more and more computing tasks that can currently be performed only by high-end models of proprietary supermicros like the IBM p-series, further eroding the viability of the supermicro industry.
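To make the loosely coupled model concrete, the following minimal sketch (in Python, chosen only for brevity) shows the pattern such a cluster relies on: independent nodes cooperating purely by exchanging messages over the network. Localhost TCP sockets stand in here for Ethernet links between separate commodity machines, and the node count, message format, and work-splitting scheme are illustrative assumptions, not features of any real cluster product.

import socket
import threading

HOST = "127.0.0.1"  # localhost stands in for the cluster's Ethernet fabric
NODE_COUNT = 3      # simulated commodity nodes
N = 30_000          # job: sum the squares of 0..N-1, split across the nodes

def serve(srv):
    """One simulated node: accept a range, send back its partial sum."""
    with srv:
        conn, _ = srv.accept()
        with conn:
            # A single recv suffices for these tiny fixed-format messages.
            lo, hi = map(int, conn.recv(64).decode().split(","))
            conn.sendall(str(sum(i * i for i in range(lo, hi))).encode())

# Bind and listen before the coordinator sends anything, so no connect is lost.
servers = []
for _ in range(NODE_COUNT):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, 0))  # port 0: let the OS choose a free port
    srv.listen(1)
    servers.append(srv)
    threading.Thread(target=serve, args=(srv,), daemon=True).start()

# The coordinator splits the job and farms chunks out over the "network".
chunk = N // NODE_COUNT
total = 0
for idx, srv in enumerate(servers):
    lo, hi = idx * chunk, (N if idx == NODE_COUNT - 1 else (idx + 1) * chunk)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect(srv.getsockname())
        c.sendall(f"{lo},{hi}".encode())
        total += int(c.recv(64).decode())

print(total == sum(i * i for i in range(N)))  # True

Real cluster and grid systems layer scheduling, fault tolerance, and faster interconnects on top of this same message-passing idea; the point of the sketch is that nothing in it requires shared memory or proprietary hardware.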
Characteristics of Commodity Computers
A large part of the current commodity computing marketplace is based on IBM PC compatibles. This typically means systems that are capable of running Microsoft Windows, Linux, or PC-DOS/MS-DOS, without requiring special drivers.
Some of the general characteristics of a commodity computer are listed below (a short code sketch after the list shows how software can rely on them):
- Shares a base instruction set common to many different models.
- Shares an architecture (memory, I/O map and expansion capability) that is common to many different models.
- High degree of mechanical compatibility: internal components (CPU, RAM, motherboard, peripheral cards, drives) are interchangeable with other models.
- Software is widely available off the shelf.
- Compatible with most available peripherals; works with most right out of the box.
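Because these characteristics are uniform across vendors, off-the-shelf software can discover the machine it is running on through standard interfaces instead of vendor-specific ones. The short Python sketch below is an illustrative example, not part of the original article; it uses only the standard-library platform and sys modules, and the particular fields printed are an arbitrary selection.

import platform
import sys

# Report architecture details that commodity software can expect to query
# the same way on any compatible model, regardless of vendor.
print("Instruction set / machine type:", platform.machine())  # e.g. 'x86_64'
print("Processor:", platform.processor() or "unknown")
print("Operating system:", platform.system(), platform.release())
print("Pointer width:", 64 if sys.maxsize > 2**32 else 32, "bits")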
Other characteristics of today's commodity computers include:
- ATX motherboard footprint.
- Built-in interfaces for floppy drives, IDE CD-ROMs and hard drives.
- Industry-standard PCI slots for expansion.
Some characteristics that are becoming common to many commodity computers and may become part of the commodity computer definition:
- Built-in Ethernet interface.
- Built-in USB ports.
- Built-in video.
- Built-in interfaces for SATA drives.
Standards such as SCSI, FireWire, and Fibre Channel help commoditize computer systems more powerful than typical PCs. Standards such as ATCA and Carrier Grade Linux are helping to commoditize telecommunications systems. Blade servers, server farms, and computer clusters are also computer architectures that exploit commodity hardware.