Blade server

IBM HS20 blade server. Two bays for 2.5" SCSI hard drives can be seen in the upper left area of the image.

Blade servers are self-contained computer servers, designed for high density. Whereas a standard rack-mount server can function with (at minimum) just a power cord and a network cable, blade servers have many components removed to save space, reduce power consumption and address other considerations, while still having all the functional components to be considered a computer. A blade enclosure, which can hold multiple blade servers, provides services such as power, cooling, networking, various interconnects and management, though different blade providers have differing principles about what should and should not be included in the blade itself (and sometimes in the enclosure altogether). Together these form the blade system.

In a standard server-rack configuration, 1U (one rack unit, 19" wide and 1.75" tall) is the minimum possible size of any equipment. The principal benefit of, and the reason behind the push towards, blade computing is that components are no longer restricted to these minimum size requirements. Since the most common computer rack form-factor is 42U high, the number of discrete computer devices directly mounted in a rack is limited to 42. Blades do not have this limitation; densities of up to 84 discrete servers per rack are achievable with the current generation of blade systems.
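
As a rough back-of-the-envelope illustration of these densities, the following Python sketch compares a rack of 1U servers with a rack of blade enclosures. The 42U rack height and the 84-blade figure come from the text above; the 3U-enclosure-with-6-blades split is only an assumed example layout, not a specific product.

    # Hypothetical density comparison; the enclosure dimensions below are
    # assumptions chosen only to reproduce the densities quoted above.
    RACK_HEIGHT_U = 42            # most common rack form-factor (42U)

    # Conventional rack-mount servers: one server per 1U slot.
    servers_1u = RACK_HEIGHT_U // 1

    # Blade system: assume a 3U enclosure holding 6 blades (illustrative only).
    ENCLOSURE_HEIGHT_U = 3
    BLADES_PER_ENCLOSURE = 6
    servers_blade = (RACK_HEIGHT_U // ENCLOSURE_HEIGHT_U) * BLADES_PER_ENCLOSURE

    print("1U servers per rack:", servers_1u)        # 42
    print("Blade servers per rack:", servers_blade)  # 84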

Server blade

In the purest definition of computing (a Turing machine, simplified here), a computer requires only:

  • memory to read input commands and data
  • a processor to perform commands manipulating that data, and
  • backing storage to store the results.

Today (in contrast with the first general-purpose computer) these are implemented as electrical components requiring (DC) power, which produces heat. Other components such as hard drives, power supplies, storage and network connections, and basic I/O (such as keyboard, video, mouse and serial ports) only support the basic computing function, yet add bulk, heat and complexity, not to mention moving parts that are more prone to failure than solid-state components.

In practice, these components are all required if the computer is to perform real-world work. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (e.g. DC power supply), virtualized (e.g. iSCSI storage, remote console over IP) or discarded entirely (e.g. serial ports). The blade itself becomes vastly simpler, hence smaller and (in theory) cheaper to manufacture.

Blade enclosure

The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade computers require components that are bulky, hot and space-inefficient, and duplicated across many computers that may or may not be performing at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilization is more efficient. The specifics of which services are provided and how vary by vendor.

HP ProLiant blade enclosure (full of blades), with two 3U UPS units below.

Power

Computers operate over a range of DC voltages, yet power is delivered from utilities as AC, and at higher voltages than required within the computer. Converting AC utility power to the DC voltages the computer needs requires one or more power supply units (PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers have redundant power supplies, again adding to the bulk and heat output of the design.

The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may be in the form of a power supply in the enclosure or a dedicated separate PSU supplying DC to multiple enclosures.[1][2] This setup reduces the number of PSUs required to provide a resilient power supply.
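
A simple sketch, using assumed figures rather than any vendor's specifications, of why a shared enclosure power domain needs fewer PSUs than per-server redundant supplies:

    # Assumed figures for illustration only.
    SERVERS = 16                     # hypothetical count of servers/blades

    # Standalone servers: each carries its own redundant pair of PSUs.
    psus_standalone = SERVERS * 2    # 32 PSUs in total

    # Blade enclosure: one shared power domain sized N+1 (assumed policy),
    # so any single PSU can fail without interrupting the blades.
    PSUS_FOR_FULL_LOAD = 3           # assumed number needed to carry the load
    psus_enclosure = PSUS_FOR_FULL_LOAD + 1

    print("PSUs for 16 standalone servers:", psus_standalone)   # 32
    print("PSUs for one 16-blade enclosure:", psus_enclosure)   # 4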

Cooling

During operation, electrical and mechanical components produce heat, which must be displaced to ensure the proper functioning of the components. In blade enclosures, as in most computing systems, heat is removed with fans.

A frequently underestimated problem when designing high-performance computer systems is the conflict between the amount of heat a system generates and the ability of its fans to remove that heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosure designs feature high-speed, adjustable fans and control logic that tune the cooling to the system's requirements.[3][4]

At the same time, the increased density of blade server configurations can still result in higher overall demands for cooling when a rack is populated at over 50%. This is especially true with early generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers.
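
The point about absolute cooling capacity can be illustrated with a rough heat-load estimate. The per-server wattages below are hypothetical figures, not measurements of any particular product; essentially all electrical power a server draws ends up as heat that the cooling system must remove.

    # Hypothetical per-server power draws (and therefore heat output).
    WATTS_PER_1U_SERVER = 350    # assumed draw of a standard 1U server
    WATTS_PER_BLADE = 250        # assumed draw of one blade (shared PSUs/fans)

    heat_1u_rack = 42 * WATTS_PER_1U_SERVER     # full 42U rack of 1U servers
    heat_blade_rack = 84 * WATTS_PER_BLADE      # fully populated blade rack

    print("Rack of 1U servers: ~%.1f kW of heat" % (heat_1u_rack / 1000))    # ~14.7 kW
    print("Rack of blades:     ~%.1f kW of heat" % (heat_blade_rack / 1000)) # ~21.0 kW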

Networking

Computers are increasingly being produced with high-speed, integrated network interfaces, and most are expandable to allow for the addition of connections that are faster, more resilient and run over different media (copper and fiber). Adding such interfaces requires extra engineering effort in the design and manufacture of the blade, consumes space both for the installed interfaces and for the capacity to install more (empty expansion slots), and hence adds complexity. High-speed network topologies require expensive, high-speed integrated circuits and media, while most computers do not utilize all the available bandwidth.

The blade enclosure provides one or more network buses to which the blade will connect, and either presents these ports individually in a single location (versus one in each computer chassis), or aggregates them into fewer ports, reducing the cost of connecting the individual devices. These may be presented in the chassis itself, or in networking blades[5][6].

There are two types of networking module available for a blade chassis: switching and pass-through.
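
The practical difference between the two module types is largely one of cabling and port count. The sketch below uses assumed blade, NIC and uplink counts purely for illustration:

    # Assumed enclosure configuration, for illustration only.
    BLADES = 16              # blades in the enclosure
    NICS_PER_BLADE = 2       # network ports per blade
    UPLINKS_PER_SWITCH = 4   # uplinks on an assumed embedded switch module

    # Pass-through module: every blade port is presented individually,
    # so each needs its own cable to an external switch.
    passthrough_cables = BLADES * NICS_PER_BLADE

    # Switch modules: blade ports terminate inside the chassis; only the
    # aggregated uplinks leave it (one module per NIC for redundancy).
    switch_cables = NICS_PER_BLADE * UPLINKS_PER_SWITCH

    print("Cables with pass-through modules:", passthrough_cables)  # 32
    print("Cables with switch modules:", switch_cables)             # 8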

Storage

While computers typically need hard disks to store the operating system, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, SCSI, DAS, Fibre Channel and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.

The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade. This may have higher processor density or better reliability than systems having individual disks on each blade.
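
Conceptually, a disk-free blade's identity then lives in the storage network rather than on local disks. The following sketch is not any vendor's API; it simply models, under assumed names and values, how a boot volume on the SAN might be remapped from a failed blade to its replacement:

    # Conceptual model only; class, field names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class SanBootMapping:
        bay: int             # physical slot in the blade enclosure
        initiator_id: str    # hypothetical Fibre Channel WWN or iSCSI IQN
        boot_lun: str        # SAN volume holding the operating system image

    boot_table = [
        SanBootMapping(1, "iqn.2001-04.com.example:blade01", "LUN-OS-01"),
        SanBootMapping(2, "iqn.2001-04.com.example:blade02", "LUN-OS-02"),
    ]

    def replace_blade(table, bay, new_initiator_id):
        """Repoint an existing boot LUN at replacement hardware in the same bay."""
        for entry in table:
            if entry.bay == bay:
                entry.initiator_id = new_initiator_id

    # A failed blade in bay 1 is swapped; its OS image on the SAN is untouched.
    replace_blade(boot_table, bay=1, new_initiator_id="iqn.2001-04.com.example:blade01b")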

Other blades

Since the blade enclosure provides a standard method for delivering basic services to computer devices, other types of devices can also make use of it. Blades providing switching, routing, storage, SAN and Fibre Channel access can be inserted into the enclosure to provide these services to all members of the enclosure.

Storage blades can also be used where additional local storage is desired.[7][8]

Uses

Blade servers are ideal for specific purposes such as web hosting and cluster computing. Individual blades are typically hot-swappable. As more processing power, memory and I/O bandwidth are added to blade servers, they are being used for larger and more diverse workloads.

Although blade server technology in theory allows for open, cross-vendor solutions, at this stage of development of the technology, users find there are fewer problems when using blades, racks and blade management tools from the same vendor.

Eventual standardization of the technology might result in more choices for consumers; increasing numbers of third-party software vendors are now entering this growing field.

Blade servers are not, however, the answer to every computing problem. They may best be viewed as a form of productized server farm that borrows from mainframe packaging, cooling, and power supply technology. For large problems, server farms of blade servers are still necessary, and because of blade servers' high power density, can suffer even more acutely from the HVAC problems that affect large conventional server farms.

History

Complete microcomputers were placed on cards and packaged in standard 19-inch racks in the 1970s soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer control systems. Programs were stored in EPROM on early models and were limited to a single function with a small realtime executive.

The VMEbus architecture (ca. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards providing I/O, memory, or additional computing. The PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for the then-emerging Peripheral Component Interconnect (PCI) bus, called CompactPCI. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one board in charge, one master board coordinating the operation of the entire system.

PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in September 2001 (PICMG specifications). This provided the first open architecture for a multi-server chassis.

PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability and dense computing platform with extended product life (10+ years). While AdvancedTCA system and board pricing is typically higher than that of blade servers, AdvancedTCA suppliers claim that low operating expenses and total cost of ownership can make AdvancedTCA-based solutions a cost-effective alternative for many building blocks of the next-generation telecom network.

The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed a complete server, with its operating system and applications, to be packaged on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. Reduced space consumption is the most obvious benefit of this packaging, but additional efficiency benefits have become clear in power, cooling, management, and networking due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per-server-box basis.

Houston-based RLX Technologies, consisting mostly of former Compaq Computer Corp. employees, shipped the first blade server in May 2001,[9] followed soon after by Massachusetts-based Egenera, Inc., which shipped its first "BladeFrame" virtualized blade server in October 2001. RLX was acquired by Hewlett-Packard (HP) in 2005 and the RLX product line is no longer sold, making the Egenera BladeFrame the longest-selling blade server on the market today.

According to research firm IDC, the major players in the blade market are tech giants HP, IBM, Sun and Egenera. Other companies competing in this market are Supermicro, Hitachi, Rackable (Hybrid Blade), Verari Systems, Dell and Intel.

References