Data center
An operations engineer overseeing a network operations control room of a data center.
A data center (also spelled data centre or datacenter) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.
History
Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, elevated floors, and cable trays (installed overhead or under the elevated floor). Also, old computers required a great deal of power, and had to be cooled to avoid overheating. Security was important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of client-server computing, during the 1990s, microcomputers (now called "servers") started to find their places in the old computer rooms. The availability of inexpensive networking equipment, coupled with new standards for network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time.
The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and nonstop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward the private data centers, and were adopted largely because of their practical results.
As of 2007, data center design, construction, and operation is a well-known discipline. Standard documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. Development continues in operational practice and in environmentally friendly data center design. Data centers are typically very expensive to build and maintain. For instance, Amazon.com's new 116,000 sq ft data center in Oregon is expected to cost up to $100 million.[1]
Requirements for modern data centers
Racks of telecommunications equipment in part of a data center.
IT operations are a crucial aspect of most organizational operations. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations, in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.
Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
- Operate and manage a carrier’s telecommunication network
- Provide data center based applications directly to the carrier’s customers
- Provide hosted applications for a third party to provide services to their customers
- Provide a combination of these and similar data center applications.
Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation. Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers.
Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment, and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over-construction or, perhaps worse, under-construction that fails to meet future needs.
Data center classification
The TIA-942: Data Center Standards Overview describes the requirements for data center infrastructure. The simplest is a Tier 1 data center, which is basically a server room, following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission-critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods. Another consideration is the placement of the data center in a subterranean context, for data security as well as environmental considerations such as cooling requirements.[2]
The four levels are defined, and copyrighted, by the Uptime Institute, a Santa Fe, New Mexico-based think tank and professional services organization. The levels describe the availability of data from the hardware at a location: the higher the tier, the greater the availability. The levels are:[3][4][5]
Tier 1
- Single non-redundant distribution path serving the IT equipment
- Non-redundant capacity components
- Basic site infrastructure guaranteeing 99.671% availability

Tier 2
- Fulfills all Tier 1 requirements
- Redundant site infrastructure capacity components guaranteeing 99.741% availability

Tier 3
- Fulfills all Tier 1 and Tier 2 requirements
- Multiple independent distribution paths serving the IT equipment
- All IT equipment must be dual-powered and fully compatible with the topology of the site's architecture
- Concurrently maintainable site infrastructure guaranteeing 99.982% availability

Tier 4
- Fulfills all Tier 1, Tier 2 and Tier 3 requirements
- All cooling equipment is independently dual-powered, including chillers and heating, ventilating and air conditioning (HVAC) systems
- Fault-tolerant site infrastructure with electrical power storage and distribution facilities guaranteeing 99.995% availability
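These availability percentages translate directly into permitted downtime per year. The short calculation below is purely illustrative arithmetic over the figures listed above (8,760 hours per year); it is not part of the Uptime Institute standard itself.

```python
# Rough downtime-per-year arithmetic implied by the tier availability figures above.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

tier_availability = {1: 0.99671, 2: 0.99741, 3: 0.99982, 4: 0.99995}

for tier, availability in tier_availability.items():
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"Tier {tier}: {availability:.3%} availability "
          f"=> about {downtime_hours:.1f} hours of downtime per year")
```

Run as-is, this prints roughly 28.8 hours for Tier 1, 22.7 hours for Tier 2, 1.6 hours for Tier 3, and 0.4 hours (about 26 minutes) for Tier 4.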
Physical layout
A typical server rack, commonly seen in a colocation facility.
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19 inch rack cabinets, which are usually placed in single rows forming corridors between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers to large freestanding storage silos which occupy many tiles on the floor. Some equipment, such as mainframe computers and storage devices, is often as big as the racks themselves and is placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each;[6] when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers).[7]
Local building codes may govern the minimum ceiling heights.
A bank of batteries in a large data center, used to provide power until diesel generators can start.
The physical environment of a data center is rigorously controlled:
- Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments"[8] recommends a temperature range of 16–24 °C (61–75 °F) and a humidity range of 40–55%, with a maximum dew point of 15 °C, as optimal for data center conditions.[9] The electrical power used heats the air in the data center; unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return air below the dew point. If humidity is too high, water may begin to condense on internal components; if the atmosphere is too dry, static electricity discharge can damage components, so ancillary humidification systems may add water vapor. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs. (A minimal sketch of checking sensor readings against this recommended envelope appears after this list.)
- Modern data centers try to use economizer cooling, where outside air is used to keep the data center cool. Washington state now has a few data centers that cool all of their servers using outside air 11 months out of the year. They do not use chillers or air conditioners, which creates potential energy savings in the millions of dollars.[10]
- Backup power consists of one or more uninterruptible power supplies and/or diesel generators.
- To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
- Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80–100 cm (31–39 in) underfloor void, to allow better and more uniform air distribution. The void provides a plenum for air to circulate below the floor, as part of the air conditioning system, as well as space for power cabling.
- Telcordia GR-2930, NEBS: Raised Floor Generic Requirements for Network and Data Centers, presents generic engineering requirements for raised floors that fall within the strict NEBS guidelines.
- There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringerless, stringered, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.
- Stringerless Raised Floors - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals. This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.
- Stringered Raised Floors - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.
- Structural Platforms - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.
- Data cabling is typically routed through overhead cable trays in modern data centers, but some operators still recommend cabling under the raised floor for security reasons, and because it leaves the space above the racks free for the later addition of cooling systems if that enhancement becomes necessary. Smaller or less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.
- Data centers feature fire protection systems, including passive and active design elements, as well as fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components before flames develop. This allows investigation, interruption of power, and manual fire suppression using handheld fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full-scale fire if it develops; sprinklers require 18 in (46 cm) of clearance (free of cable trays, etc.) below them. Gaseous clean agent fire suppression systems are sometimes installed to suppress a fire earlier than the sprinkler system. Passive fire protection elements include fire walls around the data center, so that a fire can be restricted to a portion of the facility for a limited time if the active fire protection systems fail or are not installed. For critical facilities these fire walls are often insufficient to protect heat-sensitive electronic equipment, however, because conventional fire wall construction is only rated for flame penetration time, not heat penetration. There are also deficiencies in the protection of vulnerable entry points into the server room, such as cable penetrations, coolant line penetrations, and air ducts. For mission-critical data centers, fireproof vaults with a Class 125 rating are necessary to meet NFPA 75[11] standards.
- Physical security also plays a large role in data centers. Physical access to the site is usually restricted to selected personnel, with controls including bollards and mantraps.[12] Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint-recognition mantraps is becoming commonplace.
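As noted in the air conditioning item above, the ASHRAE recommendations amount to a simple environmental envelope. The sketch below is illustrative only: the thresholds are the guideline figures quoted above, and the sensor readings passed in are hypothetical values, not real telemetry.

```python
# Minimal sketch: checking data center sensor readings against the ASHRAE
# envelope quoted above (16-24 degC, 40-55% relative humidity, dew point <= 15 degC).

def check_environment(temp_c, rel_humidity_pct, dew_point_c):
    """Return a list of warnings for readings outside the recommended envelope."""
    warnings = []
    if not 16 <= temp_c <= 24:
        warnings.append(f"temperature {temp_c} degC outside 16-24 degC")
    if not 40 <= rel_humidity_pct <= 55:
        warnings.append(f"relative humidity {rel_humidity_pct}% outside 40-55%")
    if dew_point_c > 15:
        warnings.append(f"dew point {dew_point_c} degC above 15 degC maximum")
    return warnings

# Hypothetical reading from a single cold-aisle sensor
print(check_environment(temp_c=26.0, rel_humidity_pct=38.0, dew_point_c=12.0))
# ['temperature 26.0 degC outside 16-24 degC', 'relative humidity 38.0% outside 40-55%']
```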
Energy use
Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building.[13] For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.[14] By 2012 the cost of power for the data center is expected to exceed the cost of the original capital investment.[15]
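A back-of-the-envelope calculation shows why electricity dominates operating costs at scale. The facility size, PUE, and electricity price below are assumptions chosen for illustration, not figures from the cited sources.

```python
# Rough annual electricity cost for a hypothetical data center.
it_load_kw = 1_000       # assumed average IT load (1 MW)
pue = 2.0                # assumed power usage effectiveness (see Energy efficiency below)
price_per_kwh = 0.10     # assumed electricity price in USD

facility_load_kw = it_load_kw * pue
annual_kwh = facility_load_kw * 24 * 365
annual_cost_usd = annual_kwh * price_per_kwh
print(f"{annual_kwh:,.0f} kWh/year, roughly ${annual_cost_usd:,.0f} per year")
# 17,520,000 kWh/year, roughly $1,752,000 per year
```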
Greenhouse gas emissions
In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint.[16] The US EPA estimates that servers and data centers were responsible for up to 1.5% of total US electricity consumption[17], or roughly 0.5% of US GHG emissions[18], in 2007. Under a business-as-usual scenario, greenhouse gas emissions from data centers are projected to more than double from 2007 levels by 2020.[19]
Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and ample renewable electricity is available, the environmental effects are more moderate. Countries with favorable conditions, such as Finland[20], Sweden[21] and Switzerland[22], are therefore trying to attract cloud computing data centers.
Energy efficiency
The most commonly used metric to determine the energy efficiency of a data center is power usage effectiveness, or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment.
Power used by support equipment, often referred to as overhead load, mainly consists of cooling systems, power delivery, and other facility infrastructure such as lighting. The average data center in the US has a PUE of 2.0[23], meaning that the facility uses one watt of overhead power for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2.[24] Some large data center operators, such as Microsoft and Yahoo!, have published projections of PUE for facilities in development; Google publishes quarterly actual efficiency performance from data centers in operation.[25]
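Expressed as a calculation, the ratio is straightforward. The kilowatt figures below are hypothetical examples chosen to reproduce the average (2.0) and state-of-the-art (roughly 1.2) values quoted above.

```python
# Power usage effectiveness: total facility power divided by IT equipment power.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1500, it_equipment_kw=750))  # 2.0 - average US facility
print(pue(total_facility_kw=900, it_equipment_kw=750))   # 1.2 - state of the art
```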
The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.
Network infrastructure
An example of "rack mounted" servers.
Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see Multihoming).
Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.
Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, etc. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.
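A minimal sketch of the off-site monitoring idea is shown below: a probe running outside the data center periodically checks that a hosted service still answers. The URL and timeout are placeholders, not a description of any particular monitoring product.

```python
# Minimal off-site health check sketch; the URL below is a placeholder.
import urllib.request
import urllib.error

def is_reachable(url, timeout_seconds=5):
    """Return True if the service answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
            return response.status < 400
    except (urllib.error.URLError, OSError):
        return False

if not is_reachable("https://example.com/healthcheck"):
    print("ALERT: service unreachable from the off-site monitor")
```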
Applications
Multiple racks of servers, showing how a data center commonly looks.
The main purpose of a data center is running the applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from enterprise software vendors. Common examples of such applications are ERP and CRM systems.
A data center may be concerned with just operations architecture or it may provide other services as well.
Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are databases, file servers, application servers, middleware, and various others.
Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center, often in conjunction with backup tapes. Backups can be taken of servers locally onto tapes; however, tapes stored on site pose a security risk and are also susceptible to fire and flooding. Larger companies may also send their backups off site for added security by backing up to a data center: encrypted backups can be sent over the Internet to another data center, where they can be stored securely.
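A minimal sketch of the "encrypt before sending off site" step is shown below. It assumes the third-party Python cryptography package; the file names are placeholders, and key management is deliberately left out.

```python
# Encrypt a backup archive before shipping it to a remote data center.
# Assumes the third-party "cryptography" package; file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must be stored securely, separately from the backup
cipher = Fernet(key)

with open("backup.tar", "rb") as src:
    ciphertext = cipher.encrypt(src.read())

with open("backup.tar.enc", "wb") as dst:
    dst.write(ciphertext)
# backup.tar.enc can now be sent over the Internet to the remote site;
# it is unreadable without the key.
```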
For disaster recovery, several large hardware vendors have developed mobile solutions that can be installed and made operational in very short time. Vendors such as Cisco Systems,[26] Sun Microsystems,[27][28] IBM, and HP have developed systems that could be used for this purpose.[29]
See also
- Central apparatus room
- Colocation center
- Disaster recovery
- Dynamic Infrastructure
- Electrical network
- HVAC
- Internet exchange point
- Network operations center
- Peering
- Server farm
- Server room
- Server sprawl
- Sun Modular Datacenter
- Telecommunications network
- Vendor-neutral data centre
- Web hosting service
References
- ↑ "Amazon Building Large Data Center in Oregon". Data Center Knowledge. 2008-11-07. http://www.datacenterknowledge.com/archives/2008/11/07/amazon-building-large-data-center-in-oregon/.
- ↑ A ConnectKentucky article mentioning Stone Mountain Data Center Complex "Global Data Corp. to Use Old Mine for Ultra-Secure Data Storage Facility" (PDF). ConnectKentucky. 2007-11-01. http://connectkentucky.org/_documents/connected_fall_FINAL.pdf. Retrieved 2007-11-01.
- ↑ A definition from Webopedia "Data Center Tiers". Webopedia. 2010-02-13. http://www.webopedia.com/TERM/D/data_center_tiers.html. Retrieved 2010-02-13.
- ↑ A document from the Uptime Institute describing the different tiers (click through the download page) "Data Center Site Infrastructure Tier Standard: Topology" (PDF). Uptime Institute. 2010-02-13. http://uptimeinstitute.org/index.php?option=com_docman&task=doc_download&gid=82. Retrieved 2010-02-13.
- ↑ The rating guidelines from the Uptime Institute "Data Center Site Infrastructure Tier Standard: Topology" (PDF). Uptime Institute. 2010-02-13. http://professionalservices.uptimeinstitute.com/UIPS_PDF/TierStandard.pdf. Retrieved 2010-02-13.
- ↑ "Google Container Datacenter Tour (video)". http://www.youtube.com/watch?v=zRwPSFpLX8I.
- ↑ "Walking the talk: Microsoft builds first major container-based data center". http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9075519. Retrieved 2008-09-22.
- ↑ "ASHRAE's "Thermal Guidelines for Data Processing Environments"" (PDF). http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf.
- ↑ "ServersCheck's Blog on Why Humidity Monitoring". July 1, 2008. http://www.serverscheck.com/blog/2008/07/why-monitor-humidity-in-computer-rooms.html.
- ↑ "tw telecom and NYSERDA Announce Co-location Expansion". Reuters. 2009-09-14. http://www.reuters.com/article/pressRelease/idUS141369+14-Sep-2009+PRN20090914.
- ↑ Fixen, Edward L. and Vidar S. Landa,"Avoiding the Smell of Burning Data," Consulting-Specifying Engineer, May 2006, Vol. 39 Issue 5, p47-51
- ↑ 19 Ways to Build Physical Security Into a Data Center
- ↑ "Data Center Energy Consumption Trends". U.S. Department of Energy. http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html. Retrieved 2010-06-10.
- ↑ J Koomey, C. Belady, M. Patterson, A. Santos, K.D. Lange. Assessing Trends Over Time in Performance, Costs, and Energy Use for Servers Released on the web August 17th, 2009.
- ↑ "Quick Start Guide to Increase Data Center Energy Efficiency". U.S. Department of Energy. http://www1.eere.energy.gov/femp/pdfs/data_center_qsguide.pdf. Retrieved 2010-06-10.
- ↑ "Smart 2020: Enabling the low carbon economy in the information age". The Climate Group for the Global e-Sustainability Initiative. http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf. Retrieved 2008-05-11.
- ↑ "Report to Congress on Server and Data Center Energy Efficiency". U.S. Environmental Protection Agency ENERGY STAR Program. http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf.
- ↑ A calculation of data center electricity burden cited in the Report to Congress on Server and Data Center Energy Efficiency and electricity generation contributions to green house gas emissions published by the EPA in the Greenhouse Gas Emissions Inventory Report Retrieved 2010-06-08.
- ↑ "Smart 2020: Enabling the low carbon economy in the information age". The Climate Group for the Global e-Sustainability Initiative. http://www.smart2020.org/_assets/files/03_Smart2020Report_lo_res.pdf. Retrieved 2008-05-11.
- ↑ Finland - First Choice for Siting Your Cloud Computing Data Center. Accessed 4 August 2010.
- ↑ Stockholm sets sights on data center customers. Accessed 4 August 2010.
- ↑ Swiss Carbon-Neutral Servers Hit the Cloud. Accessed 4 August 2010.
- ↑ "Report to Congress on Server and Data Center Energy Efficiency". U.S. Environmental Protection Agency ENERGY STAR Program. http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf.
- ↑ "Data Center Energy Forecast". Silicon Valley Leadership Group. https://microsite.accenture.com/svlgreport/Documents/pdf/SVLG_Report.pdf.
- ↑ "Google Efficiency Update". Data Center Knowledge. http://www.datacenterknowledge.com/archives/2009/10/15/google-efficiency-update-pue-of-1-22/. Retrieved 2010-06-08.
- ↑ "Info and video about Cisco's solution". Datacentreknowledge. May 15, 2007. http://www.datacenterknowledge.com/archives/2008/May/15/ciscos_mobile_emergency_data_center.html. Retrieved 2008-05-11.
- ↑ "Technical specs of Sun's Blackbox". http://www.sun.com/products/sunmd/s20/specifications.jsp. Retrieved 2008-05-11.
- ↑ An English Wikipedia article on Sun's modular datacenter
- ↑ Kraemer, Brian (June 11, 2008). "IBM's Project Big Green Takes Second Step". ChannelWeb. http://www.crn.com/hardware/208403225. Retrieved 2008-05-11.
External links
- Lawrence Berkeley Lab - Research, development, demonstration, and deployment of energy-efficient technologies and practices for data centers
- The Uptime Institute - Organization that defines data center reliability and conducts site certifications.