Mainframe computer
Mainframes (often colloquially referred to as Big Iron[1]) are computers used mainly by large organizations for critical applications, typically bulk data processing such as censuses, industry and consumer statistics, enterprise resource planning (ERP), and financial transaction processing.
The term probably originated with the early mainframes, which were housed in enormous, room-sized metal boxes or frames.[2] Later the term was used to distinguish high-end commercial machines from less powerful units, which were often contained in smaller packages.
Today, in practice, the term usually refers to computers compatible with the IBM System/360 line, first introduced in 1964. (The IBM System z10 is the latest incarnation.) Large systems not based on the System/360 are instead referred to as either "servers" or "supercomputers", although "server", "supercomputer" and "mainframe" are not synonymous (see client-server).
Some non-System/360-compatible systems derived from or compatible with older (pre-Web) server technology may also be considered mainframes. These include the Burroughs large systems, the UNIVAC 1100/2200 series systems, and the pre-System/360 IBM 700/7000 series. Most large-scale computer system architectures were firmly established in the 1960s, and most large computers remained based on architectures established during that era until the advent of Web servers in the 1990s. (Notably, the first Web server outside Switzerland ran on an IBM mainframe at the Stanford Linear Accelerator Center in 1991. See History of the World Wide Web for details.)
Several minicomputer operating systems and architectures arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (Unix arose as a minicomputer operating system; it has since scaled up to acquire some mainframe characteristics.)
Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.
Description
Modern mainframe computers have abilities not so much defined by their single-task computational speed (usually measured in MIPS, millions of instructions per second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility for older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation. Software upgrades are non-disruptive only when Parallel Sysplex is in place with true workload sharing, so that one system can take over another's applications while it is being updated. As of 2007, several IBM mainframe installations had delivered over a decade of continuous business service, with hardware upgrades that did not interrupt service.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, since they are used in applications where downtime would be costly or catastrophic. Reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers, although proper planning and implementation are required to exploit these features.
In the 1960s, most mainframes had no interactive interface. They accepted input on punched cards, paper tape, or magnetic tape and operated solely in batch mode to support back-office functions such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes had acquired interactive user interfaces and operated as timesharing computers, supporting hundreds or thousands of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Many mainframes supported graphical terminals (and terminal emulation) by the 1980s, if not earlier. Nowadays most mainframes have partially or entirely phased out classic terminal access in favor of Web user interfaces.
Historically, mainframes acquired their name in part because of their substantial size and their requirements for specialized heating, ventilation and air conditioning (HVAC) and electrical power. Those requirements were largely eliminated by the mid-1990s, with CMOS mainframe designs replacing the older bipolar technology. In a major reversal, IBM now touts the mainframe's ability to reduce data center energy costs for power and cooling, and its reduced physical space requirements compared to server farms.[3]
Characteristics of mainframes
Nearly all mainframes have the ability to run (or host) multiple operating systems and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers, reducing management and administrative costs while providing greatly improved scalability and reliability.
Mainframes can add or hot-swap system capacity non-disruptively and granularly. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer three levels of virtualization: logical partitions (LPARs, via the PR/SM facility), virtual machines (via the z/VM operating system), and the operating systems themselves (notably z/OS, with its key-protected address spaces and goal-oriented workload management, but also Linux and, prospectively, OpenSolaris). This virtualization is so thorough, so well established, and so reliable that most IBM mainframe customers run no more than two machines: one in their primary data center and one in their backup data center, fully active, partially active, or on standby, in case of a catastrophe affecting the first building. All test, development, training, and production workloads for all applications and all databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
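As a rough illustration of this consolidation, the following is a minimal sketch that models a hypothetical two-mainframe installation hosting LPARs and guest operating-system images and counts how many isolated environments replace standalone servers. It is not based on any IBM tooling or published configuration; every name and count is an invented assumption.

```python
# Hypothetical sketch: counting isolated environments in an assumed
# two-mainframe installation (primary plus backup site). All names and
# figures are illustrative assumptions, not IBM specifications.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LPAR:                       # logical partition (PR/SM level)
    name: str
    guests: List[str] = field(default_factory=list)  # z/VM guests or one native OS image

@dataclass
class Mainframe:
    name: str
    lpars: List[LPAR] = field(default_factory=list)

    def environment_count(self) -> int:
        # Each guest (or native OS image) is one isolated environment that
        # might otherwise occupy a standalone server.
        return sum(max(len(lpar.guests), 1) for lpar in self.lpars)

primary = Mainframe("primary-site", [
    LPAR("PROD-ZOS", ["z/OS production image"]),
    LPAR("DEV-VM", [f"Linux dev guest {i}" for i in range(40)]),    # assumed 40 guests
    LPAR("TEST-VM", [f"Linux test guest {i}" for i in range(25)]),  # assumed 25 guests
])
backup = Mainframe("backup-site", [LPAR("STANDBY-ZOS", ["z/OS standby image"])])

total = primary.environment_count() + backup.environment_count()
print(f"Isolated environments across both machines: {total}")
```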
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual.[4] Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.[citation needed]
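The channel concept, delegating device I/O to subsidiary processors so that the central processor stays busy with in-memory computation, can be loosely sketched in ordinary Python, with a thread pool standing in for channel processors. This is a conceptual analogy only; real channel subsystems are dedicated hardware, and the function names and counts below are invented for the example.

```python
# Conceptual analogy of channel-based I/O offload: the "CPU" submits I/O
# requests to a pool of "channel" workers and keeps computing while the
# transfers complete. Purely illustrative; not how channel hardware works.

import time
from concurrent.futures import ThreadPoolExecutor

def channel_io(request_id: int) -> str:
    time.sleep(0.05)                          # stand-in for a device transfer
    return f"I/O request {request_id} complete"

def cpu_work(units: int) -> int:
    return sum(i * i for i in range(units))   # stand-in for in-memory computation

with ThreadPoolExecutor(max_workers=4) as channels:     # four assumed "channels"
    pending = [channels.submit(channel_io, i) for i in range(8)]
    result = cpu_work(1_000_000)              # CPU proceeds while I/O is in flight
    for future in pending:
        print(future.result())
print("CPU result:", result)
```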
Mainframe return on investment (ROI), as with any other computing platform, depends on the platform's ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective. Hewlett-Packard and Dell take that view at least at times, as do a few independent analysts. Sun Microsystems used to take that view but, beginning in mid-2007, started promoting its new partnership with IBM, including probable support for the company's OpenSolaris operating system running on IBM mainframes. The general consensus (held by Gartner[citation needed] and other independent analysts) is that the modern mainframe often has unique value and superior cost-effectiveness, especially for large-scale enterprise computing. Hewlett-Packard also continues to manufacture what is arguably its own mainframe, the NonStop system originally created by Tandem Computers. Logical partitioning is now found in many UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe's design accomplishments.
Mainframes also have execution integrity characteristics that support fault-tolerant computing. The z900, z990, System z9 and System z10 servers execute each instruction twice,[citation needed] compare results, and shift workloads "in flight" to functioning processors, including spares, without any impact on applications or users. This feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
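The lock-stepping idea, running the same work twice and accepting a result only when both copies agree, can be illustrated with a short software sketch. The retry-on-a-spare logic below is a loose analogy under assumed semantics, not a description of the actual hardware recovery mechanism, and the fault-injection rate is invented for the example.

```python
# Illustrative software analogy of lock-stepped execution: run the same
# operation on two "processors", accept the result only if they agree, and
# retry on a spare pair if they do not. Not a model of real z hardware.

import random
from typing import Callable

def flaky_processor(op: Callable[[], int], error_rate: float) -> int:
    result = op()
    if random.random() < error_rate:           # inject a transient fault
        result += 1
    return result

def lockstep_execute(op: Callable[[], int], spare_retries: int = 2) -> int:
    for _ in range(spare_retries + 1):
        a = flaky_processor(op, error_rate=0.05)
        b = flaky_processor(op, error_rate=0.05)
        if a == b:                              # results match: accept them
            return a
        # mismatch detected: shift the work to a spare pair and retry
    raise RuntimeError("no agreeing processor pair available")

print(lockstep_execute(lambda: 2 + 2))          # almost always prints 4
```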
Despite these differences, the IBM mainframe, in particular, is still a general purpose business computer in terms of its support for a wide variety of popular operating systems, middleware, and applications.
Market
As of early 2006, IBM mainframes dominated the mainframe market with well over 90% market share. Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. Hitachi co-developed the zSeries z800 with IBM to share expenses. Hewlett-Packard sells its NonStop systems, which it acquired with Tandem Computers. Groupe Bull's DPS, Fujitsu-Siemens' BS2000, and Fujitsu-ICL's VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain nominal mainframe hardware businesses in their home Japanese market.
The amount of vendor investment in mainframe development varies with market share. Unisys, HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather than custom processors in order to reduce development expenses, and they have also cut back their mainframe software development. In contrast, IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors, and IBM is rapidly expanding its software business, including its mainframe software portfolio, in pursuit of additional profits.[5][6]
Platform Solutions Inc. (PSI), which was spun off from former plug-compatible mainframe vendor Amdahl Corporation in January 1999 (PSI history, retrieved 2008-04-24), markets Itanium-based servers compatible with IBM System z. PSI and IBM are engaged in a series of lawsuits: IBM alleges that PSI violated its patents and refuses to license its software on PSI systems, while PSI alleges that IBM is violating anti-trust laws. In October 2007, PSI additionally filed a complaint with the EU concerning alleged IBM anti-competitive behavior in the European mainframe market.[7]
History
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. The group was first known as "IBM and the Seven Dwarfs": IBM plus Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA. Later, as the group shrank, it was referred to as IBM and the BUNCH (Burroughs, UNIVAC, NCR, Control Data and Honeywell). IBM's dominance grew out of its 700/7000 series and, later, the development of the System/360 mainframes. The latter architecture has continued to evolve into the current zSeries/z9 mainframes which, along with the Burroughs (now Unisys) MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the Strela is an example of an independently designed Soviet computer.
Shrinking demand and tough competition caused a shakeout in the market in the 1970s and 1980s: RCA sold out to UNIVAC and GE left the business in the early 1970s; Honeywell was later bought out by Bull; UNIVAC became a division of Sperry, which merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offered local users much greater control over their own systems, given the IT policies and practices of the time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a consensus among industry analysts that the mainframe was a dying market, as mainframe platforms were increasingly replaced by personal computer networks.
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software, as well as the size and throughput of databases. Another factor currently increasing mainframe use is the development of the Linux operating system, which can run on many mainframe systems, typically in virtual machines. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high-volume online transaction processing databases for one billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.).
Mainframes vs. supercomputers
The distinction between supercomputers and mainframes is not a hard and fast one, but supercomputers generally focus on problems which are limited by calculation speed while mainframes focus on problems which are limited by input/output and reliability ("throughput computing") and on solving multiple business problems concurrently (mixed workload). The differences and similarities include:
- Both types of systems offer parallel processing. Supercomputers typically expose it to the programmer in complex ways, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently (see the sketch following this list).
- Supercomputers are optimized for complicated computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers, and insurance business or payroll processing applications are more suited to mainframes.
- Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.
- Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power.
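A small numerical sketch can make the first point concrete: for many independent transactions, adding processors scales throughput almost linearly, whereas a single tightly coupled computation is bounded by its serial fraction. Amdahl's law is used here purely as an illustrative model, and the 5% serial fraction is an invented assumption, not a measured figure.

```python
# Contrast between throughput scaling (many independent tasks, as on a
# mainframe) and single-task speedup limited by a serial fraction
# (Amdahl's law). The 5% serial fraction is an illustrative assumption.

def amdahl_speedup(processors: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for n in (1, 8, 16, 32, 64):
    throughput = float(n)                     # independent tasks: roughly linear
    single_task = amdahl_speedup(n, 0.05)     # one task with 5% serial code
    print(f"{n:3d} processors: throughput x{throughput:5.1f}, "
          f"single task x{single_task:5.1f}")
```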
There has been some blurring of the term "mainframe," with some PC and server vendors referring to their systems as "mainframes" or "mainframe-like." This is not widely accepted and the market generally recognizes that mainframes are genuinely and demonstrably different.
Statistics
- Historically, 85% of all mainframe programs were written in the COBOL programming language. The remainder included a mix of PL/I (about 5%), assembly language (about 7%), and miscellaneous other languages. eWeek estimates that millions of lines of net new COBOL code are still added each year, and that there are nearly one million COBOL programmers worldwide, with growing numbers in emerging markets. Even so, COBOL is decreasing as a percentage of the total mainframe lines of code in production because Java, C, and C++ are all growing faster.
- Mainframe COBOL has recently acquired numerous Web-oriented features, such as XML parsing, with PL/I following close behind in adopting modern language features.
- 90% of IBM's mainframes have CICS transaction processing software installed.[8] Other software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere Application Server middleware.
- As of 2004, IBM claimed over 200 new (21st century) mainframe customers — customers that had never previously owned a mainframe. Many are running Linux, some exclusively. There are new z/OS customers as well.
- In May 2006, IBM claimed that over 1,700 mainframe customers were running Linux. Nomura Securities of Japan, which spoke at LinuxWorld in 2006, is one of the largest publicly known examples, with over 200 IFLs (Integrated Facility for Linux processors) in operation replacing rooms full of distributed servers.
- Most mainframes run continuously at over 70% CPU utilization. A 90% figure is typical, and modern mainframes tolerate sustained periods of 100% CPU utilization, queuing work according to business priorities without disrupting ongoing execution (see the sketch following this list).
- Mainframes have a historical reputation for being "expensive," but the modern reality is much different. As of late 2006, it is possible to buy and configure a complete IBM mainframe system (with software, storage, and support), under standard commercial use terms, for about $50,000 (U.S.). The price of z/OS starts at about $1,500 (U.S.) per year, including 24x7 telephone and Web support.[9]
- Typically, a mainframe is repaired without being shut down; memory, storage, and processor modules can be added or hot-swapped while the system is running. It is not unusual for a mainframe to run continuously for six months or more at a stretch.
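As a loose illustration of the priority-based queuing mentioned above, the sketch below keeps a saturated "processor" busy while less important work waits its turn. The job names and priority values are invented for the example and do not correspond to any real workload manager.

```python
# Toy illustration of running at full utilization while dispatching work in
# business-priority order. Job names and priorities are invented; this is
# not a model of any real workload manager.

import heapq

# (priority, job) pairs: a lower number means more important work
work_queue = [(1, "online banking transaction"),
              (3, "nightly batch billing"),
              (2, "report generation"),
              (1, "ATM authorization")]
heapq.heapify(work_queue)

while work_queue:                          # the "CPU" never idles while work remains
    priority, job = heapq.heappop(work_queue)
    print(f"dispatching priority {priority}: {job}")
```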
Speed and performance
The CPU speed of mainframes has historically been measured in millions of instructions per second (MIPS). MIPS have been used as an oversimplified comparative rating of the speed and capacity of mainframes. The smallest System z9 IBM mainframes today run at about 26 MIPS and the largest System z10 at about 30,657 MIPS — a 1 to 1179 performance capacity ratio. IBM's Parallel Sysplex technology can join up to 32 of these systems, making them behave like a single, logical computing facility of as much as about 981,024 MIPS.[10]
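The capacity figures quoted above follow from simple arithmetic; the short check below merely restates the article's own numbers.

```python
# Reproducing the article's capacity arithmetic from its quoted figures.
smallest_z9_mips = 26
largest_z10_mips = 30_657
sysplex_members = 32

ratio = largest_z10_mips / smallest_z9_mips
print(f"performance capacity ratio: 1 to {ratio:.0f}")                          # about 1 to 1179
print(f"32-way Parallel Sysplex: {sysplex_members * largest_z10_mips:,} MIPS")  # 981,024 MIPS
```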
The MIPS measurement has long been known to be misleading and has often been parodied as "Meaningless Indicator of Processor Speed." The complex CPU architectures of modern mainframes have reduced the relevance of MIPS ratings to the actual number of instructions executed. Likewise, modern "balanced performance" system designs focus both on CPU power and on I/O capacity, and virtualization capabilities make comparative measurements even more difficult. See benchmark (computing) for a brief discussion of the difficulties in benchmarking such systems. IBM has long published a set of LSPR (Large System Performance Reference) ratio tables for mainframes that take into account different types of workloads and are a more representative measurement; however, these comparisons are not available for non-IBM systems. It takes a fair amount of work (and perhaps guesswork) for users to determine what type of workload they have and then apply only the LSPR values most relevant to them. Also, IBM does not measure all workloads on all possible configurations, so some estimates are inaccurate: current machines can have up to 64 CPUs, but LSPR has not measured any configuration over 32, a matter of cost rather than negligence.
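To illustrate how a user might "apply only the LSPR values most relevant to them," the sketch below blends workload-specific capacity ratios by a site's workload mix. All ratio and mix values are invented for the example and are not taken from any published LSPR table.

```python
# Hypothetical example of weighting workload-specific capacity ratios by a
# site's workload mix. All numbers are invented, not real LSPR values.

lspr_ratios = {"online transactions": 1.00,   # capacity relative to a base machine
               "batch": 1.15,
               "mixed/other": 1.05}

workload_mix = {"online transactions": 0.60,  # fraction of the site's work
                "batch": 0.30,
                "mixed/other": 0.10}

weighted = sum(lspr_ratios[w] * share for w, share in workload_mix.items())
print(f"blended capacity ratio for this site: {weighted:.3f}")
```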
To give some idea of real world experience, it is typical for a single mainframe CPU to execute the equivalent of 50, 100, or even more distributed processors' worth of business activity, depending on the workloads. Merely counting processors to compare server platforms is extremely perilous.
References
- ^ IBM preps big iron fiesta. The Register (July 20, 2005).
- ^ Ebbers, Mike (2006). Introduction to the New Mainframe: z/OS Basics (pdf). IBM International Technical Support Organization. Retrieved on 2007-06-01.
- ^ John Shedletsky (2007-11-20). Setting the record straight on mainframe TCO. IBM. Retrieved on 2008-04-10.
- ^ Largest Commercial Database in Winter Corp. TopTen™ Survey Tops One Hundred Terabytes. Press release. Retrieved on 2008-05-16.
- ^ IBM Opens Latin America's First Mainframe Software Center. Enterprise Networks and Servers (August 2007).
- ^ IBM Helps Clients Modernize Applications on the Mainframe. IBM (November 7, 2007).
- ^ Litigation Status on Platform Solutions, Inc. Anti-Trust Claims Against IBM. Platform Solutions Inc. (January 22, 2007).
- ^ CICS-An Introduction. IBM. Retrieved on 2006-10-22.
- ^ My Personal Mainframe?. The Mainframe Blog. Retrieved on 2006-11-30.
- ^ The 981,024 MIPS figure assumes 32 maximally configured System z10 Enterprise Class (i.e. Model 764) machines with all 64 central processors on each machine allocated to a single z/OS 1.9 (or higher) LPAR. A total of 32 such LPARs results in the cited MIPS figure (32 multiplied by 30,657). This figure is approximate and is current as of late March, 2008.
External links
- IBM Mainframe portal
- IBM eServer zSeries mainframe servers
- Univac 9400, a mainframe from the 1960s, still in use in a German computer museum