A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation. Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, echoing the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".
Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. As of May 2010, the Cray Jaguar is the fastest supercomputer in the world.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some being off-the-shelf units and others being custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors such as NVIDIA Tesla GPGPUs, AMD GPUs, the IBM Cell, and FPGAs. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of airplanes in wind tunnels, the detonation of nuclear weapons, and nuclear fusion research). A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires semi-infinite computing resources.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to address the remaining bottlenecks.
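Amdahl's law can be stated as follows (a standard formulation, not spelled out in the original text): if a fraction P of a program's work can be parallelized and the remaining fraction 1 − P is inherently serial, the speedup on N processors is

$$ S(N) = \frac{1}{(1 - P) + \dfrac{P}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - P}. $$

Even a small serial fraction therefore caps the achievable speedup, which is why so much design effort goes into eliminating serialization.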
Technologies developed for supercomputers include:
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
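As a minimal illustration (not from the original article; the function and array names are hypothetical), the following C loop is the kind of independent element-wise computation that vectorizing compilers map onto SIMD instructions:

```c
#include <stddef.h>

/* "saxpy": y = a*x + y over n elements. Each iteration is independent,
 * so an auto-vectorizing compiler (e.g. gcc or clang at -O3) may process
 * several elements per SIMD instruction. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

On a classic vector supercomputer an analogous loop would be executed by dedicated vector hardware rather than packed SIMD registers, but the programming pattern is the same.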
Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TeraFLOPS. The applications to which this power could be applied were limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
The current Top500 list (from May 2010) has 3 supercomputers based on GPGPUs. In particular, the number 2 supercomputer is Nebulae, built by Dawning in China.[1]
Supercomputers today most often use variants of Linux.[2]
Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos, or Linux.
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI are used for loosely connected clusters, and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. The new massively parallel GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL.
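As a minimal sketch of the message-passing style described above (not taken from the original article; the partial-sum computation is purely illustrative), each MPI rank computes a local value and rank 0 collects the total:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    double local = (double)rank;   /* stand-in for a real partial result */
    double total = 0.0;

    /* Combine the per-rank results across the interconnect. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Compiled with an MPI wrapper such as mpicc and launched with mpirun, the same program runs on every node; minimizing the data exchanged in calls like MPI_Reduce is exactly the interconnect-optimization effort mentioned above.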
Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf, WareWulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open source community that often creates disruptive technology.
Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
As of November 2009, the fastest supercomputer in the world is the Cray XT5 Jaguar system at the National Center for Computational Sciences, with more than 19,000 computers and 224,000 processing elements, based on standard AMD processors.
The second fastest supercomputer and the fastest heterogeneous (or hybrid) machine is Dawning Nebulae in China. This machine is a cluster of 4,640 blade servers, each with one NVIDIA Tesla C2050 (Fermi) GPGPU and two Intel Westmere CPUs. The Tesla GPUs deliver most of the Linpack performance, since each Tesla C2050 GPU has 515 gigaflops of peak double-precision performance. The most remarkable thing about hybrid supercomputers like Nebulae and the IBM Roadrunner (which uses the IBM Cell as a coprocessor) is their low power consumption. Nebulae, for example, draws 2.55 megawatts and delivers 1.271 petaflops, whereas the number 1 supercomputer, Jaguar (built from AMD Opteron CPUs), consumes 7 megawatts and delivers 1.759 petaflops. This gives Nebulae roughly twice the performance per watt of Jaguar.
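A quick check of that claim, using the figures above:

$$ \frac{1.271\ \text{PFLOPS}}{2.55\ \text{MW}} \approx 0.50\ \text{GFLOPS/W} \qquad \text{versus} \qquad \frac{1.759\ \text{PFLOPS}}{7\ \text{MW}} \approx 0.25\ \text{GFLOPS/W}, $$

a ratio of about 2.0 in Nebulae's favor.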
In February 2009, IBM also announced work on "Sequoia," which appears to be a 20 petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011.[3] Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. It will be housed in 96 refrigerators spanning roughly 3,000 square feet (280 m²).[4]
Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform desktop machines of the time tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run, and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010. Supercomputing is also moving toward higher density: desktop supercomputers are becoming available, offering in less than a desktop footprint the computing power that in 1998 required a large room.
In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.
Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular set of problems.
Examples of special-purpose supercomputers:
In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark (LINPACK), which performs LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
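For reference (a standard convention for the benchmark, not a figure from the original text), solving a dense n × n linear system by LU decomposition is credited with approximately

$$ \tfrac{2}{3}n^{3} + 2n^{2} $$

floating-point operations, and the reported FLOPS rate is this operation count divided by the measured run time.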
"Petascale" supercomputers can process one quadrillion (1015) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (1018) FLOPS (one million teraflops).
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
In November 2009, the AMD Opteron-based Cray XT5 Jaguar at the Oak Ridge National Laboratory was announced as the fastest operational supercomputer, with a sustained processing rate of 1.759 PFLOPS.[6][7]
Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme.
The fastest cluster, Folding@home, reported over 7.8 petaflops of processing power as of December 2009. Of this, 2.3 petaflops is contributed by clients running on NVIDIA GeForce GPUs, AMD GPUs, and PlayStation 3 systems, and another 5.1 petaflops by the newly released GPU2 client.[8]
Another distributed computing project is the BOINC platform, which hosts a number of distributed computing projects. As of April 2010, BOINC recorded a processing power of over 5 petaflops through over 580,000 active computers on the network.[9] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 1.4 petaflops through over 30,000 active computers.[10]
As of April 2010, GIMPS's distributed Mersenne prime search achieves about 45 teraflops.[11]
Google's search engine system is also regarded as a “quasi-supercomputer”, with an estimated total processing power of between 126 and 316 teraflops as of April 2004.[12] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[13] According to recent estimates, the processing power of Google's cluster might reach 20 to 100 petaflops.[14]
The PlayStation 3 Gravity Grid uses a network of 16 machines and exploits the Cell processor for the intended application, which is binary black hole coalescence using perturbation theory.[15][16] The Cell processor has a main CPU and 6 floating-point vector processors, giving the cluster a total of 16 general-purpose processors and 96 vector processors. The machine has a one-time cost of $9,000 to build and is adequate for black-hole simulations, which would otherwise cost $6,000 per run on a conventional supercomputer. The black hole calculations are not memory-intensive and are highly localized, and so are well suited to this architecture.
Other notable computer clusters are the flash mob cluster and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires a uniform architecture.
IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".
Other PFLOPS projects include one by Narendra Karmarkar in India,[17] a CDAC effort targeted for 2010,[18] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[19]
In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[20] Meanwhile, IBM is constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, which is scheduled to go online in 2011.
Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18) (one quintillion FLOPS) in 2019.[21]
Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21) (one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[22] Such systems might be built around 2030.[23]
This is a list of the record-holders for fastest general-purpose supercomputer in the world, and the year each one set the record. For entries prior to 1993, this list refers to various sources.[24] From 1993 to the present, the list reflects the Top500 listing,[25] and the "Peak speed" is given as the "Rmax" rating.
Year | Supercomputer | Peak speed (Rmax) | Location
---|---|---|---
1938 | Zuse Z1 | 1 OPS | Konrad Zuse, Berlin, Germany
1941 | Zuse Z3 | 20 OPS | Konrad Zuse, Berlin, Germany
1943 | Colossus 1 | 5 kOPS | Post Office Research Station, Bletchley Park, UK
1944 | Colossus 2 (Single Processor) | 25 kOPS | Post Office Research Station, Bletchley Park, UK
1946 | Colossus 2 (Parallel Processor) | 50 kOPS | Post Office Research Station, Bletchley Park, UK
1946 | UPenn ENIAC (before 1948+ modifications) | 5 kOPS | Department of War, Aberdeen Proving Ground, Maryland, USA
1954 | IBM NORC | 67 kOPS | Department of Defense, U.S. Naval Proving Ground, Dahlgren, Virginia, USA
1956 | MIT TX-0 | 83 kOPS | Massachusetts Inst. of Technology, Lexington, Massachusetts, USA
1958 | IBM AN/FSQ-7 | 400 kOPS | 25 U.S. Air Force sites across the continental USA and 1 site in Canada (52 computers)
1960 | UNIVAC LARC | 250 kFLOPS | Atomic Energy Commission (AEC), Lawrence Livermore National Laboratory, California, USA
1961 | IBM 7030 "Stretch" | 1.2 MFLOPS | AEC-Los Alamos National Laboratory, New Mexico, USA
1964 | CDC 6600 | 3 MFLOPS | AEC-Lawrence Livermore National Laboratory, California, USA
1969 | CDC 7600 | 36 MFLOPS |
1974 | CDC STAR-100 | 100 MFLOPS |
1975 | Burroughs ILLIAC IV | 150 MFLOPS | NASA Ames Research Center, California, USA
1976 | Cray-1 | 250 MFLOPS | Energy Research and Development Administration (ERDA), Los Alamos National Laboratory, New Mexico, USA (80+ sold worldwide)
1981 | CDC Cyber 205 | 400 MFLOPS | (~40 systems worldwide)
1983 | Cray X-MP/4 | 941 MFLOPS | U.S. Department of Energy (DoE), Los Alamos National Laboratory; Lawrence Livermore National Laboratory; Battelle; Boeing
1984 | M-13 | 2.4 GFLOPS | Scientific Research Institute of Computer Complexes, Moscow, USSR
1985 | Cray-2/8 | 3.9 GFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
1989 | ETA10-G/8 | 10.3 GFLOPS | Florida State University, Florida, USA
1990 | NEC SX-3/44R | 23.2 GFLOPS | NEC Fuchu Plant, Fuchū, Tokyo, Japan
1993 | Thinking Machines CM-5/1024 | 59.7 GFLOPS | DoE-Los Alamos National Laboratory; National Security Agency
1993 | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1993 | Intel Paragon XP/S 140 | 143.40 GFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | National Aerospace Laboratory, Tokyo, Japan
1996 | Hitachi SR2201/1024 | 220.4 GFLOPS | University of Tokyo, Japan
1996 | Hitachi/Tsukuba CP-PACS/2048 | 368.2 GFLOPS | Center for Computational Physics, University of Tsukuba, Tsukuba, Japan
1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS | DoE-Sandia National Laboratories, New Mexico, USA
2000 | IBM ASCI White | 7.226 TFLOPS | DoE-Lawrence Livermore National Laboratory, California, USA
2002 | NEC Earth Simulator | 35.86 TFLOPS | Earth Simulator Center, Yokohama, Japan
2004 | IBM Blue Gene/L | 70.72 TFLOPS | DoE/IBM Rochester, Minnesota, USA
2005 | IBM Blue Gene/L | 136.8 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2005 | IBM Blue Gene/L | 280.6 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2007 | IBM Blue Gene/L | 478.2 TFLOPS | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2008 | IBM Roadrunner | 1.026 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2008 | IBM Roadrunner | 1.105 PFLOPS | DoE-Los Alamos National Laboratory, New Mexico, USA
2009 | Cray Jaguar | 1.759 PFLOPS | DoE-Oak Ridge National Laboratory, Tennessee, USA