Supercomputer

A supercomputer is a computer at the frontline of current processing capacity, particularly speed of calculation.

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of airplanes in wind tunnels, simulations of the detonation of nuclear weapons, and research into nuclear fusion).

Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, much as the minicomputer market had formed a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by traditional companies such as Cray, IBM and Hewlett-Packard, which purchased many of the 1980s companies to gain their experience. Currently, Japan's K computer, built by Fujitsu in Kobe, Japan, is the fastest in the world.[2] It is three times as fast as the previous holder of that title, the Tianhe-1A supercomputer located in China.

The term supercomputer itself is rather fluid, and the speed of earlier "supercomputers" tends to become typical of later ordinary computers. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at lower prices to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the late 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some of them off-the-shelf units and others custom designs (the Transputer, for instance). Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and coprocessors such as NVIDIA Tesla GPGPUs, AMD GPUs, IBM Cell processors, and FPGAs. The architecture of today's supercomputers is implemented with highly tuned computer clusters of thousands of commodity processors communicating over custom interconnects.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing, in contrast, is typically thought of as using efficient, cost-effective computing power to solve somewhat large problems or many small problems, or to prepare for a run on a capability system.

History

The history of supercomputing goes back to the 1960s when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[3] The CDC 6600, released in 1964, is generally considered the first supercomputer.[4][5]

Cray left CDC in 1972 to form his own company.[6] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[7][8] The Cray-2, released in 1985, was an eight-processor liquid-cooled computer through which Fluorinert was pumped as it operated. It performed at 1.9 gigaflops and was the world's fastest until 1990.[9]

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops per processor.[10][11] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[12][13][14] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[15]

Current research using supercomputers

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[16]

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[17]

In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[18]

This is a recent list of the computers which appeared at the top of the Top500 list,[19] and the "Peak speed" is given as the "Rmax" rating. For more historical data see History of supercomputing.

Year   Supercomputer        Peak speed (Rmax)   Location
2008   IBM Roadrunner       1.026 PFLOPS        DoE-Los Alamos National Laboratory, New Mexico, USA
2008   IBM Roadrunner       1.105 PFLOPS        DoE-Los Alamos National Laboratory, New Mexico, USA
2009   Cray Jaguar          1.759 PFLOPS        DoE-Oak Ridge National Laboratory, Tennessee, USA
2010   Tianhe-1A            2.566 PFLOPS        National Supercomputing Center, Tianjin, China
2011   Fujitsu K computer   8.162 PFLOPS        RIKEN, Kobe, Japan
2011   Fujitsu K computer   10.51 PFLOPS        RIKEN, Kobe, Japan

Hardware and software design

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times — in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
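
As a minimal illustration of this kind of memory-hierarchy tuning (a generic C sketch, not a technique attributed to any particular machine; the matrix and tile sizes are arbitrary example values), loop blocking keeps small tiles of data resident in cache so the arithmetic units are not starved by main-memory accesses:

    /* Generic sketch: blocked (tiled) matrix multiplication. The tile size BS
       is a hypothetical value chosen so that three BS x BS tiles fit in cache;
       the caller is assumed to have zero-initialized C. */
    #include <stddef.h>

    #define N  1024          /* example matrix dimension */
    #define BS 64            /* example tile size        */

    void matmul_blocked(const double A[N][N], const double B[N][N], double C[N][N])
    {
        for (size_t ii = 0; ii < N; ii += BS)
            for (size_t kk = 0; kk < N; kk += BS)
                for (size_t jj = 0; jj < N; jj += BS)
                    /* work on one tile at a time so A, B and C stay cache-resident */
                    for (size_t i = ii; i < ii + BS; i++)
                        for (size_t k = kk; k < kk + BS; k++) {
                            double a = A[i][k];
                            for (size_t j = jj; j < jj + BS; j++)
                                C[i][j] += a * B[k][j];
                        }
    }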

As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization and to using hardware to address the remaining bottlenecks.
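
Amdahl's law gives the speedup of a partly serial program as 1 / (s + (1 - s)/p), where s is the serial fraction and p the number of processors. The short C sketch below (example figures only, not drawn from the article) shows why even a small serial fraction limits a machine with thousands of processors:

    /* Illustrative sketch of Amdahl's law: speedup = 1 / (s + (1 - s) / p). */
    #include <stdio.h>

    static double amdahl_speedup(double serial_fraction, double processors)
    {
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors);
    }

    int main(void)
    {
        /* with 1000 processors, 1% serial code caps the speedup near 91x */
        printf("s = 0.010, p = 1000: speedup = %.1f\n", amdahl_speedup(0.010, 1000.0));
        printf("s = 0.001, p = 1000: speedup = %.1f\n", amdahl_speedup(0.001, 1000.0));
        return 0;
    }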

Energy consumption and heat management

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts of electricity.[20] The cost to power and cool the system can be significant: 4 MW at $0.10/kWh is $400 per hour (4,000 kW × $0.10/kWh), or about $3.5 million per year ($400/hour × 8,760 hours).

Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways.[21] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[22][23][24]

Packing thousands of processors together inevitably generates high heat densities that must be dealt with. The Cray-2 was liquid-cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[9] However, the submerged liquid-cooling approach was not practical for multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[25]

In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density.[26] By contrast, the IBM Power 775, released in 2011, has closely packed elements that require water cooling.[27] The IBM Aquasar system, on the other hand, uses hot-water cooling to achieve energy efficiency, the water being used to heat buildings as well.[28][29]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, IBM's Roadrunner operated at 376 MFLOPS/watt.[30][31] In November 2010, the Blue Gene/Q reached 1684 MFLOPS/watt.[32][33] In June 2011, the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third at 1375 MFLOPS/W.[34]

Supercomputer challenges, technologies

Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.
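
As a rough worked example (the figures are illustrative): light travels about 0.3 m per nanosecond in vacuum, so a signal crossing a machine 10 m across needs at least 10 / 0.3 ≈ 33 ns one way; in copper cable or optical fibre, where signals propagate at roughly two-thirds of that speed, the same distance takes around 50 ns.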

Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Technologies developed for supercomputers include the processing techniques, operating systems, programming models, and software tools described in the following sections.

Processing techniques

Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing instructions for general-purpose computers.
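
As a minimal illustration of that single-instruction-multiple-data idea (assuming an x86 processor with SSE and a compiler that provides the <xmmintrin.h> intrinsics header; the function is a generic example, not code from any particular system), the loop below performs four single-precision additions per instruction:

    /* Sketch: SIMD vector addition with SSE intrinsics (assumes x86 SSE support). */
    #include <xmmintrin.h>

    void add_vectors(const float *a, const float *b, float *out, int n)
    {
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);              /* load 4 floats    */
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));   /* 4 adds at once   */
        }
        for (; i < n; i++)                                /* scalar remainder */
            out[i] = a[i] + b[i];
    }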

Modern video game consoles in particular use SIMD extensively, and this is the basis for some manufacturers' claims that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several teraFLOPS. The range of applications to which this power could be applied was limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).

The Top500 list from May 2010 included three supercomputers based on GPGPUs. In particular, the number 4 supercomputer, Nebulae, built by Dawning in China, is based on GPGPUs.[35]

Operating systems

Supercomputers today most often use variants of the Linux operating system.[36]

Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers up to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary operating systems largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos and, later, Linux.

Programming

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA and OpenCL.
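
A minimal sketch of this message-passing style in C (assuming an installed MPI implementation and its mpicc compiler wrapper; the summation is an arbitrary example workload, not code from any particular supercomputer):

    /* Each MPI rank computes a partial sum over its share of the terms;
       MPI_Allreduce then combines the partial sums across all processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, i;
        double local = 0.0, total = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (i = rank; i < 1000000; i += size)   /* each rank takes every size-th term */
            local += 1.0 / (1.0 + (double)i);

        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum = %f (computed by %d processes)\n", total, size);

        MPI_Finalize();
        return 0;
    }

Such a program would typically be launched with a command along the lines of mpirun -np 16 ./sum, giving one process per core or node.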

Software tools

Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open-source software solutions such as Beowulf, Warewulf, and openMosix, which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology such as ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy-to-use programming language for supercomputers remains an open research topic in computer science. Several utilities that would once have cost several thousands of dollars are now completely free thanks to the open-source community, which often creates disruptive technology.

Modern supercomputer architecture

Supercomputers today often have a similar top-level architecture consisting of a cluster of MIMD multiprocessors, each processor of which is SIMD, and with each multiprocessor controlling multiple co-processors. Supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, the number of simultaneous instructions per SIMD processor, and the type and number of co-processors.

As of 2011, the fastest supercomputer in the world is the K computer, which has over 68,000 8-core processors, while the Tianhe-1A system at the National University of Defense Technology is second with more than 14,000 multi-core processors.

In February 2009, IBM also announced work on "Sequoia", a planned 20 petaflops supercomputer. This will be equivalent to 2 million laptops (whereas Roadrunner is comparable to a mere 100,000 laptops). It is slated for deployment in late 2011.[37] Sequoia will be powered by 1.6 million cores (specific 45-nanometer chips in development) and 1.6 petabytes of memory. It will be housed in 96 refrigerators spanning roughly 3,000 square feet (280 m2).[38]

Moore's Law and economies of scale are the dominant factors in supercomputer design. The design concepts that allowed past supercomputers to out-perform contemporary desktop machines tended to be gradually incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; most workloads requiring such a supercomputer in the 1990s can be done on workstations costing less than 4,000 US dollars as of 2010. Supercomputing density is also increasing: desktop supercomputers are becoming available, offering in less than a desktop footprint the computing power that in 1998 required a large room.

In addition, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, in particular, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design, which can be programmed to act as one large computer.

Special-purpose supercomputers

A special-purpose supercomputer is a high-performance computing device with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, offering better price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically, a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular set of problems.

Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess,[39][40][41] GRAPE for astrophysics,[42] Deep Crack (the EFF DES cracker) for breaking the DES cipher,[43] MDGRAPE-3 for molecular dynamics simulation,[44] and D. E. Shaw Research's Anton, also for molecular dynamics simulation.[45]

The fastest supercomputers today

Measuring supercomputer speed

In general, the speed of a supercomputer is measured in "FLOPS" (FLoating Point Operations Per Second), commonly with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is quoted either as the theoretical floating-point performance of a processor (derived from the manufacturer's processor specifications and shown as Rpeak in the TOP500 lists), which is generally unachievable when running real workloads, or as the achievable throughput on the Linpack benchmark (shown as Rmax in the TOP500 lists). The Linpack benchmark performs LU decomposition of a large matrix. Linpack performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which may, for example, require more memory bandwidth than Linpack, better integer computing performance, or a high-performance I/O system.
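
As a rough illustration of how an Rpeak figure is derived from processor specifications (the node count, clock rate and FLOPS-per-cycle values below are hypothetical, not taken from any TOP500 entry):

    /* Hypothetical example: theoretical peak = nodes x sockets x cores x clock x FLOPS/cycle. */
    #include <stdio.h>

    int main(void)
    {
        double nodes            = 1000.0;   /* hypothetical cluster size      */
        double sockets_per_node = 2.0;
        double cores_per_socket = 8.0;
        double clock_ghz        = 2.5;      /* billions of cycles per second  */
        double flops_per_cycle  = 8.0;      /* e.g. wide SIMD floating units  */

        double rpeak_gflops = nodes * sockets_per_node * cores_per_socket
                            * clock_ghz * flops_per_cycle;

        printf("Rpeak = %.0f GFLOPS (= %.1f TFLOPS)\n", rpeak_gflops, rpeak_gflops / 1000.0);
        return 0;
    }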

"Petascale" supercomputers can process one quadrillion (1015) (1000 trillion) FLOPS. Exascale is computing performance in the exaflops range. An exaflop is one quintillion (1018) FLOPS (one million teraflops).

The TOP500 list

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

Current fastest supercomputer system

The K computer is the world's fastest supercomputer at 10.51 petaFLOPS. It consists of 88,000 SPARC64 VIIIfx CPUs and spans 864 server racks. Fujitsu has not given an official power consumption figure for the completed K cluster, but in June 2011, when it reached 8.162 petaflops, it consumed 9.89 megawatts, costing roughly $9.89 million a year.[46]

Opportunistic Supercomputing

Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations.

Examples of Opportunistic Supercomputing Systems

The fastest grid computing system is Folding@home, which reported 8.8 petaflops of processing power as of May 2011. Of this, 7.1 petaflops are contributed by clients running on various GPUs, 1.8 petaflops come from PlayStation 3 systems, and the rest from various computer systems.[47]

The BOINC platform hosts a number of distributed computing projects. As of May 2011, BOINC recorded a processing power of over 5.5 petaflops through over 480,000 active computers on the network.[48] The most active project (measured by computational power), MilkyWay@home, reports processing power of over 700 teraflops through over 33,000 active computers.[49]

As of May 2011, GIMPS's distributed Mersenne prime search achieves about 60 teraflops through over 25,000 registered computers.[50] The Internet PrimeNet Server has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.

Quasi-opportunistic Supercomputing

Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of a large number of networked, geographically dispersed computers performs computing tasks that demand huge processing power.[51] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids requires the implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries, and data pre-conditioning.[51]

Examples of Quasi-opportunistic Supercomputing Systems

The PlayStation 3 Gravity Grid[52] uses a network of 16 machines and exploits the Cell processor for its intended application, performing astrophysical simulations of large supermassive black holes capturing smaller compact objects. Each Cell processor has a main CPU and 6 floating-point vector processors, giving the cluster a net total of 16 general-purpose processors and 96 vector processors. The cluster was built in 2007 by Dr. Gaurav Khanna, a professor in the Physics Department of the University of Massachusetts Dartmouth, with support from Sony Computer Entertainment, and was the first PS3 cluster to generate numerical results published in the scientific research literature.

Also a "quasi-supercomputer" is Google's search engine system with estimated total processing power of between 126 and 316 teraflops, as of April 2004.[53] In June 2006 the New York Times estimated that the Googleplex and its server farms contain 450,000 servers.[54] According to 2008 estimates, the processing power of Google's cluster might reach from 20 to 100 petaflops.[55]

Other notable computer clusters are the flash mob cluster, the Qoscos Grid and the Beowulf cluster. The flash mob cluster allows the use of any computer in the network, while the Beowulf cluster still requires uniform architecture.

Research and development

IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".

Other PFLOPS projects include one by Narendra Karmarkar in India,[56] a C-DAC effort targeted for 2010,[57] and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be completed by 2011).[58]

In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 petaflops computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012.[59] Meanwhile, IBM is constructing a 20 PFLOPS supercomputer named Sequoia at Lawrence Livermore National Laboratory, based on the Blue Gene architecture and scheduled to go online in 2011.

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, or one quintillion, FLOPS) in 2019.[60] Using the Intel MIC multi-core processor architecture, which is Intel's response to GPU systems, SGI plans a 500-fold increase in performance by 2018 in order to reach an exaflops.[61] Samples of MIC chips with 32 cores, which combine vector processing units with a standard CPU, have become available.[61]

On October 11, 2011, the Oak Ridge National Laboratory announced that it was building a 20 petaflops supercomputer, named Titan, to become operational in 2012. The hybrid Titan system will combine AMD Opteron processors with "Kepler" NVIDIA Tesla graphics processing unit (GPU) technology.[62]

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, or one sextillion, FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[63] Such systems might be built around 2030.[64]

The Indian government has committed Rs 10,000 crore to indigenously develop the world's fastest supercomputer by 2017. The Planning Commission of India has agreed to provide the funds to ISRO and the Indian Institute of Science (IISc), Bangalore, to develop a supercomputer with a claimed target performance of 132.8 exaflops. The Indian supercomputer would be used only for enhancing the country's space capabilities and for predicting monsoons and other precise weather inputs to boost the country's agricultural output. The exaflops-level target for 2017 is highly ambitious; ISRO has reportedly already ordered key equipment for the machine, with most other components to be developed indigenously in India.[65]

Applications of supercomputers

Decade  Uses and computer involved
1970s   Weather forecasting, aerodynamic research (Cray-1).[66]
1980s   Probabilistic analysis,[67] radiation shielding modeling[68] (CDC Cyber).
1990s   Brute-force code breaking (EFF DES cracker);[69] 3D nuclear test simulations as a substitute for physical testing under the Nuclear Non-Proliferation Treaty (ASCI Q).[70]
2010s   Molecular dynamics simulation (Tianhe-1A).[71]

See also

Notes

  1. ^ IBM Blue gene announcement
  2. ^ [1], New York Times, 19 June 2011. Accessed 20 June 2011
  3. ^ Hardware software co-design of a multimedia SOC platform by Sao-Jie Chen, Guang-Huei Lin, Pao-Ann Hsiung, Yu-Hen Hu 2009 ISBN pages 70-72
  4. ^ History of computing in education by John Impagliazzo, John A. N. Lee 2004 ISBN 1402081359 page 172 [2]
  5. ^ The American Midwest: an interpretive encyclopedia by Richard Sisson, Christian K. Zacher 2006 ISBN 0253348862 page 1489 [3]
  6. ^ Wisconsin Biographical Dictionary by Caryn Hannan 2008 ISBN 1878592637 pages 83-84 [4]
  7. ^ Readings in computer architecture by Mark Donald Hill, Norman Paul Jouppi, Gurindar Sohi 1999 ISBN 9781558605398 page 41-48
  8. ^ Milestones in computer science and information technology by Edwin D. Reilly 2003 ISBN 1573565210 page 65
  9. ^ a b Parallel computing for real-time signal processing and control by M. O. Tokhi, Mohammad Alamgir Hossain 2003 ISBN 9781852335991 pages 201-202
  10. ^ TOP500 Annual Report 1994.
  11. ^ N. Hirose and M. Fukuda (1997). "Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory". Proceedings of HPC-Asia '97. IEEE Computer Society. doi:10.1109/HPC.1997.592130. 
  12. ^ H. Fujii, Y. Yasuda, H. Akashi, Y. Inagami, M. Koga, O. Ishihara, M. Kashiyama, H. Wada, T. Sumimoto, Architecture and performance of the Hitachi SR2201 massively parallel processor system, Proceedings of 11th International Parallel Processing Symposium, April 1997, Pages 233-241.
  13. ^ Y. Iwasaki, The CP-PACS project, Nuclear Physics B - Proceedings Supplements, Volume 60, Issues 1-2, January 1998, Pages 246-254.
  14. ^ A.J. van der Steen, Overview of recent supercomputers, Publication of the NCF, Stichting Nationale Computer Faciliteiten, the Netherlands, January 1997.
  15. ^ Scalable input/output: achieving system balance by Daniel A. Reed 2003 ISBN 9780262681421 page 182
  16. ^ Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 65.
  17. ^ "Faster Supercomputers Aiding Weather Forecasts". News.nationalgeographic.com. 2010-10-28. http://news.nationalgeographic.com/news/2005/08/0829_050829_supercomputer.html. Retrieved 2011-07-08. 
  18. ^ Washington Post August 8, 2011
  19. ^ Intel brochure - 11/91. "Directory page for Top500 lists. Result for each list since June 1993". Top500.org. http://www.top500.org/sublist. Retrieved 2010-10-31. 
  20. ^ "NVIDIA Tesla GPUs Power World's Fastest Supercomputer" (Press release). Nvidia. 29 October 2010. http://pressroom.nvidia.com/easyir/customrel.do?easyirid=A0D622CE9F579F09&version=live&prid=678988&releasejsp=release_157. 
  21. ^ Better Computing Through CPU Cooling by Alexander A. Balandin in IEEE Spectrum, October 2009 [5]
  22. ^ "The Green 500". http://www.green500.org. 
  23. ^ "Green 500 list ranks supercomputers". iTnews Australia. http://www.itnews.com.au/News/65619,green-500-list-ranks-supercomputers.aspx. 
  24. ^ Wu-chun Feng, 2003 Making a Case for Efficient Supercomputing in ACM Queue Magazine, Volume 1 Issue 7, 10-01-2003 doi 10.1145/957717.957772 [6]
  25. ^ Computational science -- ICCS 2005: 5th international conference edited by Vaidy S. Sunderam 2005 ISBN 3540260439 pages 60-67
  26. ^ "IBM uncloaks 20 petaflops BlueGene/Q super". The Register. 2010-11-22. http://www.theregister.co.uk/2010/11/22/ibm_blue_gene_q_super/. Retrieved 2010-11-25. 
  27. ^ The Register: IBM 'Blue Waters' super node washes ashore in August
  28. ^ HPC Wire July 2, 2010
  29. ^ CNet May 10, 2010
  30. ^ "Government unveils world's fastest computer". CNN. Archived from the original on 2008-06-10. http://web.archive.org/web/20080610155646/http://www.cnn.com/2008/TECH/06/09/fastest.computer.ap/index.html. "performing 376 million calculations for every watt of electricity used." 
  31. ^ "IBM Roadrunner Takes the Gold in the Petaflop Race". http://www.hpcwire.com/topic/processors/IBM_Roadrunner_Takes_the_Gold_in_the_Petaflop_Race.html. 
  32. ^ "Top500 Supercomputing List Reveals Computing Trends". http://www.serverwatch.com/hreviews/article.php/3913536/Top500-Supercomputing-List-Reveals-Computing-Trends.htm. "IBM... BlueGene/Q system .. setting a record in power efficiency with a value of 1,680 Mflops/watt, more than twice that of the next best system." 
  33. ^ "IBM Research A Clear Winner in Green 500". http://www.datacenterknowledge.com/archives/2010/11/18/ibm-system-clear-winner-in-green-500/. 
  34. ^ Green 500 list
  35. ^ Prickett, Timothy (2010-05-31). "Nebulae #2 Supercomputer built with NVIDIA Tesla GPGPUs". Theregister.co.uk. http://www.theregister.co.uk/2010/05/31/top_500_supers_jun2010/. Retrieved 2010-10-31. 
  36. ^ a b "Top500 OS chart". Top500.org. http://www.top500.org/overtime/list/32/os. Retrieved 2010-10-31. 
  37. ^ IBM to build new monster supercomputer By Tom Jowitt , TechWorld , 02/04/2009
  38. ^ "Petaflop Sequoia Supercomputer - United States". 03.ibm.com. 2009-02-03. http://www-03.ibm.com/press/us/en/pressrelease/26599.wss. Retrieved 2010-10-31. 
  39. ^ Condon, J.H. and K.Thompson, "Belle Chess Hardware", In Advances in Computer Chess 3 (ed.M.R.B.Clarke), Pergamon Press, 1982.
  40. ^ Hsu, Feng-hsiung (2002). Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton University Press. ISBN 0-691-09065-3 
  41. ^ C. Donninger, U. Lorenz. The Chess Monster Hydra. Proc. of 14th International Conference on Field-Programmable Logic and Applications (FPL), 2004, Antwerp – Belgium, LNCS 3203, pp. 927 – 932
  42. ^ J Makino and M. Taiji, Scientific Simulations with Special Purpose Computers: The GRAPE Systems, Wiley. 1998.
  43. ^ Electronic Frontier Foundation (1998). Cracking DES - Secrets of Encryption Research, Wiretap Politics & Chip Design. Oreilly & Associates Inc. ISBN 1-56592-520-3. http://cryptome.org/cracking-des/cracking-des.htm. 
  44. ^ RIKEN press release, Completion of a one-petaflops computer system for simulation of molecular dynamics
  45. ^ "D.E. Shaw Research Anton". Deshawresearch.com. http://www.deshawresearch.com/. Retrieved 2010-10-31. 
  46. ^ "Japan Pushes World’s Fastest Computer Past 10 Petaflop Barrier". Wired.com. 2. http://www.wired.com/wiredenterprise/2011/11/japanese_megamachine/. 
  47. ^ Folding@home: OS Statistics. Stanford University. http://fah-web.stanford.edu/cgi-bin/main.py?qtype=osstats. Retrieved 2011-05-28 
  48. ^ BOINCstats: BOINC Combined. BOINC. http://www.boincstats.com/stats/project_graph.php?pr=bo. Retrieved 2011-05-28. Note: this link gives current statistics, not those on the date last accessed.
  49. ^ BOINCstats: MilkyWay@home. BOINC. http://boincstats.com/stats/project_graph.php?pr=milkyway. Retrieved 2011-05-28. Note: this link gives current statistics, not those on the date last accessed.
  50. ^ "Internet PrimeNet Server Distributed Computing Technology for the Great Internet Mersenne Prime Search". GIMPS. http://www.mersenne.org/primenet. Retrieved June 6, 2011 
  51. ^ a b Kravtsov, Valentin; Carmeli, David; Dubitzky, Werner; Orda, Ariel; Schuster, Assaf; Yoshpa, Benny. "Quasi-opportunistic supercomputing in grids, hot topic paper (2007)". IEEE International Symposium on High Performance Distributed Computing. IEEE. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.135.8993. Retrieved 4 August 2011. 
  52. ^ "PS3 Gravity Grid". Gaurav Khanna, Associate Professor, College of Engineering, University of Massachusetts Dartmouth. http://gravity.phy.umassd.edu/ps3.html. 
  53. ^ How many Google machines, April 30, 2004
  54. ^ Markoff, John; Hensell, Saul (June 14, 2006). "Hiding in Plain Sight, Google Seeks More Power". New York Times. http://www.nytimes.com/2006/06/14/technology/14search.html. Retrieved 2008-03-16. 
  55. ^ Google Surpasses Supercomputer Community, Unnoticed?, May 20, 2008.
  56. ^ Athley, Gouri Agtey; Rajeshwari Adappa (30 October 2006). "Tatas get Karmakar to make super comp". The Economic Times. http://economictimes.indiatimes.com/articleshow/msid-225517,curpg-2.cms. Retrieved 2008-03-16. 
  57. ^ C-DAC's Param programme sets to touch 10 teraflops by late 2007 and a petaflops by 2010.
  58. ^ "National Science Board Approves Funds for Petascale Computing Systems". U.S. National Science Foundation. August 10, 2007. http://www.nsf.gov/news/news_summ.jsp?cntn_id=109850. Retrieved 2008-03-16. 
  59. ^ "NASA collaborates with Intel and SGI on forthcoming petaflops super computers". Heise online. 2008-05-09. http://www.heise.de/english/newsticker/news/107683. 
  60. ^ Thibodeau, Patrick (2008-06-10). "IBM breaks petaflop barrier". InfoWorld. http://www.infoworld.com/article/08/06/10/IBM_breaks_petaflop_barrier_1.html. 
  61. ^ a b SGI, Intel plan to speed supercomputers 500 times by 2018, ComputerWorld, June 20, 2011
  62. ^ Cray announces ‘Titan’ supercomputer, KurzweilAI
  63. ^ DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd conference on Computing frontiers. pp. 391–402. ISBN 1595930191. http://portal.acm.org/citation.cfm?id=1062325. 
  64. ^ "IDF: Intel says Moore's Law holds until 2029". Heise Online. 2008-04-04. http://www.h-online.com/newsticker/news/item/IDF-Intel-says-Moore-s-Law-holds-until-2029-734779.html. 
  65. ^ "India to make World's Fastest Supercomputer". http://www.defencenews.in/defence-news-internal.asp?get=new&id=500. 
  66. ^ "The Cray-1 Computer System" (PDF). Cray Research, Inc. http://archive.computerhistory.org/resources/text/Cray/Cray.Cray1.1977.102638650.pdf. Retrieved May 25, 2011. 
  67. ^ Joshi, Rajani R. (9 June 1998). "A new heuristic algorithm for probabilistic optimization" (Subscription required). Department of Mathematics and School of Biomedical Engineering, Indian Institute of Technology Powai, Bombay, India. http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VC5-3SWXX64-8&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=0a76921c6623fa556491f2dccdf4377e. Retrieved 2008-07-01. 
  68. ^ "Abstract for SAMSY - Shielding Analysis Modular System". http://www.nea.fr/abs/html/iaea0837.html. Retrieved May 25, 2011. 
  69. ^ "EFF DES Cracker Source Code". Cosic.esat.kuleuven.be. https://www.cosic.esat.kuleuven.be/des/. Retrieved 2011-07-08. 
  70. ^ "Disarmament Diplomacy: - DOE Supercomputing & Test Simulation Programme". Acronym.org.uk. 2000-08-22. http://www.acronym.org.uk/dd/dd49/49doe.html. Retrieved 2011-07-08. 
  71. ^ "China’s Investment in GPU Supercomputing Begins to Pay Off Big Time!". Blogs.nvidia.com. http://blogs.nvidia.com/2011/06/chinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/. Retrieved 2011-07-08. 

External links