NASA Advanced Supercomputing Division

Agency overview
Formed: 1982
Preceding agencies:
  • Numerical Aerodynamic Simulation Division (1982)
  • Numerical Aerospace Simulation Division (1995)
Headquarters: NASA Ames Research Center, Moffett Field, California
Agency executive: Piyush Mehrotra, Division Chief
Parent department: Ames Research Center Exploration Technology Directorate
Parent agency: National Aeronautics and Space Administration (NASA)
Website: www.nas.nasa.gov

Current Supercomputing Systems
  • Pleiades: SGI Altix ICE supercluster
  • Endeavour: SGI UV shared-memory system
  • Merope[1]: SGI Altix supercluster

The NASA Advanced Supercomputing (NAS) Division is located at NASA Ames Research Center at Moffett Field in Mountain View, California, in the heart of Silicon Valley. For over thirty years it has been NASA's major resource for supercomputing, modeling, and simulation, supporting missions in aerodynamics, space exploration, studies of weather patterns and ocean currents, and space shuttle and aircraft design and development.

The facility currently houses the petascale Pleiades and terascale Endeavour supercomputers based on SGI architecture and Intel processors, as well as disk and archival tape storage systems with a capacity of over 126 petabytes of data, the hyperwall visualization system, and one of the largest InfiniBand network fabrics in the world.[2] The NAS Division is part of NASA's Exploration Technology Directorate and operates NASA's High-End Computing Capability (HECC) Project.[3]

History

Founding

In the mid-1970s, a group of aerospace engineers at Ames Research Center began to look into transferring aerospace research and development from costly and time-consuming wind tunnel testing to simulation-based design and engineering using computational fluid dynamics (CFD) models on supercomputers more powerful than those commercially available at the time. This endeavor was later named the Numerical Aerodynamic Simulator (NAS) Project and the first computer was installed at the Central Computing Facility at Ames Research Center in 1984.

Ground was broken on a state-of-the-art supercomputing facility on March 14, 1985; the new building would bring CFD experts, computer scientists, visualization specialists, and network and storage engineers together under one roof in a collaborative environment. In 1986, NAS transitioned into a full-fledged NASA division, and in 1987, NAS staff and equipment, including a second supercomputer, a Cray-2 named Navier, were relocated to the new facility, which was dedicated on March 9, 1987.[4]

In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001 to the name it has today.

Industry-Leading Innovations

NAS has been one of the leading innovators in the supercomputing world, developing many tools and processes that became widely used in commercial supercomputing.[5]

An image of the flowfield around the Space Shuttle Launch Vehicle traveling at Mach 2.46 and at an altitude of 66,000 feet (20,000 m). The surface of the vehicle is colored by the pressure coefficient, and the gray contours represent the density of the surrounding air, as calculated using the OVERFLOW code.

Software Development

NAS develops and adapts software in order to "complement and enhance the work performed on its supercomputers, including software for systems support, monitoring systems, security, and scientific visualization," and often provides this software to its users through the NASA Open Source Agreement (NOSA).[7]

Important software developed at NAS includes CFD codes such as Cart3D[8] and OVERFLOW, the code used to compute the Space Shuttle flowfield pictured above.

Supercomputing History

Since its facility opened in 1987, the NASA Advanced Supercomputing Division has housed and operated some of the most powerful supercomputers in the world. These have included testbed systems built to evaluate new architectures, hardware, and networking setups that might later be deployed at larger scale.[4][6] Peak performance is shown in floating-point operations per second (FLOPS).
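
As a quick illustration of how such peak figures arise (this derivation is not from the article), theoretical peak is simply clock rate times floating-point results per cycle times CPU count. Assuming the Cray X-MP's commonly quoted 9.5 ns clock and two results per cycle, the first entry in the table below can be reproduced:

```python
# Back-of-the-envelope check of a theoretical peak-FLOPS figure.
# Assumptions (not stated in the article): the Cray X-MP ran a 9.5 ns
# clock and retired two floating-point results per cycle per CPU.

def peak_flops(clock_hz: float, flops_per_cycle: int, n_cpus: int) -> float:
    """Theoretical peak = clock rate * results per cycle * CPU count."""
    return clock_hz * flops_per_cycle * n_cpus

clock_hz = 1 / 9.5e-9                     # 9.5 ns cycle time, about 105.26 MHz
print(peak_flops(clock_hz, 2, 1) / 1e6)   # ~210.53 megaflops, matching the table
```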

(In the table below, "(upgrade)" rows record later expansions of the system named above them; "–" marks a value not given.)

Computer Name | Architecture | Peak Performance | Number of CPUs | Installation Date
– | Cray X-MP-12 | 210.53 megaflops | 1 | 1984
Navier | Cray-2 | 1.95 gigaflops | 4 | 1985
Chuck | Convex 3820 | 1.9 gigaflops | 8 | 1987
Pierre | Thinking Machines CM2 | 14.34 gigaflops | 16,000 | 1987
(upgrade) | – | 43 gigaflops | 48,000 | 1991
Stokes | Cray-2 | 1.95 gigaflops | 4 | 1988
Piper | CDC/ETA-10Q | 840 megaflops | 4 | 1988
Reynolds | Cray Y-MP | 2.54 gigaflops | 8 | 1988
(upgrade) | – | 2.67 gigaflops | 8 | 1990
Lagrange | Intel iPSC/860 | 7.68 gigaflops | 128 | 1990
Gamma | Intel iPSC/860 | 7.68 gigaflops | 128 | 1990
von Karman | Convex 3240 | 200 megaflops | 4 | 1991
Boltzman | Thinking Machines CM5 | 16.38 gigaflops | 128 | 1993
Sigma | Intel Paragon | 15.60 gigaflops | 208 | 1993
von Neumann | Cray C90 | 15.36 gigaflops | 16 | 1993
Eagle | Cray C90 | 7.68 gigaflops | 8 | 1993
Grace | Intel Paragon | 15.6 gigaflops | 209 | 1993
Babbage | IBM SP-2 | 34.05 gigaflops | 128 | 1994
(upgrade) | – | 42.56 gigaflops | 160 | 1994
da Vinci | SGI Power Challenge | – | 16 | 1994
(upgrade) | SGI Power Challenge XL | 11.52 gigaflops | 32 | 1995
Newton | Cray J90 | 7.2 gigaflops | 36 | 1996
Piglet | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997
Turing | SGI Origin 2000/195 MHz | 9.36 gigaflops | 24 | 1997
(upgrade) | – | 25 gigaflops | 64 | 1997
Fermi | SGI Origin 2000/195 MHz | 3.12 gigaflops | 8 | 1997
Hopper | SGI Origin 2000/250 MHz | 32 gigaflops | 64 | 1997
Evelyn | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 1997
Steger | SGI Origin 2000/250 MHz | 64 gigaflops | 128 | 1997
(upgrade) | – | 128 gigaflops | 256 | 1998
Lomax | SGI Origin 2800/300 MHz | 307.2 gigaflops | 512 | 1999
(upgrade) | – | 409.6 gigaflops | 512 | 2000
Lou | SGI Origin 2000/250 MHz | 4.68 gigaflops | 12 | 1999
Ariel | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000
Sebastian | SGI Origin 2000/250 MHz | 4 gigaflops | 8 | 2000
SN1-512 | SGI Origin 3000/400 MHz | 409.6 gigaflops | 512 | 2001
Bright | Cray SVe1/500 MHz | 64 gigaflops | 32 | 2001
Chapman | SGI Origin 3800/400 MHz | 819.2 gigaflops | 1,024 | 2001
(upgrade) | – | 1.23 teraflops | 1,024 | 2002
Lomax II | SGI Origin 3800/400 MHz | 409.6 gigaflops | 512 | 2002
Kalpana[9] | SGI Altix 3000[10] | 2.66 teraflops | 512 | 2003
– | Cray X1[11] | 204.8 gigaflops | – | 2004
Columbia | SGI Altix 3000[12] | 63 teraflops | 10,240 | 2004
(upgrade) | SGI Altix 4700 | – | 10,296 | 2006
(upgrade) | – | 85.8 teraflops[13] | 13,824 | 2007
Schirra | IBM POWER5+[14] | 4.8 teraflops | 640 | 2007
RT Jones | SGI ICE 8200, Intel Xeon "Harpertown" Processors | 43.5 teraflops | 4,096 | 2007
Pleiades | SGI ICE 8200, Intel Xeon "Harpertown" Processors[15] | 487 teraflops | 51,200 | 2008
(upgrade) | – | 544 teraflops[16] | 56,320 | 2009
(upgrade) | SGI ICE 8200, Intel Xeon "Harpertown"/"Nehalem" Processors[17] | 773 teraflops | 81,920 | 2010
(upgrade) | SGI ICE 8200/8400, Intel Xeon "Harpertown"/"Nehalem"/"Westmere" Processors[18] | 1.09 petaflops | 111,104 | 2011
(upgrade) | SGI ICE 8200/8400/X, Intel Xeon "Harpertown"/"Nehalem"/"Westmere"/"Sandy Bridge" Processors[19] | 1.24 petaflops | 125,980 | 2012
(upgrade) | SGI ICE 8200/8400/X, Intel Xeon "Nehalem"/"Westmere"/"Sandy Bridge"/"Ivy Bridge" Processors[20] | 2.87 petaflops | 162,496 | 2013
(upgrade) | – | 3.59 petaflops | 184,800 | 2014
(upgrade) | SGI ICE 8400/X, Intel Xeon "Westmere"/"Sandy Bridge"/"Ivy Bridge"/"Haswell" Processors[21] | 4.49 petaflops | 198,432 | 2014
(upgrade) | – | 5.35 petaflops[22] | 210,336 | 2015
Endeavour | SGI UV 2000, Intel Xeon "Sandy Bridge" Processors[23] | 32 teraflops | 1,536 | 2013
Merope | SGI ICE 8200, Intel Xeon "Harpertown" Processors[20] | 61 teraflops | 5,120 | 2013
(upgrade) | SGI ICE 8400, Intel Xeon "Nehalem"/"Westmere" Processors[21] | 141 teraflops | 1,152 | 2014

Storage Resources

Disk Storage

In 1987, NAS partnered with the Defense Advanced Research Projects Agency (DARPA) and the University of California, Berkeley on the Redundant Array of Inexpensive Disks (RAID) project, which sought to create a storage technology that combined multiple disk drive components into one logical unit. Completed in 1992, the RAID project led to the distributed data storage technology used today.[4]
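
The idea the RAID project pioneered can be sketched in a few lines: data is striped across several drives, and an XOR parity block lets the contents of any single failed drive be reconstructed from the survivors. A minimal illustrative sketch (not NAS or Berkeley code):

```python
# Illustrative sketch of RAID-style striping with XOR parity.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks (the RAID parity operation)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]  # stripes held on three drives
parity = xor_blocks(data_blocks)           # parity block on a fourth drive

# Lose "drive 1" and rebuild its stripe from the survivors plus parity.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == data_blocks[1]
```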

The NAS facility currently houses disk mass storage on an SGI parallel DMF cluster with high-availability software consisting of four 32-processor front-end systems, which are connected to the supercomputers and the archival tape storage system. The system has 64 GB of memory per front-end[24] and 25 petabytes (PB) of RAID disk capacity.[25] Data stored on disk is regularly migrated to the tape archival storage systems at the facility to free up space for other user projects being run on the supercomputers.

Archive and Storage Systems

In 1987, NAS developed the first UNIX-based hierarchical mass storage system, named NAStore. It contained two StorageTek 4400 cartridge tape robots, each with a storage capacity of approximately 1.1 terabytes, cutting tape retrieval time from 4 minutes to 15 seconds.[4]

With the installation of the Pleiades supercomputer in 2008, the StorageTek systems that NAS had been using for 20 years were unable to meet the needs of the greater number of users and increasing file sizes of each project's datasets.[26] In 2009, NAS brought in Spectra Logic T950 robotic tape systems which increased the maximum capacity at the facility to 16 petabytes of space available for users to archive their data from the supercomputers.[27] As of March 2014, the NAS facility increased the total archival storage capacity of the Spectra Logic tape libraries to 126 petabytes.[24] SGI's Data Migration Facility (DMF) and OpenVault manage disk-to-tape data migration and tape-to-disk de-migration for the NAS facility.
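
The disk-to-tape migration described above follows the general hierarchical storage management pattern: when disk usage crosses a threshold, the least-recently-used files are copied to tape and their disk space is released. A hedged sketch of such a policy (the FileMeta type, threshold, and selection logic are illustrative assumptions, not DMF's or OpenVault's actual behavior):

```python
# Illustrative hierarchical-storage migration policy (hypothetical,
# not SGI DMF or OpenVault): migrate least-recently-used files to tape
# until disk usage falls back below a target fraction.
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    size_bytes: int
    last_access: float  # seconds since the epoch

def select_for_migration(files, disk_used, disk_capacity, target=0.80):
    """Pick least-recently-used files for tape until usage <= target."""
    chosen = []
    for f in sorted(files, key=lambda f: f.last_access):
        if disk_used <= target * disk_capacity:
            break
        chosen.append(f)              # copy to tape, then release disk blocks
        disk_used -= f.size_bytes
    return chosen
```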

As of March 2014, more than 30 petabytes of unique data were stored in the NAS archival storage system.[24]

Data Visualization Systems

In 1984, NAS purchased 25 SGI IRIS 1000 graphics terminals, beginning its long partnership with the Silicon Valley-based company, which significantly shaped the post-processing and visualization of CFD results produced on the facility's supercomputers.[4] Visualization became a key step in analyzing simulation data, allowing engineers and scientists to view their results spatially, in ways that afforded a deeper understanding of the CFD forces at work in their designs.

Hyperwall displaying multiple images
Hyperwall displaying one single image
The hyperwall visualization system at the NAS facility allows researchers to view multiple simulations run on the supercomputers, or a single large image or animation.

The hyperwall

In 2002, NAS visualization experts developed a visualization system called the "hyperwall" which included 49 linked LCD panels that allowed scientists to view complex datasets on a large, dynamic seven-by-seven screen array. Each screen had its own processing power, allowing each one to display, process, and share datasets so that a single image could be displayed across all screens or configured so that data could be displayed in "cells" like a giant visual spreadsheet.[28]
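
Driving a tiled wall of this kind reduces to assigning each screen's node the pixel sub-rectangle it owns within one large virtual canvas. A minimal sketch of that tiling arithmetic (the panel resolution is an assumed example; the article does not specify one):

```python
# Compute each panel's sub-rectangle within one large virtual image
# for a 7x7 tiled display. The panel resolution is an assumption.
PANEL_W, PANEL_H = 1280, 1024   # assumed panel resolution (not in the article)
GRID_COLS, GRID_ROWS = 7, 7     # the original hyperwall's 7-by-7 array

def tile_rect(row, col):
    """(x, y, w, h) of the canvas region panel (row, col) should render."""
    return (col * PANEL_W, row * PANEL_H, PANEL_W, PANEL_H)

canvas = (GRID_COLS * PANEL_W, GRID_ROWS * PANEL_H)  # (8960, 7168) virtual pixels
print(tile_rect(3, 3))  # center panel: (3840, 3072, 1280, 1024)
```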

The second-generation "hyperwall-2", developed in 2008 by NAS in partnership with Colfax International, is made up of 128 LCD screens arranged in an 8-by-16 grid 23 feet wide by 10 feet tall. It is capable of rendering a quarter-billion pixels, making it the highest-resolution scientific visualization system in the world.[29] It contains 128 nodes, each with two quad-core AMD Opteron (Barcelona) processors and an NVIDIA GeForce GTX 480 graphics processing unit (GPU), for a dedicated peak processing power of 128 teraflops across the entire system, 100 times more powerful than the original hyperwall.[30] The hyperwall-2 is directly connected to the Pleiades supercomputer's filesystem over an InfiniBand network, which allows the system to read data directly from the filesystem without copying files onto the hyperwall-2's memory.

In 2014, the hyperwall was upgraded with new hardware: 128 Intel Xeon "Ivy Bridge" processors and NVIDIA GeForce GTX 780 Ti GPUs. The upgrade increased the system's peak processing power from 9 teraflops to 57 teraflops, and the system now has nearly 400 gigabytes of graphics memory.[31]
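
The quoted memory figure is consistent with the standard 3 GB of GDDR5 on each GeForce GTX 780 Ti (a specification not stated in the article); a one-line check:

```python
# Sanity check on "nearly 400 gigabytes of graphics memory",
# assuming the standard 3 GB of GDDR5 per GeForce GTX 780 Ti.
nodes = 128
gb_per_gpu = 3
print(nodes * gb_per_gpu)   # 384 GB, i.e. "nearly 400 gigabytes"
```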

Concurrent Visualization

An important feature of the hyperwall technology developed at NAS is that it allows for "concurrent visualization" of data, which enables scientists and engineers to analyze and interpret data while the calculations are running on the supercomputers. Not only does this show the current state of the calculation for runtime monitoring, steering, and termination, but it also "allows higher temporal resolution visualization compared to post-processing because I/O and storage space requirements are largely obviated... [and] may show features in a simulation that would otherwise not be visible."[32]

In 2005, the NAS visualization team developed a configurable concurrent pipeline for use with a massively parallel forecast model run on the Columbia supercomputer, to help predict the Atlantic hurricane season for the National Hurricane Center. Because each forecast had to be submitted by a deadline, it was important that the visualization process not significantly impede the simulation or cause it to fail.
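
The design constraint described here (the renderer must keep up without ever stalling the solver) is the classic producer/consumer pattern. A hedged sketch of that pattern (the queue size, drop-oldest policy, and all names are illustrative, not the actual NAS pipeline):

```python
# Illustrative concurrent-visualization pattern (not the NAS pipeline):
# the simulation thread never blocks on the renderer; if the renderer
# falls behind, the oldest queued timestep is dropped instead.
import queue
import threading

frames = queue.Queue(maxsize=4)   # small buffer between solver and renderer

def simulation_step(t):
    return {"t": t, "field": [t * 0.1] * 8}      # stand-in for solver output

def simulate(n_steps):
    for t in range(n_steps):
        snapshot = simulation_step(t)
        try:
            frames.put_nowait(snapshot)          # hand off without waiting
        except queue.Full:                       # renderer fell behind:
            try:
                frames.get_nowait()              # drop the oldest frame
            except queue.Empty:
                pass
            frames.put_nowait(snapshot)
    frames.put(None)                             # sentinel: run finished

def render():
    while (frame := frames.get()) is not None:
        print(f"rendering timestep {frame['t']}")  # stand-in for drawing

threading.Thread(target=simulate, args=(100,)).start()
render()
```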

References

  1. "Merope Supercomputer homepage". NAS.
  2. "NASA Advanced Supercomputing Division: Integrated High-End Computing Environment" (PDF). NAS. 2013.
  3. "NAS Homepage - About the NAS Division". NAS. External link in |publisher= (help)
  4. 1 2 3 4 5 6 7 "NASA Advanced Supercomputing Division 25th Anniversary Brochure (PDF)" (PDF). NAS 2008. External link in |publisher= (help)
  5. "NAS homepage: Division History". NAS. External link in |publisher= (help)
  6. 1 2 "NAS High-Performance Computer History". Gridpoints: 1A–12A. Spring 2002.
  7. "NAS Software and Datasets". NAS. External link in |publisher= (help)
  8. "NASA Cart3D Homepage".
  9. "NASA to Name Supercomputer After Columbia Astronaut". NAS May 2005. External link in |publisher= (help)
  10. "NASA Ames Installs World's First Alitx 512-Processor Supercomputer". NAS November 2003. External link in |publisher= (help)
  11. "New Cray X1 System Arrives at NAS". NAS April 2004. External link in |publisher= (help)
  12. "NASA Unveils Its Newest, Most Powerful Supercomputer". NASA October 2004. External link in |publisher= (help)
  13. "Columbia Supercomputer Legacy homepage". NASA. External link in |publisher= (help)
  14. "NASA Selects IBM for Next-Generation Supercomputing Applications". NASA June 2007. External link in |publisher= (help)
  15. "NASA Supercomputer Ranks Among World’s Fastest – November 2008". NASA November 2008. External link in |publisher= (help)
  16. "’Live’ Integration of Pleiades Rack Saves 2 Million Hours". NAS February 2010. External link in |publisher= (help)
  17. "NASA Supercomputer Doubles Capacity, Increases Efficiency". NASA June 2010. External link in |publisher= (help)
  18. "NASA's Pleiades Supercomputer Ranks Among World's Fastest". NASA June 2011. External link in |publisher= (help)
  19. "Pleiades Supercomputer Gets a Little More Oomph". NASA June 2012. External link in |publisher= (help)
  20. 1 2 "NASA's Pleiades Supercomputer Upgraded, Harpertown Nodes Repurposed". NAS August 2013. External link in |publisher= (help)
  21. 1 2 "NASA's Pleiades Supercomputer Upgraded, Gets One Petaflops Boost". NAS October 2014. External link in |publisher= (help)
  22. "Pleiades Supercomputer Performance Leaps to 5.35 Petaflops with Latest Expansion". NAS January 2015. External link in |publisher= (help)
  23. "Endeavour Supercomputer Resource homepage". NAS. External link in |publisher= (help)
  24. 1 2 3 "HECC Archival Storage System Resource homepage". NAS. External link in |publisher= (help)
  25. "NASA Advanced Supercomputing Division Brochure" (PDF). NAS. 2013.
  26. "NAS Silo, Tape Drive, and Storage Upgrades - SC09" (PDF). NAS November 2009. External link in |publisher= (help)
  27. "New NAS Data Archive System Installation Completed". NAS. 2009.
  28. "Mars Flyer Debuts on Hyperwall". NAS September 2003. External link in |publisher= (help)
  29. "NASA Develops World's Highest Resolution Visualization System". NAS June 2008. External link in |publisher= (help)
  30. "NAS Visualization Systems Overview". NAS. External link in |publisher= (help)
  31. "NAS hyperwall Visualization System Upgraded with Ivy Bridge Nodes". NAS October 2014. External link in |publisher= (help)
  32. Ellsworth, David; Bryan Green; Chris Henze; Patrick Moran; Timothy Sandstrom (September–October 2006). "Concurrent Visualization in a Production Supercomputing Environment" (PDF). IEEE Transactions on Visualization and Computer Graphics 12 (5).

