IBM General Parallel File System (GPFS)

Developer(s)  IBM
Full name  IBM General Parallel File System
Introduced  1998 (AIX)
Stable release  3.4 / July 2010
Operating system  AIX / Linux / Windows Server
Type  File system
License  Proprietary
Website  www.ibm.com

Limits
Max file size  2^99 bytes
Max number of files  2 billion (2^31)
Max filename length  256 (UTF-8)
Max volume size  2^99 bytes (4 PiB tested)

Features
File system permissions  POSIX
Supported operating systems  AIX, Linux, Windows Server

The General Parallel File System (GPFS) is a high-performance shared-disk clustered file system developed by IBM. It is used by some of the supercomputers on the Top 500 List.[1] For example, GPFS is the filesystem of the ASC Purple Supercomputer[2] which is composed of more than 12,000 processors and has 2 petabytes of total disk storage spanning more than 11,000 disks.

In common with typical cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of a cluster. It can be used with AIX 5L clusters, Linux clusters, Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux and Windows nodes. In addition to providing filesystem storage capabilities, GPFS provides tools for management and administration of the GPFS cluster and allows for shared access to file systems from remote GPFS clusters.

GPFS has been available on IBM's AIX since 1998, on Linux since 2001 and on Microsoft Windows Server since 2008, and is offered as part of the IBM System Cluster 1350.

History

GPFS began as the Tiger Shark file system, a research project at IBM's Almaden Research Center, as early as 1993. Tiger Shark was initially designed to support high-throughput multimedia applications. This design turned out to be well suited to scientific computing.[3]

Another ancestor of GPFS is IBM's Vesta filesystem, developed as a research project at IBM's Thomas J. Watson Research Center between 1992 and 1995.[4] Vesta introduced the concept of file partitioning to accommodate the needs of parallel applications that run on high-performance multicomputers with parallel I/O subsystems. With partitioning, a file is not a sequence of bytes, but rather multiple disjoint sequences that may be accessed in parallel. The partitioning abstracts away the number and type of I/O nodes hosting the filesystem, and it allows a variety of logical partitioned views of files, regardless of the physical distribution of data within the I/O nodes. The disjoint sequences are arranged to correspond to individual processes of a parallel application, allowing for improved scalability.[5]

Vesta was commercialized as the PIOFS filesystem around 1994,[6] and was succeeded by GPFS around 1998.[7][8] The main difference between the older and newer filesystems was that GPFS replaced the specialized interface offered by Vesta/PIOFS with the standard Unix API: all the features to support high-performance parallel I/O were hidden from users and implemented under the hood.[3][8] Today, GPFS is used by many of the top 500 supercomputers listed on the Top 500 Supercomputing Sites website. Since its inception, GPFS has been successfully deployed in many commercial applications, including digital media, grid analytics and scalable file services.

Versions

Architecture

GPFS provides high performance by allowing data to be accessed from multiple computers at once. Most existing file systems are designed for a single-server environment, and adding more file servers does not improve performance. GPFS provides higher input/output performance by "striping" blocks of data from individual files over multiple disks, and reading and writing these blocks in parallel. Other features provided by GPFS include high availability, support for heterogeneous clusters, disaster recovery, security, DMAPI, HSM and ILM.
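
As a rough illustration of the striping idea only, the following minimal Python sketch maps file blocks onto disks; the block size, disk count and round-robin layout are arbitrary assumptions for the example, not GPFS defaults or IBM's actual allocation algorithm:

    BLOCK_SIZE = 256 * 1024   # example block size: 256 KiB (assumed, not a GPFS default)
    NUM_DISKS = 8             # example number of disks to stripe across

    def stripe_map(file_size):
        """Return (block_index, disk_index, file_offset) for each block of a file."""
        num_blocks = (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE
        return [(b, b % NUM_DISKS, b * BLOCK_SIZE) for b in range(num_blocks)]

    # A 3 MiB file becomes 12 blocks spread over all 8 disks, so a sequential
    # read or write can be serviced by several drives in parallel.
    for block, disk, offset in stripe_map(3 * 1024 * 1024):
        print(f"block {block:2d} -> disk {disk} (file offset {offset})")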

According to Schmuck and Haskin,[1] a file written to the filesystem is broken up into blocks of a configured size, less than 1 megabyte each. These blocks are distributed across multiple filesystem nodes, so that a single file is fully distributed across the disk array. This results in high read and write speeds for a single file, because the combined bandwidth of the many physical drives is high. On its own, this striping would make the filesystem vulnerable to disk failures: any one disk failing would be enough to lose data. To prevent data loss, the filesystem nodes have RAID controllers, so multiple copies of each block are written to the physical disks on the individual nodes. It is also possible to opt out of RAID-replicated blocks and instead store two copies of each block on different filesystem nodes.
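
A minimal sketch of the node-level replication option described above, assuming a toy deterministic placement rule and an arbitrary cluster size (this is purely illustrative, not GPFS's actual allocator):

    NUM_NODES = 6   # example cluster size (assumed)

    def replica_nodes(block_index, num_nodes=NUM_NODES):
        """Pick two distinct nodes to hold copies of a block (toy placement rule)."""
        first = block_index % num_nodes
        # The offset is always between 1 and num_nodes - 1, so the second
        # copy never lands on the same node as the first.
        offset = 1 + (block_index % (num_nodes - 1))
        return first, (first + offset) % num_nodes

    # If any single node fails, every block it held still has a copy elsewhere.
    for b in range(8):
        print(f"block {b} -> nodes {replica_nodes(b)}")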

Other features of the filesystem include distributed metadata, including the directory tree, so that no single "index server" is in charge of the filesystem; distributed locking, which allows full POSIX filesystem semantics, including locking for exclusive file access; and online maintenance, so that most routine chores, such as adding new disks or rebalancing data across disks, can be performed while the filesystem is in use.

It is instructive to compare this with Hadoop's HDFS filesystem, which is designed to store similar or greater quantities of data on commodity hardware, that is, in datacenters without RAID disks or a storage area network (SAN).

  1. HDFS also breaks files up into blocks and stores them on different filesystem nodes.
  2. HDFS does not expect reliable disks, so it stores copies of each block on different nodes. The failure of a node holding one copy of a block is a minor issue, handled by re-replicating the affected blocks from the remaining copies to bring the replication count back up to the desired number. In contrast, while GPFS supports recovery from a lost node, losing a node is a more serious event, one that may carry a higher risk of data being (temporarily) lost.
  3. GPFS makes the location of the data transparent: applications are not expected to know or care where the data lies. In contrast, Google GFS and Hadoop HDFS both expose that location, so that MapReduce programs can be run near the data. This eliminates the need for a SAN, though it does require programs to be written using the MapReduce programming paradigm.
  4. GPFS supports full POSIX filesystem semantics. Neither Google GFS nor Hadoop HDFS does.
  5. GPFS distributes its directory indices and other metadata across the filesystem. Hadoop, in contrast, keeps this on the Namenode, a single large server that must hold all index information in RAM. This machine becomes a single point of failure in a large cluster: when the Namenode is down, so is the entire cluster.
  6. GPFS breaks files up into small blocks. Hadoop HDFS prefers blocks of 64 MB or more, as this reduces the storage requirements of the Namenode. Small blocks or many small files fill up a filesystem's indices quickly and so limit the filesystem's size; a rough estimate of this effect is sketched after this list.
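
To make point 6 concrete, here is a back-of-the-envelope estimate; the 150-bytes-per-entry figure is only an assumed rule of thumb for per-block metadata cost, and the block sizes are example values:

    BYTES_PER_ENTRY = 150        # assumed rough per-block metadata cost (illustrative only)
    DATA = 1 * 1024**5           # 1 PiB of file data

    for block_size in (256 * 1024, 64 * 1024**2):   # 256 KiB vs 64 MiB blocks
        entries = DATA // block_size
        memory = entries * BYTES_PER_ENTRY
        print(f"{block_size // 1024:>6} KiB blocks -> {entries:,} block records, "
              f"~{memory / 1024**3:.1f} GiB of index memory")

With these assumptions, the same petabyte of data needs roughly 16 million index entries at 64 MiB blocks but over 4 billion at 256 KiB blocks, which is why a centralized in-memory index favours large blocks.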

Despite these differences, it is not possible to say that one filesystem is simply better than the other; they reflect different design decisions. GPFS is a general-purpose filesystem used with high-end hardware for scaling and reliability, whereas the MapReduce-centric filesystems are optimised for commodity hardware and for massively parallel programs written in the MapReduce style.

Information Lifecycle Management (ILM) tools

Storage pools allow for the grouping of disks within a file system. Tiers of storage can be created by grouping disks based on performance, locality or reliability characteristics. For example, one pool could consist of high-performance Fibre Channel disks and another of more economical SATA storage.

A fileset is a sub-tree of the file system namespace and provides a way to partition the namespace into smaller, more manageable units. Filesets provide an administrative boundary that can be used to set quotas and be specified in a policy to control initial data placement or data migration. Data in a single fileset can reside in one or more storage pools. Where the file data resides and how it is migrated is based on a set of rules in a user defined policy.

There are two types of user-defined policies in GPFS: file placement and file management. File placement policies direct file data to the appropriate storage pool as files are created. File placement rules are selected by attributes such as the file name, the user name or the fileset. File management policies allow the file's data to be moved or replicated, or files to be deleted. File management policies can be used to move data from one pool to another without changing the file's location in the directory structure. File management rules are selected by file attributes such as last access time, path name or size of the file.
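
As a conceptual sketch of how such rules operate: this is not GPFS's actual policy rule language, and the pool names, attributes and thresholds below are invented for illustration.

    import time

    def placement_pool(file_name, user, fileset):
        """Choose the initial storage pool for a new file from its attributes."""
        if user == "mediaops" or file_name.endswith((".mp4", ".mov")):
            return "fc_pool"          # media work goes to the fast tier
        if fileset == "scratch":
            return "sata_pool"        # scratch data goes to the cheap tier
        return "sata_pool"            # default pool

    def management_action(last_access, size_bytes, current_pool):
        """Decide what to do with an existing file during a policy scan."""
        age_days = (time.time() - last_access) / 86400
        if current_pool == "fc_pool" and age_days > 30:
            return "migrate to sata_pool"   # cold data moves to the cheaper tier
        if age_days > 365 and size_bytes == 0:
            return "delete"                 # stale empty files are removed
        return "keep"

    print(placement_pool("lecture.mp4", "alice", "projects"))            # fc_pool
    print(management_action(time.time() - 90 * 86400, 4096, "fc_pool"))  # migrate to sata_pool

Note that, as described above, a migration of this kind changes only which pool holds the data; the file's path in the directory structure is unchanged.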

The GPFS policy processing engine is scalable and can be run on many nodes at once. This allows management policies to be applied to a single file system with billions of files and complete in a few hours.

See also

References

  1. ^ Schmuck, Frank; Roger Haskin (January 2002). "GPFS: A Shared-Disk File System for Large Computing Clusters" (pdf). Proceedings of the FAST'02 Conference on File and Storage Technologies. Monterey, California, USA: USENIX. pp. 231–244. ISBN 1-880446-03-0. http://www.usenix.org/events/fast02/full_papers/schmuck/schmuck.pdf. Retrieved 2008-01-18. 
  2. ^ "Storage Systems - Projects - GPFS". IBM. http://www.almaden.ibm.com/StorageSystems/projects/gpfs/. Retrieved 2008-06-18. 
  3. ^ a b May, John M. (2000). Parallel I/O for High Performance Computing. Morgan Kaufmann. p. 92. ISBN 1558606645. http://books.google.com/?id=iLj516DOIKkC&pg=PA92&lpg=PA92&dq=shark+vesta+gpfs. Retrieved 2008-06-18. 
  4. ^ Corbett, Peter F.; Feitelson, Dror G.; Prost, J.-P.; Baylor, S. J. (1993). "Parallel access to files in the Vesta file system". Supercomputing. Portland, Oregon, United States: ACM/IEEE. pp. 472–481. doi:10.1145/169627.169786. 
  5. ^ Corbett, Peter F.; Feitelson, Dror G. (August 1996). "The Vesta parallel file system" (pdf). Transactions on Computer Systems (ACM) 14 (3): 225–264. doi:10.1145/233557.233558. http://www.cs.umd.edu/class/fall2002/cmsc818s/Readings/vesta-tocs96.pdf. Retrieved 2008-06-18. 
  6. ^ Corbett, P. F.; D. G. Feitelson, J.-P. Prost, G. S. Almasi, S. J. Baylor, A. S. Bolmarcich, Y. Hsu, J. Satran, M. Snir, R. Colao, B. D. Herr, J. Kavaky, T. R. Morgan, and A. Zlotek (1995). "Parallel file systems for the IBM SP computers" (pdf). IBM Systems Journal 34 (2): 222–248. doi:10.1147/sj.342.0222. http://www.research.ibm.com/journal/sj/342/corbett.pdf. Retrieved 2008-06-18. 
  7. ^ Barris, Marcelo; Terry Jones, Scott Kinnane, Mathis Landzettel, Safran Al-Safran, Jerry Stevens, Christopher Stone, Chris Thomas, Ulf Troppens (September 1999) (pdf). Sizing and Tuning GPFS. IBM Redbooks, International Technical Support Organization. See page 1 ("GPFS is the successor to the PIOFS file system"). http://www.redbooks.ibm.com/redbooks/pdfs/sg245610.pdf. 
  8. ^ a b Snir, Marc (June 2001). "Scalable parallel systems: Contributions 1990-2000" (pdf). HPC seminar, Computer Architecture Department, Universitat Politècnica de Catalunya. http://research.ac.upc.edu/HPCseminar/SEM0001/snir.pdf. Retrieved 2008-06-18. 

External links