Grid computing

From Wikipedia, the free encyclopedia

Grid computing is an emerging computing model in which many networked computers act together as a virtual computer architecture, distributing process execution across a parallel infrastructure to achieve high-throughput computing. Grids use the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computational problems. They can perform computations on large data sets by breaking those sets into many smaller pieces, or run many more computations at once than would be possible on a single computer, by modeling a parallel division of labor between processes. Today, resource allocation in a Grid is done in accordance with service level agreements (SLAs).
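The scatter/gather pattern described above can be sketched in a few lines of Python. This is a hypothetical, single-machine illustration (using local worker processes in place of networked Grid nodes, and with made-up function names), not part of any Grid middleware: a large input is split into independent chunks, each chunk is processed by a separate worker, and the partial results are combined.

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for a compute-heavy task; here, just sum the numbers.
    return sum(chunk)

def grid_style_sum(data, n_workers=4):
    """Split `data` into independent chunks, farm them out, combine results."""
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(n_workers) as pool:
        partials = pool.map(process_chunk, chunks)  # scatter: one chunk per worker
    return sum(partials)                            # gather: combine partial results

if __name__ == "__main__":
    print(grid_style_sum(list(range(1_000_000))))   # -> 499999500000
```

In a real Grid the "workers" would be machines in different administrative domains, and the middleware, not the application, would handle scheduling, data movement, and failure recovery.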


Origins

Like the Internet, Grid computing evolved from the computational needs of "big science". The Internet was developed to meet the need for a common communication medium between large, federally funded computing centers. These communication links led to resource and information sharing between the centers and, eventually, to access for additional users. Ad hoc resource-sharing procedures among these original groups pointed the way toward standardization of the protocols needed to communicate between any pair of administrative domains. Current Grid technology can be viewed as an extension of this framework to a more generic resource-sharing context.

Fully functional proto-Grid systems date back to the early 1970s with the Distributed Computing System (DCS) project[1] at the University of California, Irvine, whose main architect was David Farber. The system was well known enough to merit coverage and a cartoon depiction in Business Week on 14 July 1973, captioned "The ring acts as a single, highly flexible machine in which individual units can bid for jobs". In modern terminology the "ring" is a network and the "units" are computers, very much the way computational capability is used on the Grid. The project's final report was published in 1977.[2] The technology was largely abandoned in the 1980s because the administrative and security problems of letting machines outside one's control perform one's computation were (and by some still are) seen as insurmountable.

The ideas of the Grid were brought together by Ian Foster, Carl Kesselman and Steve Tuecke, the so-called "fathers of the Grid". They led the effort to create the Globus Toolkit, which incorporates not just CPU management (for example, cluster management and cycle scavenging) but also storage management, security provisioning, data movement and monitoring, along with a toolkit for developing additional services on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services and information aggregation. In short, the term Grid has far broader implications than is generally appreciated. While the Globus Toolkit remains the de facto standard for building Grid solutions, a number of other tools answer some subset of the services needed to create an enterprise Grid.

The remainder of this article discusses the details behind these notions.

Features

Grid computing offers a model for solving massive computational problems by making use of the unused resources (CPU cycles and/or disk storage) of large numbers of disparate computers, often desktop computers, treated as a virtual cluster embedded in a distributed telecommunications infrastructure. Grid computing's focus on the ability to support computation across administrative domains sets it apart from traditional computer clusters or traditional distributed computing.

Grids offer a way to solve Grand Challenge problems like protein folding, financial modeling, earthquake simulation, and climate/weather modeling. Grids offer a way of using information technology resources optimally inside an organization. They also provide a means for offering information technology as a utility bureau for commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water.

Grid computing has the design goal of solving problems too big for any single supercomputer, whilst retaining the flexibility to work on multiple smaller problems. Thus Grid computing provides a multi-user environment. Its secondary aims are better exploitation of available computing power and catering for the intermittent demands of large computational exercises.

This approach implies the use of secure authorization techniques to allow remote users to control computing resources.

Grid computing involves sharing heterogeneous resources (based on different platforms, hardware/software architectures, and computer languages), located in different places belonging to different administrative domains over a network using open standards. In short, it involves virtualizing computing resources.

Grid computing is often confused with cluster computing. The key difference is that a cluster is a single set of nodes sitting in one location, while a Grid is composed of many clusters and other kinds of resources (e.g. networks, storage facilities).

Functionally, one can classify Grids into several types:

  • Computational Grids (including CPU-scavenging Grids), which focus primarily on computationally intensive operations.
  • Data Grids, for the controlled sharing and management of large amounts of distributed data.
  • Equipment Grids, built around a primary piece of equipment, e.g. a telescope, where the surrounding Grid is used to control the equipment remotely and to analyse the data it produces.

Grid computing is presently being applied successfully by the National Science Foundation's National Technology Grid, NASA's Information Power Grid, Pratt & Whitney, Bristol-Myers Squibb and American Express.[citation needed]

Definitions

The term Grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid.

Today there are many definitions of Grid computing:

  • Plaszczak/Wellner define Grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
  • IBM defines Grid computing as "the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across 'multiple' administrative domains based on the resources' availability, capacity, performance, cost and users' quality-of-service requirements".[4]
  • An earlier articulation of computing as a utility came in 1965 from MIT's Fernando Corbató. Corbató and the other designers of the Multics operating system envisioned a computer facility operating "like a power company or water company". http://www.multicians.org/fjcc3.html
  • Buyya defines Grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".[5]
  • CERN, one of the largest users of Grid technology, talks of The Grid: "a service for sharing computer power and data storage capacity over the Internet".[6]
  • Pragmatically, Grid computing is attractive to geographically distributed non-profit collaborative research efforts like the NCSA Bioinformatics Grids such as BIRN: external Grids.
  • Grid computing is also attractive to large commercial enterprises with complex computation problems that aim to fully exploit their internal computing power: internal Grids.
  • A recent survey (done by Heinz Stockinger in spring 2006; to be published in the Journal of Supercomputing in early 2007) presents a snapshot on the view in 2006.

Grids can be categorized with a three-stage model of departmental Grids, enterprise Grids and global Grids. These stages correspond, first, to a firm utilising resources within a single group, i.e. an engineering department connecting desktop machines, clusters and equipment. This progresses to enterprise Grids, where the computing resources of non-technical staff can also be used for cycle-stealing and storage. A global Grid is a connection of enterprise and departmental Grids that can be used in a commercial or collaborative manner.

Grid computing is a subset of distributed computing.

Conceptual framework

Grid computing reflects a conceptual framework rather than a physical resource. The Grid approach is utilized to provision a computational task with administratively distant resources. The focus of Grid technology is on the issues and requirements of flexible computational provisioning beyond the local (home) administrative domain.

Virtual organization

A Grid environment is created to address resource needs. The use of those resources (e.g. CPU cycles, disk storage, data, software programs, peripherals) is usually characterized by their availability outside the context of the local administrative domain. This 'external provisioning' approach entails creating a new administrative domain, referred to as a virtual organization (VO), with a distinct and separate set of administrative policies (the home administration's policies plus the external resources' administrative policies equal the VO's, i.e. the Grid's, administrative policies). The context of a Grid 'job execution' is thus distinguished by the requirements that arise when operating outside the home administrative context, and Grid technology (i.e. middleware) is employed to help an application formalize and comply with that context.

Virtual organizations accessing different and overlapping sets of resources

Resources

One characteristic that currently distinguishes Grid computing from distributed computing is the abstraction of a 'distributed resource' into a Grid resource. One result of this abstraction is that it makes resource substitution easier to accomplish. Some of the cost of this flexibility appears in the middleware layer and in the latency associated with access to a Grid (or any distributed) resource. This overhead, especially the access latency, must be weighed against the impact on computational performance whenever a Grid resource is employed.
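The trade-off just described can be made concrete: offloading work to a remote Grid resource pays off only when the remote compute-time saving exceeds the added middleware and data-transfer latency. The following sketch is a hypothetical illustration (the function and parameter names are invented for this example; real schedulers weigh many more factors):

```python
def offload_worthwhile(t_local, t_remote_compute, t_overhead):
    """Return True if running on a remote Grid resource beats running locally.

    t_local          -- estimated local execution time (seconds)
    t_remote_compute -- estimated execution time on the remote resource
    t_overhead       -- middleware plus data-transfer latency for the remote run
    """
    return t_remote_compute + t_overhead < t_local

# A 60 s local job that a faster remote node finishes in 20 s is worth
# offloading only while the round-trip overhead stays under 40 s.
assert offload_worthwhile(60, 20, 30)       # 50 s remote total beats 60 s local
assert not offload_worthwhile(60, 20, 45)   # 65 s remote total loses to local
```

The same inequality explains why Grids suit long-running, coarse-grained tasks: as `t_local` grows, a fixed overhead matters proportionally less.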

Web-based resources, or Web-based resource access, are an appealing approach to Grid resource provisioning. A recent GGF (Global Grid Forum) evolution of Grid middleware re-factored the architecture of the Grid resource concept around the W3C WSDL (Web Service Description Language), producing the concept of a WS-Resource. The stateless nature of the Web, while it enhances the ability to scale, can be a concern for applications that migrate from a stateful protocol for accessing resources to a Web-based stateless protocol. The GGF WS-Resource work includes discussions of accommodating the statelessness of Web resource access.

State of the art, 2005

The conceptual framework and ancillary infrastructure are evolving at a fast pace, with international participation. The business sector is actively commercializing the Grid framework. The "big science" sector is actively addressing the development environment and resource (i.e. performance) monitoring, and Grid-enabled versions of HPC (High Performance Computing) tools are also appearing. Activity in the domains of "little science" appears scant at this time. The treatment in the GGF documentation series reflects the HPC roots of the Grid framework; this bias should not be read as a restriction on applying the framework to other research domains or computational contexts.

Substantial experience is being built up through the operation of various Grids, the most notable being the EGEE infrastructure supporting LCG, the Large Hadron Collider Computing Grid [1]. LCG is driven by CERN's need to handle a huge amount of data, produced at a rate of almost a gigabyte per second (10 petabytes per year), a history not unlike that of the production NorduGrid. A list of active sites participating in LCG can be found online [2], as can real-time monitoring of the EGEE infrastructure [3]. The relevant software and documentation are also publicly accessible [4].

Grid-enabling organizations and offerings

Global Grid Forum (now Open Grid Forum or OGF)

The Global Grid Forum (GGF) has the purpose of defining specifications for Grid computing. GGF is a collaboration between industry and academia with significant support from both.

Globus Alliance

The Globus Alliance implements some of the standards developed at the GGF through the Globus Toolkit (Grid middleware). As a middleware component, it provides a standard platform for services to build upon, but Grid computing also needs other components, and many other tools operate to support a successful Grid environment.

Globus has implementations of the GGF-defined protocols to provide:

  1. Resource management: Grid Resource Allocation & Management Protocol (GRAM)
  2. Information Services: Monitoring and Discovery Service (MDS)
  3. Security Services: Grid Security Infrastructure (GSI)
  4. Data Movement and Management: Global Access to Secondary Storage (GASS) and GridFTP
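In GRAM, a job request is expressed in the Globus Resource Specification Language (RSL), a conjunction of `(attribute=value)` relations such as `&(executable=/bin/hostname)(count=1)`. The helper below is a hypothetical sketch of that syntax for illustration only; it is not part of the Globus Toolkit, and real RSL supports many more attributes (directory, stdout, queue, and so on):

```python
def build_rsl(executable, count=1, arguments=None):
    """Compose a minimal GRAM RSL string: '&(attr=value)(attr=value)...'.

    Illustrative only: covers just the executable, count and arguments
    attributes of the RSL conjunction syntax.
    """
    parts = [f"(executable={executable})", f"(count={count})"]
    if arguments:
        parts.append("(arguments=" + " ".join(arguments) + ")")
    return "&" + "".join(parts)

print(build_rsl("/bin/echo", count=4, arguments=["hello", "grid"]))
# -> &(executable=/bin/echo)(count=4)(arguments=hello grid)
```

A GRAM gatekeeper would parse such a request, authenticate the submitter via GSI, and hand the job to a local scheduler on the target resource.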

A number of tools function alongside Globus to make Grid computing a more robust platform, useful to high-performance computing communities.

XML-based web services offer a way to access the diverse services and applications in a distributed environment. As of 2003, the worlds of Grid computing and of web services have started to converge, offering the Grid as a web service (Grid Service). The Open Grid Services Architecture (OGSA) defines this environment, which will offer functionality adhering to the semantics of the Grid Service. The vision of OGSA is to describe and build a well-defined suite of standard interfaces and behaviours that serve as a common framework for all Grid-enabled systems and applications.

Commercial offerings

Computing vendors offer Grid solutions based either on the Globus Toolkit or on a proprietary architecture. Some confusion remains, in that vendors may badge their computing-on-demand or cluster offerings as Grid computing.

See also


References

Notes

  1. ^ David J. Farber; K. Larson (September 1970). "The Architecture of a Distributed Computer System - An Informal Description". Technical Report 11, University of California, Irvine.
  2. ^ Paul V. Mockapetris; David J. Farber (1977). "The Distributed Computer System (DCS): Its Final Structure". Technical Report, University of California, Irvine.
  3. ^ "What is the Grid? A Three Point Checklist" (PDF).
  4. ^ "IBM Solutions Grid for Business Partners: Helping IBM Business Partners to Grid-enable applications for the next phase of e-business on demand".
  5. ^ "A Gentle Introduction to Grid Computing and Technologies" (PDF). Retrieved 2005-05-06.
  6. ^ "The Grid Café - What is Grid?". CERN. Retrieved 2005-02-04.

Bibliography

External links



Topics in Parallel Computing
General: High-performance computing
Theory: Speedup · Amdahl's law · Flynn's taxonomy · Cost efficiency · Gustafson's law
Elements: Process · Thread · Fiber · Parallel Random Access Machine
Coordination: Multiprocessing · Multitasking · Memory coherency · Cache coherency · Barrier · Synchronization · Distributed computing · Grid computing
Programming: Programming model · Implicit parallelism · Explicit parallelism
Hardware: Computer cluster · Beowulf · Symmetric multiprocessing · Non-Uniform Memory Access · Cache-only memory architecture · Asymmetric multiprocessing · Simultaneous multithreading · Shared memory · Distributed memory · Massively parallel processing · Superscalar processing · Vector processing · Supercomputer
Software: Distributed shared memory · Application checkpointing
Problems: Embarrassingly parallel · Grand Challenge