Grid computing

From Wikipedia, the free encyclopedia

Grid computing is an emerging computing model that provides the ability to perform high-throughput computing by using many networked computers to form a virtual computer architecture able to distribute process execution across a parallel infrastructure. Grids use the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems. Grids make it possible to perform computations on large data sets by breaking them down into many smaller ones, or to perform many more computations at once than would be possible on a single computer, by modelling a parallel division of labour between processes. Today resource allocation in a grid is done in accordance with service level agreements (SLAs).
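
The following minimal Python sketch (not drawn from any particular Grid toolkit) illustrates the divide-and-aggregate pattern described above; a local process pool merely stands in for the many networked computers of a real Grid, which would be reached through middleware rather than shared memory.

    # A minimal, single-machine sketch of "breaking a large problem into many
    # smaller ones". In a real Grid these work units would be dispatched to
    # machines in different administrative domains via middleware; here a
    # standard-library process pool stands in for those remote workers.
    from multiprocessing import Pool

    def process_chunk(chunk):
        # Placeholder work unit: in practice a simulation step, a folding
        # trajectory, or a slice of a large data set.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        with Pool() as pool:                  # stand-in for many Grid nodes
            partial_results = pool.map(process_chunk, chunks)
        print(sum(partial_results))           # aggregate the partial results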

Origins

Like the Internet, the Grid computing concept evolved from the computational needs of "big science". The Internet was developed to meet the need for a common communication medium between large, federally funded computing centers. These communication links led to resource and information sharing between the centers, and eventually to providing access to them for additional users. Ad hoc resource-sharing procedures among these original groups pointed the way toward standardization of the protocols needed to communicate between any administrative domains. Current Grid technology can be viewed as an extension or application of this framework to create a more generic resource-sharing context.

The ideas of the Grid were brought together by Ian Foster, Carl Kesselman and Steve Tuecke, the so-called "fathers of the Grid". They led the effort to create the Globus Toolkit, which incorporates not just CPU management (e.g. cluster management and cycle scavenging) but also storage management, security provisioning, data movement, monitoring, and a toolkit for developing additional services based on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services and information aggregation. In short, the term Grid has much further-reaching implications than the general public believes. While the Globus Toolkit remains the de facto standard for building Grid solutions, a number of other tools have been built that answer some subset of the services needed to create an enterprise Grid.

The remainder of this article discusses the details behind these notions.

Common features

Grid computing offers a model for solving massive computational problems by making use of the unused resources (CPU cycles and/or disk storage) of large numbers of disparate computers, often desktop computers, treated as a virtual cluster embedded in a distributed telecommunications infrastructure. Grid computing's focus on the ability to support computation across administrative domains sets it apart from traditional computer clusters or traditional distributed computing.

Grids offer a way to solve Grand Challenge problems such as protein folding, financial modelling, earthquake simulation, and climate/weather modelling. They also offer a way of using an organization's information technology resources optimally, and provide a means for offering information technology as a utility to commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water.

Grid computing has the design goal of solving problems too big for any single supercomputer, whilst retaining the flexibility to work on multiple smaller problems. Thus Grid computing provides a multi-user environment. Its secondary aims are better exploitation of available computing power and catering for the intermittent demands of large computational exercises.

This approach implies the use of secure authorization techniques to allow remote users to control computing resources.

Grid computing involves sharing heterogeneous resources (based on different platforms, hardware/software architectures, and computer languages), located in different places belonging to different administrative domains over a network using open standards. In short, it involves virtualizing computing resources.

Grid computing is often confused with cluster computing. The key difference is that a cluster is a single set of nodes sitting in one location, while a Grid is composed of many clusters and other kinds of resources (e.g. networks, storage facilities).

Functionally, one can classify Grids into several types:

  • Computational Grids (including CPU-scavenging Grids), which focus primarily on computationally intensive operations
  • Data Grids, for the controlled sharing and management of large amounts of distributed data
  • Equipment Grids, which are built around a primary piece of equipment (e.g. a telescope), and where the surrounding Grid is used to control the equipment remotely and to analyse the data it produces.

Definitions of Grid computing

The term Grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as the electric power grid.

Today there are many definitions of Grid computing:

  • The most widely cited definition of a Grid is provided by Ian Foster in his article "What is the Grid? A Three Point Checklist".[1] The three points of this checklist are:
    • Computing resources are not administered centrally.
    • Open standards are used.
    • Non-trivial quality of service is achieved.
  • Plaszczak/Wellner define Grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
  • IBM says, "Grid is the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across multiple administrative domains based on the resources availability, capacity, performance, cost and users' quality-of-service requirements" [2]
  • Buyya defines Grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".[3]
  • CERN, one of the largest users of Grid technology, talks of The Grid: "a service for sharing computer power and data storage capacity over the Internet."[4]
  • Pragmatically, Grid computing is attractive to geographically-distributed non-profit collaborative research efforts like the NCSA Bioinformatics Grids such as BIRN: external Grids.
  • Grid computing is also attractive to large commercial enterprises with complex computation problems who aim to fully exploit their internal computing power: internal Grids.

Grids can be categorized with a three-stage model of departmental Grids, enterprise Grids and global Grids. These correspond to a firm initially utilising resources within a single group, such as an engineering department connecting desktop machines, clusters and equipment; progressing to enterprise Grids, where non-technical staff's computing resources can be used for cycle-stealing and storage; and finally to global Grids, which connect enterprise and departmental Grids and can be used in a commercial or collaborative manner.

Grid computing is a subset of distributed computing.

Conceptual framework

Grid computing reflects a conceptual framework rather than a physical resource. The Grid approach is utilized to provision a computational task with administratively-distant resources. The focus of Grid technology is associated with the issues and requirements of flexible computational provisioning beyond the local (home) administrative domain.

Virtual organization

A Grid environment is created to address resource needs. The use of those resources (e.g. CPU cycles, disk storage, data, software programs, peripherals) is usually characterized by their availability outside of the context of the local administrative domain. This 'external provisioning' approach entails creating a new administrative domain referred to as a Virtual Organization (VO), with a distinct and separate set of administrative policies (home administration policies plus external resource administration policies equal the VO administrative policies). The context for a Grid 'job execution' is distinguished by the requirements that arise when operating outside of the home administrative context. Grid technology (i.e. middleware) is employed to formalize and comply with the Grid context associated with an application's execution.
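
As a rough illustration of the policy composition just described, the sketch below (in Python, with entirely hypothetical policy names) models a VO's effective policy as the home domain's constraints combined with those of each external provider; real middleware expresses such policies far more richly.

    # Purely illustrative sketch of VO policy composition: home-domain
    # policies plus the policies of each external resource provider form the
    # Virtual Organization's effective policy. Policy names and the simple
    # union rule used here are hypothetical.
    home_policies = {"allow_batch_jobs", "max_runtime_24h", "x509_auth"}
    provider_policies = {
        "cluster.example.org": {"allow_batch_jobs", "x509_auth", "no_outbound_net"},
        "storage.example.org": {"x509_auth", "quota_1TB"},
    }

    def vo_policy(home, providers):
        # A job admitted to the VO must satisfy the home policies plus those
        # of every external provider it touches (modelled here as a union).
        merged = set(home)
        for rules in providers.values():
            merged |= rules
        return merged

    print(sorted(vo_policy(home_policies, provider_policies)))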

Resource utilization

One characteristic that currently distinguishes Grid computing from distributed computing is the abstraction of a 'distributed resource' into a Grid resource. One result of abstraction is that it allows resource substitution to be more easily accomplished. Some of the overhead associated with this flexibility is reflected in the middleware layer and the temporal latency associated with the access of a Grid (or any distributed) resource. This overhead, especially the temporal latency, must be evaluated in terms of the impact on computational performance when a Grid resource is employed.
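
The sketch below illustrates, in Python, why abstracting a distributed resource behind a common interface makes substitution easier, and where the extra latency appears; the class names are hypothetical and do not come from any real Grid middleware.

    # Hypothetical sketch of resource abstraction: callers program against a
    # generic "grid resource" interface, so one concrete provider can be
    # substituted for another at the cost of added middleware/network latency.
    from abc import ABC, abstractmethod
    import time

    class GridResource(ABC):
        @abstractmethod
        def execute(self, task: str) -> str: ...

    class LocalCluster(GridResource):
        def execute(self, task: str) -> str:
            return f"ran {task!r} on the local cluster"

    class RemoteSite(GridResource):
        def execute(self, task: str) -> str:
            time.sleep(0.05)      # stand-in for middleware and network latency
            return f"ran {task!r} at a remote site"

    def run(task: str, resource: GridResource) -> str:
        # The caller is indifferent to which concrete resource is used;
        # only the observed latency differs.
        return resource.execute(task)

    for r in (LocalCluster(), RemoteSite()):
        start = time.perf_counter()
        print(run("analysis-step-1", r), f"({time.perf_counter() - start:.3f}s)")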

Web-based resources, or Web-based resource access, are an appealing approach to Grid resource provisioning. A recent GGF Grid middleware development 're-factored' the architecture and design of the Grid resource concept to use the W3C WSDL (Web Services Description Language) to implement the concept of a WS-Resource. The stateless nature of the Web, while enhancing the ability to scale, can be a concern for applications that migrate from a stateful resource-access context to the Web-based stateless one. The GGF WS-Resource concept includes discussions on accommodating the statelessness associated with Web resource access.

State-of-the-art, 2005

The conceptual framework and ancillary infrastructure are evolving at a fast pace and include international participation. The business sector is actively involved in commercialization of the Grid framework. The 'big science' sector is actively addressing the development environment and resource (performance) monitoring aspects. Activity is also observed in providing Grid-enabled versions of HPC (High Performance Computing) tools. Activity in the domains of 'little science' appears to be scant at this time. The treatment in the GGF documentation series reflects the HPC roots of the Grid conceptual framework; this bias should not be interpreted as a restriction on applying the framework to other research domains or other computational contexts.

Substantial experience is being built up through the operation of various Grids, the most notable being the EGEE infrastructure supporting the LHC Computing Grid (LCG) [1]. LCG is driven by CERN's need to handle a huge amount of data, produced at a rate of almost a gigabyte per second (10 petabytes per year), a history not unlike that of the production NorduGrid. A list of active sites participating in LCG can be found online [2], as can real-time monitoring of the EGEE infrastructure [3]. The relevant software and documentation are also publicly accessible [4].

Grid-enabling organizations and offerings

The Global Grid Forum

The Global Grid Forum (GGF) has the purpose of defining specifications for Grid computing. GGF is a collaboration between industry and academia with significant support from both.

The Globus Alliance

The Globus Alliance implements some of the standards developed at the GGF through the Globus Toolkit (Grid middleware). As a middleware component, it provides a standard platform for services to build upon, but Grid computing also needs other components, and many other tools operate to support a successful Grid environment.

Globus has implementations of the GGF-defined protocols to provide:

  1. Resource management: Grid Resource Allocation & Management Protocol (GRAM)
  2. Information Services: Monitoring and Discovery Service (MDS)
  3. Security Services: Grid Security Infrastructure (GSI)
  4. Data Movement and Management: Global Access to Secondary Storage (GASS) and GridFTP
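
As a hedged sketch only, the following Python fragment shows how the classic Globus Toolkit command-line clients for these services might be driven from a script; the host names, file URL and RSL string are invented, and exact tool options vary between toolkit versions, so each invocation should be checked against local documentation.

    # Hedged sketch of driving (assumed) Globus Toolkit command-line clients.
    # Hosts, paths and the RSL job description below are illustrative only.
    import subprocess

    def sh(cmd):
        print("$", " ".join(cmd))
        return subprocess.run(cmd, check=True)

    # GSI: obtain a short-lived proxy credential from the user's certificate.
    sh(["grid-proxy-init"])

    # GRAM: submit a job described in RSL to a (hypothetical) gatekeeper.
    sh(["globusrun", "-r", "gatekeeper.example.org", "&(executable=/bin/hostname)"])

    # GridFTP: move an input file to the (hypothetical) execution site.
    sh(["globus-url-copy",
        "file:///home/user/input.dat",
        "gsiftp://gatekeeper.example.org/scratch/input.dat"])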

A number of tools function along with Globus to make Grid computing a more robust platform, useful to high-performance computing communities.

XML-based web services offer a way to access diverse services and applications in a distributed environment. As of 2003, the worlds of Grid computing and of web services have started to converge to offer the Grid as a web service (Grid Service). The Open Grid Services Architecture (OGSA) defines this environment, which offers functionality adhering to the semantics of the Grid Service. The vision of OGSA is to describe and build a well-defined suite of standard interfaces and behaviours that serve as a common framework for all Grid-enabled systems and applications.
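
The fragment below is a purely illustrative Python sketch of the "Grid as a web service" idea: a client posts an XML/SOAP request to a grid service endpoint. The endpoint URL, namespace and operation name are invented for illustration and do not correspond to any published OGSA interface.

    # Illustrative only: the endpoint, namespace and <submitJob> operation are
    # hypothetical, not part of any standardized Grid Service interface.
    import urllib.request

    ENDPOINT = "https://grid.example.org/services/JobFactory"   # hypothetical
    SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <submitJob xmlns="urn:example:gridservice">
          <executable>/bin/hostname</executable>
        </submitJob>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        ENDPOINT,
        data=SOAP_BODY.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            print(response.read().decode("utf-8"))
    except OSError as err:              # the example endpoint does not exist
        print("request not sent:", err)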

Commercial Grid computing offerings

Computing vendors offer Grid solutions based either on the Globus Toolkit or on a proprietary architecture. Confusion remains because vendors may badge their computing-on-demand or cluster offerings as Grid computing.

References

  1. ^ "What is the Grid? A Three Point Checklist" (pdf).
  2. ^ "IBM Solutions Grid for Business Partners: Helping IBM Business Partners to Grid-enable applications for the next phase of e-business on demand" (PDF).
  3. ^ "A Gentle Introduction to Grid Computing and Technologies" (pdf). Retrieved 2005-05-06.
  4. ^ "The Grid Café - What is Grid?". CERN. Retrieved 2005-02-04.

External links

Projects for end-user participation (see also the List of distributed computing projects for more)
  • Einstein@Home: searches data from the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the US and from the GEO 600 gravitational wave observatory in Germany for signals coming from rapidly rotating neutron stars, known as pulsars.
  • LHC@home: helps improve the design of the CERN LHC particle accelerator.
  • Climateprediction.net: improves the accuracy of long-term climate prediction.
  • Predictor@home: addresses biomedical questions and investigates protein-related diseases.
  • How you can fight against diseases using your computer.
  • WorldCommunityGrid.org: a more recently created grid with the aim of running multiple projects on a single grid. From the home page: "World Community Grid's mission is to create the largest public computing grid benefiting humanity. Our work is built on the belief that technological innovation combined with visionary scientific research and large-scale volunteerism can change our world for the better."
  • Folding@home: a protein-folding project by Stanford University.