Condor High-Throughput Computing System

Condor
Developed by: University of Wisconsin-Madison
Latest release: 7.0.2 Stable / June 10, 2008
Operating system: Microsoft Windows, Mac OS X, Linux, AIX, FreeBSD, Solaris, HP-UX, OSF/1, Tru64
Genre: High-throughput computing
License: Apache License 2.0
Website: Official website

Condor is a high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks.[1] It can be used to manage workload on a dedicated cluster of computers and to farm out work to idle desktop computers (so-called cycle scavenging), seamlessly integrating dedicated resources such as rack-mounted clusters and non-dedicated desktop machines into one computing environment. Condor runs on Linux, Unix, Mac OS X, FreeBSD, and contemporary Windows operating systems.
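
In day-to-day use, work enters and is monitored through a small set of command-line tools. A minimal sketch of a typical session (the file name job.sub is a placeholder for a submit description file, described below):

  condor_submit job.sub   # queue the job(s) described in job.sub
  condor_q                # list jobs in the local queue and their states
  condor_status           # list machines in the pool and their availability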

Condor is developed by the Condor team at the University of Wisconsin-Madison and is freely available for use. It follows an open-source philosophy and is licensed under the Apache License 2.0.[2]

By way of example, the NASA Advanced Supercomputing facility (NAS) Condor pool consists of approximately 350 SGI and Sun workstations purchased and used for software development, visualization, email, document preparation, and so on.[3] Each workstation runs a daemon that watches user I/O and CPU load. When a workstation has been idle for two hours, a job from the batch queue is assigned to it and runs until the daemon detects a keystroke, mouse motion, or high non-Condor CPU usage. At that point the job is removed from the workstation and placed back in the batch queue.
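
Policies of this kind are expressed in Condor's configuration as ClassAd expressions. A hypothetical condor_config fragment along these lines (KeyboardIdle and LoadAvg are standard machine attributes; the thresholds here are illustrative, not NAS's actual settings):

  MINUTE  = 60
  HOUR    = (60 * $(MINUTE))
  # Start a job only after two hours without keyboard activity and low load.
  START   = KeyboardIdle > (2 * $(HOUR)) && LoadAvg < 0.3
  # Suspend the job as soon as the owner returns or the load rises.
  SUSPEND = KeyboardIdle < $(MINUTE) || LoadAvg > 0.5
  # Evict suspended jobs so they go back to the batch queue.
  PREEMPT = TRUE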

Condor can run both sequential and parallel jobs. Sequential jobs can be run in several different "universes", including the "vanilla" universe, which can run most "batch ready" programs, and the "standard" universe, in which the target application is re-linked with the Condor I/O library to provide remote job I/O and job checkpointing. Condor also provides a "local" universe, which allows jobs to run on the submit host.
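
A minimal sketch of a submit description file for a vanilla-universe job (program and file names are placeholders); a standard-universe job would instead specify universe = standard and an executable re-linked with condor_compile:

  universe   = vanilla
  executable = my_program
  arguments  = input.dat
  output     = my_program.out
  error      = my_program.err
  log        = my_program.log
  queue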

For parallel jobs, Condor supports the standard MPI and PVM (Goux et al., 2000) in addition to its own Master-Worker ("MW") library for embarrassingly parallel tasks.
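
A hypothetical submit file for a parallel-universe job (the wrapper name is a placeholder; MPI jobs are typically launched through a small script appropriate to the MPI implementation in use):

  universe      = parallel
  executable    = my_mpi_wrapper
  machine_count = 8
  output        = mpi_job.out
  error         = mpi_job.err
  log           = mpi_job.log
  queue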

Condor-G allows Condor jobs to be forwarded to foreign job schedulers. Currently, Torque/PBS and LSF are supported. Support for Sun Grid Engine is currently under development as part of the EGEE project.
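
Such forwarding is expressed through Condor's grid universe. A hypothetical submit file handing a job to an underlying PBS scheduler (file names are placeholders):

  universe      = grid
  grid_resource = pbs
  executable    = my_program
  output        = my_program.out
  log           = my_program.log
  queue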

Other Condor features include DAGMan, which provides a mechanism to describe job dependencies, and the ability to use Condor as a front end for submitting jobs to other distributed computing systems (such as Globus). The Condor Project is an active participant in the grid computing field.
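
A DAGMan workflow is described in a plain-text DAG input file and submitted with condor_submit_dag. A minimal sketch of a diamond-shaped dependency, in a hypothetical file diamond.dag (the per-node submit-file names are placeholders):

  JOB A a.sub
  JOB B b.sub
  JOB C c.sub
  JOB D d.sub
  PARENT A CHILD B C
  PARENT B C CHILD D

Running condor_submit_dag diamond.dag causes DAGMan itself to run as a Condor job that submits A first, B and C once A succeeds, and D last.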

Condor is one of the job scheduler mechanisms supported by GRAM (Grid Resource Allocation Manager), a component of the Globus Toolkit.

References

  1. ^ Thain, Douglas; Tannenbaum, Todd; Livny, Miron (2005). "Distributed Computing in Practice: The Condor Experience". Concurrency and Computation: Practice and Experience 17 (2–4): 323–356. doi:10.1002/cpe.938.
  2. ^ Condor License Agreement
  3. ^ Condor Testimonials of High Throughput Computing
