Beowulf (computing)
From Wikipedia, the free encyclopedia
Beowulf is a design for high-performance parallel computing clusters built on inexpensive personal computer hardware. The name comes from the main character in the Old English epic poem Beowulf.
Originally developed by Thomas Sterling and Donald Becker at NASA, Beowulf systems are now deployed worldwide, chiefly in support of scientific computing.
A Beowulf cluster is a group of usually identical, commodity-grade computers running a free and open-source (FOSS), Unix-like operating system, such as BSD, Linux, or Solaris. They are networked into a small TCP/IP LAN and have libraries and programs installed that allow processing to be shared among them.
There is no particular piece of software that defines a cluster as a Beowulf. Commonly used parallel processing libraries include MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). Both of these permit the programmer to divide a task among a group of networked computers and collect the results of processing. A common misconception is that arbitrary software will run faster on a Beowulf. Software must be revised to take advantage of the cluster. Specifically, it must perform multiple independent parallel operations that can be distributed among the available processors.
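The divide/compute/collect pattern that MPI and PVM support can be sketched with Python's standard-library multiprocessing as a stand-in. This is an illustration of the pattern only, not of either library's API; a real Beowulf code would use MPI or PVM so the workers span several machines rather than one machine's cores.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker ("node") computes an independent piece of the work.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1000))
    # Divide the task into four independent chunks.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        # Farm the chunks out to the workers in parallel.
        partials = pool.map(partial_sum, chunks)
    # Collect the partial results into the final answer.
    print(sum(partials))  # 499500
```

The key property, as the paragraph above notes, is that the operations must be independent: each chunk is summed without reference to any other, so adding more workers (or more nodes) speeds the job up.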
Definition (original Beowulf HOWTO)
The following is the definition of a Beowulf cluster from the original Beowulf HOWTO published by Jacek Radajewski and Douglas Eadline under the Linux Documentation Project in 1998.
- Beowulf is a multi-computer architecture which can be used for parallel computations. It is a system which usually consists of one server node, and one or more client nodes connected together via Ethernet or some other network. It is a system built using commodity hardware components, like any PC capable of running a Unix-like operating system, with standard Ethernet adapters, and switches. It does not contain any custom hardware components and is trivially reproducible. Beowulf also uses commodity software like the Linux or Solaris operating system, Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). The server node controls the whole cluster and serves files to the client nodes. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. In most cases client nodes in a Beowulf system are dumb, the dumber the better. Nodes are configured and controlled by the server node, and do only what they are told to do. In a disk-less client configuration, client nodes don't even know their IP address or name until the server tells them what it is.
- One of the main differences between Beowulf and a Cluster of Workstations (COW) is the fact that Beowulf behaves more like a single machine rather than many workstations. In most cases client nodes do not have keyboards or monitors, and are accessed only via remote login or possibly serial terminal. Beowulf nodes can be thought of as a CPU + memory package which can be plugged in to the cluster, just like a CPU or memory module can be plugged into a motherboard.
- Beowulf is not a special software package, new network topology or the latest kernel hack. Beowulf is a technology of clustering computers to form a parallel, virtual supercomputer. Although there are many software packages such as kernel modifications, PVM and MPI libraries, and configuration tools which make the Beowulf architecture faster, easier to configure, and much more usable, one can build a Beowulf class machine using standard Linux distribution without any additional software. If you have two networked computers which share at least the /home file system via NFS, and trust each other to execute remote shells (rsh), then it could be argued that you have a simple, two node Beowulf machine.
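The quoted definition's minimal case, a head node dispatching work to clients it trusts via remote shells, can be roughly illustrated in Python. The node list here is hypothetical, and for self-containment the sketch runs each command locally rather than through rsh; the comment shows where the rsh wrapping would go on a real cluster.

```python
import subprocess

# Hypothetical host list: on a real Beowulf these would be the
# client-node hostnames reachable from the server node.
nodes = ["localhost"]

def run_on(node, cmd):
    # Sketch only: run the command locally so the example works
    # standalone. On a real cluster the server would instead invoke
    # subprocess.run(["rsh", node] + cmd, ...), relying on the
    # rsh trust relationship described above.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

results = [run_on(n, ["echo", "hello from " + n]) for n in nodes]
```

With a shared /home exported over NFS, every node sees the same files, so the server only needs to ship commands, not data, to the clients.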
Operating systems
Presently, there are a number of Linux distributions and one BSD that are designed for building Beowulf clusters. These include:
- ClusterKnoppix (based on Knoppix) - Last update 2004-08-31
- ParallelKnoppix[1] (Also based on Knoppix) - Last update 2008-05-29
- PelicanHPC[2] (based on Debian Live[3])
- dyne:bolic (geared towards multimedia production)
- Rocks Cluster Distribution
- Scyld
- DragonFly BSD
- Bootable Cluster CD - Last update 2006-12-06
- Quantian (Live DVD with scientific applications, based on Knoppix and ClusterKnoppix) - Last update 2006-02-26.
A cluster can be set up using Knoppix bootable CDs in combination with OpenMosix. The computers link together automatically, without the need for complex configuration, to form a Beowulf cluster utilizing all CPUs and RAM in the cluster. A Beowulf cluster is scalable to a nearly unlimited number of computers, limited only by the overhead of the network.
Examples
- Kentucky Linux Athlon Testbed (KLAT2)
- Stone Soupercomputer
- Carnegie Mellon University, Process Systems Engineering Beowulf Cluster
- Southampton University, Information Systems Services, Iridis Beowulf Cluster
- Asgard - Beowulf Computing at the Swiss Federal Institute of Technology
- Sub-$2500 "Microwulf" Beowulf at Calvin College
- LittleFe.net, home of the LittleFe, LittleFeAR, and LittlEfika projects
Name
The name for the system was bestowed by Dr. Sterling because the poem describes Beowulf as having "thirty men's heft of grasp in the gripe of his hand."[4]
Popular culture
"Can you imagine a Beowulf cluster of these?" is a long-running joke on Slashdot, posted whenever an article mentions a new CPU or computer.
Other software solutions
- See also: Category:Job scheduling
See also
- Computer cluster
- Grid computing
- Alewife (multiprocessor), a predecessor to Beowulf developed at MIT using customized SPARC processors
External links
- Beowulf.org
- Home build
- Cluster Monkey, a free on-line cluster magazine
- LinuxHPC.org
- MPI homepage
- KLAT2
- Cluster Builder
- NPACI Rocks
- Project Kusu HPC Cluster Toolkit
- Engineering a Beowulf-style Compute Cluster
- KASY0 (Kentucky ASYmmetric Zero)