Bulk Synchronous Parallel
The Bulk Synchronous Parallel (BSP) computer is an abstract model for designing parallel algorithms. It serves a similar purpose to the PRAM model, but differs from PRAM in that it does not take communication and synchronisation for granted: an important part of analysing a BSP algorithm is quantifying the synchronisation and communication it needs.
The model
A BSP computer consists of processors connected by a communication network. Each processor has a fast local memory, and may follow different threads of computation.
A BSP computation proceeds in a series of global supersteps. A superstep consists of three ordered stages:
- Concurrent computation: several computations take place on every participating processor, each process using only values stored in the local memory of that processor. The computations are independent in the sense that they occur asynchronously of all the others.
- Communication: at this stage, the processes exchange data among themselves.
- Barrier synchronisation: when a process reaches this point (the barrier), it waits until all other processes have finished their communication actions.
The figure below shows this in a diagrammatic form. The processes are not regarded as having a particular linear order (from left to right or otherwise), and may be mapped to processors in any way.
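To make the three stages concrete, the following is a minimal sketch that simulates one superstep in Python, using threads as stand-in processors and threading.Barrier for the synchronisation stage. The ring-shaped communication pattern and all names are illustrative assumptions, not part of the model.

    import threading

    P = 4                               # number of simulated processors
    barrier = threading.Barrier(P)      # barrier synchronisation primitive
    inboxes = [[] for _ in range(P)]    # one inbox per processor

    def superstep(pid):
        # Concurrent computation: use only values local to this processor.
        local_value = sum(range(pid * 10, pid * 10 + 10))

        # Communication: send the local result to the next processor in a ring.
        inboxes[(pid + 1) % P].append(local_value)

        # Barrier synchronisation: wait until every processor has finished
        # its communication actions for this superstep.
        barrier.wait()

        # Data received during the superstep is only used after the barrier.
        print(f"processor {pid} received {inboxes[pid]}")

    threads = [threading.Thread(target=superstep, args=(i,)) for i in range(P)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()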
Communication
In many parallel programming systems, communications are considered at the level of individual actions: sending and receiving a message, memory to memory transfer, etc. This is difficult to work with, since there are many simultaneous communication actions in a parallel program, and their interactions are typically complex. In particular, it is difficult to say much about the time any single communication action will take to complete.
The BSP model considers communication actions en masse: all the communication actions of a superstep are treated as one unit, and all messages are assumed to have a fixed size. This makes it possible to give an upper bound on the time taken to communicate a set of data.
The maximum number of messages sent or received by any one processor within a superstep is denoted by h. The ability of a communication network to deliver data is captured by a parameter g, defined such that it takes time hg for a processor to deliver h messages of size 1.
A message of length m obviously takes longer to send than a message of size 1. However, the BSP model does not distinguish between one message of length m and m messages of length 1: in either case the cost is mg.
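As a small worked example of this cost rule, the sketch below computes the communication cost of one superstep from per-processor message counts; the message counts and the value of g are illustrative assumptions.

    def h_relation(sent, received):
        # h is the largest number of messages sent or received by any one processor.
        return max(max(sent), max(received))

    def communication_cost(sent, received, g):
        # Time charged to the communication phase of one superstep: h * g.
        return h_relation(sent, received) * g

    # Four processors; processor 0 sends 6 unit-size messages, so h = 6.
    sent     = [6, 2, 2, 2]     # messages sent by each processor
    received = [3, 3, 3, 3]     # messages received by each processor
    print(communication_cost(sent, received, g=4.0))   # 6 * 4.0 = 24.0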
The parameter g is dependent on the following factors:
- The protocols used to interact within the communication network.
- Buffer management by both the processors and the communication network.
- The routing strategy used in the network.
- The BSP runtime system.
A value for g is, in practice, determined empirically for each parallel computer. Note that g is not the normalised single-word delivery time, but the single-word delivery time under continuous traffic conditions.
Barriers
On most of today's architectures barrier synchronisation is expensive, so barriers should be used sparingly; future architectural developments may make them much cheaper. The cost of a barrier synchronisation is influenced by two issues:
- The cost imposed by the variation in the completion time of the participating concurrent computations. Take the example where all but one of the processes have completed their work for this superstep, and are waiting for the last process, which still has a lot of work to complete. The best that an implementation can do is ensure that each process works on roughly the same problem size.
- The cost of reaching a globally-consistent state in all of the processors. This depends on the communication network, but also on whether there is special-purpose hardware available for synchronising, and on the way in which interrupts are handled by processors.
The cost of a barrier synchronisation is denoted by l. In practice, a value of l is determined empirically.
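One common way to estimate g and l empirically is to time full exchanges for a range of h values and fit a straight line T(h) ≈ hg + l. The sketch below shows only the fitting step; the timing figures are invented placeholders standing in for real benchmark measurements on the target machine.

    def fit_g_and_l(samples):
        # Least-squares fit of T = h*g + l to (h, time) pairs; returns (g, l).
        n = len(samples)
        mean_h = sum(h for h, _ in samples) / n
        mean_t = sum(t for _, t in samples) / n
        cov = sum((h - mean_h) * (t - mean_t) for h, t in samples)
        var = sum((h - mean_h) ** 2 for h, _ in samples)
        g = cov / var
        return g, mean_t - g * mean_h

    # Placeholder measurements: time to complete an h-relation plus a barrier,
    # for several values of h, under continuous traffic.
    measurements = [(1, 108.0), (10, 145.0), (100, 510.0), (1000, 4105.0)]
    g, l = fit_g_and_l(measurements)
    print(f"g is approximately {g:.2f}, l is approximately {l:.2f}")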
Barriers are potentially costly, but have a number of attractions. They do not introduce the possibility of deadlock or livelock, since barriers do not create circular data dependencies; tools to detect and deal with those conditions are therefore unnecessary. Barriers also permit novel forms of fault tolerance.
The cost of a BSP algorithm
The cost of a superstep is determined as the sum of three terms: the cost of the longest-running local computation, the cost of global communication between the processors, and the cost of the barrier synchronisation at the end of the superstep. The cost of one superstep for p processors is

    max(w_i) + max(h_i) g + l,   taking the maxima over i = 1, ..., p,

where w_i is the cost of the local computation in process i, and h_i is the number of messages sent or received by process i. Note that homogeneous processors are assumed here. It is more common to write the expression as w + hg + l, where w and h are the maxima. The cost of the algorithm is then the sum of the costs of its supersteps,

    W + Hg + Sl,

where S is the number of supersteps, W is the sum of the w values over all supersteps, and H is the sum of the h values.
W, H, and S are usually modelled as functions that vary with the problem size. These three characteristics of a BSP algorithm are usually described in asymptotic notation, e.g. H = O(n/p).
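Putting the pieces together, here is a small sketch that evaluates this cost expression for a whole algorithm from its per-superstep work and message counts; the machine parameters and workload numbers are illustrative assumptions.

    def superstep_cost(w, h, g, l):
        # max local work + (max messages per processor) * g + barrier cost l
        return max(w) + max(h) * g + l

    def algorithm_cost(supersteps, g, l):
        # Total cost is the sum of the superstep costs, i.e. W + Hg + Sl.
        return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

    g, l = 4.0, 100.0    # illustrative machine parameters

    # Two supersteps on four processors: (local work w_i, messages h_i) per processor.
    supersteps = [
        ([500, 480, 510, 505], [10, 12, 9, 11]),   # superstep 1: w = 510, h = 12
        ([300, 320, 310, 305], [4, 4, 4, 4]),      # superstep 2: w = 320, h = 4
    ]

    print(algorithm_cost(supersteps, g, l))   # (510 + 48 + 100) + (320 + 16 + 100) = 1094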
See also
- Computer cluster
- Concurrent computing
- Concurrency
- Grid computing
- Parallel computing
- ScientificPython