Concurrent computing
From Wikipedia, the free encyclopedia
Concurrent computing is the concurrent (simultaneous) execution of multiple interacting computational tasks. These tasks may be implemented as separate programs, or as a set of processes or threads created by a single program. The tasks may also be executing on a single processor, several processors in close proximity, or distributed across a network. Concurrent computing is related to parallel computing, but focuses more on the interactions between tasks. Correct sequencing of the interactions or communications between different tasks, and the coordination of access to resources that are shared between tasks, are key concerns during the design of concurrent computing systems. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C. A. R. Hoare.
Concurrent interaction and communication
In some concurrent computing systems communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
- Shared memory communication
- Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually requires the application of some form of locking (e.g., mutexes, semaphores, or monitors) to coordinate between threads.
- Message passing communication
- Concurrent components communicate by exchanging messages (exemplified by Erlang and occam). The exchange of messages may be carried out asynchronously (sometimes referred to as "send and pray", although it is standard practice to resend messages that are not acknowledged as received), or may use a rendezvous style in which the sender blocks until the message is received. Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust, although slower, form of concurrent programming. A wide variety of mathematical theories for understanding and analyzing message-passing systems are available, including the Actor model, and various process calculi.
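The rendezvous style described above can be sketched in Java. This is an illustrative example, not from the article: it uses the standard-library class SynchronousQueue, whose put() blocks until another thread calls take(), so sender and receiver must "meet" — as in occam/CSP-style communication. The class and method names are hypothetical.

```java
import java.util.concurrent.SynchronousQueue;

// Sketch of rendezvous-style message passing between two threads.
// A SynchronousQueue has no internal capacity: put() blocks until a
// matching take() arrives, and vice versa, so the exchange itself is
// the synchronization point.
public class Rendezvous {
    static String demo() throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        final String[] received = new String[1];

        Thread receiver = new Thread(() -> {
            try {
                received[0] = channel.take();  // blocks until a sender arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        channel.put("hello");  // blocks until the receiver takes the message
        receiver.join();
        return received[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("received: " + demo());  // prints "received: hello"
    }
}
```

An asynchronous ("send and pray") variant would simply use a buffered queue such as LinkedBlockingQueue, so the sender does not wait for the receiver.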
Coordinating access to resources
One of the major issues in concurrent computing is preventing concurrent processes from interfering with each other. For example, consider the following algorithm for making withdrawals from a checking account represented by the shared resource balance:

1 bool withdraw(int withdrawal) {
2     if( balance > withdrawal ) {
3         balance = balance - withdrawal;
4         return true;
5     } else return false;
6 }
Suppose balance = 500, and two concurrent processes make the calls withdraw(300) and withdraw(350). If line 2 in both operations executes before line 3 executes in either, both operations will find that balance > withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes then perform their withdrawals, the total amount withdrawn (650) ends up being more than the original balance. These sorts of problems with shared resources require the use of concurrency control, or non-blocking algorithms.
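One standard form of concurrency control is a monitor, as provided by Java's synchronized keyword. The following is an illustrative sketch (the Account class and its methods are hypothetical, not from the article): making withdraw synchronized turns the check on the balance and the subsequent subtraction into a single atomic step per caller, so two concurrent withdrawals can no longer both pass the check against the same old balance.

```java
// Sketch of the withdraw routine made safe with a monitor. Only one
// thread at a time can be inside any synchronized method of a given
// Account instance, so the check-then-subtract sequence is atomic.
public class Account {
    private int balance;

    public Account(int initial) { balance = initial; }

    public synchronized boolean withdraw(int withdrawal) {
        if (balance > withdrawal) {
            balance = balance - withdrawal;
            return true;
        }
        return false;
    }

    public synchronized int getBalance() { return balance; }
}
```

With balance = 500 and concurrent calls withdraw(300) and withdraw(350), at most one call now succeeds, and the balance can never go negative.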
Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires some form of arbiter somewhere in the implementation to mediate access to these resources. This introduces the possibility that problems with unbounded nondeterministic decisions can arise, although careful design of the arbiters can reduce the probability of these situations arising in practice to near zero.
Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.
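The classic multi-resource failure is deadlock: thread A holds lock 1 and waits for lock 2 while thread B holds lock 2 and waits for lock 1. A common remedy is to impose a global lock-acquisition order. The following sketch is illustrative (the Transfer and Acct names are hypothetical, not from the article): every transfer locks the two accounts in order of a fixed id, so no cycle of waiting threads can form.

```java
// Sketch of deadlock avoidance by lock ordering. Transfers between two
// accounts need both accounts' locks; acquiring them in a globally
// consistent order (smaller id first) prevents the circular wait that
// causes deadlock.
public class Transfer {
    public static class Acct {
        final int id;
        int balance;
        Acct(int id, int balance) { this.id = id; this.balance = balance; }
    }

    public static void transfer(Acct from, Acct to, int amount) {
        // Always lock the account with the smaller id first.
        Acct first  = from.id < to.id ? from : to;
        Acct second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }
}
```

Without the ordering (e.g., always locking from before to), two threads transferring in opposite directions between the same pair of accounts could deadlock.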
Concurrent programming languages
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures (known also as promises).
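The futures (promises) mentioned above can be illustrated with Java's standard-library CompletableFuture. This is a sketch, not from the article, and the names slowSquare and FutureDemo are hypothetical: a future stands for a value another task is still computing, the caller obtains it immediately and continues working, and reading the value blocks only if the computation has not finished yet — the inter-task communication is implicit in the future itself.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of a future/promise: supplyAsync starts the computation on a
// background thread and immediately hands back a CompletableFuture;
// join() waits (if necessary) and returns the computed value.
public class FutureDemo {
    static int slowSquare(int n) {
        try { Thread.sleep(50); } catch (InterruptedException e) { }
        return n * n;
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> f =
            CompletableFuture.supplyAsync(() -> slowSquare(7));
        // ... the caller can do other work here ...
        System.out.println(f.join());  // prints 49
    }
}
```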
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.
Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Languages in which concurrency plays an important role include:
- Ada
- Afnix – concurrent access to data is protected automatically (previously called Aleph, but unrelated to Alef)
- Alef – concurrent language with threads and message passing, used for systems programming in early versions of Plan 9 from Bell Labs
- Alice – extension to Standard ML, adds support for concurrency via futures.
- CDL (Concurrent Description Language), a machine-translatable, composable, object-oriented, visual programming language.
- ChucK – domain specific programming language for audio.
- Cilk – a concurrent C
- Cω – C Omega, a research language extending C#, uses asynchronous communication
- Concurrent C.
- Concurrent Clean – a functional programming language, similar to Haskell
- Concurrent Pascal – by Brinch-Hansen
- Corn
- Curry
- E – uses promises, ensures deadlocks cannot occur
- Eiffel – through its SCOOP mechanism based on the concepts of Design by Contract
- Erlang – uses asynchronous message passing with nothing shared
- Janus – features distinct "askers" and "tellers" of logical variables, as well as bag channels; the language is purely declarative
- Join Java – concurrent language based on the Java programming language
- Joule – dataflow language, communicates by message passing
- Limbo – relative of Alef, used for systems programming in Inferno (operating system)
- Oz – multiparadigm language, supports shared-state and message-passing concurrency, and futures
- MultiLisp – Scheme variant extended to support parallelism
- occam – influenced heavily by Communicating Sequential Processes (CSP).
- occam-π – a modern variant of occam, which incorporates ideas from Milner's π-calculus
- Pict – essentially an executable implementation of Milner's π-calculus
- SALSA – actor language with token-passing, join, and first-class continuations for distributed computing over the Internet
- SR – research language
Many other languages provide support for concurrency in the form of libraries (on level roughly comparable with the above list).
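Library-level concurrency support of this kind can be illustrated with Java's java.util.concurrent package (a library layered on the language's threading primitives). The sketch below is illustrative and the PoolDemo/sumOfSquares names are hypothetical: an ExecutorService manages a pool of worker threads, tasks are submitted as callables, and results are collected through futures.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of concurrency via a library: a fixed pool of worker threads
// runs independent tasks, and each submit() returns a Future holding
// that task's eventual result.
public class PoolDemo {
    public static int sumOfSquares(int[] xs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // submit every square as an independent task
            List<Future<Integer>> futures = new ArrayList<>();
            for (int x : xs) {
                futures.add(pool.submit(() -> x * x));
            }
            // collect the results as they complete
            int total = 0;
            for (Future<Integer> f : futures) {
                total += f.get();
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(new int[]{1, 2, 3}));  // prints 14
    }
}
```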
Models of concurrency
There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These models include:
- The Actor model
- Petri nets
- Process calculi, such as the Calculus of Communicating Systems (CCS), Communicating Sequential Processes (CSP), and the π-calculus
Bibliography
- Filman, Robert E., and Daniel P. Friedman (1984). Coordinated Computing: Tools and Techniques for Distributed Software. New York: McGraw-Hill. 370 pp. ISBN 0-07-022439-0.