Concurrent computing
From Wikipedia, the free encyclopedia
Concurrent computing is the execution of multiple interacting computational tasks that overlap in time, potentially running simultaneously. These tasks may be implemented as separate programs, or as a set of processes or threads created by a single program. The tasks may execute on a single processor, on several processors in close proximity, or distributed across a network. Concurrent computing is related to parallel computing, but focuses more on the interactions between tasks. Correct sequencing of the interactions or communications between different tasks, and the coordination of access to resources that are shared between tasks, are key concerns during the design of concurrent computing systems. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C. A. R. Hoare.
Concurrent interaction and communication
In some concurrent computing systems, communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:
- Shared memory communication
- Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually requires the application of some form of locking (e.g., mutexes (mutual exclusion locks), semaphores, or monitors) to coordinate between threads. A sketch of both communication styles in Java appears after this list.
- Message passing communication
- Concurrent components communicate by exchanging messages (exemplified by Erlang and occam). The exchange of messages may be carried out asynchronously (sometimes referred to as "send and pray", although it is standard practice to resend messages that are not acknowledged as received), or may use a rendezvous style in which the sender blocks until the message is received. Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust, although slower, form of concurrent programming. A wide variety of mathematical theories for understanding and analyzing message-passing systems are available, including the Actor model and various process calculi. Message passing can be efficiently implemented on symmetric multiprocessors using shared coherent memory.
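The contrast between the two styles can be made concrete. The following Java sketch is illustrative only (the class and variable names are invented for this example): a counter guarded by a monitor lock stands for shared-memory communication, and a java.util.concurrent.BlockingQueue stands for a message channel.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoStyles {
    // Shared-memory style: threads communicate by mutating shared state,
    // coordinated here by a monitor lock (Java's synchronized).
    private static final Object lock = new Object();
    private static int sharedCounter = 0;

    // Message-passing style: threads communicate only through a channel.
    private static final BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(16);

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            synchronized (lock) {      // acquire the monitor before touching shared state
                sharedCounter++;
            }
            try {
                channel.put(42);       // send; blocks only if the buffer is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        int received = channel.take(); // blocking receive
        worker.join();                 // join() makes the counter update visible here
        System.out.println("counter=" + sharedCounter + ", received=" + received);
    }
}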
Coordinating access to resources
One of the major issues in concurrent computing is preventing concurrent processes from interfering with each other. For example, consider the following algorithm for making withdrawals from a checking account represented by the shared resource balance:

1 bool withdraw(int withdrawal) {
2     if (balance > withdrawal) {
3         balance = balance - withdrawal;
4         return true;
5     }
6     return false;
7 }
Suppose balance = 500, and two concurrent processes make the calls withdraw(300) and withdraw(350). If line 2 in both operations executes before line 3, both operations will find that balance > withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources require the use of concurrency control, or non-blocking algorithms.
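The race can be eliminated in either of the two ways just mentioned. The Java sketch below is an illustration, not part of the original example (the class name Account and its fields are invented): the synchronized method applies concurrency control by making the check and the update one atomic step, while the second method is a non-blocking alternative built on an atomic compare-and-set retry loop.

import java.util.concurrent.atomic.AtomicInteger;

public class Account {
    // Concurrency control: the monitor makes check-then-update atomic.
    private int balance = 500;

    public synchronized boolean withdraw(int withdrawal) {
        if (balance > withdrawal) {
            balance = balance - withdrawal;
            return true;
        }
        return false;
    }

    // Non-blocking alternative: retry with compare-and-set instead of locking.
    private final AtomicInteger atomicBalance = new AtomicInteger(500);

    public boolean withdrawNonBlocking(int withdrawal) {
        while (true) {
            int current = atomicBalance.get();
            if (current <= withdrawal) {
                return false;                 // insufficient funds
            }
            // Succeeds only if no other thread changed the balance in between.
            if (atomicBalance.compareAndSet(current, current - withdrawal)) {
                return true;
            }
        }
    }
}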
Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires some form of arbiter somewhere in the implementation to mediate access to these resources.
Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.
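Deadlock can be reproduced with as few as two locks. In the hedged Java sketch below (the lock and method names are invented for illustration), one thread running ab() and another running ba() can each end up holding one lock while waiting forever for the other; making every thread acquire the locks in a single fixed global order removes the cycle.

public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    // Deadlock-prone pair: one thread runs ab(), another runs ba();
    // each may hold one lock while waiting forever for the other.
    static void ab() { synchronized (lockA) { synchronized (lockB) { /* work */ } } }
    static void ba() { synchronized (lockB) { synchronized (lockA) { /* work */ } } }

    // Remedy: every thread takes the locks in the same global order,
    // so no cycle of waiting threads can form.
    static void bothOrdered() { synchronized (lockA) { synchronized (lockB) { /* work */ } } }
}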
Advantages
- Increased application throughput – the number of tasks completed in a given time period increases.
- High responsiveness for input/output – input/output-intensive applications spend most of their time waiting for input or output operations to complete; concurrent programming allows the time that would be spent waiting to be used for another task.
- More appropriate program structure – some problems and problem domains are well-suited to representation as a set of concurrent tasks or processes.
Concurrent programming languages
Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures (known also as promises).
Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can and have been implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.
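Futures, mentioned above as a way of hiding communication from the programmer, can be sketched with Java's java.util.concurrent.CompletableFuture (a real library class; the methods expensiveComputation and doOtherWork are placeholders invented for this example).

import java.util.concurrent.CompletableFuture;

public class FutureDemo {
    public static void main(String[] args) {
        // Start the computation concurrently; the future stands in for the
        // eventual result, so no explicit message or lock appears in the code.
        CompletableFuture<Integer> future =
                CompletableFuture.supplyAsync(() -> expensiveComputation(21));

        doOtherWork();                // proceeds while the computation runs

        int result = future.join();  // blocks only if the result is not ready yet
        System.out.println("result = " + result);
    }

    static int expensiveComputation(int x) { return x * 2; }

    static void doOtherWork() { /* unrelated work */ }
}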
Many concurrent programming languages have been developed more as research languages (e.g., Pict) than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Languages in which concurrency plays an important role include:
- Ada
- Afnix – concurrent access to data is protected automatically (previously called Aleph, but unrelated to Alef)
- Alef – concurrent language with threads and message passing, used for systems programming in early versions of Plan 9 from Bell Labs
- Alice – extension to Standard ML, adds support for concurrency via futures.
- CDL (Concurrent Description Language) – a machine-translatable, composable, object-oriented, visual programming language
- Cilk – a concurrent C
- Cω – C Omega, a research language extending C#, uses asynchronous communication
- Concurrent C
- Concurrent Clean – a functional programming language, similar to Haskell
- Concurrent Haskell – lazy, pure functional language operating concurrent processes on shared memory
- Concurrent ML – a concurrent extension of Standard ML
- Concurrent Pascal – by Per Brinch Hansen
- Corn
- Compute Unified Device Architecture (CUDA)
- Curry
- E – uses promises, ensures deadlocks cannot occur
- Eiffel – through its SCOOP mechanism based on the concepts of Design by Contract
- Erlang – uses asynchronous message passing with nothing shared
- Janus – features distinct "askers" and "tellers" of logical variables, and bag channels; purely declarative
- JoCaml
- Join Java – concurrent language based on the Java programming language
- Joule – dataflow language, communicates by message passing
- Limbo – relative of Alef, used for systems programming in Inferno (operating system)
- MultiLisp – Scheme variant extended to support parallelism
- occam – influenced heavily by Communicating Sequential Processes (CSP).
- occam-π – a modern variant of occam, which incorporates ideas from Milner's π-calculus
- Orc – a highly concurrent, nondeterministic language based on Kleene algebra
- Oz – multiparadigm language, supports shared-state and message-passing concurrency, and futures
- Pict – essentially an executable implementation of Milner's π-calculus
- SALSA – actor language with token-passing, join, and first-class continuations for distributed computing over the Internet
- Scala – a general-purpose programming language designed to express common programming patterns in a concise, elegant, and type-safe way
- SR – research language
Many other languages provide support for concurrency in the form of libraries, at a level roughly comparable with the above list, as the sketch below illustrates.
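As a minimal sketch of such library-level support, the fragment below uses Java's standard java.util.concurrent package (real library classes; the tasks themselves are placeholders): a thread pool runs independent tasks and returns their results as futures, with no dedicated language construct involved.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LibraryConcurrency {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Submit independent tasks; the library schedules them on the pool.
            List<Callable<Integer>> tasks = List.of(
                    () -> 1 + 1,
                    () -> 2 * 2,
                    () -> 3 * 3);
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                System.out.println(f.get());  // blocks until that task finishes
            }
        } finally {
            pool.shutdown();
        }
    }
}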
Models of concurrency
There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These models include:
- The Actor model
- Petri nets
- Process calculi such as Communicating Sequential Processes (CSP) and the π-calculus
See also
- Message passing programming
- Ptolemy Project
- Race condition
- Critical section
- Transaction processing
- Flow-based programming