Concurrency (computer science)

The "Dining Philosophers", a classic problem involving concurrency and shared resources

In computer science, concurrency is a property of systems in which several computations execute overlapping in time, and which may permit those computations to share common resources. In Edsger Dijkstra's words, "Concurrency occurs when two or more execution flows are able to run simultaneously." Concurrent use of shared resources is the source of many difficulties: race conditions involving shared resources can result in unpredictable system behavior, and while the introduction of mutual exclusion can prevent race conditions, it can lead to problems such as deadlock and starvation. The design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.
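As a minimal sketch (in Go, with names chosen purely for illustration), the following program starts two concurrent goroutines that repeatedly increment a shared counter; the mutex enforces mutual exclusion, and removing the Lock and Unlock calls would turn the increments into a race condition whose final result is unpredictable:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var mu sync.Mutex     // mutual exclusion guards the shared counter
	var wg sync.WaitGroup // waits for both goroutines to finish

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // without this lock, the increment races with the other goroutine
				counter++
				mu.Unlock()
			}
		}()
	}

	wg.Wait()
	fmt.Println(counter) // always 2000 with the mutex; unpredictable without it
}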

Theory

Concurrency theory has been an active field of research in theoretical computer science since Carl Adam Petri's seminal work on the Petri net in the early 1960s. Since that time, a wide variety of theoretical models, logics, and tools for understanding concurrent systems have been developed.

Models

A variety of formalisms for modelling and understanding concurrent systems have been developed, including Petri nets, process calculi (such as CSP, CCS and the π-calculus), the actor model, and input/output automata.

Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems.
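For example, in the process calculus CCS the expression (a.P | ā.Q) \ a denotes two processes composed in parallel that synchronize on the complementary actions a and ā before continuing as P and Q; the restriction \ a hides the action from the environment and forces the two processes to synchronize rather than act independently.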

Logics

Various types of temporal logic can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computation tree logic, Hennessy-Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of all of these logics is in writing specifications for concurrent systems.
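For instance, the linear temporal logic formula □(request → ◇grant), often written G(request → F grant), asserts that along every execution of the system, any state in which a request is made is eventually followed by a state in which the request is granted.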

Practice

Concurrent programming encompasses the programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured communication pattern. The basic goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems are generally designed to operate indefinitely and not terminate unexpectedly. Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.
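As a minimal sketch of such communication (the worker and channel names here are illustrative, not drawn from any particular system), the following Go program runs a worker concurrently with the main routine and exchanges values with it over channels, a simple dynamic message-passing pattern between concurrent computations:

package main

import "fmt"

// worker receives jobs over one channel and sends results back over another.
func worker(jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- j * j
	}
}

func main() {
	jobs := make(chan int)
	results := make(chan int)

	go worker(jobs, results) // the worker runs concurrently with main

	for i := 1; i <= 3; i++ {
		jobs <- i              // send a job to the worker
		fmt.Println(<-results) // receive and print its result
	}
	close(jobs)
}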

Because they use shared resources, concurrent systems generally require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of unbounded nondeterminism, which can have consequences for both the correctness and performance of a concurrent system.
