Consensus (computer science)
From Wikipedia, the free encyclopedia
Consensus is a problem in distributed computing that encapsulates the task of group agreement in the presence of faults.[1]
In particular, any process in the group may crash at any time. Consensus is fundamental to core techniques in fault tolerance, such as state machine replication.
Problem Description
A process is called "correct" if it does not fail at any point during its execution. Unlike Terminating Reliable Broadcast, the typical Consensus problem does not label any single process as a "sender". Every process "proposes" a value; the goal of the protocol is for all correct processes to choose a single value from among those proposed. A process may perform many I/O operations during protocol execution, but must eventually "decide" a value by passing it to the application on that process that invoked the Consensus protocol.
Valid consensus protocols must provide important guarantees to all processes involved. All correct processes must eventually decide the same value, for example, and that value must be one of those proposed. A correct process is therefore guaranteed that the value it decides was also decided by all other correct processes, and can act on that value accordingly.
More precisely, a Consensus protocol must satisfy the four formal properties below.
- Termination: every correct process decides some value.
- Validity: if all processes propose the same value v, then every correct process decides v.
- Integrity: every correct process decides at most one value, and if it decides some value v, then v must have been proposed by some process.
- Agreement: if a correct process decides v, then every correct process decides v.
The possibility of faults in the system makes these properties more difficult to satisfy. A simple but invalid Consensus protocol might have every process broadcast its proposal to all others, and have a process decide on the smallest value received. Such a protocol, as described, does not satisfy Agreement if faults can occur: if a process crashes after sending its proposal to some processes, but before sending it to others, then the two sets of processes may decide different values.
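The failure mode described above can be made concrete with a small simulation. The sketch below (an illustration written for this article, not drawn from its sources; the process names and values are hypothetical) runs the flawed "broadcast and pick the minimum" protocol with one process crashing mid-broadcast, and shows that the surviving processes decide different values:

```python
# Toy simulation of the flawed "broadcast proposals, decide the
# minimum received" protocol. One process crashes partway through
# its broadcast, so its proposal reaches only some of the others.

def run_flawed_protocol(proposals, crashed, delivered_before_crash):
    """`proposals` maps process name -> proposed value. `crashed`
    crashes after delivering its proposal only to the processes in
    `delivered_before_crash`. Each correct process decides the
    smallest value it received (including its own proposal)."""
    decisions = {}
    for p in proposals:
        if p == crashed:
            continue  # a crashed process never decides
        received = set()
        for q, value in proposals.items():
            if q == crashed and p not in delivered_before_crash:
                continue  # this message was lost in the crash
            received.add(value)
        decisions[p] = min(received)
    return decisions

# Process 'a' proposes the smallest value but crashes after reaching
# only 'b'. Now 'b' decides 1 while 'c' decides 2: Agreement fails.
decisions = run_flawed_protocol(
    proposals={'a': 1, 'b': 2, 'c': 3},
    crashed='a',
    delivered_before_crash={'b'},
)
# decisions == {'b': 1, 'c': 2}
```

Note that Termination, Validity, and Integrity all still hold in this run; it is precisely Agreement that the partial broadcast breaks.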
Impossibility
Consensus has been shown to be impossible to solve in several models of distributed computing.
In an asynchronous system, where processes have no common clock and run at arbitrarily varying speeds, the problem is impossible to solve if even one process may crash and processes communicate by sending messages to one another.[2] The proof technique used for this result is sometimes called an FLP impossibility proof, after its authors, Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson, who won the Dijkstra Prize for the result. The technique has been widely reused to prove other impossibility results. For example, a similar proof shows that consensus is also impossible in asynchronous systems where processes communicate by reading and writing shared variables, if one process may crash.[3]
The FLP result does not state that consensus can never be reached: it states only that, under the model's assumptions, no algorithm can be guaranteed to reach consensus in bounded time. There exist algorithms, even in the asynchronous model, that reach consensus with probability one. The FLP proof hinges on demonstrating the existence of a schedule of message deliveries under which the system never reaches consensus; such a "bad" schedule, however, may be vanishingly unlikely in practice.
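Ben-Or's algorithm is a classic example of reaching consensus with probability one by flipping coins. The sketch below is a heavily simplified illustration (written for this article, not taken from its sources): it runs the two phases of Ben-Or's binary protocol in lock step with no actual crashes, just to show that no single round is guaranteed to decide, yet random coin flips make the run terminate with probability 1:

```python
import random

# Simplified lock-step sketch of Ben-Or's randomized binary consensus
# (crash-fault model, tolerating f < n/2 crashes). Real Ben-Or is
# asynchronous and message-passing; this toy version only illustrates
# the two-phase, coin-flipping round structure.

def ben_or(inputs, f, seed=0):
    rng = random.Random(seed)
    n = len(inputs)
    values = list(inputs)      # each process's current estimate
    decided = [None] * n
    rounds = 0
    while any(d is None for d in decided):
        rounds += 1
        # Phase 1: every process reports its estimate. A process
        # proposes v if a strict majority reported v, else proposes '?'.
        reports = list(values)
        proposals = []
        for _ in range(n):
            for v in (0, 1):
                if reports.count(v) > n / 2:
                    proposals.append(v)
                    break
            else:
                proposals.append('?')
        # Phase 2: decide v on at least f+1 matching proposals; adopt v
        # on at least one proposal of v; otherwise flip a coin.
        for p in range(n):
            for v in (0, 1):
                if proposals.count(v) >= f + 1:
                    decided[p] = v
                    values[p] = v
                    break
            else:
                for v in (0, 1):
                    if v in proposals:
                        values[p] = v
                        break
                else:
                    values[p] = rng.randint(0, 1)
    return decided, rounds
```

When all inputs already agree, every process decides in the first round (Validity); with a split input, rounds of coin flips continue until a majority emerges by chance, which happens with probability 1.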
In a synchronous system, where all processes run at the same speed, consensus is impossible if processes communicate by sending messages to one another and one third or more of the processes can experience Byzantine failures.[4]
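Equivalently, n processes can reach synchronous Byzantine consensus by message passing only when n > 3f, where f is the number of Byzantine processes. A one-line helper (an illustration, not from the cited sources) makes the resilience bound easy to check:

```python
# Resilience bound for synchronous Byzantine consensus over messages:
# n processes can tolerate f Byzantine faults only when n > 3f,
# i.e., at least 3f + 1 processes are needed.

def byzantine_consensus_possible(n, f):
    """True iff n processes suffice to tolerate f Byzantine faults."""
    return n > 3 * f
```

For example, three processes cannot tolerate even a single Byzantine fault, while four can; this is the "one third of the processes" threshold stated above.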
Important Consensus Protocols
Google has implemented a distributed lock service library called Chubby.[5] Chubby maintains lock information in small files, which are stored in a replicated database to achieve high availability in the face of failures. The database is implemented on top of a fault-tolerant log layer based on the Paxos consensus algorithm. In this scheme, Chubby clients communicate with the Paxos master in order to access or update the replicated log, i.e., to read from and write to the files.[6]
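Chubby's log layer uses multi-Paxos over a sequence of log entries;[6] the underlying consensus primitive is single-decree Paxos. The following toy sketch (written for this article as an illustration, not Google's code; names like `Acceptor` and the lock values are hypothetical) shows one proposer round over in-memory acceptors, including the rule that preserves Agreement across competing proposers:

```python
# Toy single-decree Paxos round: a proposer runs phase 1 (prepare)
# and phase 2 (accept) against a set of in-memory acceptors.

class Acceptor:
    def __init__(self):
        self.promised = -1           # highest ballot promised so far
        self.accepted = (-1, None)   # (ballot, value) last accepted

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return self.accepted     # promise; report any prior accept
        return None                  # reject stale ballots

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Returns the chosen value, or None if no quorum was reached."""
    quorum = len(acceptors) // 2 + 1
    promises = [a.prepare(ballot) for a in acceptors]
    promises = [p for p in promises if p is not None]
    if len(promises) < quorum:
        return None
    # Key rule: if any acceptor already accepted a value, adopt the
    # highest-ballot prior value instead of our own. This is what
    # preserves Agreement when proposers compete.
    prior = max(promises)
    if prior[1] is not None:
        value = prior[1]
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks >= quorum else None
```

A later proposer with a different value is forced to adopt the already-chosen one, so all successful rounds return the same value; Chubby layers a replicated log and a master lease on top of repeated instances of this primitive.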
References
- ^ Lamport, Leslie; Pease, Marshall; Shostak, Robert (April 1980). "Reaching Agreement in the Presence of Faults". Journal of the ACM 27 (2): 228–234. doi:10.1145/322186.322188.
- ^ Fischer, Michael J.; Lynch, Nancy A.; Paterson, Michael S. (April 1985). "Impossibility of Distributed Consensus with One Faulty Process". Journal of the ACM 32 (2): 374–382.
- ^ Loui, M. C.; Abu-Amara, H. H. (1987). "Memory requirements for agreement among unreliable asynchronous processes". In Preparata, F. P., Advances in Computing Research, vol. 4, pp. 163–183. Greenwich, Connecticut: JAI Press.
- ^ Fischer, Michael J.; Lynch, Nancy A.; Merritt, Michael (1986). "Easy impossibility proofs for distributed consensus problems". Distributed Computing 1 (1): 26–39. Springer.
- ^ Burrows, M. (2006). "The Chubby lock service for loosely-coupled distributed systems": 335–350. USENIX Association, Berkeley, CA, USA.
- ^ Chandra, Tushar; Griesemer, R.; Redstone, J. (2007). "Paxos Made Live – An Engineering Perspective". Proceedings of the twenty-sixth annual ACM symposium on Principles of distributed computing: 398–407. Portland, Oregon, USA: ACM Press, New York, NY, USA. doi:10.1145/1281100.1281103. Retrieved on 2008-02-06.