Lock-free and wait-free algorithms
From Wikipedia, the free encyclopedia
In contrast to algorithms that protect access to shared data with locks, lock-free and wait-free algorithms are specially designed to allow multiple threads to read and write shared data concurrently without corrupting it. "Lock-free" refers to the fact that the system as a whole cannot lock up: as long as threads keep taking steps, some thread makes progress, even if other threads are suspended or fail. This rules out synchronization primitives such as mutexes or semaphores, since a thread holding a lock can prevent global progress if it is switched out. "Wait-free" is the stronger guarantee that each thread completes any operation in a finite number of its own steps, regardless of the actions of other threads. All wait-free algorithms are lock-free, but the reverse is not necessarily true.
Lock-free algorithms are one kind of non-blocking synchronization.
Motivation
The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
Blocking a thread is undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything. If the blocked thread is performing a high-priority or real-time task, it is highly undesirable to halt its progress. Other problems are less obvious. Certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design and is more prone to bugs.
The lock-free approach
Writing a program that uses lock-free data structures is not simply a matter of rewriting the algorithms you would normally protect with a mutex so that they are lock-free. Because lock-free algorithms are so difficult to write, researchers focus on writing lock-free versions of basic data structures such as stacks, queues, sets, and hash tables. These allow programs to easily exchange data between threads asynchronously.
For example, consider a banking program where each thread represents a virtual teller. A lock-based approach to making a deposit could be to have one teller lock an account while making a deposit, so that two tellers do not try to deposit into the same account simultaneously. To make the process lock-free, rather than designing a lock-free "deposit" algorithm, you might have each teller submit a "deposit" request asynchronously to a centralized thread that handles all deposits.
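As a rough sketch of this request-queue approach (the class names and the use of a Java BlockingQueue here are illustrative assumptions, not part of any standard design):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative sketch: tellers hand deposit requests to a single worker
    // thread instead of locking accounts themselves.
    class Account {
        private long balance;
        void add(long amount) { balance += amount; }   // touched only by the worker thread
        long balance() { return balance; }
    }

    class DepositProcessor {
        private final BlockingQueue<Runnable> requests = new LinkedBlockingQueue<>();

        DepositProcessor() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        requests.take().run();          // apply one deposit at a time
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        // A teller calls this and returns immediately; the deposit is applied later.
        void submitDeposit(Account account, long amount) {
            requests.add(() -> account.add(amount));
        }
    }

Note that a BlockingQueue is itself implemented with locks internally, which already hints at the caveat discussed next: the centralized-thread design does not necessarily satisfy formal definitions of lock-freedom.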
The previous example is a little misleading. There are formal definitions of "lock-free" that try to disallow anything that merely looks like a lock. The idea is that even if one thread crashes (or is held up by something like priority inversion), the remaining threads can still carry on in some way. A centralized deposit thread would probably fail this definition (depending on how you define things: does a thread that submits deposit requests that are never processed make progress?). A common approach to satisfying the formal definition is recursive helping. In the banking example, a helping approach would allow other threads to complete the deposit of a stopped or slow thread if it is getting in the way of their own operations. Recursive helping in a way emphasizes the fault-tolerance aspect of lock-freedom over the performance aspect.
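For illustration only, a much simplified sketch of the helping idea (handling a single pending deposit at a time; the names are hypothetical and this is not a construction taken from the literature):

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.AtomicReference;

    // Simplified helping sketch: a pending deposit is published as a descriptor,
    // and any thread that encounters it may complete it before doing its own work.
    class HelpingAccount {
        private final AtomicLong balance = new AtomicLong();
        private final AtomicReference<Deposit> pending = new AtomicReference<>();

        private static final class Deposit {
            final long amount;
            final AtomicBoolean applied = new AtomicBoolean(false);
            Deposit(long amount) { this.amount = amount; }
        }

        // Apply a published deposit exactly once, no matter how many threads help.
        private void help(Deposit d) {
            if (d.applied.compareAndSet(false, true)) {
                balance.addAndGet(d.amount);        // only the winning helper applies it
            }
            pending.compareAndSet(d, null);         // clear the slot if it still holds d
        }

        public void deposit(long amount) {
            Deposit mine = new Deposit(amount);
            while (true) {
                Deposit current = pending.get();
                if (current != null) {
                    help(current);                  // finish someone else's stalled deposit
                } else if (pending.compareAndSet(null, mine)) {
                    help(mine);                     // publish our own deposit, then complete it
                    return;
                }
            }
        }

        public long balance() { return balance.get(); }
    }

If a teller is suspended after publishing its deposit, any other teller that wants to operate on the account completes that deposit first, so the system as a whole keeps making progress.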
While some might say that the formal definition does not really capture the right idea (or should have a different name), it is something that should be kept in mind when claiming that an algorithm is lock-free.
Implementation
Lock-free and wait-free algorithms are written using atomic primitives that the hardware must provide. The most notable of these is "compare and swap" (often written "CAS"). (Special atomic primitives are not strictly required, however; Dekker's algorithm, for example, achieves mutual exclusion using only ordinary reads and writes.)
    CAS(addr, old, new) =
        atomic
            if *addr = old then
                *addr := new
                return true
            else
                return false
            endif
        endatomic
CAS takes three arguments: a memory address, an old value, and a new value. If the address contains the old value, it is replaced with the new value; otherwise it is unchanged. Critically, the hardware guarantees that this compare-and-swap operation is executed atomically. The success of the operation is then reported back to the program. This allows an algorithm to read a value from memory, modify it, and write it back only if no other thread modified it in the meantime.
For example, consider a different implementation of the banking program where each thread represents a virtual teller. The teller reads the current value of the account (the old value), adds an amount, and uses CAS to attempt to update the account balance. If no other thread has modified the account balance in the intervening period, the CAS will succeed and the account balance will be updated. However, if a concurrent modification has occurred, the CAS will fail, and the teller will retry the update (by first fetching the new account balance). Each teller performs this CAS operation in a loop, retrying until it succeeds. This algorithm is lock-free but not wait-free, since other threads may keep writing new values and make a failing teller retry indefinitely.
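For illustration (not part of the original text), the retry loop might look like the following in Java, using java.util.concurrent.atomic.AtomicLong as the CAS primitive; the class name is hypothetical:

    import java.util.concurrent.atomic.AtomicLong;

    // Lock-free (but not wait-free) deposit: retry the CAS until it succeeds.
    class CasAccount {
        private final AtomicLong balance = new AtomicLong();

        void deposit(long amount) {
            while (true) {
                long oldValue = balance.get();          // read the current balance
                long newValue = oldValue + amount;      // compute the updated balance
                if (balance.compareAndSet(oldValue, newValue)) {
                    return;                             // no concurrent change: success
                }
                // another teller changed the balance in the meantime; retry
            }
        }

        long balance() { return balance.get(); }
    }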
This approach can be extended to any data structure using a universal construction due to Herlihy [1]: the data structure is updated in a purely functional way to produce a new version, and compare and swap is then used to swing a shared pointer over to the new version. However, this is a mostly theoretical construct.
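As a rough sketch of the idea only (this is not Herlihy's actual construction, which additionally uses helping to obtain wait-freedom; the names are illustrative), the shared state can be held behind a single atomic reference, with each update building a new version functionally and swinging the pointer with CAS:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.UnaryOperator;

    // All accounts live in one immutable map behind an atomic reference.
    // An update copies the map, applies the change, and publishes the new
    // version with CAS; this retry loop alone is lock-free, not wait-free.
    class VersionedBank {
        private final AtomicReference<Map<String, Long>> state =
                new AtomicReference<Map<String, Long>>(new HashMap<String, Long>());

        void update(UnaryOperator<Map<String, Long>> change) {
            while (true) {
                Map<String, Long> oldVersion = state.get();
                Map<String, Long> newVersion =
                        Collections.unmodifiableMap(change.apply(new HashMap<>(oldVersion)));
                if (state.compareAndSet(oldVersion, newVersion)) {
                    return;                             // pointer swung to the new version
                }
                // another thread published a version first; rebuild from the latest one
            }
        }

        void deposit(String account, long amount) {
            update(m -> { m.merge(account, amount, Long::sum); return m; });
        }
    }

Copying the entire structure on every update is part of what makes this construction mostly of theoretical interest for large data structures.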
References
See also
- ABA problem
- Concurrency control
- Deadlock
- Lock (software engineering)
- Memory barrier
- Mutual exclusion
- Non-blocking synchronization
- Pre-emptive multitasking
- Priority inversion
- Read-copy-update
- Resource starvation
- Room synchronization
- Software transactional memory
External links
- Survey "Some Notes on Lock-Free and Wait-Free Algorithms" by Ross Bencina
- java.util.concurrent.atomic – supports lock-free and thread-safe programming on single variables
- System.Threading.Interlocked – provides atomic operations for variables that are shared by multiple threads (.NET Framework)
- The Jail-Ust Container Library
- Practical lock-free data structures
- Thesis "Efficient and Practical Non-Blocking Data Structures" (1414 KB) by Håkan Sundell
- WARPing - Wait-free techniques for Real-time Processing
- Non-blocking Synchronization: Algorithms and Performance Evaluation. (1926 KB) by Yi Zhang
- "Design and verification of lock-free parallel algorithms" by Hui Gao
- "Asynchronous Data Sharing in Multiprocessor Real-Time Systems Using Process Consensus" by Jing Chen and Alan Burns
- Discussion "lock-free versus lock-based algorithms"
- Atomic Ptr Plus Project - collection of various lock-free synchronization primitives
- AppCore: A Portable High-Performance Thread Synchronization Library - An Effective Marriage between Lock-Free and Lock-Based Algorithms
- WaitFreeSynchronization and LockFreeSynchronization at the Portland Pattern Repository
- Multiplatform library with atomic operations