Lock-free and wait-free algorithms
From Wikipedia, the free encyclopedia
In contrast to algorithms that protect access to shared data with locks, lock-free and wait-free algorithms are specially designed to allow multiple threads to read and write shared data concurrently without corrupting it. "Lock-free" refers to the fact that the system as a whole cannot lock up: at every step, at least one thread makes progress, no matter which threads are delayed. This means that no synchronization primitives such as mutexes or semaphores can be involved, as a lock-holding thread can prevent global progress if it is switched out. "Wait-free" is a stronger guarantee: every thread can complete any operation in a finite number of its own steps, regardless of the actions of other threads. Every wait-free algorithm is lock-free, but it is possible for an algorithm to be lock-free without being wait-free.
Motivation
The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
Blocking a thread is undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything. If the blocked thread is performing a high-priority or real-time task, it is highly undesirable to halt its progress. Other problems are less obvious. Certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking which can significantly reduce opportunities for parallelism, and fine-grained locking which requires more careful design and is more prone to bugs.
The lock-free approach
Writing a program that uses lock-free data structures is not simply a matter of rewriting the algorithms you would normally protect with a mutex to be lock-free. Because lock-free algorithms are so difficult to write correctly, researchers focus on lock-free versions of basic data structures such as stacks, queues, sets, and hash tables, which allow programs to easily exchange data between threads asynchronously.
For example, consider a banking program where each thread represents a virtual teller. A lock-based approach to making a deposit could be to have one teller lock an account to make a deposit, so that two tellers do not try to deposit into the same account simultaneously. To make the process lock-free, rather than designing a lock-free "deposit" algorithm you might instead have each teller submit a "deposit" request asynchronously to a centralized thread that handles all deposits.
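This message-passing arrangement can be sketched as follows. The example is a minimal illustration, assuming Java; the class and method names (DepositProcessor, submitDeposit, runWorker) are hypothetical and not from the article. Tellers enqueue requests on a lock-free queue (java.util.concurrent's ConcurrentLinkedQueue), and a single worker thread applies them, so the balance itself is never accessed concurrently.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: tellers submit deposit requests asynchronously;
// one centralized worker thread handles all deposits.
class DepositProcessor {
    private final ConcurrentLinkedQueue<Integer> requests = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean running = new AtomicBoolean(true);
    private long balance = 0; // touched only by the worker thread

    // Called by any teller thread; enqueueing is lock-free.
    void submitDeposit(int amount) {
        requests.offer(amount);
    }

    // Runs on the single worker thread; busy-polls for simplicity.
    void runWorker() {
        while (running.get() || !requests.isEmpty()) {
            Integer amount = requests.poll();
            if (amount != null) {
                balance += amount;
            }
        }
    }

    void shutdown() { running.set(false); }

    long balance() { return balance; }
}
```

Because only the worker thread ever touches the balance, no "deposit" algorithm needs to be lock-free; the lock-free queue carries all the coordination. (A production version would block or park instead of busy-polling.)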
Implementation
Lock-free and wait-free algorithms are written using atomic primitives that the hardware must provide. The most notable of these is "compare and swap" (often notated "CAS"), which takes three arguments: a memory address, an expected old value, and a new value. If the address contains the expected old value, it is replaced with the new value; otherwise it is left unchanged. Critically, the hardware guarantees that the comparison and the swap are executed as a single atomic operation, and the success or failure of the operation is reported back to the program. This allows an algorithm to read a datum from memory, modify it, and write it back only if no other thread modified it in the meantime.
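In Java, this primitive is exposed by the java.util.concurrent.atomic package listed under external links. A minimal illustration of the CAS semantics described above (the class name CasDemo is invented for the example):

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasDemo {
    public static void main(String[] args) {
        AtomicInteger x = new AtomicInteger(5);

        // Succeeds: the current value matches the expected old value 5,
        // so it is atomically replaced with 7.
        boolean ok = x.compareAndSet(5, 7);

        // Fails: the current value is now 7, not the expected 5,
        // so the value is left unchanged.
        boolean stale = x.compareAndSet(5, 9);

        System.out.println(ok + " " + stale + " " + x.get()); // prints "true false 7"
    }
}
```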
For example, consider a different implementation of the banking program where each thread represents a virtual teller. The teller reads the current value of the account (the old value), adds an amount, and uses CAS to attempt to update the account balance. If no other thread has modified the account balance in the intervening period, the CAS will succeed and the account balance will be updated. However, if a concurrent modification has occurred, the CAS will fail, and the teller will retry the update (by first fetching the new account balance). Each teller performs this CAS operation in a loop, retrying until it succeeds. This algorithm is lock-free but not wait-free, since other threads may keep writing new values and make the failing teller retry indefinitely.
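The read-modify-CAS loop described above can be sketched in Java using AtomicLong (the Account class is an invented illustration, not part of any library):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the lock-free deposit described above.
class Account {
    private final AtomicLong balance = new AtomicLong(0);

    public void deposit(long amount) {
        while (true) {
            long old = balance.get();       // read the current balance
            long updated = old + amount;    // compute the new balance
            if (balance.compareAndSet(old, updated)) {
                return; // no concurrent modification occurred: done
            }
            // CAS failed: another teller updated the balance first.
            // Loop around and retry with a freshly fetched value.
        }
    }

    public long balance() { return balance.get(); }
}
```

Any number of teller threads may call deposit concurrently and no update is lost, but a teller that keeps losing the CAS race may loop indefinitely, which is why the scheme is lock-free rather than wait-free.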
See also
- Concurrency control
- Deadlock
- Lock (software engineering)
- Memory barrier
- Mutual exclusion
- Non-blocking synchronization
- Pre-emptive multitasking
- Priority inversion
- Read-copy-update
- Resource starvation
- Room synchronization
- Software transactional memory
External links
- Survey "Some Notes on Lock-Free and Wait-Free Algorithms" by Ross Bencina
- java.util.concurrent.atomic – supports lock-free and thread-safe programming on single variables
- The Jail-Ust Container Library
- Practical lock-free data structures
- Thesis "Efficient and Practical Non-Blocking Data Structures" (1414 KB) by Håkan Sundell
- WARPing - Wait-free techniques for Real-time Processing
- Non-blocking Synchronization: Algorithms and Performance Evaluation. (1926 KB) by Yi Zhang
- "Design and verification of lock-free parallel algorithms" by Hui Gao
- "Asynchronous Data Sharing in Multiprocessor Real-Time Systems Using Process Consensus" by Jing Chen and Alan Burns
- Discussion "lock-free versus lock-based algorithms"
- Atomic Ptr Plus Project - collection of various lock-free synchronization primitives
- AppCore: A Portable High-Performance Thread Synchronization Library - An Effective Marriage between Lock-Free and Lock-Based Algorithms
- WaitFreeSynchronization and LockFreeSynchronization at the Portland Pattern Repository
- Multiplatform library with atomic operations