Skip list
From Wikipedia, the free encyclopedia
A skip list is a probabilistic data structure, based on parallel linked lists, with efficiency comparable to a binary search tree (O(log n) average time for most operations).
Basically, a skip list augments an ordered linked list with additional forward links, added in a randomized way with a geometric/negative binomial distribution (see William Pugh's original paper), so that a search in the list may quickly skip parts of the list (hence the name). Insertion, search and deletion operations are performed in logarithmic expected time.
Discovered in 1990 by William Pugh, the skip list is among the youngest of the important algorithms used in computer science (Pugh 1990).
Description
A skip list is built in layers. The bottom layer is an ordinary sorted linked list. Each higher layer acts as an "express lane" for the lists below: an element in layer i appears in layer i+1 with some fixed probability p. On average, each element appears in 1/(1-p) lists, and the tallest element (usually a special head element at the front of the skip list) appears in O(log_{1/p} n) lists.
1
1-----4---6
1---3-4---6-----9
1-2-3-4-5-6-7-8-9-10
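For illustration, the per-element level choice can be sketched in Python; the name random_level and the cap of 16 levels are assumptions of this sketch, with p = 1/2:

```python
import random

def random_level(p=0.5, max_level=16):
    """Height of a new element: keep adding a level with probability p,
    giving the geometric distribution of the original scheme."""
    level = 1
    while random.random() < p and level < max_level:
        level += 1
    return level
```

With p = 1/2, roughly half the elements appear only in the bottom list, a quarter in two lists, and so on, matching the average of 1/(1-p) = 2 lists per element.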
To search for a target element, start with the head element and the top list and scan along each linked list until you reach the last element in it that is less than or equal to the target. The expected number of steps in each linked list is easily seen to be 1/p, by tracing the search path backwards from the target until reaching an element that appears in the next higher list. So the total cost of a search is O(log_{1/p} n / p), which is O(log n) when p is a constant. By choosing different values of p, it is possible to trade search costs against storage costs.
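The search just described can be written as a short Python sketch; the Node layout and function names here are assumptions of this example, with forward[i] holding the successor in the level-i list:

```python
class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # forward[i]: next node in the level-i list

def search(head, target, top_level):
    """Descend from the top list: in each list, advance while the next key is
    still below the target, then drop down one level."""
    x = head
    for i in range(top_level - 1, -1, -1):
        while x.forward[i] is not None and x.forward[i].key < target:
            x = x.forward[i]
    x = x.forward[0]  # candidate in the bottom (complete) list
    return x is not None and x.key == target
```

Each level is scanned for an expected 1/p steps before dropping down, which is where the O(log_{1/p} n / p) bound comes from.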
Insertions and deletions are implemented much like the corresponding linked-list operations, except that "tall" elements must be inserted into or deleted from more than one linked list.
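Insertion can be sketched the same way: remember the rightmost predecessor in each list, then splice the new node into every list up to its randomly chosen height. This is a minimal sketch, not the paper's code; the names and the p = 0.5 default are assumptions:

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # one successor pointer per list the node is in

def insert(head, key, top_level, p=0.5):
    """Record the predecessor of `key` in each list, pick a random height,
    then splice the new node into each list it belongs to."""
    update = [head] * top_level
    x = head
    for i in range(top_level - 1, -1, -1):
        while x.forward[i] is not None and x.forward[i].key < key:
            x = x.forward[i]
        update[i] = x          # last node below `key` in the level-i list
    level = 1
    while random.random() < p and level < top_level:
        level += 1             # geometric height, as in the randomized scheme
    node = Node(key, level)
    for i in range(level):
        node.forward[i] = update[i].forward[i]
        update[i].forward[i] = node
```

Deletion is symmetric: find the same update array, then unlink the node from each list in which it appears.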
Also, Θ(n) operations, which force us to visit every node in ascending order (PrintEntireListToScreen(), for example), provide the opportunity to perform a behind-the-scenes derandomization of the level structure of the skip list in an optimal way: choose the level of the i'th finite node to be 1 plus the number of times we can repeatedly divide i by 2 before it becomes odd (with i = 0 for the negative-infinity header, which, as usual, is the special case given the highest possible level, as is a positive-infinity node if present). However, this also lets anyone know where all of the nodes above level 1 are, and delete them.
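The derandomized level rule (1 plus the number of times i can be halved before becoming odd) can be sketched as follows; the function name and level cap are illustrative:

```python
def deterministic_level(i, max_level=16):
    """Optimal derandomized level of the i'th finite node: 1 plus the number
    of times i is divisible by 2 before it becomes odd; the i = 0 header
    gets the highest possible level."""
    if i == 0:
        return max_level
    level = 1
    while i % 2 == 0:
        i //= 2
        level += 1
    return level
```

For i = 1..8 this yields the "ruler" pattern 1, 2, 1, 3, 1, 2, 1, 4, which is exactly the level structure of a perfectly balanced skip list.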
Alternatively, we could make the level structure quasi-random in the following way:
make all nodes level 1
j = 1
while the number of nodes at level j > 1
    for each i'th node at level j
        if i is odd
            if i is not the last node at level j
                randomly choose whether or not to promote it to level j+1
            else
                do not promote
            end if
        else if i is even and node i-1 was not promoted
            promote it to level j+1
        end if
    end for
    j = j + 1
end while
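A direct Python rendering of one pass of this scheme (operating on the list of nodes present at level j, with names of my own choosing) might look like:

```python
import random

def promote_level(nodes):
    """One quasi-random pass: from each odd/even pair exactly one node is
    promoted, chosen by a coin flip; an unpaired last node is never promoted."""
    promoted = []
    i = 0
    while i + 1 < len(nodes):
        # promote the odd-positioned node with probability 1/2,
        # otherwise promote its even-positioned partner
        promoted.append(nodes[i] if random.random() < 0.5 else nodes[i + 1])
        i += 2
    return promoted

def build_levels(keys):
    """Repeat the pass until a level holds a single node; returns the lists."""
    levels = [list(keys)]
    while len(levels[-1]) > 1:
        levels.append(promote_level(levels[-1]))
    return levels
```

Exactly one node per pair moves up, so level j+1 holds about half the nodes of level j regardless of the coin flips, which is what keeps the height logarithmic.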
Like the derandomized version, quasi-randomization should only be done when there is some other reason to be running a Θ(n) operation (which visits every node) anyway.
The advantage of this quasi-randomness is that it doesn't give away nearly as much level-structure related information to an adversarial user as the de-randomized one. This is desirable because an adversarial user who is able to tell which nodes are not at the lowest level can pessimize performance by simply deleting higher-level nodes. The search performance is still guaranteed to be logarithmic.
One may be tempted to make the following "optimization": in the promotion pass above, forget about doing a coin flip for each odd-even pair; just flip one coin per level to decide whether to promote only the even-positioned nodes or only the odd-positioned ones. Instead of Θ(n) coin flips, we would only need Θ(log n) of them. Unfortunately, this gives the adversarial user a 50/50 chance of being correct upon guessing that all of the even-numbered nodes (among the ones at level 1 or higher) are higher than level 1, despite the property that he/she has only a very low probability of guessing that any particular node is at level N for a given integer N.
Let us prove these two claims concerning the advantages of quasi-randomness over the totally derandomized version. First, we prove that the search time is guaranteed to be logarithmic. Suppose a search finds node n, where n is the position of the found node among the nodes of level 1 or higher. If n is even, then there is a 50/50 chance that it is higher than level 1; if it is not, however, node n-1 is guaranteed to be higher than level 1. If n is odd, then there is a 50/50 chance that it is higher than level 1. Suppose that it is not; then there is a 50/50 chance that node n-1 is higher than level 1. Suppose that this is not either; then node n-2 is guaranteed to be higher than level 1. We repeat the analysis for nodes of level 2 or higher, level 3 or higher, and so on, always taking n to be the position of the node among the ones of level k or higher for integer k. So the search time is constant in the best case (if the found node is of the highest possible level) and at most twice the worst-case search time of the totally derandomized skip list (because we may have to move left twice rather than once at each level).
Next, let us bound the probability that an adversarial user correctly guesses that a node is of level k or higher. First, the adversarial user has a 50/50 chance of correctly guessing that a particular node is of level 2 or higher. This event is independent of whether he/she correctly guesses the level of any other node. If he/she knows the positions of two consecutive nodes of level 2 or higher, and knows that the one on the left is in an odd-numbered position among the nodes of level 2 or higher, he/she has a 50/50 chance of correctly guessing which one is of level 3 or higher. So the probability of a correct guess that a node is of level 3 or higher is 1/4. Continuing this analysis inductively, we see that the probability of correctly guessing that a particular node is of level k or higher is 1/2^(k-1).
The above analyses only work when the number of nodes is a power of two. However, because of the rule that a node in the last, odd position at its level is never promoted, the structure behaves as a sequence of exact-power-of-two-sized skip lists concatenated onto each other, for which the analysis does work. In fact, the sizes of these pieces correspond to the binary representation of the number of nodes in the whole list.
A skip list, upon which we have not recently performed either of the above-mentioned Θ(n) operations, does not provide the same absolute worst-case performance guarantees as more traditional balanced tree data structures, because it is always possible (though with very low probability) that the coin flips used to build the skip list will produce a badly balanced structure. However, skip lists work well in practice, and the randomized balancing scheme has been argued to be easier to implement than the deterministic balancing schemes used in balanced binary search trees. Skip lists are also useful in parallel computing, where insertions can be done in different parts of the skip list in parallel without any global rebalancing of the data structure.
There has been some evidence that skip lists have worse real-world performance and space requirements than B-trees due to memory locality and other issues [1].
History
Skip lists were discovered by William Pugh, who details how they work in "Skip lists: a probabilistic alternative to balanced trees", Communications of the ACM, June 1990, 33(6), 668-676.
To quote the inventor:
- Skip lists are a probabilistic data structure that seem likely to supplant balanced trees as the implementation method of choice for many applications. Skip list algorithms have the same asymptotic expected time bounds as balanced trees and are simpler, faster and use less space.
These claims of space and speed advantages were subsequently disputed, however [2].
References
- Pugh, William (June 1990). "Skip lists: a probabilistic alternative to balanced trees". Communications of the ACM 33 (6): 668-676.