Spreadsort

From Wikipedia, the free encyclopedia

Spreadsort is a relatively new sorting algorithm invented by Steven J. Ross in 2002.[1] It combines concepts from distribution-based sorts, such as radix sort and bucket sort, with partitioning concepts from comparison sorts such as quicksort and mergesort. In experimental results it was shown to be highly efficient, often outperforming traditional algorithms such as quicksort, particularly on distributions exhibiting structure.

Quicksort identifies a pivot element in the list and then partitions the list into two sublists, those elements less than the pivot and those greater than the pivot. Spreadsort generalizes this idea by partitioning the list into n/c partitions at each step, where n is the total number of elements in the list and c is a small constant (in practice usually between 4 and 8 when comparisons are slow, or much larger in situations where they are fast). It uses distribution-based techniques to accomplish this, first locating the minimum and maximum value in the list, and then dividing the region between them into n/c equal-sized bins.
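The distribution step described above can be sketched in a few lines of Python. This is a simplified illustration, not the reference implementation: the function name `spread_bins` and the default `c=6` are assumptions, and real implementations use bit shifts on integer keys rather than floating-point division.

```python
def spread_bins(data, c=6):
    """Distribute data into roughly len(data)//c equal-width bins
    spanning [min, max] -- spreadsort's distribution step."""
    lo, hi = min(data), max(data)
    if lo == hi:                        # all elements equal: one bin
        return [list(data)]
    nbins = max(2, len(data) // c)      # n/c bins, at least 2
    width = (hi - lo) / nbins
    bins = [[] for _ in range(nbins)]
    for x in data:
        # clamp the index so that x == hi lands in the last bin
        i = min(nbins - 1, int((x - lo) / width))
        bins[i].append(x)
    return bins
```

Because the bins are equal-width and the bin index is monotone in the value, every element of bin *i* is less than every element of bin *i*+1, so the bins can simply be concatenated after sorting each one.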

In the case where the number of bins is at least the number of elements, spreadsort degenerates to bucket sort and the sort completes. Otherwise, each bin is sorted recursively: the algorithm uses heuristic tests to decide whether the bin would be sorted more efficiently by spreadsort or by some classical comparison sort, and applies the chosen algorithm to it.
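The overall recursion can be sketched as follows (an illustrative Python sketch under stated assumptions: the cutoff constants are invented, and the fall-back to Python's built-in `sorted` stands in for the heuristic choice of comparison sort described above):

```python
def spreadsort(data, c=6, fallback_cutoff=32):
    """Recursive sketch of spreadsort: distribute into roughly n/c
    equal-width bins, then recurse on each bin, falling back to a
    comparison sort on small bins."""
    n = len(data)
    if n <= fallback_cutoff:
        return sorted(data)             # small bin: comparison sort
    lo, hi = min(data), max(data)
    if lo == hi:                        # all elements equal
        return list(data)
    nbins = max(2, n // c)
    width = (hi - lo) / nbins
    bins = [[] for _ in range(nbins)]
    for x in data:
        # clamp the index so that x == hi lands in the last bin
        bins[min(nbins - 1, int((x - lo) / width))].append(x)
    # bins are already ordered relative to each other: concatenate
    return [x for b in bins for x in spreadsort(b, c, fallback_cutoff)]
```

Each level of recursion narrows the value range of every bin, so the recursion terminates: a bin whose elements are all equal is returned immediately, and any other bin is split across at least two bins at the next level.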

Like other distribution-based sorts, spreadsort has the weakness that the programmer must provide a means of converting each element into a numeric key, for the purpose of identifying which bin it falls in. Although this is possible for arbitrary-length elements such as strings, by treating each element as if it were followed by an infinite number of minimum values, and indeed for any datatype possessing a total order, it can be more difficult to implement correctly than a simple comparison function, especially for complex structures. A poor implementation of this key function can result in clustering that harms the algorithm's relative performance.
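A key function for strings might look like the following hedged sketch: the name `string_key`, the prefix length, and the UTF-8 encoding are assumptions for illustration, not part of the original algorithm. It maps a fixed-length prefix of the string's bytes to an integer, padding short strings with zero bytes (the minimum value) on the right, which is the "followed by an infinite number of minimum values" idea above.

```python
def string_key(s, prefix_len=8):
    """Map a string to an integer key for binning.

    Encodes up to prefix_len bytes of s and right-pads short strings
    with 0 bytes, so that integer comparison of keys matches
    lexicographic comparison of the prefixes.  Strings differing only
    after prefix_len bytes collide into the same key and must be
    separated by later passes or the fall-back comparison sort.
    """
    b = s.encode("utf-8")[:prefix_len]
    return int.from_bytes(b.ljust(prefix_len, b"\x00"), "big")
```

Real implementations typically refine bins using successive chunks of the string rather than a single fixed prefix.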

Performance

The worst-case performance of spreadsort ultimately depends on what sort it switches to on smaller bins: O(n log n) if it uses a worst-case O(n log n) sort such as mergesort or heapsort, and O(n²) if it uses quicksort. For distributions where twice the size of the key in bits k is roughly the square of the log of the list size n or smaller (2k < (log n)²), it does better in the worst case, achieving O(n·(k − log n)^0.5) worst-case time.

Experiments were done comparing an optimized version of spreadsort to the highly optimized C++ std::sort, typically implemented as quicksort switching to insertion sort on small sublists. On lists of integers spreadsort showed a 20% time improvement for sparse, random lists, and an improvement of up to seven times for lists with a small range of values or large clusters.[2]

In space performance, spreadsort is fairly weak: in its most efficient form it is not an in-place algorithm, using O(n) extra space; in experiments, about 20% more than quicksort with a c of 4–8, and 0.5% more than quicksort in its most modern optimized form (much larger c). Although it uses asymptotically more space than the O(log n) overhead of quicksort or the O(1) overhead of heapsort, it uses considerably less space than the basic form of mergesort, which requires auxiliary space equal to the space occupied by the list.

Spreadsort also works efficiently on problems too large to fit in memory and thus requiring disk access.

Two Levels are as Good as Any

An interesting result for algorithms of this general type (splitting based on the radix, then comparison-based sorting) is that they run in O(n) time for data drawn from any continuous integrable distribution.[3] This result can be obtained by forcing spreadsort to always iterate at least twice if the bin size after the first iteration is above some constant value. If the data is known to always come from some continuous integrable distribution, this modification of spreadsort can attain some performance improvement over the basic algorithm, and has better worst-case performance. If this restriction cannot usually be depended on, the change adds a little extra runtime overhead to the algorithm and gains little.

References

  1. ^ Steven J. Ross. The Spreadsort High-performance General-case Sorting Algorithm. Parallel and Distributed Processing Techniques and Applications, Volume 3, pp. 1100–1106. Las Vegas, Nevada. 2002.
  2. ^ Personal communication from Steve J. Ross. "I'll note again that std::sort is much more competitive against Spreadsort than Quicksort is; the speedup I saw dropped as low as 20% on integers (though it got as high as 7X for chunky data)."
  3. ^ Markku Tamminen: Two Levels are as Good as Any. J. Algorithms 6(1): 138–144 (1985).