Talk:Sorting algorithm


This article is part of a WikiProject to improve Wikipedia's articles related to Computer science. For guidelines see WikiProject Computer science and Wikipedia:Contributing FAQ.
This article is a former featured article candidate. Please view its sub-page to see why the nomination failed. For older candidates, please check the archive.


Algorithm tester

Here's a Python module for testing comparison-based sort routines written in Python. (Obviously this won't work for radix and trie sorts, but it should work for most others.) The purpose of this module is to verify that code presented as an implementation of a sort algorithm does, in fact, sort correctly. (Whether it is an implementation of the right algorithm is a little more difficult to verify mechanically.)

Reasons for giving algorithm implementations in Python rather than pseudocode are described on talk:Algorithm.

I call this module sorttest.py.

"""This is a Python module for testing sort routines.  The sort routines
must obey the following interface:

def sort(array, cmp=lambda x, y: x > y):

It doesn't matter what it returns, but it must not throw an exception
in any of the tests.

Note that in Python, the parameter names are part of the interface, so
the parameters must be called 'array' and 'cmp'.  (There can be
additional optional parameters, though.)

When it returns, it should have mutated the array 'array' so that it
contains the same items (in the sense of 'is'), but possibly in a
different order.  The new order must be such that, for any i in
range(len(array)-1), cmp(array[i], array[i+1]) is false.  So, by
default, it sorts ascending.

It must not mutate anything the array points to --- that's cheating
--- but this is hard to test.
"""

import random

def bubblesort(array, cmp=lambda x, y: x > y):
    """The simplest working sort routine, for testing purposes.

    This is here to make it possible to test sort-testing routines."""
    indices = range(len(array))
    indices.reverse()
    for j in indices:
        for i in range(j):
            if cmp(array[i], array[i+1]):
                (array[i], array[i+1]) = (array[i+1], array[i])

OutOfOrder = "sorttest.OutOfOrder"
ScrewedUpArray = "sorttest.ScrewedUpArray"

def brokensort1(array, cmp=lambda x, y: x > y):
    """Broken 'sort' routine that overwrites the whole array with 1 element."""
    for i in range(len(array)): array[i] = array[0]

def brokensort2(array, cmp=lambda x, y: x > y):
    """Broken 'sort' routine that bubblesorts backwards."""
    bubblesort(array, lambda x, y, cmp=cmp: cmp(y, x))

def testsort_onearray(array, sort, cmp=None):
    """Given a sort routine and an array, raise an exception if it sorts wrong.
    """
    arraycopy = array[:]  # copy array
    if cmp is None: sort(array)
    else: sort(array, cmp)

    # verify that the array is in order
    if cmp is None: cmp = lambda x, y: x > y
    for i in range(len(array)-1):
        if cmp(array[i], array[i+1]):
            raise OutOfOrder, (i, arraycopy, array, array[i], array[i+1])

    # verify that it consists of the elements of the original array:
    # doesn't contain any fewer elements:
    if len(array) != len(arraycopy):
        raise ScrewedUpArray, ("array length changed", arraycopy, len(array),
                               len(arraycopy))
    # and doesn't contain any more copies of any element than the original
    # array:
    idmap = {}
    for i in arraycopy:
        if not idmap.has_key(id(i)): idmap[id(i)] = 0
        idmap[id(i)] = idmap[id(i)] + 1
    for i in array:
        if not idmap.has_key(id(i)):
            raise ScrewedUpArray, ("element wasn't in original array",
                                   arraycopy, i)
        idmap[id(i)] = idmap[id(i)] - 1
        if idmap[id(i)] < 0:
            raise ScrewedUpArray, ("element was in original array fewer times",
                                   arraycopy, i)

def qwi(string):
    """Turn a string containing a list of ints into a list of ints."""
    return map(int, string.split())

def testsort(sort):
    """Test a sort routine on a variety of inputs.  Main entry point."""
    
    def backwards(x, y): return x < y
    
    # simplest case: empty array
    testsort_onearray([], sort)
    testsort_onearray([], sort, backwards)
    # sorting a short already-in-order array
    testsort_onearray([1, 2, 3], sort)
    testsort_onearray([3, 2, 1], sort, backwards)
    # an actual array that needs some sorting
    testsort_onearray([4, 2, 5, 3, 6, 0], sort)
    testsort_onearray(qwi('1 2 3 3 3 4 5 5'), sort, backwards)
    testsort_onearray(qwi('5 5 5 4 3 2 1 1'), sort)
    # more more-or-less random tests with lists of integers
    testsort_onearray(qwi('1 33 1 3 1 3 42 1 5 5 1 -1 17 40'), sort)
    testsort_onearray(qwi('1 1 1 1 1'), sort)
    testsort_onearray([1], sort)
    # I'd like to use a bigger random list, but O(N^2) sorts get too slow
    rg = random.Random()
    testsort_onearray([rg.randint(0, 1000) for x in range(100)], sort)
    # verify that it works on things other than integers
    testsort_onearray('vow to bring enlightenment to all sentient beings'
                      .split(), sort)
    testsort_onearray(map(lambda x: [x], qwi('1 3 1 4 5')), sort,
                      lambda x, y: x[0] > y[0])

def test():
    """Test routine to verify that sort-testing routines work.

    This routine runs when the module loads to ensure that it still
    works correctly.

    """

    testsort_onearray([], bubblesort)
    testsort_onearray([], bubblesort, lambda x, y: x < y)
    testsort_onearray([1, 2, 3], bubblesort)
    testsort_onearray([1, 2, 3], bubblesort, lambda x, y: x < y)
    testsort_onearray([3, 2, 1], bubblesort)
    testsort_onearray([3, 2, 1], bubblesort, lambda x, y: x < y)
    testsort_onearray(map(int, '2 3 3 1 41 31 1 0 1'.split()), bubblesort)

    ok = 0
    try: testsort_onearray([1, 2], brokensort2)
    except: ok = 1
    assert ok

    ok = 0
    try: testsort_onearray([1, 2], brokensort1)
    except: ok = 1
    assert ok

    testsort(bubblesort)

    ok = 0
    try: testsort(brokensort1)
    except: ok = 1
    assert ok

    ok = 0
    try: testsort(brokensort2)
    except: ok = 1
    assert ok

test()
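
For anyone who wants to try the harness, here is a minimal usage sketch (not part of the original module; insertion_sort is an illustrative routine, and the module is assumed to be saved as sorttest.py):

import sorttest

def insertion_sort(array, cmp=lambda x, y: x > y):
    """Straight insertion sort obeying the interface described above."""
    for i in range(1, len(array)):
        item = array[i]
        j = i
        while j > 0 and cmp(array[j-1], item):
            array[j] = array[j-1]
            j = j - 1
        array[j] = item

sorttest.testsort(insertion_sort)  # raises an exception if the routine is broken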

Selection sort

Could somebody elaborate a bit on the remark to Selection sort please?

works in-place, but loses stability or gains complexity

--HJH

I think they mean the following: if you use the obvious in-place implementation of selection sort (i.e.: find the smallest element in the list of not-yet-sorted elements, then swap it to the correct position), then it won't be stable anymore. If you want to keep it stable, you need to use a more complex algorithm. AxelBoldt 18:39 Oct 17, 2002 (UTC)
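
To make the trade-off concrete, here is a rough sketch of both variants (added for illustration, not from the article):

def selection_sort_swap(a):
    """In-place selection sort; the long-range swap can reorder equal keys."""
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: a[j])
        a[i], a[m] = a[m], a[i]

def selection_sort_stable(a):
    """Stable variant: slide the minimum into place instead of swapping,
    at the cost of O(n) moves per step."""
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: a[j])
        a.insert(i, a.pop(m))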

This is true of all unstable sorting algorithms, so I don't think it's necessary. Martin

Combsort comments moved to talk:comb sort

List or table?

I considered using an unordered list instead of a table - please take a look and decide which is (A) easiest to read and (B) easiest to edit... Martin

  • Bubble sort: O(n²), but O(n) on already sorted data; works in-place; stable
  • Cocktail sort (bidirectional bubble sort): O(n²), but O(n) on already sorted data; works in-place; stable
  • Comb sort: O(n log n); works in-place; stable
  • Selection sort: O(n²); works in-place, but loses stability or gains complexity; unstable
  • Straight insertion sort: O(n²), but O(n) on already sorted data; works in-place; stable
  • Bucket sort: O(n); requires O(n) extra memory; stable
  • Counting sort: O(n+k); requires O(n+k) extra memory; stable
  • Heapsort: O(n log n); works in-place; unstable
  • Smoothsort: O(n log n), but O(n) on already sorted data; works in-place; unstable
  • Merge sort: O(n log n); requires O(n) extra memory; stable
  • Quicksort: O(n log n), but O(n²) in worst case; requires O(log n) extra memory; unstable
  • binary tree sort: O(n log n), but O(n²) on already sorted data; requires O(n) extra memory, typically several pointers per input item; stable
  • Pigeonhole sort: O(n+k); requires O(k) extra memory; can only be used when all keys are unique
  • Radix sort: O(n log k); requires O(n) extra space; stable
  • Shell sort: Worst case O(n^1.5), Average case O(n^1.25), Best case O(n log n) on already sorted data; works in-place; stable
  • Bogosort: Worst case O(n!), Average case O((n/2)!), Best case O(n), depending on luck; requires O(n) extra space; unstable

The table is easier to read and not that difficult to edit (in fact it may be easier). The list gains flexibility, but it's not really needed. Verbose descriptions should be handled in the individual articles. This is just a quick overview and comparison, which tables are well suited for in this case. -nknight 10:25 Apr 26, 2003 (UTC)

Thanks for the feedback - I was uncertain, hence why I wasn't bold... :)
Another suggestion - what if the complexity columns were merged into one column? Might save space/width/... Martin 13:31 Apr 26, 2003 (UTC)

I found and corrected a Big Fat omission from the list of mathematical topics: It did not link to this page! Now it does, but it does not link to most of the separate articles on individual sort algorithms. Since there are only 24 hours in a day, I will not fix that now, so this notice is your chance to beat me to it. Michael Hardy 23:39 May 1, 2003 (UTC)

Is sort algorithm a mathematical topic? I think it is a computer science topic. Anyway, about table. I don't like the current table, which is not easy to read (remember rendering of tables heavily depends on your environment) and not eay to edit. But I don't see the list version you posted better. I will edit the article by my version, which looks better to me. -- Taku 00:10 May 2, 2003 (UTC)

It is a computer science topic, but it is also a mathematical topic. Michael Hardy 00:15 May 2, 2003 (UTC)

The table was moved here. The article used to have a table but was converted to have a list. The list is usually more readable.


Sorting algorithms

sort algorithm     runtime order   extra memory   stability   note
Bogosort           O((n/2)!)       O(n)           stable      intentionally poor algorithm
Bubble sort        O(n²)                          stable
Cocktail sort      O(n²)                          stable      bidirectional bubble sort
Comb sort          O(n log n)                     unstable
Selection sort     O(n²)                          unstable
Insertion sort     O(n²)                          unstable    [?]
Bucket sort        O(n)            O(n)           stable
Counting sort      O(n+k)          O(n+k)         stable
Heapsort           O(n log n)                     unstable
Smoothsort         O(n log n)                     unstable
Merge sort         O(n log n)      O(n)           stable
Quicksort          O(n log n)                     unstable
Binary tree sort   O(n log n)      O(n)           stable
Pigeonhole sort    O(n+k)          O(k)           stable
Radix sort         O(nk)           O(n)           stable
Shell sort         O(n^1.25)                      stable

-- Taku 22:03, Oct 28, 2003 (UTC)

I guess I should have looked at this page before I inserted the table. But I think a table would be best: it allows one to quickly find an algorithm with an acceptable complexity. Someone said that we shouldn't try to make that information the focus because the reader can go to the individual page, but I will argue that the overview section should make comparisons easier, and not simply list them. -Shawn P. 'Indefual' Conroy

I think it is a matter of preference. The list looks much nicer to me and the list still has the same information that the table can have. The alternative is to have a separate article with the table and more complete information, such as whether the algorithm works in-place. I must say that the trick is that the table can be rendered very ugly in certain environments, while the list is always more reliable. -- Taku 21:31, Oct 29, 2003 (UTC)

Bring back the table. Then you can also add the best, average and worst runtime orders, like this: -- Dissident 05:00, 11 Apr 2004 (UTC)

Sorting algorithms

sort algorithm   runtime order               extra memory   stability   note
                 best    average    worst
Hmm, I didn't notice this section when I added the current table. I like the table because it has a lot more information and makes comparison easier. The list is sparser and somewhat easier to edit, but I think that's outweighed by the table's usefulness. Deco 17:27, 9 December 2005 (UTC)

Theta notation

Is there any reason why the algorithms' asymptotic growth is shown in big-O notation instead of big-Theta notation? I mean, isn't a tight fit better than an upper bound (for instance, you could correctly say that heap sort is O(n^10))? Gkhan
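
For readers following the thread, the distinction in question, stated informally (a summary added for illustration):

f(n) = O(g(n)) \iff \exists\, c, n_0 > 0 \text{ such that } f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2, n_0 > 0 \text{ such that } c_1\,g(n) \le f(n) \le c_2\,g(n) \text{ for all } n \ge n_0

So "heapsort is O(n^10)" is true but vacuous, while Θ(n log n) bounds the growth from both sides.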

What about alphabet size?

Shouldn't alphabet size be mentioned somewhere? For an alphabet of size m, an input of size n contains n / log m elements. Also, it takes O(log m) to do an operation on an element, or pair of elements. Some examples: pigeon hole sort: not really O(n), but O(n + m log n - log log m). Radix sort: not O(n), but O(n log m). Heap sort: not O(n log n), but O(n log n log m). And so on. This might be interesting for people who deal with alphabets that do not fit into a single machine word.

Yes I think we should be as rigorous as possible. Currently we take a different approach, where there are n elements and the keys are of size k, meaning that the input size is n*k. Is that good enough? --P3d0 18:32, 2 June 2006 (UTC)

Permutation sort?

Should a page be added for it or should it be included with Bogosort? Should Bogosort be secondary to permutation sort?

-- UTSRelativity 20:23 2004-12-12 (UTC)

Speed of in-place merge sort

An anonymous user changed my addition

to

  • In-place merge sort - O(n log n); several times slower, more complicated

I added it based on this:

"In-Place Mergesort is yet another abomination. Mergesort is supposed to run in O(n log n), but the implementation here runs in O(n * n)." [1]

Can anyone give me an implementation of in-place merge sort that's O(n log n)? -- Smjg 17:06, 1 Mar 2005 (UTC)

There are some here: [2] The abstract for this paper reads:
Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant requires at most N log₂ N + O(N) comparisons and 3N log₂ N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log₂ N + O(N) comparisons and εN log₂ N moves, for any fixed ε > 0 and any N > N(ε). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead in the index manipulation, our fastest in-place mergesort behaves still about 50 per cent slower than the bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging.
Bkell 18:25, 7 August 2005 (UTC)
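
To illustrate where the quadratic behaviour in the implementation quoted above comes from, here is a rough Python sketch of the naive in-place merge (illustrative only, not the code from either link):

def merge_in_place(a, lo, mid, hi, cmp=lambda x, y: x > y):
    """Merge sorted runs a[lo:mid] and a[mid:hi] by rotation.
    Each insert shifts O(n) elements, so the whole mergesort becomes O(n^2)."""
    i, j = lo, mid
    while i < j and j < hi:
        if cmp(a[i], a[j]):
            a.insert(i, a.pop(j))  # the O(n) shift that spoils the bound
            j += 1
        i += 1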

Other sort pages

I found these pages which should be integrated here or merged in with other sort articles:

Daniel Quinlan 04:58, Mar 8, 2005 (UTC)

That page gave "Double bubble sort" as a misleading name for insertion sort. But AFAICS this is just wrong - other websites give it as meaning cocktail sort. As such, I'm changing it to redirect there. -- Smjg 21:55, 30 Mar 2005 (UTC)
It's not a cocktail sort as it doesn't alternate direction. It is not the same; it gets its name because the first bubble finds something out of place and then starts a bubble in the opposite direction to correct the order. It is very efficient for data that is almost in order.--Jirate 22:13, 2005 Mar 30 (UTC)
What are you referring to as "it" here? We're talking about two things:
  • What the Wikipedia article refers to as "double bubble sort", which is insertion sort
  • What other websites refer to as "double bubble sort", which is cocktail sort.
What was the point of reverting it to be a duplicate of Insertion sort? We should make up our mind: either
  • Redirect to Cocktail sort, what the other sites I checked use it to mean (the only exception being a Wikipedia mirror) and a more intuitive meaning IMO
  • Redirect to Insertion sort, which it currently describes
  • Get rid of the page, considering that the title's miscapitalised anyway. -- Smjg 12:08, 31 Mar 2005 (UTC)
It is not an insertion sort. I think you don't actually understand either algorithm.--Jirate 12:38, 2005 Mar 31 (UTC)
Why are you being so vague? What do you consider to be the difference between them? Simply that your "double bubble sort" wastes time doing whole swaps? Or is there some other subtle difference that really renders them distinct algorithms? -- Smjg 15:46, 31 Mar 2005 (UTC)
The wasteful swapping bit is the bubble. [3] is what I understand to be an insertion sort as opposed to a bubble. Have you checked Knuth?--Jirate 19:53, 2005 Mar 31 (UTC)
The other point is that I think your insertion sort is actually not an insertion sort. It's a modified double bubble.--Jirate 12:40, 2005 Mar 31 (UTC)
Do you know of an algorithm that is more accurately called insertion sort? -- Smjg 15:46, 31 Mar 2005 (UTC)
[4] --Jirate 19:53, 2005 Mar 31 (UTC)
That site seems to be down at the moment, but the version in Google's cache is exactly insertion sort as I understand it and that article has it. -- Smjg 09:36, 1 Apr 2005 (UTC)

AFAIK, both are implementations of insertion sort. The implementation Jirate points to doesn't have the wasteful swapping (xor swap or not). —UTSRelativity 01:15, 1 Apr 2005 (UTC)

[5][6] [7][8]--Jirate 01:34, 2005 Apr 1 (UTC)
This, as I understand it, is an insertion sort [9], which is different and requires an extra variable, which can be vital on very small processors.--Jirate 01:41, 2005 Apr 1 (UTC)
The links for double-bubble sort seem to describe cocktail sort anyhow. — UTSRelativity 00:10, 2 Apr 2005 (UTC)
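
For reference in this thread, here is a minimal cocktail sort (the algorithm the other sites appear to mean by "double bubble sort"), sketched for illustration:

def cocktail_sort(a):
    """Bidirectional bubble sort: alternate left-to-right and
    right-to-left bubbling passes over a shrinking window."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        for i in range(lo, hi):        # bubble the maximum up to a[hi]
            if a[i] > a[i+1]:
                a[i], a[i+1] = a[i+1], a[i]
        hi -= 1
        for i in range(hi, lo, -1):    # bubble the minimum down to a[lo]
            if a[i-1] > a[i]:
                a[i-1], a[i] = a[i], a[i-1]
        lo += 1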

Removed "Girish sort" link

I removed this text:

link and text removed by shankar see note below

The link takes you to some guy's home page with a collection of unremarkable sorting algorithms, (at least one of which, "sort1", is O(n^2)). Also, his so-called "new technique" that was "till now considered as impossible" is, as far as I can tell, a trivial algorithm:

  1. Convert linked list into array
  2. Sort array
  3. Convert array back into linked list

I do not think this page is of general interest, so I have removed it. --P3d0 18:18, Jun 8, 2005 (UTC)

Any new invention will seem trivial after it is invented and explained. Your own Insertion sort page, under the variants section, describes binary search on a linked list as impossible. Anyway I agree that you were right to remove the link to the page, as the speed of the functions was not satisfactory. I was in too big a hurry to get nowhere and was fool enough to submit it here without testing its speed. I have removed the page. Shankar.

No, these really are trivial. Researchers haven't written about them because they're so obvious and inefficient that they're unremarkable. I encourage you to keep thinking about the problem, but to really be remarkable your algorithm has to be either better than all known algorithms in some notable way or a useful conceptual/educational tool, which these are not. Deco 20:12, 10 December 2005 (UTC)
Oh, and binary search on linked lists is impossible, at least in O(log n) time without some supporting data structure. The same article does discuss the use of search trees or other data structures to index linked lists. Converting to an array is an O(n) operation, so it's just not really useful. Deco 21:43, 10 December 2005 (UTC)

Flash Sort?

I didn't know of this sorting algorithm (http://www.neubert.net/FSOIntro.html), but it claims quite efficient results: FlashSort sorts n elements in O(n) time. Flash-Sort uses a vector L(k) of length m in a first step for the classification of the elements of array A. (k small compared to n). If this is true, this beats them all. But isn't it another kind of BucketSort? I didn't have time to test, sorry. Anyone for this?

The author's assertion that this algorithm is O(n) is simply incorrect. For instance, there's no reason almost all the items wouldn't end up in the same class, and then this algorithm degenerates to insertion sort. For uniformly distributed data, it does have an average running time of O(n), but it certainly doesn't break any known laws of computation. Having said that, I do think it deserves to be mentioned on our page. --P3d0 01:49, August 20, 2005 (UTC)
Concur with P3d0. It's an interesting algorithm but is only O(n) on average over uniformly random inputs (not even over permutations of any fixed input, which would be more useful). Not the sort to end all sorts. Deco 05:02, 20 August 2005 (UTC)
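
To see P3d0's point concretely, here is an illustrative sketch of the classification step (not Neubert's code); skewed input drops nearly every item into one class, leaving the final insertion-sort pass with O(n²) work:

def classify(a, m):
    """Distribute items into m classes by linear interpolation on the key,
    as flashsort's first pass does."""
    lo, hi = min(a), max(a)
    scale = (m - 1.0) / (hi - lo) if hi > lo else 0.0
    classes = [[] for _ in range(m)]
    for x in a:
        classes[int(scale * (x - lo))].append(x)
    return classes

# classify([1, 1, 1, 1, 1000], 5) puts four of the five items into class 0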

Exchange Sort?

Should the exchange sort algorithm be added here? AnthonyWong

Exchange sort is bubble sort. I've added the appropriate redirect. Deco 01:13, 24 August 2005 (UTC)

Bitonic sort

I'm no expert on sorting algorithms, but I am developing something which uses this one. I'm somewhat surprised it's not on Wikipedia yet. Bitonic sort seems to be merge-sort-like (in the sense that it has merges in it) and designed to scale on parallel hardware. I've searched the merge sort page but it does not seem to be a simple renaming.

Maybe I'll write something on this myself, but I would rather spend this time on Berkeley sockets; I wouldn't do asymptotic analysis anyway. (Whoops, I forgot to sign the message) MaxDZ8 11:42, 1 November 2005 (UTC)


I managed to try it out a little. Looks like bitonic sort is a direct application of sorting networks, another thing missing from Wikipedia. In fact, its page links to bitonic sort. As for performance, it should be O(n log² n), and it effectively allows a ton of parallelization. On my system (which is NOT parallel) it has similar performance to HeapSort. I'll work to try it on a system with more execution units. MaxDZ8 18:49, 13 November 2005 (UTC)
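
Until an article exists, here is a compact illustrative sketch of the network (assuming the input length is a power of two); every iteration of the innermost loop is an independent compare-exchange, which is where the parallel scaling comes from:

def bitonic_sort(a):
    """Iterative bitonic sorting network; len(a) must be a power of two."""
    n = len(a)
    k = 2
    while k <= n:                    # build bitonic runs of doubling size
        j = k // 2
        while j > 0:                 # merge runs with stride-j exchanges
            for i in range(n):       # these n steps could run in parallel
                l = i ^ j
                if l > i:
                    up = (i & k) == 0
                    if (up and a[i] > a[l]) or (not up and a[i] < a[l]):
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2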

New tables

Hi everyone. I decided to reorganize the lists as tables so that we could show and compare numerous properties of all the sorts side-by-side. I tried to keep them narrow enough for small screens and tried them at a variety of font sizes for poor wrapping. I think they turned out well, but I appreciate any feedback you guys have. Deco 00:09, 7 December 2005 (UTC)

I appreciate the fixups to the table, but I wanted to note that the · characters in the headers really are important. Without these, at narrow screen widths, the header wraps, resulting in the big-O times wrapping in nasty-looking places, such as after the O or between n and log(n). I chose this instead of using a "width" attribute because it adjusts automatically to different font sizes. Since we do have a lot of readers at low resolutions still, I think this is important. Deco 19:22, 8 December 2005 (UTC)
Maybe a better way to solve this problem would be to put the notes as footnotes at the bottom of the table, so that we can get rid of the last column, which is by far the widest, and empty for most rows. —Bkell 01:55, 9 December 2005 (UTC)
Aha, I found a better way. The TD tag has a "nowrap" attribute that's perfect for just this situation. It might also be a good idea to move the notes down, but it's a bit of a tradeoff (clarity versus table width), so I'm not sure which way to go there. Deco 04:56, 9 December 2005 (UTC)

Language support

I added a section for language support. I feel like this belongs in this article, as the particular sort algorithm used by an implementation is often either left unspecified in the language specification or varies from version to version. It's also nice to discuss them all in one place. I've used a bulleted list format, as I don't expect to say more than a couple sentences for each language. This section is currently incomplete — please feel free to add more, but please keep the most popular languages near the top, since I think these will be the ones that interest most readers. Deco 21:20, 16 December 2005 (UTC)

Broken link

This link under "See also" is broken:

Perhaps it should point to Wikibooks:ComputerScience:Algorithms? Camillus (talk) 23:00, 25 March 2006 (UTC)

Sorting algorithm times

I removed a note that someone added a couple days ago insinuating that the run times, especially worst case run times, of some or many of the sorts listed in the table were incorrect. I'm not an expert on sorting algorithms or even computer science but I do know that selection sort definitely has worst case time O(n^2) not O(n^n). I don't think the person who posted the previous comment had any idea what they were talking about.

That said, I was looking through it and I'm pretty sure that the worst case for bucket sort should be O(n). The only way for it to be O(n^2) is if you are unable to use any data structure other than an array to store the buckets, but even if that's the case a typical array resizing would give O(n log n) for inserting everything into the same bucket. Regardless, I was under the impression that worst case times were for the best implementation, and using any sort of linked list for the buckets returns the worst case time to O(n). Anyway I'm pretty sure I'm right about that, but not confident enough to change it myself -- so if somebody who is more of an expert could either change that or tell me why I'm wrong that would be good. --An Adverb 06:47, 5 April 2006 (UTC)

No sorting algorithm can possibly beat O(n log n) asymptotic complexity. This goes for bucket sort too. The reason is that there are n! possible permutations, so it takes O(n log n) bits just to describe which permutation is the right one, let alone put the items in that order. If you would like to describe why you think bucket sort is O(n) then perhaps we can explain where you're mistaken. --P3d0 16:56, 5 April 2006 (UTC)

No comparison sorting algorithm can ever beat O(n log n) average or worst case time, but bucket sort does not compare elements to each other and is therefore not subject to the theoretical bound of O(n log n) time. Since bucket sort is not a comparison sort it is not able to produce a sorted list on a general data set -- bucket sort is only guaranteed to actually sort a list if the number of buckets is equal to the domain of the set in question, and even if you have the processor capacity an O(n) function of the domain is probably greater than an O(n log n) function on the data set -- and in the cases where it isn't it wouldn't usually make much sense to be storing the data in a list anyway. Bucket sort isn't anything magical by being O(n) time because it doesn't solve the same problem as the fastest sorting algorithms like quicksort. Note that all of the sorts listed as "indexing" type are less than O(n log n) and none of them are comparison sorts. I'm quite sure that the worst case for bucket sort is O(n), or possibly O(n log n), depending on how you are 'allowed' to implement bucket sort and still call it bucket sort. --An Adverb 17:36, 5 April 2006 (UTC)
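
To make that concrete, here is a minimal counting-sort sketch (added for illustration, not from the thread), showing how a non-comparison sort gets by in O(n + k) when keys are integers in a known range k:

def counting_sort_simple(a, k):
    """Sort integers drawn from range(k) without comparing elements."""
    counts = [0] * k            # O(k) extra memory
    for x in a:                 # O(n): one increment per item, no comparisons
        counts[x] += 1
    out = []
    for v in range(k):          # O(n + k) to write the output
        out.extend([v] * counts[v])
    return out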

The "function of the domain" is not relevant. Computational complexity is defined in terms of the size of the input. Furthermore, my argument about the impossibility of an O(n) sort has nothing to do with comparisons. It's a simple pigeonhole argument based on how many possible answers there are: you need O(nlogn) bits to specify one of n! possibilities. --P3d0 03:00, 11 April 2006 (UTC)
Afraid he's right, P3d0. See comparison sort. The O(n) upper bound assumes that word-size operations are constant-time, but that's the only caveat. The fact that you need O(n log n) bits to specify an ordering doesn't mean it has to perform O(n log n) comparisons to sort a list of n numbers, because you can encode more than a single bit of information in an arithmetic operation like an increment. In any case if Introduction to Algorithms says it's O(n), it's O(n) - you can't argue with a seminal work. Deco 10:08, 28 May 2006 (UTC)
I think the n they are referring to is not the number of items to be sorted, but rather the size of the input. With an average key size of k, that means n/k items. I was assuming n items. My mistake. --P3d0 18:51, 2 June 2006 (UTC)

Quick sort

The worst case of quick sort is O(n log n), not O(n²). If we choose the median as pivot, the best, worst and average cases will be O(n log n). —The preceding unsigned comment was added by 87.107.51.195 (talkcontribs).

Ok, and just how do you find the median element? --P3d0 23:03, 14 April 2006 (UTC)
Actually, if a bad pivot is chosen, the worst case is O(n²). Using a median-of-three (or more!) the worst case is very rare, but clever hackers can exploit it. Read more here and here. --DevastatorIIC 08:27, 24 April 2006 (UTC)
There is a worst-case linear algorithm for finding the median (ref selection algorithm). This variant of quicksort is very slow in practice and so never used. I mentioned this at quicksort. Deco 10:03, 28 May 2006 (UTC)
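
For concreteness, the median-of-three pivot selection mentioned above looks roughly like this (an illustrative sketch, not code from the article):

def median_of_three(a, lo, hi):
    """Return the index of the median of a[lo], a[mid], a[hi]; this guards
    against already-sorted input, though crafted input can still defeat it."""
    mid = (lo + hi) // 2
    trio = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])
    return trio[1][1]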

In the discussion of quicksort, it is written:

This can be done efficiently in linear time and in-place.

Here, in-place is a link to in-place. However, the second sentence of the second paragraph of that article says:

In reality this is not sufficient (as the case of quicksort demonstrates) nor is it necessary; the output space may be constant, or may not even be counted, for example if the output is to a stream.

It is also mentioned in the quicksort article that this sort takes Ω(log n) storage space.

The piece of text that you quoted out of context refers not to the quicksort procedure but to the partition subprocedure, which is in fact linear and (typically) in-place. Deco 07:41, 29 September 2006 (UTC)
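
For anyone tripped up by the same passage, the partition subprocedure can be sketched as follows (Lomuto's scheme, shown for illustration; the article may use a different variant). It makes one pass and keeps only a couple of index variables, hence linear time and in-place:

def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[hi]; return the pivot's index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):          # single O(n) pass
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]  # constant extra space: just i and j
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i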
Sorry, you're right, and I should have read more carefully. I wonder if anyone else found it unclear, though? JadeNB 03:22, 30 September 2006 (UTC)
Good question. It could definitely use clarification. Deco 03:37, 30 September 2006 (UTC)

Table coloring

I thought I'd add a little color to the table. Green for good, red for bad (kinda like here) (the neutral color is #ffffdd, if anyone wants to add that). I marked what I understand as good as green, and bad as red. I left the sorts with space requirements blank as it isn't really bad. Maybe neutral.

I also cleaned up the table internally a little. --DevastatorIIC 08:34, 24 April 2006 (UTC)

It looks great. --P3d0 16:39, 28 April 2006 (UTC)

Complexity in terms of key size k

A number of the algorithms have an incorrect complexity in terms of the key size k. For example, Pigeonhole sort is O(n+2^k), not O(n+k) as stated, because for keys of size k there are O(2^k) possible keys. Unless anyone has any objections, I'm going to replace the occurrences of k with 2^k. --P3d0 13:58, 25 April 2006 (UTC)
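
A tiny sketch of where the 2^k comes from (illustrative only): with keys of k bits, a pigeonhole sort needs one hole per possible key, i.e. 2^k of them:

def pigeonhole_sort(a, key_bits):
    """Sort non-negative integers of at most key_bits bits."""
    holes = [[] for _ in range(2 ** key_bits)]  # O(2^k) extra memory
    for x in a:
        holes[x].append(x)
    return [x for hole in holes for x in hole]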

Ok, I've made the change. --P3d0 16:38, 28 April 2006 (UTC)

Depends how you define k of course - my understanding was that it was the number of key values rather than the key size (just as n is the number of elements). But, hey, whatever. Deco 10:10, 28 May 2006 (UTC)
Complexity is conventionally defined in terms of the problem size. If you use other definitions, you risk ending up with misleading statements like:
I can do Integer factorization in O(n) time!*
*(where n is the magnitude of the integer). --P3d0 20:57, 1 June 2006 (UTC)
Of course you're right, but in sorting and other analysis of algorithms it's conventional to make assumptions that would not be made in complexity, such as that the values are constant-sized, that an index into the list is constant-sized, and to measure efficiency in terms of operations like exchanges or comparisons instead of in terms of basic Turing machine or RAM model operations. In this case I agree that letting k be the key size is better as long as the articles are consistent with this. Deco 20:08, 2 June 2006 (UTC)
At least state that k is the key size in bits. Wait, I did that. advance512 21:20, 4 June 2006 (UTC)

Rapid Sort

How is this "the fastest known computer and electronic sort available"? As far as I can tell, it doesn't even work or make sense. (24.36.245.194 09:26, 28 May 2006 (UTC))

It isn't. It's just a rather difficult to read description of counting sort - which is a fine sort in limited cases, but hardly the end-all be-all. The name "rapid sort" was invented by the author. It's up for deletion at Wikipedia:Articles for deletion/Rapid sort, come contribute your voice. Deco 10:02, 28 May 2006 (UTC)
Hey guys you aren't bothering me by deleting it. Believe me I know the road that was traveled that led to its development and the discovery of its elegance as the result of making the effort to improve other sorts. Trashing it will only deprive others of its knowledge and well if that's what you guys are all about then you won't last long on this Wiki. ...IMHO (Talk) 17:53, 28 May 2006 (UTC)


Also unlike the counting sort the Rapid sort can be implemented directly in hardware. All you need is random access memory, a circuit that uses each item of data to be sorted as an address and increments the memory contents at that address, and a standard counter to step through the memory addresses, extract each count, and write the address back out as data. Why do you want to keep people from knowing about this sort anyway? You aren't the only people who are interested in and want to learn everything there is to know about sort routines. Give other people a break for a change. ...IMHO (Talk) 22:07, 28 May 2006 (UTC)
For the same reason we don't document obscure garage bands or obscure works of fan fiction. It's our policy. You can still publish your ideas on your own website or on many other relevant forums, and solicit feedback there, and I don't discourage you from doing so. Many sorting algorithms can be implemented in hardware, and a lot of them are faster than the one you describe. Deco 22:22, 28 May 2006 (UTC)
Why isn't the Counting sort included in the Sorting algorithm article, yet being used as the basis for excluding the Rapid sort on the presumption of the Counting sort's existence? I think the article is incomplete without publication of at least one of them, or perhaps there is an altogether different motive for publication of the article. ...IMHO (Talk) 14:17, 3 June 2006 (UTC)
Counting sort is included in the table. It's not listed under "popular" sorts because that's just a summary of a few very commonly used sorts, and counting sort is a special-case sort that isn't really commonly used in practice (just look at the library support section to see what I mean). The article would be huge if we even summarized all sorting algorithms in it. Deco 13:56, 4 June 2006 (UTC)
I hate to be the one to tell you this but computer algorithms including sorting algorithms are by nature now very numerous. When I started programming in the early sixties there were not that many which had been widely published. Case in point is Seward’s Counting sort whose publication was limited to his Master’s thesis in 1954. This is probably the reason why so many other sorts exist today. There has never been a proper and dynamic classification. Each algorithm has been developed independently of one another due to the failure to recognize the need for proper algorithm classification. Such classification demands that all characteristics necessary to identify each and every algorithm be included rather than excluded from the table. I don't want to be cruel but quite frankly if you were a geneticist and computer algorithms were phenotypes you would be out of a job! The proper way to handle a multitude of similar algorithms is to create a table which includes as many possible multiple state characteristics as your mind can devise such that no algorithm can be excluded from the table. The next step is to optimize the order of the characteristics such that the greatest separatory value is achieved with the least number of characteristics. Also you have to indulge dynamic classification and be willing and able to add new algorithms and new multiple state characteristics and reclassify each and every one. Sorry to be the one to tell you this but this is how modern classification of algorithms works. By the way I am not unfamiliar with the consequences of failing to publish and classify every single algorithm out there because believe it or not I more frequently than not run into the Rapid sort published online in some user or "developer" forum under the name of Quick sort of all things. Until we have a universal classification table which can contain each and every algorithm in the wild this unmanaged conbobulation of algorithms will continue to exist. Your approach is the equivalent of creating a personal dictionary, excluding any algorithm that is not to your liking and then calling it universal. By so doing you are violating the basis of the scientific method and should be denied publication yourself. ...IMHO (Talk) 15:15, 4 June 2006 (UTC)
If you think there should be a summary of counting sort that would be okay with me, it's somewhat common... just keep it terse, and keep the number of summaries down, and list the most well known ones first, since those would be the ones readers are most interested in. Deco 06:07, 5 June 2006 (UTC)
Diego R. Martinez or the originator Harold H. Seward are the right people to do a summary of the Counting sort since I am not familiar with it and have never used it beyond doing an initial performance test which only confirmed that the Rapid sort is about 3.5 times faster. My focus right now is on a proper and practical classification of sorting algorithms, i.e., the characteristics which need to be included such as Class and Class parameters for each sort. ...IMHO (Talk) 00:33, 6 June 2006 (UTC)

In preparation for further discussion...

Please use the following criteria for writing Visual Basic programs to perform any sort(s) of your choosing:

(You will only need to add a Command Button to the form to print out the list of sorted numbers.)

  1. Option Explicit
  2. '------Rapid sort - (Local telephone number example) -----------
  3. 'Please use the following template to write a
  4. 'programs for each of the sort routine(s) you choose
  5. 'to compare in terms of:
  6. '1.) Total number of items sorted.
  7. '2.) Total primary array size to accommodate the sort.
  8. '3.) Total secondary array size to accommodate the sort.
  9. '4.) Total length of key.
  10. '5.) Total time for setup and sort.
  11. 'Once you have done a number of sorts then
  12. 'please provide me with the results.
  13. 'Thanks. User Pce3@ij.net
  14. '--------------------------------------------------
  15. Dim a() As Long, b() As Long
  16. Dim i As Long, n As Long, j As Long
  17. Dim start As Double, finish As Double
  18. Const one As Integer = 1
  19. Const zero As Integer = 0
  20. Const maxi As Long = 9999999
  21. Const topnum As Long = 8888888
  22. Const botnum As Long = 1111111
  23. Const elements As Long = botnum
  24. '-----------------------------------------------------------
  25. Private Sub Command1_Click()
  26. For i = one To j
  27. Debug.Print i; b(i)
  28. Next
  29. End Sub
  30. '-----------------------------------------------------------
  31. Private Sub Form_Load()
  32. ReDim a(maxi), b(maxi)
  33. '------------------------------------------------------------
  34. start = Timer
  35. '------------------------------------------------------------
  36. For i = one To botnum
  37. n = Int(topnum * Rnd) + botnum
  38. a(n) = a(n) + one ' (Check sort precursor uses a(n) = one here instead)
  39. Next
  40. '------------------------------------------------------------
  41. For i = botnum To topnum
  42. If a(i) > zero Then
  43. For n = one To a(i)
  44. b(j) = i: j = j + one
  45. Next
  46. End If
  47. Next
  48. '------------------------------------------------------------
  49. finish = Timer - start
  50. '------------------------------------------------------------
  51. Debug.Print
  52. Debug.Print "RESULTS: (Rapid sort)"
  53. Debug.Print
  54. Debug.Print "1.) Total number of items sorted:...................... "; Format(botnum, " #,###,###,###")
  55. Debug.Print "2.) Total primary array size to accomodate the sort:... "; Format(maxi, " #,###,###,###")
  56. Debug.Print "3.) Total secondary array size to accomodate the sort:. "; Format(j, " #,###,###,###")
  57. Debug.Print "4.) Total length of key:............................... "; Len(Str$(botnum))-one; " decimal digits."
  58. Debug.Print "5.) Total time for setup and sort:..................... "; finish; " seconds."
  59. Debug.Print
  60. '---------------------------------------------------------------
  61. End
  62. End Sub
RESULTS: (Rapid sort)
1.) Total number of items sorted:...................... 1,111,111
2.) Total primary array size to accommodate the sort:.. 9,999,999
3.) Total secondary array size to accommodate the sort: 972,250
4.) Total length of key:............................... 7 decimal digits.
5.) Total time for setup and sort:..................... 1.46899999999791 seconds.


...IMHO (Talk) 14:57, 1 June 2006 (UTC)

Check sort - precursor to the Rapid sort used to eliminate duplicate phone numbers

The only difference between the Rapid sort and the Check sort in terms of this code is the elimination of the so-called "counting" function, i.e., a(n)=a(n)+one versus a(n)=one [Note: allow me to state here that while a(n)=one does not appear to increment the value of a(n) in the absence of a plus sign, it actually does increment the value of a(n) from zero to one, hence adding a checkmark or count of one.] as in line number 38 above, which was not initially incorporated into the hardware version called an "Instant sort", which used a single-bit data bus random access memory chip (such as the 1972 variety of an NTE2102 1024x1 static RAM chip). The only other name I recall we used to refer to either sort is the "Simple sort", which is another reason why I think both sorts probably existed before my possible recreation of them in 1972, since we also had computer programming students there from both Honeywell and Verizon and several other companies and government institutions who had small mainframes (Univac 9400’s and IBM 360’s) during that time, with need of training for their employees. Our DP manager was a graduate of Vanderbilt in Electrical Engineering so the idea for them could have come literally from anywhere. My creation of both sorts could have originated through the process of osmosis. I do recall however that they occurred through my efforts to improve the Shell-Metzner sort routine, at possibly even a later date, maybe as late as the '90's. Hope this will be of some assistance in your quest to achieve an accurate historical record. ...IMHO (Talk) 07:17, 2 June 2006 (UTC)

Counting sort - Visual Basic version

(Note: to accommodate Visual Basic and avoid negative index numbers and the use of reserved words, the value of xend has been used for certain calculations instead of xmin, and most variable names have been changed.)


  1. Option Explicit
  2. '-- Counting sort (Example with identical local telephone number data) --------
  3. Dim aarray()
  4. Dim i As Long
  5. Dim s As Long
  6. Dim c As Long
  7. Dim range
  8. Dim ccount() As Long
  9. Dim scratch() As Long
  10. Dim j As Long
  11. Dim start As Double
  12. Dim finish As Double
  13. Const one As Integer = 1
  14. Const zero As Integer = 0
  15. Const xmax As Long = 9999999 ' same variable function as maxi
  16. Const xmin As Integer = zero
  17. Const xend As Long = 1111111
  18. Private Sub Command1_Click()
  19. For j = one To xend
  20. Debug.Print j; aarray(j)
  21. Next
  22. End Sub
  23. Private Sub Form_Load()
  24. start = Timer
  25. '-----------------------------------------------------------
  26. ReDim x(xend + one)
  27. ReDim aarray(xend + one)
  28. '------------------------------------------------------------
  29. For j = one To xend
  30. aarray(j) = Int((xmax - xend) * Rnd) + xend
  31. Next
  32. '------------------------------------------------------------
  33. GoSub csort
  34. '------------------------------------------------------------
  35. finish = Timer - start
  36. '------------------------------------------------------------
  37. Debug.Print
  38. Debug.Print "RESULTS: (Counting sort - Visual Basic)"
  39. Debug.Print
  40. Debug.Print "1.) Total number of items sorted:.......................... "; Format(xend, " #,###,###,###")
  41. Debug.Print "2.) Total Array (array) size to accommodate the sort:...... "; Format(xend, " #,###,###,###")
  42. Debug.Print "3.) Total Count (array) size to accommodate the sort:...... "; Format(range - one, " #,###,###,###")
  43. Debug.Print "4.) Total Scratch (array) size to accommodate the sort:.... "; Format(xend, " #,###,###,###")
  44. Debug.Print "5.) Total length of key:................................... "; Len(Str$(xmax)) - one; " decimal digits."
  45. Debug.Print "6.) Total time for setup and sort:......................... "; finish; " seconds."
  46. Debug.Print
  47. Exit Sub
  48. '------------------------------------------
  49. csort:
  50. range = xmax - xend + one
  51. ReDim ccount(xmax)
  52. ReDim scratch(range + one)
  53. '------------------------------------------
  54. For i = zero To range - one
  55. ccount(i) = zero
  56. Next
  57. '------------------------------------------
  58. For i = zero To xend - one
  59. c = aarray(i) + (one - xmin)
  60. ccount(c) = ccount(c) + one
  61. 'Debug.Print i; c, aarray(i)
  62. Next
  63. '------------------------------------------
  64. For i = one To range - one
  65. ccount(i) = ccount(i) + ccount(i - one)
  66. 'Debug.Print i, ccount(i)
  67. Next
  68. '------------------------------------------
  69. For i = zero To xend - one
  70. c = aarray(i) - xmin
  71. s = ccount(c)
  72. scratch(s) = aarray(i)
  73. ccount(c) = ccount(c) + one
  74. '------------------------------------------
  75. Next
  76. For i = zero To xend - one
  77. aarray(i) = scratch(i)
  78. Next
  79. '------------------------------------------
  80. Return
  81. End Sub
RESULTS: (Counting sort - Visual Basic)
1.) Total number of items sorted:.......................... 1,111,111
2.) Total Array (array) size to accommodate the sort:...... 1,111,111
3.) Total Count (array) size to accommodate the sort:...... 8,888,888
4.) Total Scratch (array) size to accommodate the sort:.... 1,111,111
5.) Total length of key:................................... 7 decimal digits.
6.) Total time for setup and sort:......................... 4.18699999999732 seconds.

Performance Charts

Quick sort efficiency

In addition to the best, average and worst case table there needs to be a graphic for each and every sort that displays the full range of performance between best and worst case. The third parameter of memory size can also be included in a three-dimensional chart. ...IMHO (Talk) 18:20, 4 June 2006 (UTC)

I think this would be useful, but it's not immediately clear how to go about it. CPU and time depend on a variety of factors including specific implementation, machine, and input. The purpose of big O notation is to characterize efficiency in a way that is independent of such concerns, and that's what we're using now. Deco 06:10, 5 June 2006 (UTC)
My point exactly! While use of big O notation may be okay for those who have had more than one college course in computer algorithms, it is a generic construct and fails to give the average reader a true sense of sort routine behavior under various conditions or enlighten him on its true operational characteristics, which a graph based upon actual values for n, k, t, v and m does, not to mention other relevant variables, especially if you are not going to provide sufficient definition of big O notation for the reader. In the alternative you could use the charts to help explain and define big O notation for the reader. ...IMHO (Talk) 09:34, 5 June 2006 (UTC)
Sure, that sounds like a good idea, as long as we qualify it with the specifics of the implementation (language, bells and whistles) and the hardware it ran on, for the sake of reproducible results. We can also cite papers that have done practical comparisons. Deco 10:10, 5 June 2006 (UTC)


Sort routine performance can be standardized simply by using the same machine, programming language, etc. to perform an analysis for each sort routine, assuring standardization in terms of comparing both the constraints inherent in one sort versus another and each sort routine's performance. The problem with using big O notation as the basis for comparison is that it only relates performance to one of several different numerical categories, i.e., linear, parabolic or logarithmic and exponential, without providing the actual equation coefficients derived from regression analysis of performance. To provide a more comprehensive understanding of performance, plots of both the sort routine and array construction (initialization) performance are used to provide the basis upon which to judge the relative performance of each sort on different machines. The performance of any sort routine can therefore be judged on the basis of its Class, its constraints and its performance, in addition to the nature of the machine on which it is to be run, thus supporting Big O notation while providing significantly more performance information.

Plotting both sort routine performance and array construction efficiency together allows array construction efficiency to be used as the basis for determining sort routine performance independent of system performance or execution on any particular machine. Since both array construction efficiency and sort routine performance are linear, only their corresponding slope values are needed to yield a performance measure. In the Rapid Sort Performance vs Array Construction Efficiency chart the slopes are .01839 and .04692 respectively. Using class notation the class would be L (linear). Thus the Array Construction Efficiency has a class L measure of 4.692n while the Rapid Sort Performance class L measure is 1.839n. In other words, while it takes 4.692 seconds to construct an array with n equal to 10,000,000, it only takes 1.839 seconds to do the sort itself. Since the class is L, or linear, the time it will take to sort 1,000,000 items is one tenth of 1.839, or .1839 seconds, compared to .4692 seconds to initialize the array, the sort taking 39.19 percent of the time necessary to create the array. If you are a developer looking for the best sort to use you will however need more than just an idea of speed per number of items, but such things as data type and non-numeric key length, etc. Thus a classification scheme must be adopted which can accommodate all of these various characteristics that may affect the decision as to which sort is best.


...IMHO (Talk) 13:33, 5 June 2006 (UTC)



I think if we're going to have performance charts, they should be of very raw and easy to understand metrics like "CPU time" or "Memory use in bytes". The big-O notation already serves to eliminate machine- and implementation-specific concerns, and these simple measurements would demonstrate practical performance on a typical modern machine. I don't think there's really a need for anything in the middle. Deco 20:37, 5 June 2006 (UTC)
If you need to write a sort routine for a handheld device versus writing one for a mainframe then performance can be a factor long before n tops out which is what Big O is really intended to show. With Class notation I can calculate time for any value of n or any value of n for whatever time limits I have and thus pick and choose the method that best suits the exact application, taking into account its environment (CPU speed, memory size, etc.). In other words the classification serves me in a practical rather than in an esoteric way. ...IMHO (Talk) 23:12, 5 June 2006 (UTC)
A chart depicting performance of a single implementation of a single machine, or maybe a couple different machines, is a useful addition for the purpose of illustration - but attempting to create a formula describing performance in terms of a variety of variables just verges too far into the territory of Original Research. That kind of thing needs to go into papers first. Deco 00:41, 6 June 2006 (UTC)
Numerical analysis and its applications are well known, well understood, well published and well used as a common tool throughout all of the applied sciences. How do you think performance of sorting routines can be represented by Big O notation in the first place? Hearing you say that use of a best fit equation Class which is required in order to formulate Big O notation is original research is embarrassing. Calling addition of the variable symbols and values of the equations used to formulate Big O notation Original Research makes me realize now that what you have done is to copy other people's work without having any idea what they have done or what you are doing. You truly have no business being an editor on this Wiki or anywhere else in the real world. ...IMHO (Talk) 01:30, 6 June 2006 (UTC)
Wikipedia:No personal attacks. I think perhaps a simple graph of O(n) versus O(log n) versus O(n²) from (0-10) and (100-110) may be useful to show how quickly the different orders grow, but isn't really necessary. As to having an alternate to big O notation, I think that it wouldn't be very practical, as it would be different for every implementation of every algorithm. And after 1000 elements, you can start to see a difference anyway. --DevastatorIIC 01:06, 8 June 2006 (UTC)
You didn't understand me correctly. But forget it. Do whatever you want. I'm tired of being nice. Deco 01:46, 8 June 2006 (UTC)

Maybe document leading constant factors?

Thinking about Pce3's ideas, I'm wondering if it might be interesting from a practical standpoint for some algorithms to construct a list of leading constant factors on various real-world implementations. Although the smaller terms have little impact on growth for large n, there's a big difference between n² and 10000n² for all n. For example, we might have something like this in the quicksort article:

Implementation     Platform            Leading constant (ms)
Turbo C qsort      Amiga 1000          204.3
Visual C++ qsort   Intel P3 800 MHz    40.3
glibc std::sort    Intel P4 2.8 GHz    5.62

This is relatively easy to measure - just sort a bunch of big lists, divide the times by the big-O, and average the results - and it provides a terse comparison of large-scale performance differences between implementations, and between asymptotically identical sorts on the same platform. It still varies according to input type, but I think it's reasonable to assume "random" input. This does verge into original research, but it's a lot better than the wishy-washy language we use now claiming that quicksort is "usually" faster than mergesort or heapsort. Deco 01:57, 8 June 2006 (UTC)
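
A rough Python sketch of that measurement procedure (illustrative only; it assumes random input and a routine whose true cost is proportional to n log n):

import math
import random
import time

def leading_constant(sort, sizes=(10**4, 10**5, 10**6), trials=3):
    """Estimate c in 'time = c * n * log2(n)' by timing sorts of random
    lists and dividing out the big-O term."""
    samples = []
    for n in sizes:
        for _ in range(trials):
            data = [random.random() for _ in range(n)]
            start = time.time()
            sort(data)
            samples.append((time.time() - start) / (n * math.log(n, 2)))
    return sum(samples) / len(samples)

# e.g. leading_constant(list.sort) estimates the constant for the built-in sort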

Here's a sample. I just ran some tests with std::sort on a 32-bit integer array on an Intel P4 2.8 GHz, compiled by "g++ -O3" on Gentoo Linux. After dividing by n log n, the numbers I got were (in nanoseconds): 10.1, 10.7, 10.4, 10.3. The average is about 10.4. So it takes roughly 10.4 n log n + o(n log n) nanoseconds of time. I did the same experiment on an Intel P4 2.4 GHz box and got 13.1 nanoseconds as the leading constant.
When I ran the same tests with C's qsort, due to overhead from the comparison operator the leading constant on these same two machines was 29.7 and 38.9, respectively. Deco 02:17, 8 June 2006 (UTC)
I don't think a long list of benchmark results are really suitable for Wikipedia. --P3d0 10:57, 8 June 2006 (UTC)
Yeah, you're right - even just giving a few representative examples, I think it's probably too still too sensitive to particular implementations/compilers/machines. The fact that I had to list about ten different attributes for the test above suggests that it's difficult to tackle this in a thorough and controlled way (especially with limited hardware access). I guess I'll stick to benchmarking on my own wiki. Deco 11:12, 8 June 2006 (UTC)

Intuitive explanation

I removed an "intuitive explanation" for the lower bound of O(n log n) for comparison sorts. It claimed that there are n keys with log n bits each, which is false. If n is the problem size (which it technically should be for asymptotic complexity), with average key length k, then there are actually n/k keys of k bits each. If n is the number of items (which people often seem to assume), then there is no reason to think the keys would be limited to log n bits. --P3d0 10:50, 9 June 2006 (UTC)

Revamp complexities

I think we need to revisit the complexity analysis in this article. It is conventional to measure complexity in terms of the size of the input, which in this case would be the number of keys multiplied by the average key length. Using other measures is allowable, but are often misleading and unrepresentative.

It may be relatively common to compute sorting complexity informally in terms of the number of keys, but I think for an encyclopedia we need to be more rigorous. I wonder if we can find a citation for each quoted complexity that contains a rigorous proof? --P3d0 11:02, 9 June 2006 (UTC)

I have studied both algorithms and complexity fairly extensively, and I sympathise with your viewpoint and the frustrations caused by inconsistent measurements of efficiency, but I can honestly say I've never seen any source deal with the bit-level complexity of sorting algorithms. To engage in such an endeavor would not only be original research, but would confuse the hell out of your average reader whose textbook says something rather different. Maybe a better strategy would be to be more explicit about our assumptions and what's being measured. For example, a common unit of measurement is "number of comparisons" or "number of exchanges", and a common assumption is "elements have constant size", although this is applied more to comparison sorts and less to indexing-based sorts. Deco 06:13, 13 June 2006 (UTC)
Ok, it's good to know what is normally done in textbooks. Given that this area seems to be fraught with peril, perhaps we should go the route of citing all the complexities we give in our tables. There must be a tome (Knuth?) that could cover most of the common algorithms; that would be a good place to start. Know one? --P3d0 13:02, 13 June 2006 (UTC)

[edit] Check sort and Rapid now under formal review

Since the Check sort and the Rapid sort have been put to use without much thought given to their legitimacy or publication, to date only a few relevant questions have been raised and answered. The process of formal review therefore represents not only an opportunity for their publication but an opportunity to explore every possible aspect of them. I therefore invite you to submit questions in addition to making a full assessment of your own so that any misunderstanding or doubt can be addressed and hopefully put to rest. Please visit the Academia Wikia Check sort article where both sorts are open to formal review. Thanks. ...IMHO (Talk) 06:09, 13 June 2006 (UTC)

[edit] Splitting out list section

Does anyone object if I split out the List of Sorting algorithms into a new article? I could then add a {{see also}} or {{Main}} to the top of the "Overview of popular sorting algorithms" section. Miss Madeline | Talk to Madeline 22:30, 28 July 2006 (UTC)

[edit] New sorting algorithm(s) -> where should they be sent?

General question: if someone has a new, rigorously tested sorting algorithm that is significantly faster both theoretically and in practice than std::sort, is there a good place to discuss and/or demonstrate it? An issue with sorting algorithms is that since their core is usually under 100 lines of code (and thus simple), they're really easy to rip off if you so much as describe how they work. When I describe the capabilities of the algorithm to another computer scientist, I get one of two reactions: "that's impossible", regardless of my willingness to demonstrate comparative performance or my proof (they refuse to look), or "what's a sorting algorithm?". Cuberoot31 22:23, 31 August 2006 (UTC)

Publish it in a peer-reviewed journal or conference. --P3d0 16:39, 27 September 2006 (UTC)
Don't worry about your idea being ripped off. Just publish it. Or better yet, find someone who knows something about sorting algorithms (like us), e-mail or talk to them, and figure out if you actually have a novel algorithm. It's so well-studied, there are only a couple really novel ones I've heard of in the last 5 years. Deco 21:00, 27 September 2006 (UTC)

I've already presented an earlier version of it at a conference, and no one made any comment besides "that's neat". My primary course of study wasn't C.S. (it was ECE), and I may not have picked the right venue. I'd be open to suggestions on where to republish it. I would be happy to demonstrate its performance, and am willing to discuss theoretical performance one-on-one. I've read Knuth's book on sorting, and I happen to know this algorithm is novel, though not as innovative as I thought before I read the book. It significantly outperforms std::sort on all the test cases I've thrown at it on multiple platforms (I welcome more test cases). It is partially distributional, so it does have some limitations, but surprisingly still works well on strings. Its worst-case performance is better than O(n log n) on many classes of real problems, which I demonstrated in a proof. It also has excellent performance on linear media, far superior to any other approach I've seen. Cuberoot31 22:35, 2 October 2006 (UTC)

Sounds cool. Is your publication online somewhere? --P3d0 18:01, 6 October 2006 (UTC)
You may want to try the CJTCS, and tell me how it goes while you're at it. I may be writing a couple of papers to publish too. Alternatively, wiki freaks could aim for a more casual but diminished audience at Academia Wikia, or get a more established (or at least more numerous) peer base at the Wikiversity. Again, I don't know how it'd work out there, but it could be worth a shot. Jafet 14:40, 7 October 2006 (UTC)

Yes, my publication is searchable online, but no, it is not readable in full online, and revealing it would remove my anonymity. I emailed the editor of CJTCS and received no response; it doesn't look like the big, reputable journal I'm really looking for. I'm looking to make a splash, which requires a peer-reviewed journal where people will actually read it. Cuberoot31 21:56, 12 October 2006 (UTC)

[edit] Graphical Representations: Remove?

This section refers to Microsoft QuickBasic example code and an arbitrary webpage, with no preceding explanation of what a "graphical representation" is. Should this be moved to External Links? - Mpnolan 22:22, 26 September 2006 (UTC)

[edit] Graphs and more...

I think a good way to make graphical representations of complexity is to base them on the operations performed: comparisons, swaps, long-distance (i.e. out-of-cache) operations, arithmetic operations (heapsort will show something here), and so on. It would be relatively easy to write a specialized class which implements static counters for such operations in most OOP languages, as sketched below. For an example see Musser, Introspective Sorting and Selection Algorithms [10].
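A minimal Python sketch of such a class (the class name is just for illustration; list.sort invokes the wrapper's comparison method, so the class-level counter tallies every comparison the sort makes):

import random

class Counted:
    """Wraps a value and tallies comparisons in a class-level
    ('static') counter shared by all instances."""
    comparisons = 0

    def __init__(self, value):
        self.value = value

    def __lt__(self, other):  # list.sort only needs less-than
        Counted.comparisons += 1
        return self.value < other.value

data = [Counted(random.randrange(10000)) for _ in range(1000)]
data.sort()
print(Counted.comparisons, "comparisons")

Swap or cache-miss counters would have to be hooked into the sort implementation itself, since a wrapper class can only observe comparisons.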

Also, I did some original research which demonstrates that Combsort, like Quicksort, has its own worst-case fiascos, which appear rather unpredictably and explosively. Try running the canonical 1.3-gap implementation with a Quicksort median-of-3 killer sequence 100,000 elements long. I suggest that, in the absence of concrete proof, we should mark Combsort's worst-case complexity as O(n²) (it degrades to bubblesort in the worst case, so it's definitely not more than quadratic). Jafet 6 October 2006 (UTC)
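For reference, a minimal sketch of the canonical 1.3-gap combsort under discussion:

def combsort(array):
    """Comb sort with the canonical shrink factor of 1.3.  Once the
    gap reaches 1, the remaining passes are exactly bubble sort."""
    gap = len(array)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))
        swapped = False
        for i in range(len(array) - gap):
            if array[i] > array[i + gap]:
                array[i], array[i + gap] = array[i + gap], array[i]
                swapped = True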

Unfortunately, since different sorts use different operations (comparisons, exchanges, random accesses, etc.) and these depend heavily on the particular implementation, I think direct comparisons of the number of operations are inappropriate on a fundamental level. Also, your combsort implementation is probably incorrect; although combsort uses bubble sort in its final stage, shell sort likewise uses insertion sort in its final stage, and that does not make it O(n²) - by that time the input has been altered to be closer to the best-case input for the final sort. Nevertheless, I have not yet found any formal worst-case time for comb sort - I'm sure the original paper would contain such a bound if I could locate it. Deco 13:44, 6 October 2006 (UTC)
I have tested my combsort implementation thoroughly, as well as tested the same "killer inputs" on various combsort implementations written by different people (e.g. [11]). In any case, is there any concrete reference (paper, journal...) stating that combsort is O(n log n)? I can't find any online. Jafet 05:12, 7 October 2006 (UTC)

[edit] Spreadsort

With assistance from its author in obtaining published reference materials, I've written a short article on the algorithm spreadsort. I'd appreciate any help in reviewing this article and adding it properly to the list of sorting algorithms. Thanks. Deco 21:02, 5 November 2006 (UTC)

If anyone would like graphs and/or to submit test cases for runtime comparisons, I'd be happy to run them. My current test system is an OS X G4 (yes, it's old). I've run it on Linux before, and could do so again. I am also willing to answer some questions about the algorithm. As with Deco, I would welcome integrating this into the sorting algorithm discussion. It may be appropriate to mention Markku Tamminen's work (Two Is As Good As Any) also, as it is in the same class of algorithms. In the interest of maintaining NPOV, I am not making such changes myself. Cuberoot31 21:07, 6 November 2006 (UTC)

If you'd like to make any new contributions to the article but you're worried about self-promotion, please just drop them on Talk:Spreadsort and I'll have a look. I'd rather not add too much performance data that isn't in the paper, as private runs are difficult to verify (Wikipedia:Verifiability). The relation to Tamminen's work is certainly of interest. Deco 22:18, 6 November 2006 (UTC)

I think the right place to put this would be in distributional algorithms, with Θ(n) average performance and O(n log n) worst case. I've looked at my old work on the worst-case performance proof, and the actual worst case for the combined algorithm is a little more complicated: O(n(k − log n)^0.5) (if log n > k it's trivially O(n)), up until k = log(n)², at which point it immediately becomes O(n log n). So for 2^k = n² it's (log n)^0.5, which for n = 2^25 (and k = 50, about 6 bytes long) would be 5 (times the square root of 2, plus the initial operation, so 8 actual iterations). In the worst case, it breaks out to do comparison-based sorting after only one iteration, but with n in each bin divided by just enough for the heuristic to determine that comparison-based sorting has a better worst case. So that's 8 operations vs. 32 operations; a noticeable improvement. I'll note that since this better-than-O(n log n) performance degrades slowly up until 2k = log(n)², if log(n) is say 32 (about 4 billion elements), it will have slightly fewer operations in the worst case up to k = 512 bits, or 64 bytes, which is a pretty big key. For 32- and 64-bit integer keys, Spreadsort has a significant speed improvement on real data where n is over 1 million. The worst case I calculated in the paper assumes that if n is cut into significantly smaller pieces, it falls back on O(n log n) for those smaller pieces, and didn't account for a problem that cuts by just the right amount to sit on the boundary of the heuristic. Cuberoot31 22:58, 7 November 2006 (UTC)

Okay, I've added it to the page under "not comparison sorts". What I put in for average case is debatable, but it's the normal performance of the recursive algorithm on a hard problem. On evenly distributed random data, and on anything bucketsort can handle, it's O(n). The worst case is a distribution playing with the heuristic, which I've calculated and mentioned above. Cuberoot31 19:51, 8 November 2006 (UTC)

[edit] Radix-based sorting

I've updated the performance summary to be accurate for both LSD and MSD radix sorting. For MSD I picked an "in-place" sorting approach (which is not stable), to differentiate it from LSD radix sorting. I have also added Spreadsort performance info. Spreadsort actually has two worst-case performances (the one mentioned, and O(n log n)), and its worst case is the better of the two, but I put in the version that applies for large n and reasonable k, as I figure that is more relevant to an asymptotic discussion. It's worth mentioning that in certain modes of operation, Spreadsort is basically an MSD radix sort where s = log(n)/c for a constant c, though it also does comparison-based sorting to obtain better worst-case performance and better performance on random data. Also, I set the best case of MSD radix sort as O(n), though it's actually the better of O(n(k/s)) and O(n log(n)/s). Is there a better way to put this? Cuberoot31 05:48, 10 November 2006 (UTC) - Changed to O(n(k/s)), as the other option is incredibly unlikely (it requires bins with exactly one item each). Cuberoot31 16:25, 10 November 2006 (UTC)
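To illustrate where the O(n(k/s)) figure comes from, here is a minimal LSD radix sort sketch in Python (it assumes non-negative integer keys of key_bits bits; the parameter names are just for illustration):

def lsd_radix_sort(array, key_bits=32, s=8):
    """LSD radix sort: one stable bucketing pass per s-bit digit,
    so ceil(key_bits / s) passes of O(n) work each, i.e. O(n * k/s)."""
    mask = (1 << s) - 1
    for shift in range(0, key_bits, s):
        buckets = [[] for _ in range(1 << s)]
        for x in array:
            buckets[(x >> shift) & mask].append(x)
        array[:] = [x for bucket in buckets for x in bucket]

An in-place MSD variant instead partitions the array by the top s bits and recurses into each bucket, which avoids the auxiliary lists but gives up stability.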

[edit] Shear sort

See [12]; it seems to be an extremely fast algorithm, but I haven't been able to find any sort of analysis for it besides implementations. Anyone know anything about this algorithm? — Edward Z. Yang(Talk) 00:18, 17 November 2006 (UTC)

Check out Batcher's method in Knuth's book [1]. It's faster: O(log(n)(log(n)+1)) stages. Parallel algorithms are a significantly different problem from the sequential sorting that people normally discuss, though that distinction may start to become somewhat outdated as multi-core processors appear. Cuberoot31 08:07, 17 November 2006 (UTC)
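For the curious, a compact sequential simulation of Batcher's odd-even mergesort network in Python (a sketch assuming the array length is a power of two; on parallel hardware, all comparators at the same depth run simultaneously, giving the O(log²n) stage count mentioned above):

def compare_swap(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def oddeven_merge(a, lo, n, r):
    # Merge the two sorted halves of a[lo:lo+n] using comparators of stride r.
    step = r * 2
    if step < n:
        oddeven_merge(a, lo, n, step)
        oddeven_merge(a, lo + r, n, step)
        for i in range(lo + r, lo + n - r, step):
            compare_swap(a, i, i + r)
    else:
        compare_swap(a, lo, lo + r)

def oddeven_merge_sort(a, lo=0, n=None):
    # Sort each half recursively, then merge them with Batcher's network.
    if n is None:
        n = len(a)
    if n > 1:
        m = n // 2
        oddeven_merge_sort(a, lo, m)
        oddeven_merge_sort(a, lo + m, m)
        oddeven_merge(a, lo, n, 1)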