Sorting Algorithms

There is a lot to consider with sorting algorithms. Most people never need to know anything about them; others need to know a great deal. They are an excellent area in which to explore questions of complexity and simplicity: there are many of them, and some are subtle, some devious, and many clever.

A comparison follows. Here "stable" means the algorithm can be used to implement a StableSort, i.e. elements that compare equal keep their relative order.

                  Worst case      Average case    Best case      Extra space        Stable?
 BubbleSort       O(n^2)          O(n^2)?         O(n)           O(1)               yes
 SelectionSort    O(n^2)          O(n^2)          O(n^2)         O(1)               no (can depend on sort order and data arrangement)
 InsertionSort    O(n^2)          O(n^2)          O(n)           O(1)               yes

 BitonicSort      O(n log^2 n)    O(n log^2 n)?   ?              O(1)?              ?

 ShellSort        O(n^2)          O(n log n)?     O(n)           O(1)               no
 QuickSort        O(n^2)          O(n log n)      O(n log n)     O(log n)           no
 HeapSort         O(n log n)      O(n log n)      O(n log n)     O(1)               no
 SmoothSort       O(n log n)      O(n log n)?     O(n)           O(1)               no

 MergeSort        O(n log n)      O(n log n)      O(n log n)     O(n)               yes
 TimSort          O(n log n)      O(n log n)?     O(n)           O(n)               yes

 CountingSort     O(n+k)          O(n+k)          O(n+k)         O(n+k)             yes
 RadixSort        O(n+k)          O(n+k)          O(n+k)         O(n+k)             yes
 BucketSort       O(n^2)          O(n+k)          ?              O(n*k) or O(n+k)   ?

 BogoSort         unbounded       O(n!)           O(n)           O(1)               no
 SlowSort         O(n^(log n))    O(n^(log n))    O(n^(log n))   O(1)               yes
 QuantumBogoSort  O(1)            O(1)            O(1)           O(0)               no
In the above table, k is the range of the key values (for CountingSort and RadixSort) or the number of buckets (for BucketSort).
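To make the table concrete, here is a minimal InsertionSort sketch in CeePlusPlus (the function name and the use of std::vector are illustrative choices, not from the original page). It is in-place (O(1) extra space), stable, and runs in O(n) on already-sorted input:

 #include <vector>
 #include <cstddef>

 // InsertionSort: stable, in-place; O(n^2) worst case, O(n) on sorted input.
 void insertionSort(std::vector<int>& a) {
     for (std::size_t i = 1; i < a.size(); ++i) {
         int key = a[i];
         std::size_t j = i;
         // Shift strictly larger elements right. Using '>' rather than '>='
         // keeps equal elements in their original order - the stability property.
         while (j > 0 && a[j - 1] > key) {
             a[j] = a[j - 1];
             --j;
         }
         a[j] = key;
     }
 }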

Why are there so many of them? And why isn't the "best" one used everywhere? We need to consider both time-complexity and space-complexity, for small inputs as well as asymptotically - this comparison shows only asymptotic behaviour and neglects constant factors. The "real" performance of an algorithm depends on the system you are coding for, the amount of data, the characteristics of that data, and so on. We also need to consider ease of coding, compactness of code, predictability of running time, and the intermediate state of the data.

In short, each of the algorithms has its limitations. In-place HeapSort, for example, completely scrambles the order of your data if you interrupt it before it finishes, whereas the three simpler algorithms iteratively improve the sortedness. If we need stability, none of the unstable algorithms can be used. Believe it or not, there are situations where BubbleSort is not a bad choice.

Most of the time the truly "optimal" method is a combination of two or more of the above.
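For instance, a common combination is QuickSort that hands small subarrays to InsertionSort (a sketch; the cutoff of 16 is an arbitrary illustration - real libraries tune it empirically, and IntroSort additionally falls back to HeapSort to cap the worst case):

 #include <vector>
 #include <utility>
 #include <cstddef>

 const std::size_t CUTOFF = 16;  // illustrative; tuned empirically in practice

 // Sorts the half-open range [lo, hi) of a.
 void hybridSort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
     if (hi - lo <= CUTOFF) {
         // Small range: InsertionSort's low constant factors win.
         for (std::size_t i = lo + 1; i < hi; ++i) {
             int key = a[i];
             std::size_t j = i;
             while (j > lo && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
             a[j] = key;
         }
         return;
     }
     // Lomuto partition around the last element (a naive pivot choice).
     int pivot = a[hi - 1];
     std::size_t mid = lo;
     for (std::size_t i = lo; i + 1 < hi; ++i)
         if (a[i] < pivot) std::swap(a[i], a[mid++]);
     std::swap(a[mid], a[hi - 1]);
     hybridSort(a, lo, mid);
     hybridSort(a, mid + 1, hi);
 }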


Since a list of n elements has n! possible orderings, and since each comparison can eliminate at most half of the remaining candidate orderings, in the worst case a comparison-based sorting algorithm must make at least log(n!) comparisons, where the logarithm is taken base 2. Because log(n!) >= (n/2) log(n/2) (or by Stirling's approximation), this gives a lower bound of Ω(n log n) comparisons for a serial algorithm in the worst case. Checking that an already-sorted sequence is indeed sorted takes n-1 comparisons, so Ω(n) is the corresponding lower bound for the best case.

RadixSort runs in O(n) (for keys of fixed size), but it only works in a few specialized applications. It avoids the Ω(n log n) worst-case lower bound above because it is not based on comparing elements with each other.
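For example, here is a minimal LSD RadixSort sketch for 32-bit unsigned keys, processed a byte at a time (so k = 256 per pass, four passes in total). Each pass is a stable CountingSort on one byte, and no element is ever compared with another:

 #include <vector>
 #include <cstdint>
 #include <cstddef>

 void radixSort(std::vector<std::uint32_t>& a) {
     std::vector<std::uint32_t> buf(a.size());
     for (int shift = 0; shift < 32; shift += 8) {
         std::size_t count[257] = {0};
         for (std::uint32_t x : a)                 // histogram of this byte
             ++count[((x >> shift) & 0xFF) + 1];
         for (int b = 0; b < 256; ++b)             // prefix sums -> start offsets
             count[b + 1] += count[b];
         for (std::uint32_t x : a)                 // stable scatter into buf
             buf[count[(x >> shift) & 0xFF]++] = x;
         a.swap(buf);  // four passes in total, so the result ends up back in a
     }
 }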

Another consideration is the choice between a StableSort and an UnstableSort. If we find an algorithm that suits our data perfectly but isn't stable, we may need to fall back on a less optimal algorithm (if, of course, stability matters to us for some reason).
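In CeePlusPlus, for instance, the standard library exposes exactly this trade-off: std::sort is typically faster but guarantees nothing about equal elements, while std::stable_sort guarantees stability at a potential cost in time or space:

 #include <algorithm>
 #include <vector>
 #include <utility>

 int main() {
     // Records already ordered by a secondary field; now sort by the int key.
     std::vector<std::pair<int, char>> v = {{2,'a'}, {1,'b'}, {2,'c'}, {1,'d'}};
     auto byKey = [](const auto& x, const auto& y) { return x.first < y.first; };

     std::stable_sort(v.begin(), v.end(), byKey);
     // Guaranteed result: {1,'b'}, {1,'d'}, {2,'a'}, {2,'c'} - equal keys
     // keep their relative order. With std::sort that order is unspecified.
 }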


See also: SortingExactlyThreeItems

More interesting would be to use CeePlusPlus TemplateMetaprogramming to generate this type of optimal sort for an arbitrary number of items at compile-time.
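Modern CeePlusPlus makes this much less painful than classic TemplateMetaprogramming: a constexpr function can sort at compile time. A sketch, assuming C++17 or later - it is a plain InsertionSort evaluated by the compiler, not a true optimal sorting network:

 #include <array>
 #include <cstddef>

 template <typename T, std::size_t N>
 constexpr std::array<T, N> ctSort(std::array<T, N> a) {
     for (std::size_t i = 1; i < N; ++i) {
         T key = a[i];
         std::size_t j = i;
         while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
         a[j] = key;
     }
     return a;
 }

 // The sort happens entirely at compile time; no run-time cost at all.
 constexpr auto sorted = ctSort(std::array<int, 4>{3, 1, 4, 1});
 static_assert(sorted[0] == 1 && sorted[3] == 4, "sorted at compile time");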


The above mostly focuses on sorting when a TotalOrder over the set of elements is readily computed. However, I often run into cases where no such TotalOrder is available (or at least is not readily computed). Instead, each element carries constraints about which items it must appear before and which it must appear after in the sorted order. Dependency descriptions and schedules are especially common examples. If there are any total-ordering rules, they are secondary to these constraints (e.g. "if nothing else applies, sort in alphabetical order" - usually to guarantee a deterministic sort order).

Such cases generally call for a TopologicalSort? or a variation upon it (http://en.wikipedia.org/wiki/Topological_sort).
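A minimal sketch of Kahn's algorithm for this (the adjacency-list representation and names are illustrative choices; swapping the plain queue for a priority queue would implement the secondary alphabetical tie-break mentioned above):

 #include <vector>
 #include <queue>

 // Returns a topological order of nodes 0..n-1, where adj[u] lists the
 // nodes that must appear after u; returns empty if constraints form a cycle.
 std::vector<int> topoSort(int n, const std::vector<std::vector<int>>& adj) {
     std::vector<int> indegree(n, 0), order;
     for (int u = 0; u < n; ++u)
         for (int v : adj[u]) ++indegree[v];

     std::queue<int> ready;            // a priority_queue here would give a
     for (int u = 0; u < n; ++u)       // deterministic tie-break order
         if (indegree[u] == 0) ready.push(u);

     while (!ready.empty()) {
         int u = ready.front(); ready.pop();
         order.push_back(u);
         for (int v : adj[u])
             if (--indegree[v] == 0) ready.push(v);
     }
     return (int)order.size() == n ? order : std::vector<int>{};
 }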


Random thought:

If one wanted to utilise the cache properly, one would use a sort which minimises comparisons, because every comparison requires loading two objects, while a moderate-sized working array of pointers could be kept in L1 or L2 cache without trouble. Am I right in thinking this? Or dead wrong, or what?




CategoryAlgorithm

