Quick Sort In Haskell

[From HaskellLanguage]

Here's a snippet of Haskell. It's an (inefficient) implementation of quicksort. It, and its explanation, were taken from http://www.haskell.org/aboutHaskell.html (except that we changed two of the variable names).

 > qsort [] = []
 > qsort (x:xs) = qsort small ++ mid ++ qsort large
 >   where
 >     small = [y | y<-xs, y<x]
 >     mid   = [y | y<-xs, y==x] ++ [x]
 >     large = [y | y<-xs, y>x]
The first line reads: "The result of sorting an empty list (written []) is an empty list". The second line reads: "To sort a list whose first element is x and the rest of which is called xs, just sort all the elements of xs which are less than x (call them "small"), sort all the elements of xs which are greater than x (call them "large"), and concatenate (++) the results, with x sandwiched in the middle."

The definition of "small", which is given immediately below, is read like this: "'small' is the list of all y's such that y is drawn from the list xs, and y is less than x". The definition of 'large' is similar. The syntax is deliberately reminiscent of standard mathematical set notation, pronouncing "|" as "such that" and "<-" as "drawn from".

Methinks that is too complicated a function. I speak the magic word -- Amend!
 quicksort [] = []
 quicksort (x:xs) = quicksort small ++ (x : quicksort large)
   where small = [y | y <- xs, y <= x]
         large = [y | y <- xs, y > x]


[(This question was inserted by MartinValjavec?; please delete after things have been clarified/corrected.) The above code does not, of course, sandwich x in the middle, because there could be more values equal to x. But wouldn't it be necessary to write something like [y | y<-(x:xs), y=x] in order to avoid skipping the first occurrence of x? Since I know nothing about Haskell I am unsure whether the sample would work anyway, but I suspect it needs correction (or, if it actually works, some clarification as to why, since it seems rather odd). Thank you very much.]

I've now added the original x to the list mid, which solves your objection. The next person's offering is also valid, though not necessarily better. I left the "y=x" bug.

You're right. The above code will not include the first element, though it will include any other elements that are equal to the first element. Here is some corrected code. A second bug is that y = x should really be y == x.

 > qsort [] = []
 > qsort l@(x:xs) = qsort small ++ mid ++ qsort large
 >   where
 >     small = [y | y<-xs, y<x]
 >     mid   = [y | y<-l, y==x]
 >     large = [y | y<-xs, y>x]
The web page referenced above makes much of the fact that this function definition is much smaller than a corresponding one in C. This seems a little bogus to me. For instance, a substantial part of the brevity comes from the decision to use the first element of the list as the "pivot". In C, changing this (which is necessary for a real implementation; pivoting on the first element can lead to catastrophic performance loss when e.g. sorting an already-sorted list) to a more sensible scheme requires very little extra code; I'm not sure this is so true of the Haskell version. Would a production-quality quicksort in Haskell be much shorter or easier to understand than a production-quality quicksort in C? (I really don't know; would someone more expert than me like to comment?)

Not sure if I'm more expert, but in the case that the list is in order, or reversed, quicksort has O(n^2) performance. One solution is to pair the list up with a list of random numbers, sort the list of pairs on the random elements (which shuffles the list), then sort the list again on the items from the original list, and finally select out the original list elements. This only requires a few more lines of code. On small lists it could be several times slower, because the list is effectively scanned three times, but sorting small lists hardly matters. On large lists the gain from avoiding sorted lists or sorted sub-lists is considerable. This has been checked out in Haskell, and works fine. [DavidMartland]
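Here is a sketch of what that scheme might look like (the function names and the explicit seed are my own, not from any library; a real version would probably pull a generator out of IO instead). It tags each element with a pseudo-random Int, quicksorts the pairs on the tags, which shuffles the list, drops the tags, and then quicksorts the values:

 import System.Random (mkStdGen, randoms)

 -- quicksort on an arbitrary projection, same naive pivot-on-head scheme as above
 qsortBy :: Ord k => (a -> k) -> [a] -> [a]
 qsortBy _ []     = []
 qsortBy f (p:ps) = qsortBy f small ++ [p] ++ qsortBy f large
   where
     small = [q | q <- ps, f q <= f p]
     large = [q | q <- ps, f q >  f p]

 randomisedSort :: Ord a => Int -> [a] -> [a]
 randomisedSort seed xs = qsortBy id (map snd (qsortBy fst (zip keys xs)))
   where
     keys = randoms (mkStdGen seed) :: [Int]

(I drop the tags before the second sort rather than after, which comes to the same thing.)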

A production-quality MergeSort is a lot shorter in Haskell than in C.
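For comparison, a reasonable bottom-up merge sort fits in about a dozen lines of Haskell (a sketch only; GHC's Data.List.sort is a more refined merge sort along these lines):

 msort :: Ord a => [a] -> [a]
 msort = mergeAll . map (:[])
   where
     -- repeatedly merge adjacent runs until one list remains
     mergeAll []   = []
     mergeAll [xs] = xs
     mergeAll xss  = mergeAll (mergePairs xss)

     mergePairs (xs:ys:rest) = merge xs ys : mergePairs rest
     mergePairs xss          = xss

     merge [] ys = ys
     merge xs [] = xs
     merge (x:xs) (y:ys)
       | x <= y    = x : merge xs (y:ys)
       | otherwise = y : merge (x:xs) ys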


The question is difficult to answer: an important point is that in C you would use a mutable array, while the Haskell solution uses a linked list. You can use a mutable array in Haskell (using a state monad), but the resulting code would look very much like its C equivalent. If you really want to sort a list, you would probably use another algorithm, both in C and in Haskell. Pivoting on a middle element of a linked list requires O(n) operations each time just to find the middle, independent of the programming language. Therefore, I would prefer merge sort for sorting a linked list, unless there are reasons why pivoting on the first element is acceptable (perhaps the list is generated by a pseudo-random number generator?).
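For the curious, here is roughly what that mutable-array version looks like (a sketch only: an unboxed STUArray of Int, the simple Lomuto partition, pivoting on the last element, which a production version would not do). As noted above, it is essentially the C program transcribed into do-notation:

 import Control.Monad (when)
 import Control.Monad.ST (ST, runST)
 import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

 qsortArray :: [Int] -> [Int]
 qsortArray xs = runST $ do
     let n = length xs
     arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
     let swap i j = do
           a <- readArray arr i
           b <- readArray arr j
           writeArray arr i b
           writeArray arr j a
         sort lo hi = when (lo < hi) $ do
           pivot <- readArray arr hi          -- Lomuto partition on the last element
           let loop i j
                 | j >= hi   = return i
                 | otherwise = do
                     aj <- readArray arr j
                     if aj <= pivot
                       then swap i j >> loop (i + 1) (j + 1)
                       else loop i (j + 1)
           i <- loop lo lo
           swap i hi
           sort lo (i - 1)
           sort (i + 1) hi
     sort 0 (n - 1)
     getElems arr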

I do agree that the comparison is bogus. What did you expect from a language comparison based on two toy programs?

-- StephanHouben

Basically, one could do quicksort on non-mutable arrays in much the same way as for lists, but I don't remember whether Haskell has reasonable facilities for this. However, the result would be a lot shorter than the version with mutable arrays. -- PanuKalliokoski


One non-obvious point about this function that many people find stunning: Beginning programmers who want to find the minimum element of a list often try sorting the list and extracting the first element of the sorted list. In most languages, this is a bad move, because sorting the list is O(n log n), but scanning the list looking for the minimum would have been O(n). In Haskell, however, you can write

 min = head (qsort list)
and it will be O(n). Because Haskell's evaluation is lazy, it executes only as much of the "qsort" function as is necessary to compute the head of the sorted list! The resulting algorithm is effectively HoaresAlgorithm? for locating the minimum element of a list.

-- MarkJasonDominus

Assuming, that is, that you've replaced the qsort above with one that does a better job of pivoting. :-) But yes, this is very nice.

Yes, although it's interesting to notice that not all of the "qsort" worst-cases are also "min(qsort)" worst cases. For example, if the input is already in the correct order, "min = head (qsort list)" scans the list exactly once, extracts the minimum, and returns it. It's the best case, not the worst. -- MJD


Sorry for asking stupid questions: What's the abovementioned HoaresAlgorithm? for finding the minimum? Is it the one for finding the k-th smallest where we simply set k=1? This would on average perform linearly (quadratic in worst case) but, for searching the minimum, I'd just iterate once and remember the smallest value, which always performs in linear time.
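For reference, HoaresAlgorithm? (quickselect, or Hoare's FIND) in the same naive list style might look like this (a sketch; the index k counts from zero here, so the minimum is select 0, and it is linear on average but quadratic in the worst case, just as you say):

 select :: Ord a => Int -> [a] -> a
 select _ []     = error "select: index out of range"
 select k (x:xs)
   | k < ls       = select k small
   | k <= ls + le = x
   | otherwise    = select (k - ls - le - 1) large
   where
     small = [y | y <- xs, y < x]
     le    = length [y | y <- xs, y == x]
     large = [y | y <- xs, y > x]
     ls    = length small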

This is also very easy to code in imperative languages (where I can save the smallest value so far in a variable) but how is this done in Haskell (or Lisp or Scheme...)? Would I need LexicalScoping to 'hide' the state variable holding the smallest-so-far or is it equally simple as in e.g. C without that? Functional style is still a brain twister to me and I usually have a hard time guessing what an adequate functional solution to a common programming task might look like...

It is even easier to code a minimum in a functional language using folds, e.g.

 > -- foldl1 walks the list once, carrying the smallest value seen so far
 > -- (these definitions reuse the names minimum and min from the Prelude)
 > minimum = foldl1 min
 >   where min x y = if x < y then x else y

I also wonder how we can conclude that, by 'just performing lazy evaluation', Haskell will be smart enough to optimize the sorting calls and reduce this to a mere find algorithm. Finding the head of a concatenated list is equivalent to finding the head of its first non-empty constituent. But where's the knowledge that we can break off list construction early? Sounds like witchcraft to me.

Did you actually try this? Will it work for all Haskell systems, guaranteed by the language specification, or only for the smarter implementations?

-- MartinValjavec?

It is guaranteed to work. All intermediate computations will be deferred until needed, and most aren't needed.
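If you want to see it happen rather than take it on faith, here is a small experiment (my own, using Debug.Trace from base) that announces each pivot as it is demanded. Asking only for the head forces the pivots down the "small" spine and never touches the large halves:

 import Debug.Trace (trace)

 qsortT :: (Show a, Ord a) => [a] -> [a]
 qsortT []     = []
 qsortT (x:xs) = trace ("pivot " ++ show x) $
                 qsortT small ++ [x] ++ qsortT large
   where
     small = [y | y <- xs, y <= x]
     large = [y | y <- xs, y >  x]

 main :: IO ()
 main = print (head (qsortT [3, 1, 4, 1, 5, 9, 2, 6]))

Running this prints "pivot 3", "pivot 1", "pivot 1" and then the answer 1; none of the large partitions are ever sorted.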


And here's another one about performance (yes, I know this is an inefficient example implementation, just to demonstrate the language, but I don't want to 'improve' the example as such... just throw in a few questions that might be interesting to play with):

1) Speed: How elegantly could we change the definition to extract small, mid and large in one pass rather than in three? Could we still have three separate definitions to define the selection criteria of small, mid and large, or would we have to cram it all into a single expression? In other words: how elegant could a 'multiplex selection' be in functional style?

2) Memory usage: In C, I would replace recursion with iteration and process the shorter partition first. This reduces memory usage from O(N) to O(log N). This works only because I iterate rather than recurse when partitioning the shorter part; recursion would use stack space! Still, for the longer partitions I would have to simulate recursion by pushing start and end positions onto an explicit stack (which uses less memory than a typical stack frame of a recursive call but still grows linearly with recursion depth, just like true recursion does).

Since tail recursion could be optimized into mere iteration, we could expect Haskell (or whatever functional language we might use: most of them would probably support this) to eliminate recursion on the second of the two qsort calls.

In a single-threaded solution, this would be the longer part, since we want the shorter one to be processed first - which is different from the C solution, but I guess memory usage would still be kept down to O(log N). Of course, I would have to compute small and large first in order to compare their lengths, and write either

 > qsort small ++ mid ++ qsort large
or
 > qsort large ++ mid ++ qsort small
But: Would Haskell always perform the first call first? I think it is not forced to do that.

In a multithreaded environment, it might even perform these calls in parallel, and automated parallelization is good to have if the computing tasks can be distributed over multiple CPUs. Of course it's only natural that multiple threads would use multiple stacks but, since there would be a practical limit on the number of concurrent threads, memory usage would not grow 'beyond some point'.

The real problem with quicksort is: I do not want the partitioning process of the longer part immediately to be split into several threads. These would run for a long time using deep recursion and a lot of stack space. Rather I would like the shorter part - the one that, in a single threaded solution, should be processed first - to make use of multiple threads... and, of course, the remainders of the longer part, as soon as they are short enough.

In fact, I would not even want the shorter part to be multithreaded unless it is really short enough! The notion of short enough is, of course, problem dependent; quicksort is one example but a common treatment of the general question 'Where should I parallelize?' is a bit tougher. This might be an interesting research topic. I guess that a syntax which lets me influence the threading decisions (like in imperative languages, where I always have to do that explicitly) would often allow for effective performance tuning but, on the other hand, it would usually be less portable (depending on how detailed the effects can be specified). And most likely it would be less elegant and readable.
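For what it's worth, GHC's par and pseq primitives (from the parallel package, plus force from deepseq) let you write this kind of threshold down by hand. A sketch, with a depth cutoff standing in for "short enough" and the naive sort as the sequential fallback (all names here are mine):

 import Control.DeepSeq (NFData, force)
 import Control.Parallel (par, pseq)

 seqSort :: Ord a => [a] -> [a]
 seqSort []     = []
 seqSort (x:xs) =
     seqSort [y | y <- xs, y <= x] ++ [x] ++ seqSort [y | y <- xs, y > x]

 parSort :: (NFData a, Ord a) => Int -> [a] -> [a]
 parSort _ [] = []
 parSort depth l@(x:xs)
   | depth <= 0 = seqSort l
   | otherwise  = force right `par` (force left `pseq` (left ++ [x] ++ right))
   where
     left  = parSort (depth - 1) [y | y <- xs, y <= x]
     right = parSort (depth - 1) [y | y <- xs, y >  x]

Each level sparks the right half for evaluation on another capability and works on the left half itself; below the cutoff everything stays sequential. Whether the runtime actually picks up the sparks is its own decision, which is roughly the portability worry raised above.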

-- MartinValjavec?

1) Speed. A compiler may very well transform the program so that small, mid and large are computed together in a single pass over the list. It isn't even hard:

- desugar the program; the list comprehension becomes a call to filter
- inline filter; calls to foldr result
- combine the three folds over the same list into one by tupling the three results (GHC may be missing the rule to actually do so)
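In source terms, the fused result might look like this (partition3 is my name for it, not a library function): a single foldr over xs that builds the three lists together as a triple, which the sort can then pattern-match instead of running three comprehensions.

 partition3 :: Ord a => a -> [a] -> ([a], [a], [a])
 partition3 x = foldr step ([], [], [])
   where
     step y (small, mid, large)
       | y < x     = (y : small, mid, large)
       | y == x    = (small, y : mid, large)
       | otherwise = (small, mid, y : large)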

2) Memory consumption. Haskell's recursion is not implemented using a call stack. Tail call optimization isn't as important as in eager languages. So all these thoughts are simply not applicable.


    -- accumulate the three buckets (greater, equal, less) in a single pass
    part l y = f [] [] [] l
      where
        f gt eq lt (x:xs)
          | x > y     = f (x:gt) eq lt xs
          | x == y    = f gt (x:eq) lt xs
          | otherwise = f gt eq (x:lt) xs
        f gt eq lt [] = (gt, eq, lt)

Three-way partition for the person who asked whether we could split the list in one pass. The answer is yes.
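And plugged into the sort (a sketch; part builds its buckets back to front, but that is harmless once each bucket is sorted, and the equal bucket needs no sorting at all):

 qsort3 :: Ord a => [a] -> [a]
 qsort3 []     = []
 qsort3 (x:xs) = qsort3 lt ++ (x : eq) ++ qsort3 gt
   where (gt, eq, lt) = part xs x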


CategoryProgrammingLanguage CategoryFunctionalProgramming CategoryHaskell

