It is a comparison-based sort, since elements a and b are swapped only when their relative order can be determined from the transitive closure of prior comparison outcomes.
As part of the translation process, he needed to sort the words in Russian sentences before looking them up in a Russian-English dictionary, which was stored in alphabetical order on magnetic tape.
Quicksort gained widespread adoption, appearing, for example, in Unix as the default library sort subroutine.
Robert Sedgewick's 1975 PhD thesis is considered a milestone in the study of Quicksort: he resolved many open problems related to the analysis of various pivot selection schemes, including Samplesort and adaptive partitioning by Van Emden,[8] and derived the expected number of comparisons and swaps.[7]
In 1993, Jon Bentley and Doug McIlroy incorporated various improvements for use in programming libraries, including a technique to deal with equal elements and a pivot scheme known as the pseudomedian of nine, in which a sample of nine elements is divided into groups of three and the median of the three group medians is chosen as the pivot.[7]
Bentley described another, simpler and more compact partitioning scheme in his book Programming Pearls, which he attributed to Nico Lomuto.
The steps for in-place quicksort are:

1. If the range has fewer than two elements, return immediately, as there is nothing to do.
2. Otherwise pick a value, called a pivot, that occurs in the range.
3. Partition the range: reorder its elements, while determining a point of division, so that all elements with values less than the pivot come before the division, while all elements with values greater than the pivot come after it; elements equal to the pivot can go either way.
4. Recursively apply quicksort to the sub-range up to the point of division and to the sub-range after it.

The choice of partition routine (including the pivot selection) and other details not entirely specified above can affect the algorithm's performance, possibly to a great extent for specific input arrays.
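A minimal C sketch of these steps, using the Lomuto partition scheme with the last element as the pivot (the function names are illustrative, not taken from any particular library):

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: uses A[hi] as the pivot and returns its final index. */
static int partition(int A[], int lo, int hi)
{
    int pivot = A[hi];
    int i = lo;                        /* boundary of the "less than pivot" prefix */
    for (int j = lo; j < hi; j++)
        if (A[j] < pivot)
            swap(&A[i++], &A[j]);
    swap(&A[i], &A[hi]);               /* place the pivot between the two parts */
    return i;                          /* point of division */
}

static void quicksort(int A[], int lo, int hi)
{
    if (lo >= hi)                      /* fewer than two elements: nothing to do */
        return;
    int p = partition(A, lo, hi);
    quicksort(A, lo, p - 1);           /* elements before the pivot */
    quicksort(A, p + 1, hi);           /* elements after the pivot */
}

/* usage: quicksort(A, 0, n - 1); sorts an int array A of length n >= 1 */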
While there is no reason to exchange elements equal to the pivot, this change makes it possible to omit the tests on the pointers themselves that would otherwise be needed to ensure they do not run out of range.
Such a separation can only result when no inversions are found, with both pointers advancing to the pivot element at the first iteration (they are then considered to have crossed, and no exchange takes place).
With the middle element as the pivot, however, sorted data is partitioned with (almost) no swaps into equally sized halves, leading to the best-case behavior of Quicksort, i.e. O(n log n).
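A minimal C sketch of such a Hoare-style partition with the middle element as the pivot value (reusing the swap helper above; this is one common formulation, not necessarily the exact variant analyzed here):

/* Hoare partition scheme: returns an index j such that every element of
   A[lo..j] is <= the pivot value and every element of A[j+1..hi] is >= it. */
static int hoare_partition(int A[], int lo, int hi)
{
    int pivot = A[lo + (hi - lo) / 2];  /* middle element as the pivot value */
    int i = lo - 1, j = hi + 1;
    for (;;) {
        do i++; while (A[i] < pivot);   /* strict comparisons stop on equal elements */
        do j--; while (A[j] > pivot);
        if (i >= j)
            return j;                   /* the indices have met or crossed */
        swap(&A[i], &A[j]);             /* elements equal to the pivot are swapped too */
    }
}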
Subsequent recursions (expansion on previous paragraph)

Let's expand a little bit on the next two segments that the main algorithm recurs on.
Because we are using strict comparators (>, <) in the "do...while" loops to prevent ourselves from running out of range, there's a chance that the pivot itself gets swapped with other elements in the partition function.
Let's first examine the choice of recursing on (lo..p) and (p+1..hi), with the example of sorting an array that contains multiple identical elements, [0, 0].
Because the right half of the recursion includes the returned index, it is the partition function's job to exclude the "head" in non-advancing scenarios.
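As a concrete sketch of the first choice (reusing the hoare_partition function above; its do...while formulation detects the crossing slightly differently from the description above, but the recursion bounds are the same):

static void quicksort_hoare(int A[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int p = hoare_partition(A, lo, hi);
    quicksort_hoare(A, lo, p);          /* left segment includes the returned index */
    quicksort_hoare(A, p + 1, hi);
}

/* On [0, 0] both scans stop at the two equal elements, a (no-op) exchange of
   the two zeros takes place, and the partition returns index 0, so the
   recursion proceeds on the one-element ranges (0..0) and (1..1) and
   terminates. */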
Median-of-three code snippet for Lomuto partition: it puts the median into A[hi] first; that new value of A[hi] is then used as the pivot, as in the basic algorithm presented above.
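A sketch of such a median-of-three step in C, consistent with the Lomuto partition sketch above (the helper name is illustrative):

/* Rearranges the three sampled elements so that A[lo] holds the minimum,
   A[mid] the maximum, and A[hi] the median, which the Lomuto partition then
   uses as the pivot. */
static void median_of_three(int A[], int lo, int hi)
{
    int mid = lo + (hi - lo) / 2;
    if (A[mid] < A[lo]) swap(&A[lo], &A[mid]);
    if (A[hi]  < A[lo]) swap(&A[lo], &A[hi]);
    if (A[mid] < A[hi]) swap(&A[mid], &A[hi]);
}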
Specifically, the expected number of comparisons needed to sort n elements (see § Analysis of randomized quicksort) with random pivot selection is 1.386 n log n. Median-of-three pivoting brings this down to C_{n,2} ≈ 1.188 n log n, at the expense of a three-percent increase in the expected number of swaps.
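For reference, the constant 1.386 is 2 ln 2: with random pivots the expected number of comparisons grows as 2 n ln n, and rewriting this in base-2 logarithms gives 2 n ln n = (2 ln 2) n log₂ n ≈ 1.386 n log₂ n.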
[22][23] Given an array of size n, the partitioning step performs O(n) work in O(log n) time and requires O(n) additional scratch space.
Assuming an ideal choice of pivots, parallel quicksort sorts an array of size n in O(n log n) work in O(log² n) time using O(n) additional space.
The use of scratch space simplifies the partitioning step, but increases the algorithm's memory footprint and constant overheads.
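The following is a sketch of such a scratch-space partition, written sequentially in C for clarity (array and helper names are illustrative). Each of its three passes is a map, a prefix sum, or a scatter, and each of those primitives can be carried out in parallel with O(n) work and O(log n) span using standard parallel prefix-sum techniques:

#include <stdlib.h>

/* Partitions A[0..n-1] around the value `pivot` into scratch[] and returns the
   number of elements less than the pivot. */
static int scratch_partition(const int *A, int *scratch, int n, int pivot)
{
    int *less = malloc(n * sizeof *less);
    int *pos  = malloc(n * sizeof *pos);

    for (int i = 0; i < n; i++)               /* pass 1: flag elements < pivot (map) */
        less[i] = (A[i] < pivot);

    int count = 0;
    for (int i = 0; i < n; i++) {             /* pass 2: exclusive prefix sum */
        pos[i] = count;                        /* destination of A[i] if it goes left */
        count += less[i];
    }

    for (int i = 0; i < n; i++)               /* pass 3: scatter into scratch */
        if (less[i])
            scratch[pos[i]] = A[i];            /* left part: elements < pivot */
        else
            scratch[count + (i - pos[i])] = A[i];  /* right part, in stable order */

    free(less);
    free(pos);
    return count;                              /* size of the left part */
}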
The in-place version of quicksort has a space complexity of O(log n), even in the worst case, when it is carefully implemented using the following strategies: in-place, unstable partitioning is used, which requires only constant additional space before making any recursive call; after partitioning, the partition with the fewest elements is recursively sorted first; and the other partition is then sorted using tail recursion or iteration, which does not add to the call stack.
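A C sketch of this pattern, reusing the Lomuto partition sketch above (the function name is illustrative):

/* Quicksort with O(log n) worst-case stack usage: recurse only into the
   smaller side of each partition and loop on the larger side, which replaces
   the tail call with iteration. */
static void quicksort_logspace(int A[], int lo, int hi)
{
    while (lo < hi) {
        int p = partition(A, lo, hi);
        if (p - lo < hi - p) {                 /* left side is the smaller one */
            quicksort_logspace(A, lo, p - 1);  /* recurse into the smaller side */
            lo = p + 1;                        /* iterate on the larger side */
        } else {
            quicksort_logspace(A, p + 1, hi);
            hi = p - 1;
        }
    }
}

Each recursive call operates on at most half of the current range, so at most O(log n) calls are ever nested.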
Another, less common, not-in-place, version of quicksort uses O(n) space for working storage and can implement a stable sort.
[28][29] The main disadvantage of quicksort is the implementation complexity required to avoid bad pivot choices and the resultant O(n²) performance.
Introsort is a variant of quicksort which solves this problem by switching to heapsort when a bad case is detected.
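A minimal sketch of the introsort idea in C (the depth budget of about 2·log₂ n and the helper names follow common practice rather than any particular library; the partition and swap helpers are the ones sketched above):

/* Sift-down for a max-heap stored in B[0..n-1]. */
static void sift_down(int B[], int root, int n)
{
    for (;;) {
        int child = 2 * root + 1;
        if (child >= n) break;
        if (child + 1 < n && B[child] < B[child + 1]) child++;
        if (B[root] >= B[child]) break;
        swap(&B[root], &B[child]);
        root = child;
    }
}

static void heapsort_range(int A[], int lo, int hi)    /* sorts A[lo..hi] */
{
    int n = hi - lo + 1, *B = A + lo;
    for (int i = n / 2 - 1; i >= 0; i--) sift_down(B, i, n);
    for (int i = n - 1; i > 0; i--) { swap(&B[0], &B[i]); sift_down(B, 0, i); }
}

/* Depth-limited quicksort: when the budget runs out (a sign of repeatedly bad
   pivots), fall back to heapsort for the current sub-range. */
static void introsort_rec(int A[], int lo, int hi, int depth)
{
    if (lo >= hi) return;
    if (depth == 0) { heapsort_range(A, lo, hi); return; }
    int p = partition(A, lo, hi);
    introsort_rec(A, lo, p - 1, depth - 1);
    introsort_rec(A, p + 1, hi, depth - 1);
}

static void introsort(int A[], int n)
{
    int depth = 0;
    for (int m = n; m > 1; m >>= 1) depth++;   /* roughly floor(log2 n) */
    introsort_rec(A, 0, n - 1, 2 * depth);
}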
The main disadvantage of merge sort is that it is an out-of-place algorithm, so when operating on arrays, efficient implementations require O(n) auxiliary space (vs. O(log n) for quicksort with in-place partitioning and tail recursion, or O(1) for heapsort).
Merge sort works very well on linked lists, requiring only a small, constant amount of auxiliary storage.
A selection algorithm chooses the kth smallest of a list of numbers; this is an easier problem in general than sorting.
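One such selection algorithm, quickselect, reuses the partitioning step but recurses into only one side; a C sketch using the Lomuto partition above:

/* Returns the k-th smallest element of A[lo..hi], where k is a 0-based index
   into the whole array (lo <= k <= hi).  Average-case O(n) time. */
static int quickselect(int A[], int lo, int hi, int k)
{
    while (lo < hi) {
        int p = partition(A, lo, hi);
        if (k == p)     return A[p];
        else if (k < p) hi = p - 1;      /* the answer lies in the left part */
        else            lo = p + 1;      /* the answer lies in the right part */
    }
    return A[lo];                        /* one element left: it is the answer */
}

/* usage: int median = quickselect(A, 0, n - 1, n / 2); */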
Partitioning into three groups (elements less than, equal to, and greater than the pivot) yields a kind of three-way quicksort in which the middle partition represents a (trivially) sorted subarray of elements that are exactly equal to the pivot.
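A Dutch-national-flag style sketch of such a three-way ("fat pivot") partition in C, reusing the swap helper above:

/* Rearranges A[lo..hi] into  < pivot | == pivot | > pivot  and reports the
   bounds of the middle block through *eq_lo and *eq_hi. */
static void partition3(int A[], int lo, int hi, int *eq_lo, int *eq_hi)
{
    int pivot = A[lo + (hi - lo) / 2];
    int lt = lo, i = lo, gt = hi;
    while (i <= gt) {
        if (A[i] < pivot)      swap(&A[lt++], &A[i++]);
        else if (A[i] > pivot) swap(&A[i], &A[gt--]);
        else                   i++;
    }
    *eq_lo = lt;               /* A[lt..gt] holds the elements equal to the pivot */
    *eq_hi = gt;
}

The enclosing quicksort then recurses only on A[lo..eq_lo-1] and A[eq_hi+1..hi]; the equal elements in the middle are never touched again.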
When partitioning, the input is divided into moderate-sized blocks (which fit easily into the data cache), and two arrays are filled with the positions of elements to swap.
The BlockQuicksort technique is incorporated into LLVM's C++ STL implementation, libcxx, providing a 50% improvement on random integer sequences.
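A much-simplified sketch of the buffering idea behind one such block round (this is not the tuned BlockQuicksort code; it assumes the two blocks do not overlap, i.e. the unscanned range holds at least 2*BLOCK elements, and it reuses the swap helper above):

enum { BLOCK = 64 };

/* Scans one block from each end of the range: offsL records positions in the
   left block whose elements belong to the right of the pivot, offsR records
   positions in the right block that belong to the left.  The comparison loops
   store a candidate offset unconditionally and advance the counter by the
   comparison result, so they contain no data-dependent branches to mispredict;
   the swaps happen in a separate pass.  Returns the number of pairs swapped. */
static int block_round(int A[], int left, int right, int pivot)
{
    int offsL[BLOCK], offsR[BLOCK];
    int nL = 0, nR = 0;

    for (int i = 0; i < BLOCK; i++) {
        offsL[nL] = left + i;  nL += (A[left + i]  >= pivot);
        offsR[nR] = right - i; nR += (A[right - i] <= pivot);
    }

    int n = nL < nR ? nL : nR;
    for (int i = 0; i < n; i++)
        swap(&A[offsL[i]], &A[offsR[i]]);

    /* A full partitioner keeps the unpaired offsets and only advances the end
       whose buffer was emptied, as described in the BlockQuicksort paper. */
    return n;
}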
[Figure: the moving indices are i and j respectively; the black outlines show the positions of the sorted elements, and the filled black square shows the value that is being compared to (the pivot).]