Sorting algorithm

A sorting algorithm is an algorithm that puts the elements of a list into an order. More formally, a sorting algorithm finds a sorted permutation of its input: a valid sorting algorithm must return a list that is in either non-decreasing or non-increasing order and that contains exactly the same items as the initial input.
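The two conditions of this definition (ordered output, same items) can be checked directly. As a minimal sketch, assuming non-decreasing order and a hypothetical `is_valid_sort` helper:

```python
from collections import Counter

def is_valid_sort(output, original):
    """Check both conditions of the definition: the output must be
    ordered (non-decreasing here) and must be a permutation of the
    original input, i.e. contain exactly the same items."""
    ordered = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    same_items = Counter(output) == Counter(original)
    return ordered and same_items

# A correct result passes; an output that drops an item does not.
print(is_valid_sort([1, 2, 2, 3], [3, 2, 1, 2]))  # True
print(is_valid_sort([1, 2, 3], [3, 2, 1, 2]))     # False
```

Note that the permutation condition matters: an "algorithm" that simply returns an empty list produces ordered output but is not a valid sort.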

Sorting a list is a deceptively simple task, and many intuitive techniques, such as Selection sort or Insertion sort, prove inefficient on the large amounts of data often present in digital databases today. As a result, a large part of computer science since the 1950s has been dedicated to finding ways to sort and manage data efficiently, and algorithms related to sorting are frequently taught in computer science courses today.
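To make the discussion concrete, here is one of the intuitive techniques mentioned above. This is a minimal sketch of Insertion sort, not a reference implementation:

```python
def insertion_sort(items):
    """Insertion sort: simple and intuitive, but it performs O(n^2)
    comparisons in the average and worst cases, which is why it
    struggles on large data sets."""
    result = list(items)  # sort a copy, leaving the input intact
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one slot right until key's position is found.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```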

Complexity
Complexity is in many cases the most important attribute, as it provides a measure of the algorithm's running time in terms of the number of operations performed, such as comparisons, calculations, and data movements. Mathematically speaking, an algorithm with a worse complexity becomes arbitrarily slower than one with a better complexity as the number of items grows, so it is crucial to optimize in this area for very large problem sizes.

The best and worst cases are self-explanatory: they are the smallest and largest amounts of work the algorithm can perform on an input of a given size. The average case describes how the sorting algorithm performs on a random shuffle, i.e. its expected cost taken over all possible permutations of the input.
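The three cases can be observed empirically by counting comparisons. As an illustrative sketch (instrumenting the Insertion sort mentioned earlier, with a hypothetical `count_comparisons` helper):

```python
import random

def count_comparisons(items):
    """Run insertion sort on a copy of items and return how many
    element comparisons it made."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1000
best = count_comparisons(range(n))           # already sorted: n - 1 comparisons
worst = count_comparisons(range(n, 0, -1))   # reversed: n(n-1)/2 comparisons
random.seed(0)
avg = count_comparisons(random.sample(range(n), n))  # random shuffle: about n^2/4
print(best, avg, worst)
```

For insertion sort the best case is linear while the average and worst cases are quadratic, so the random shuffle lands between the two extremes but much closer to the worst case.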

This measure is not perfect, however. It fails to take into account hardware-level nuances such as cache hits/misses and branch prediction, which also affect the running time. It also omits other, possibly important, information such as the constant factor on the largest term and asymptotically smaller terms that can end up dominating the running time for small problem sizes.

Memory
Some algorithms require extra memory to sort the data, which can be measured by space complexity. Using extra space can reduce time complexity and make an algorithm more intuitive, but it is a problem for systems that cannot allocate enough additional memory.

A sort that permutes the data within its own storage space (i.e. never copies and stores data outside it) is said to be in-place (in-situ); the opposite is out-of-place (ex-situ). The term "in-place" is also commonly applied to sorts that require only a constant amount of extra memory.
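The distinction can be illustrated with two simple sorts. This is a sketch, assuming the hypothetical names `selection_sort_in_place` and `insertion_sort_out_of_place`:

```python
import bisect

def selection_sort_in_place(a):
    """In-place (in-situ): permutes the data within the input list
    itself, using only O(1) extra memory."""
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a

def insertion_sort_out_of_place(items):
    """Out-of-place (ex-situ): builds the sorted result in a new O(n)
    list, leaving the original data untouched."""
    result = []
    for x in items:
        bisect.insort(result, x)
    return result

data = [3, 1, 2]
copy1 = list(data)
print(selection_sort_in_place(copy1))     # [1, 2, 3]; copy1 itself was modified
print(insertion_sort_out_of_place(data))  # [1, 2, 3]; data is left unchanged
print(data)                               # [3, 1, 2]
```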

Stability

 * Main page: Stability

A sort is said to be stable if the initial order of equal elements is preserved in the final sorted permutation. In-place sorting algorithms are generally unstable, because once equal elements have been rearranged there is no way to recover their original order by inspecting their values alone. Assigning a unique value (such as the original index) to each item makes any unstable sort stable, since it effectively eliminates all equal items, but this requires an additional O(n) space.
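The index-tagging trick can be sketched as follows, using a hypothetical `stabilize` wrapper around any unstable sort that takes a list and returns it sorted (Selection sort is used here as the example unstable sort):

```python
def selection_sort(items):
    """An unstable sort used for demonstration."""
    a = list(items)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a

def stabilize(unstable_sort, items, key=lambda x: x):
    """Make any comparison-based unstable sort stable by tagging each
    item with its original index: ties on the key are then broken by
    index, so no two tagged items ever compare equal. The tags cost an
    additional O(n) space."""
    tagged = [(key(x), i, x) for i, x in enumerate(items)]
    return [x for _, _, x in unstable_sort(tagged)]

pairs = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]
# Sort by the number only; a stable result keeps 'b' before 'd' and 'a' before 'c'.
print(stabilize(selection_sort, pairs, key=lambda p: p[0]))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```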

Well-known Algorithms

 * See also: full list of sorting algorithms

Unclassified Hybrid Sorts
Currently, all unclassified hybrid sorts make $$O(n\log n)$$ comparisons and $$O(n)$$ data moves in $$O(1)$$ space. Because these bounds are so difficult to achieve simultaneously, these sorts use highly non-trivial techniques that combine several (if not all) methods of sorting, so it is unclear which single category they belong to.