Sorting network
In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks.
Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea.
Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets have been used by the GPGPU community for constructing sorting algorithms to run on graphics processing units.
Introduction
A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounters a comparator, the comparator swaps the values if and only if the top wire's value is greater than the bottom wire's value. In a formula: if the top wire carries x and the bottom wire carries y, then after hitting a comparator the wires carry min(x, y) and max(x, y), respectively, so the pair of values is sorted. A network of wires and comparators that will correctly sort all possible inputs into ascending order is called a sorting network.
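In code, a comparator is just a conditional swap, and a network is a fixed list of wire-index pairs. The following sketch uses a list-of-pairs representation chosen here for illustration, not a standard API:

```python
def apply_comparator(values, i, j):
    """Swap values[i] and values[j] if out of order (i is the 'top' wire)."""
    if values[i] > values[j]:
        values[i], values[j] = values[j], values[i]

def run_network(values, comparators):
    """Apply a fixed sequence of comparators to a list of wire values."""
    values = list(values)
    for i, j in comparators:
        apply_comparator(values, i, j)
    return values
```

Note that the sequence of comparisons is the same for every input; only the swaps differ, which is what makes the model suitable for hardware and parallel execution.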
The full operation of a simple sorting network is shown below. It is easy to see why this sorting network will correctly sort the inputs; note that the first four comparators will "sink" the largest value to the bottom and "float" the smallest value to the top. The final comparator simply sorts out the middle two wires.
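As an illustration (the figure itself is not reproduced here), a common four-wire, five-comparator network matching this description is [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]: the first four comparators sink the maximum and float the minimum, and the last orders the middle two wires. Its correctness can be checked by brute force over all input permutations:

```python
from itertools import permutations

# A 4-wire sorting network (one standard layout; figures may vary).
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def run_network(values, comparators):
    values = list(values)
    for i, j in comparators:
        if values[i] > values[j]:
            values[i], values[j] = values[j], values[i]
    return values

# Exhaustive check: every permutation of four distinct values gets sorted.
assert all(run_network(p, NETWORK_4) == [0, 1, 2, 3]
           for p in permutations(range(4)))
```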
Depth and efficiency
The efficiency of a sorting network can be measured by its total size, meaning the number of comparators in the network, or by its depth, defined as the largest number of comparators that any input value can encounter on its way through the network. Noting that sorting networks can perform certain comparisons in parallel, and assuming all comparisons to take unit time, it can be seen that the depth of the network is equal to the number of time steps required to execute it.
Insertion and Bubble networks
We can easily construct a network of any size recursively using the principles of insertion and selection. Assuming we have a sorting network of size n, we can construct a network of size n + 1 by "inserting" an additional number into the already sorted subnet. We can also accomplish the same thing by first "selecting" the lowest value from the inputs and then sorting the remaining values recursively. The structure of these two sorting networks is very similar. A construction of the two different variants, which collapses together comparators that can be performed simultaneously, shows that they are, in fact, identical.
The insertion network (or equivalently the bubble network) has a depth of 2n − 3, where n is the number of values. This is better than the O(n log n) time needed by random-access machines, but it turns out that there are much more efficient sorting networks with a depth of just O(log² n), as described below.
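The construction and the depth claim can both be checked in a few lines. The sketch below builds the insertion network as a comparator list and computes its depth by scheduling comparators greedily into time steps (two comparators can share a step only if they touch disjoint wires); helper names are illustrative:

```python
def insertion_network(n):
    """Comparators of an n-wire insertion sorting network."""
    net = []
    for k in range(1, n):
        # Insert wire k into the sorted prefix by bubbling it upward.
        for i in range(k, 0, -1):
            net.append((i - 1, i))
    return net

def depth(n, comparators):
    """Largest number of comparators any value meets, with greedy parallel scheduling."""
    ready = [0] * n  # earliest free time step for each wire
    d = 0
    for i, j in comparators:
        step = max(ready[i], ready[j]) + 1
        ready[i] = ready[j] = step
        d = max(d, step)
    return d

# The collapsed insertion/bubble network has depth 2n - 3 for n >= 2.
for n in range(2, 9):
    assert depth(n, insertion_network(n)) == 2 * n - 3
```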
Zero-one principle
While it is easy to prove the validity of some sorting networks, it is not always so easy. There are n! permutations of numbers in an n-wire network, and to test all of them would take a significant amount of time, especially when n is large. The number of test cases can be reduced significantly, to 2^n, using the so-called zero-one principle. While still exponential, this is smaller than n! for all n ≥ 4, and the difference grows rapidly with increasing n. The zero-one principle states that, if a sorting network can correctly sort all 2^n sequences of zeros and ones, then it is also valid for arbitrarily ordered inputs. This not only drastically cuts down on the number of tests needed to ascertain the validity of a network, it is of great use in creating many constructions of sorting networks as well.
The principle can be proven by first observing the following fact about comparators: when a monotonically increasing function f is applied to the inputs, i.e., x and y are replaced by f(x) and f(y), then the comparator produces min(f(x), f(y)) = f(min(x, y)) and max(f(x), f(y)) = f(max(x, y)). By induction on the depth of the network, this result can be extended to a lemma stating that if the network transforms the sequence a₁, …, aₙ into b₁, …, bₙ, it will transform f(a₁), …, f(aₙ) into f(b₁), …, f(bₙ). Suppose that some input a₁, …, aₙ contains two items aᵢ < aⱼ, and the network incorrectly swaps these in the output. Then it will also incorrectly sort f(a₁), …, f(aₙ) for the function

f(x) = 1 if x > aᵢ, and f(x) = 0 otherwise.
This function is monotonic, so we have the zero-one principle as the contrapositive.
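In code, the principle lets a network on n wires be verified with 2^n binary tests instead of n! permutation tests. A sketch, using the comparator-list representation from earlier (function names are illustrative):

```python
from itertools import product

def run_network(values, comparators):
    values = list(values)
    for i, j in comparators:
        if values[i] > values[j]:
            values[i], values[j] = values[j], values[i]
    return values

def is_sorting_network(n, comparators):
    """Zero-one principle: test only the 2**n binary inputs."""
    return all(run_network(bits, comparators) == sorted(bits)
               for bits in product((0, 1), repeat=n))

# The 4-wire, 5-comparator network passes; dropping its last comparator fails.
assert is_sorting_network(4, [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)])
assert not is_sorting_network(4, [(0, 1), (2, 3), (0, 2), (1, 3)])
```

For example, the truncated network above leaves the binary input 0, 1, 1, 0 as 0, 1, 0, 1, which is why the final middle comparator is needed.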
Constructing sorting networks
Various algorithms exist to construct simple, yet efficient sorting networks of depth O(log² n) (and hence size O(n log² n)), such as Batcher odd–even mergesort, bitonic sort, Shell sort, and the Pairwise sorting network. These networks are often used in practice. It is also possible, in theory, to construct networks of logarithmic depth for arbitrary size, using a construction called the AKS network, after its discoverers Ajtai, Komlós, and Szemerédi. While an important theoretical discovery, the AKS network has little or no practical application because of the large linear constant hidden by the Big-O notation, which is in the "many, many thousands"; this is partly due to the construction's use of expander graphs. A simplified version of the AKS network was described by Paterson, who notes that "the constants obtained for the depth bound still prevent the construction being of practical value". Another construction of sorting networks, of size O(n log n), was discovered by Goodrich. While their size has a much smaller constant factor than that of AKS networks, their depth of O(n log n) makes them inefficient for parallel implementation.
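Batcher's odd–even mergesort, one of the O(log² n) constructions mentioned above, can be sketched recursively for power-of-two input sizes (a sketch, not a tuned implementation; function names are illustrative):

```python
from itertools import product

def odd_even_merge_sort(n):
    """Comparators of Batcher's odd-even mergesort for n wires (n a power of two)."""
    net = []

    def sort(lo, length):
        if length > 1:
            mid = length // 2
            sort(lo, mid)             # recursively sort the top half
            sort(lo + mid, mid)       # recursively sort the bottom half
            merge(lo, length, 1)      # odd-even merge of the two halves

    def merge(lo, length, r):
        step = r * 2
        if step < length:
            merge(lo, length, step)       # merge the even-indexed subsequence
            merge(lo + r, length, step)   # merge the odd-indexed subsequence
            for i in range(lo + r, lo + length - r, step):
                net.append((i, i + r))
        else:
            net.append((lo, lo + r))

    sort(0, n)
    return net

# Validate via the zero-one principle (described above): all binary inputs sort.
for n in (2, 4, 8):
    net = odd_even_merge_sort(n)
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for i, j in net:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        assert v == sorted(bits)
```

For n = 4 this construction produces the five-comparator network shown in the introduction; for n = 8 it yields 19 comparators, matching the optimal size for eight inputs.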
Optimal sorting networks
For small, fixed numbers of inputs, optimal sorting networks can be constructed, with either minimal depth or minimal size. These networks can be used to increase the performance of larger sorting networks resulting from the recursive constructions of, e.g., Batcher, by halting the recursion early and inserting optimal nets as base cases. The following table summarizes the optimality results for small networks for which the optimal depth is known:

n | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 |
Depth | 0 | 1 | 3 | 3 | 5 | 5 | 6 | 6 | 7 | 7 | 8 | 8 | 9 | 9 | 9 | 9 | 10 |
Size, upper bound | 0 | 1 | 3 | 5 | 9 | 12 | 16 | 19 | 25 | 29 | 35 | 39 | 45 | 51 | 56 | 60 | 71 |
Size, lower bound (if different) | | | | | | | | | | | | | 43 | 47 | 51 | 55 | 60 |
For larger networks neither the optimal depth nor the optimal size is currently known. The bounds known so far are provided in the table below:
n | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 |
Depth, upper bound | 11 | 11 | 11 | 12 | 12 | 12 | 12 | 13 | 13 | 14 | 14 | 14 | 14 | 14 | 14 |
Depth, lower bound | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
Size, upper bound | 77 | 85 | 91 | 100 | 107 | 115 | 120 | 132 | 139 | 150 | 155 | 165 | 172 | 180 | 185 |
Size, lower bound | 65 | 70 | 75 | 80 | 85 | 90 | 95 | 100 | 105 | 110 | 115 | 120 | 125 | 130 | 135 |
The first sixteen depth-optimal networks are listed in Knuth's Art of Computer Programming, and have been since the 1973 edition; however, while the optimality of the first eight was established by Floyd and Knuth in the 1960s, this property wasn't proven for the final six until 2014.
For one to eleven inputs, minimal sorting networks are known, and for higher values, lower bounds on their sizes S(n) can be derived inductively using a lemma due to Van Voorhis: S(n) ≥ S(n − 1) + ⌈log₂ n⌉. The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved.
An optimal network for size 11 was found in December 2019 by Jannis Harder, which also made the lower bound for n = 12 match its upper bound.
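The lemma's arithmetic can be checked numerically against the size table above (the optimal sizes S(1) through S(11) below are taken from that table):

```python
# Known optimal sizes S(n) for n = 1..11 (from the table above).
S = {1: 0, 2: 1, 3: 3, 4: 5, 5: 9, 6: 12, 7: 16, 8: 19, 9: 25, 10: 29, 11: 35}

def ceil_log2(n):
    """Ceiling of log2(n) for a positive integer, computed exactly."""
    return (n - 1).bit_length()

# Van Voorhis lemma: S(n) >= S(n - 1) + ceil(log2 n).
# With S(11) = 35 this gives S(12) >= 35 + 4 = 39, which matches the
# 39-comparator upper bound, so 39 is optimal for twelve inputs.
assert S[11] + ceil_log2(12) == 39
```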
Some work in designing optimal sorting networks has been done using genetic algorithms: D. Knuth mentions that the smallest known sorting network for n = 13 was found by Hugues Juillé in 1995 "by simulating an evolutionary process of genetic breeding", and that the minimum-depth sorting networks for n = 9 and n = 11 were found by Loren Schwiebert in 2001 "using genetic methods".