Decoding methods
In coding theory, decoding is the process of translating received messages into codewords of a given code. There are many common methods of mapping received messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.
Notation
$C \subset \mathbb{F}_2^n$ is considered a binary code with the length $n$; $x, y$ shall be elements of $\mathbb{F}_2^n$; and $d(x,y)$ is the Hamming distance between those elements.

Ideal observer decoding
One may be given the message $x \in \mathbb{F}_2^n$, then ideal observer decoding generates the codeword $y \in C$. The process results in this solution:

$$\mathbb{P}(y \text{ sent} \mid x \text{ received})$$

For example, a person can choose the codeword $y$ that is most likely to be received as the message $x$ after transmission.
Decoding conventions
Decoding to a unique codeword is not always possible: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver must agree ahead of time on a decoding convention. Popular conventions include:

1. Request that the codeword be resent (automatic repeat-request).
2. Choose any codeword from the set of most likely codewords, for instance at random.
3. Report a decoding failure, or mark the ambiguous positions as erasures and let an outer code resolve them.

Maximum likelihood decoding
Given a received vector $x \in \mathbb{F}_2^n$, maximum likelihood decoding picks a codeword $y \in C$ that maximizes

$$\mathbb{P}(x \text{ received} \mid y \text{ sent}),$$

that is, the codeword $y$ that maximizes the probability that $x$ was received, given that $y$ was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding.
In fact, by Bayes' theorem,

$$\mathbb{P}(x \text{ received} \mid y \text{ sent}) = \frac{\mathbb{P}(x \text{ received},\, y \text{ sent})}{\mathbb{P}(y \text{ sent})} = \mathbb{P}(y \text{ sent} \mid x \text{ received}) \cdot \frac{\mathbb{P}(x \text{ received})}{\mathbb{P}(y \text{ sent})}.$$

Here $\mathbb{P}(x \text{ received})$ is fixed, since $x$ is the word that was actually received, and $\mathbb{P}(y \text{ sent})$ is constant as all codewords are equally likely to be sent. Therefore, $\mathbb{P}(x \text{ received} \mid y \text{ sent})$ is maximised as a function of the variable $y$ precisely when $\mathbb{P}(y \text{ sent} \mid x \text{ received})$ is maximised, and the claim follows.
As with ideal observer decoding, a convention must be agreed to for non-unique decoding.
The maximum likelihood decoding problem can also be modeled as an integer programming problem.
The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying the generalized distributive law.
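As a concrete sketch (not part of the original text), maximum likelihood decoding on a binary symmetric channel can be carried out by brute force over a small code; the function names and the toy repetition code below are assumptions made only for illustration.

```python
import numpy as np

def bsc_likelihood(received, codeword, p):
    """P(received | codeword sent) on a binary symmetric channel with
    crossover probability p: p^d * (1-p)^(n-d), where d is the Hamming distance."""
    d = int(np.sum(received != codeword))
    return (p ** d) * ((1 - p) ** (len(codeword) - d))

def ml_decode(received, code, p=0.1):
    """Brute-force maximum likelihood decoding: return the codeword
    maximising P(received | codeword sent)."""
    return max(code, key=lambda c: bsc_likelihood(received, c, p))

# toy example: the length-3 repetition code
code = [np.array([0, 0, 0]), np.array([1, 1, 1])]
print(ml_decode(np.array([1, 0, 1]), code))   # -> [1 1 1]
```

Such a brute-force search is exponential in the code dimension, which is why the structured methods described below (syndrome decoding, information set decoding, the Viterbi algorithm) matter in practice.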
Minimum distance decoding
Given a received vector $x \in \mathbb{F}_2^n$, minimum distance decoding picks a codeword $y \in C$ to minimise the Hamming distance

$$d(x,y) = \#\{i : x_i \neq y_i\},$$

i.e. choose the codeword $y$ that is as close as possible to $x$.
Note that if the probability of error on a discrete memoryless channel $p$ is strictly less than one half, then minimum distance decoding is equivalent to maximum likelihood decoding, since if $d(x,y) = d$, then:

$$\mathbb{P}(y \text{ received} \mid x \text{ sent}) = (1-p)^{n-d} \cdot p^{d} = (1-p)^{n} \left(\frac{p}{1-p}\right)^{d},$$

which is maximised by minimising $d$ (because $p/(1-p) < 1$ when $p < \tfrac{1}{2}$).
Minimum distance decoding is also known as nearest neighbour decoding. It can be assisted or automated by using a standard array. Minimum distance decoding is a reasonable decoding method when the following conditions are met:

1. The probability $p$ that an error occurs is independent of the position of the symbol.
2. Errors are independent events: an error at one position in the message does not affect other positions.
These assumptions may be reasonable for transmissions over a binary symmetric channel. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords.
As with other decoding methods, a convention must be agreed to for non-unique decoding.
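To make the equivalence with maximum likelihood decoding concrete, the following small sketch (an illustration, not part of the original article; the names and the toy code are assumptions) checks that nearest-neighbour decoding and brute-force ML decoding agree on a binary symmetric channel with $p < 1/2$:

```python
from itertools import product

def hamming(x, y):
    """Hamming distance between two binary tuples."""
    return sum(a != b for a, b in zip(x, y))

def min_distance_decode(x, code):
    """Nearest-neighbour decoding: the codeword minimising d(x, y)."""
    return min(code, key=lambda y: hamming(x, y))

def ml_decode_bsc(x, code, p):
    """Brute-force ML decoding on a BSC with crossover probability p."""
    return max(code, key=lambda y: (p ** hamming(x, y))
                                   * (1 - p) ** (len(x) - hamming(x, y)))

# the two decoders coincide for p < 1/2, here on the [3,1] repetition code
code = [(0, 0, 0), (1, 1, 1)]
for x in product((0, 1), repeat=3):
    assert min_distance_decode(x, code) == ml_decode_bsc(x, code, p=0.1)
```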
Syndrome decoding
Syndrome decoding is a highly efficient method of decoding a linear code over a noisy channel, i.e. one on which errors are made. In essence, syndrome decoding is minimum distance decoding using a reduced lookup table. This is allowed by the linearity of the code.

Suppose that $C \subset \mathbb{F}_2^n$ is a linear code of length $n$ and minimum distance $d$ with parity-check matrix $H$. Then clearly $C$ is capable of correcting up to

$$t = \left\lfloor \frac{d-1}{2} \right\rfloor$$

errors made by the channel.
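For example, a code with minimum distance $d = 7$ can correct up to $t = \lfloor (7-1)/2 \rfloor = 3$ errors.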
Now suppose that a codeword $x \in C$ is sent over the channel and the error pattern $e \in \mathbb{F}_2^n$ occurs. Then $z = x + e$ is received. Ordinary minimum distance decoding would look up the vector $z$ in a table of size $|C|$ for the nearest match, i.e. an element $c \in C$ with

$$d(c, z) \leq d(y, z)$$

for all $y \in C$. Syndrome decoding takes advantage of the property of the parity matrix that

$$Hx = 0$$

for all $x \in C$. The syndrome of the received $z = x + e$ is defined to be:

$$Hz = H(x + e) = Hx + He = He.$$
To perform ML decoding in a binary symmetric channel, one has to look up a precomputed table of size $2^{n-k}$ (where $k$ is the dimension of the code), mapping $He$ to $e$.
Note that this is already significantly less complex than standard array decoding.
However, under the assumption that no more than $t$ errors were made during transmission, the receiver can look up the value $He$ in a further reduced table of size

$$\sum_{i=0}^{t} \binom{n}{i}$$

only. The table holds the pre-computed values of $He$ for all error patterns $e$ of weight at most $t$.
Knowing what $e$ is, it is then trivial to decode $x$ as:

$$x = z - e.$$
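As an illustrative sketch (assuming a [7,4] binary Hamming code with $t = 1$ and a particular parity-check matrix $H$; the names are hypothetical), the whole procedure amounts to one table lookup per received word:

```python
import numpy as np

# Parity-check matrix of a [7,4] binary Hamming code (t = 1).
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def syndrome(v):
    """Compute the syndrome Hv over GF(2)."""
    return tuple((H @ v) % 2)

# Precompute the table mapping He -> e for all error patterns of weight <= t.
table = {syndrome(np.zeros(7, dtype=int)): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[syndrome(e)] = e

def syndrome_decode(z):
    """Recover the sent codeword as x = z - e (= z XOR e over GF(2))."""
    return (z + table[syndrome(z)]) % 2

# usage: corrupt a codeword in one position and recover it
x = np.array([1, 0, 1, 1, 0, 1, 0])   # a codeword of the code defined by H
z = x.copy(); z[2] ^= 1               # single channel error
assert np.array_equal(syndrome_decode(z), x)
```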
For binary codes, if both $k$ and $n-k$ are not too big, and assuming the code generator matrix $G$ is in standard (systematic) form $G = [I_k \mid P]$, syndrome decoding can be computed using two precomputed lookup tables and two XORs only.
Let $z$ be the received noisy codeword, i.e. $z = x + e$. Using the encoding lookup table of size $2^k$, the codeword $\hat{x}$ that corresponds to the first $k$ bits of $z$ is found. The syndrome is then computed as the last $n-k$ bits of $z \oplus \hat{x}$ (the first XOR). Using the syndrome, the error pattern $e$ is read from the syndrome lookup table of size $2^{n-k}$, and the decoding is then computed via $x = z \oplus e$ (the second XOR).
The number of entries in the two lookup tables is $2^k + 2^{n-k}$, which is significantly smaller than the $2^n$ entries required for standard array decoding, which needs only a single lookup. Additionally, the precomputed encoding lookup table can be reused for encoding, and is thus often useful to have.
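For example, for a [7,4] binary Hamming code the two tables hold $2^4 + 2^3 = 24$ entries in total, versus the $2^7 = 128$ entries of a full standard array.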
Information set decoding
This is a family of Las Vegas probabilistic methods, all based on the observation that it is easier to guess enough error-free positions than it is to guess all the error positions.

The simplest form is due to Prange: Let $G$ be the $k \times n$ generator matrix of $C$ used for encoding. Select $k$ columns of $G$ at random, and denote by $G'$ the corresponding $k \times k$ submatrix of $G$. With reasonable probability $G'$ will have full rank, which means that if we let $c'$ be the sub-vector for the corresponding positions of any codeword $c = mG$ of $C$ for a message $m$, we can recover $m$ as $m = c' G'^{-1}$. Hence, if we were lucky that these $k$ positions of the received word contained no errors, and hence equalled the positions of the sent codeword, then we may decode.
If $t$ errors occurred, the probability of such a fortunate selection of columns is given by $\binom{n-t}{k} \big/ \binom{n}{k}$.
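For example, with $n = 7$, $k = 4$ and a single error ($t = 1$), a random choice of $k$ columns avoids the error position with probability $\binom{6}{4} / \binom{7}{4} = 15/35 \approx 0.43$.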
This method has been improved in various ways, e.g. by Stern and Canteaut and Sendrier.
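The following is a minimal sketch of Prange's method (an illustration under simplifying assumptions: a small code, a fixed number of random trials, and hypothetical helper names), returning the candidate message whose re-encoding lies closest to the received word:

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A m = b over GF(2) by Gaussian elimination.
    Returns m if A is invertible, otherwise None."""
    A, b = A.copy() % 2, b.copy() % 2
    k = A.shape[0]
    for col in range(k):
        pivot = next((r for r in range(col, k) if A[r, col]), None)
        if pivot is None:
            return None                              # singular submatrix: give up
        A[[col, pivot]] = A[[pivot, col]]
        b[[col, pivot]] = b[[pivot, col]]
        for r in range(k):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                b[r] ^= b[col]
    return b

def prange_decode(G, received, trials=1000, seed=0):
    """Prange's information set decoding (sketch): repeatedly guess k
    positions, hope they are error-free, solve for the message there,
    and keep the candidate closest to the received word."""
    k, n = G.shape
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(trials):
        idx = rng.choice(n, size=k, replace=False)   # guessed error-free positions
        m = gf2_solve(G[:, idx].T, received[idx])    # solve m G' = y' for m
        if m is None:
            continue
        cand = (m @ G) % 2                           # re-encode the candidate message
        d = int(np.sum(cand != received))
        if best is None or d < best[0]:
            best = (d, m, cand)
    return best

# toy usage with a [7,4] Hamming code generator matrix in standard form
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
word = (np.array([1, 0, 1, 1]) @ G) % 2
word[2] ^= 1                                         # one channel error
print(prange_decode(G, word)[1])                     # -> [1 0 1 1]
```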
Partial response maximum likelihood
Partial response maximum likelihood (PRML) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal.

Viterbi decoder
A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using forward error correction based on a convolutional code. The Hamming distance is used as a metric for hard-decision Viterbi decoders. The squared Euclidean distance is used as a metric for soft-decision decoders.
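As a rough sketch (an illustration only, assuming the common rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 in octal; all names are hypothetical), hard-decision Viterbi decoding with the Hamming branch metric can look like this:

```python
import numpy as np

G1, G2 = 0b111, 0b101                              # generator polynomials (7, 5 octal)

def conv_encode(bits):
    """Rate-1/2 convolutional encoder; the state is the last two input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                     # current bit plus 2 memory bits
        out += [bin(reg & G1).count("1") % 2,
                bin(reg & G2).count("1") % 2]
        state = reg >> 1                           # shift the register
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding using Hamming distance as the branch metric."""
    INF, n_states = float("inf"), 4
    metrics = [0] + [INF] * (n_states - 1)         # encoder starts in the zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            for b in (0, 1):                       # hypothesised input bit
                reg = (b << 2) | state
                out = [bin(reg & G1).count("1") % 2,
                       bin(reg & G2).count("1") % 2]
                nxt = reg >> 1
                metric = metrics[state] + (out[0] != r[0]) + (out[1] != r[1])
                if metric < new_metrics[nxt]:      # keep the surviving path per state
                    new_metrics[nxt] = metric
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[int(np.argmin(metrics))]          # survivor with the best total metric

bits = [1, 0, 1, 1, 0, 0]
coded = conv_encode(bits)
coded[3] ^= 1                                      # flip one coded bit
print(viterbi_decode(coded))                       # -> [1, 0, 1, 1, 0, 0]
```

A soft-decision decoder would replace the 0/1 comparison in the branch metric with the squared Euclidean distance between the received analog samples and the ideal symbol values.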