Asymmetric numeral systems


Asymmetric numeral systems (ANS) is a family of entropy encoding methods introduced by Jarosław Duda from Jagiellonian University, used in data compression since 2014 due to improved performance compared to previously used methods, being up to 30 times faster. ANS combines the compression ratio of arithmetic coding with a processing cost similar to that of Huffman coding. In the tabled ANS (tANS) variant, this is achieved by constructing a finite-state machine to operate on a large alphabet without using multiplication.
Among others, ANS is used in the Facebook Zstandard compressor, the Apple LZFSE compressor, the Google Draco 3D compressor and PIK image compressor, the CRAM DNA compressor from SAMtools utilities, the Dropbox DivANS compressor, and the JPEG XL next-generation image compression standard.
The basic idea is to encode information into a single natural number x.
In the standard binary number system, we can add a bit s in {0, 1} of information to x
by appending s at the end of x, which gives us x' = 2x + s.
For an entropy coder, this is optimal if Pr(0) = Pr(1) = 1/2.
ANS generalizes this process for arbitrary sets of symbols with an accompanying probability distribution.
In ANS, if x' is the result of appending the information from a symbol s of probability p to x, then x' ≈ x/p. Equivalently, log2(x') ≈ log2(x) + log2(1/p), where log2(x) is the number of bits of information stored in the number x and log2(1/p) is the number of bits contained in the symbol s.
For the encoding rule, the set of natural numbers is split into disjoint subsets corresponding to different symbols – like into even and odd numbers, but with densities corresponding to the probability distribution of the symbols to encode. Then to add the information from symbol s into the information already stored in the current number x, we go to the number x' = C(x, s) ≈ x/p, being the position of the x-th appearance from the s-th subset.
There are alternative ways to apply it in practice – direct mathematical formulas for encoding and decoding steps, or one can put the entire behavior into a table (the tANS variant). Renormalization is used to prevent x from going to infinity – transferring accumulated bits to or from the bitstream.

Entropy coding

Imagine you want to encode a sequence of 1,000 zeros and ones, which would take 1000 bits to store directly. However, if it is somehow known that it only contains 1 zero and 999 ones, it would be sufficient to encode the zero's position, which requires only ceil(log2(1000)) ≈ 10 bits here instead of the original 1000 bits.
Generally, such length-n sequences containing p·n zeros and (1-p)·n ones, for some probability p in (0,1), are called combinations. Using Stirling's approximation we get their asymptotic number being

(n choose p·n) ≈ 2^(n·h(p)) for large n, where h(p) = -p·log2(p) - (1-p)·log2(1-p)

is called the Shannon entropy.
Hence, to choose one such sequence we need approximately n·h(p) bits. It is still n bits if p = 1/2; however, it can also be much smaller. For example we need only ≈ n/2 bits for p = 0.11.
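For instance, the entropy function h(p) can be evaluated directly; a minimal Python check (the function name h is only illustrative) confirms that h(1/2) = 1 bit per symbol, while h(0.11) is roughly 0.5 bits per symbol:

from math import log2

def h(p):                      # Shannon entropy of a binary variable, in bits
    return -p*log2(p) - (1-p)*log2(1-p)

print(h(0.5))                  # 1.0 bit per symbol
print(round(h(0.11), 3))       # ~0.5 bits per symbol
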
An entropy coder allows the encoding of a sequence of symbols using approximately the Shannon entropy bits per symbol. For example ANS could be directly used to enumerate combinations: assign a different natural number to every sequence of symbols having fixed proportions in a nearly optimal way.
In contrast to encoding combinations, this probability distribution usually varies in data compressors. For this purpose, Shannon entropy can be seen as a weighted average: a symbol of probability p contains log2(1/p) bits of information. ANS encodes information into a single natural number x, interpreted as containing log2(x) bits of information. Adding information from a symbol of probability p increases this informational content to log2(x) + log2(1/p) = log2(x/p). Hence, the new number containing both pieces of information should be x' ≈ x/p.

Basic concepts of ANS

Imagine there is some information stored in a natural number x, for example as the bit sequence of its binary expansion. To add information from a binary variable s, we can use the coding function x' = C(x, s) = 2x + s, which shifts all bits one position up and places the new bit in the least significant position. Now the decoding function D(x') = (floor(x'/2), x' mod 2) allows one to retrieve the previous x and this added bit: D(C(x, s)) = (x, s) and C(D(x')) = x'. We can start with initial state x = 1, then use the C function on the successive bits of a finite bit sequence to obtain a final number x storing this entire sequence. Then using the D function multiple times until x = 1 allows one to retrieve the bit sequence in reversed order.
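As a quick illustration of this special case, here is a minimal Python sketch (function and variable names are only illustrative) that appends bits with C(x, s) = 2x + s and then recovers them, in reversed order, with D(x) = (x // 2, x mod 2):

def C(x, s):                 # append bit s: shift bits up, put s in the lowest position
    return 2*x + s

def D(x):                    # recover the previous state and the last appended bit
    return x // 2, x % 2

x = 1                        # initial state
for s in [1, 0, 0, 1, 1]:    # encode a bit sequence
    x = C(x, s)

bits = []
while x > 1:                 # decode until the initial state is reached
    x, s = D(x)
    bits.append(s)
print(bits)                  # [1, 1, 0, 0, 1] - the sequence in reversed order
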
The above procedure is optimal for the uniform (symmetric) probability distribution of symbols Pr(0) = Pr(1) = 1/2. ANS generalizes it to make it optimal for any chosen (asymmetric) probability distribution of symbols: Pr(s) = p_s. While s in the above example was choosing between even and odd numbers in C(x, s) = 2x + s, in ANS this even/odd division of natural numbers is replaced with a division into subsets having densities corresponding to the assumed probability distribution {p_s}: up to position x, there are approximately x·p_s occurrences of symbol s.
The coding function C(x, s) returns the x-th appearance from the subset corresponding to symbol s. The density assumption is equivalent to the condition x' = C(x, s) ≈ x/p_s. Assuming that a natural number x contains log2(x) bits of information, log2(C(x, s)) ≈ log2(x) + log2(1/p_s). Hence the symbol of probability p_s is encoded as containing ≈ log2(1/p_s) bits of information, as is required from entropy coders.

Uniform binary variant (uABS)

Let us start with the binary alphabet and a probability distribution Pr(1) = p, Pr(0) = 1 - p. Up to position x we want approximately p·x analogues of odd numbers (positions of symbol s = 1). We can choose this number of appearances as ceil(x·p), getting s = ceil((x+1)·p) - ceil(x·p). This variant is called uABS and leads to the following decoding and encoding functions:
Decoding:

s = ceil((x+1)*p) - ceil(x*p)        // 0 if fract(x*p) < 1-p, else 1
if s = 0 then new_x = x - ceil(x*p)  // D(x) = (new_x, 0)
if s = 1 then new_x = ceil(x*p)      // D(x) = (new_x, 1)

Encoding:

if s = 0 then new_x = ceil((x+1)/(1-p)) - 1  // C(x,0) = new_x
if s = 1 then new_x = floor(x/p)             // C(x,1) = new_x

For p = 1/2 it amounts to the standard binary system; for a different p it becomes optimal for this given probability distribution. For example, for p = 0.3 these formulas lead to a table for small values of x:
x        0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
C(x,0)      0  1     2  3     4  5  6     7  8     9 10    11 12 13
C(x,1)   0        1        2           3        4        5           6

The symbol s = 1 corresponds to a subset of natural numbers with density p = 0.3, which in this case are positions {0, 3, 6, 10, 13, 16, 20, ...}. As 1/4 < 0.3 < 1/3, these positions increase by 3 or 4. Because p = 3/10 here, the pattern of symbols repeats every 10 positions.
The coding C(x, s) can be found by taking the row corresponding to the given symbol s, and choosing the given x in this row. Then the top row provides the new value C(x, s). For example, C(7, 0) = 11 from the middle to the top row.
Imagine we would like to encode the sequence '0100' starting from x = 1. First s = 0 takes us to x = 2, then s = 1 to x = 6, then s = 0 to x = 9, and finally s = 0 to x = 14. By using the decoding function on this final x, we can retrieve the symbol sequence in reversed order. Using the table for this purpose, x in the first row determines the column, then the non-empty row and the written value determine the corresponding s and new x.
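This worked example can also be reproduced directly from the formulas above; the following Python sketch (names are only illustrative; exact rational arithmetic via Fraction avoids floating-point rounding issues) encodes '0100' from x = 1 through the states 2, 6, 9, 14 and decodes them back:

from fractions import Fraction
from math import ceil, floor

p = Fraction(3, 10)                    # Pr(1) = p, Pr(0) = 1 - p

def encode_step(x, s):                 # uABS coding function C(x, s)
    if s == 0:
        return ceil(Fraction(x + 1) / (1 - p)) - 1
    return floor(Fraction(x) / p)

def decode_step(x):                    # uABS decoding function D(x) = (new_x, s)
    s = ceil((x + 1) * p) - ceil(x * p)
    return (ceil(x * p), 1) if s == 1 else (x - ceil(x * p), 0)

x = 1
for s in [0, 1, 0, 0]:
    x = encode_step(x, s)
    print(x)                           # prints 2, 6, 9, 14

out = []
while x > 1:
    x, s = decode_step(x)
    out.append(s)
print(out[::-1])                       # [0, 1, 0, 0] - original sequence restored
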

Range variants (rANS) and streaming

The range variant also uses arithmetic formulas, but allows operation on a large alphabet. Intuitively, it divides the set of natural numbers into ranges of size 2^n, and splits each of them in the identical way into subranges of proportions given by the assumed probability distribution.
We start with quantization of the probability distribution to a 2^n denominator, where n is chosen: p[s] ≈ f[s]/2^n for some natural numbers f[s].
Denote mask = 2^n - 1, and the cumulative distribution function:
CDF[s] = f[0] + ... + f[s-1]
For y in [0, 2^n - 1] denote the function (usually tabled)
symbol(y) = s such that CDF[s] <= y < CDF[s+1].
Now the coding function is:
C(x, s) = (floor(x / f[s]) << n) + (x mod f[s]) + CDF[s]
Decoding: s = symbol(x & mask)
D(x) = (f[s] * (x >> n) + (x & mask) - CDF[s], s)
This way we can encode a sequence of symbols into a large natural number x. To avoid using large number arithmetic, in practice stream variants are used, which enforce x in [L, b·L - 1] by renormalization: sending the least significant bits of x to or from the bitstream.
In the rANS variant x is for example 32-bit. For 16-bit renormalization, x in [2^16, 2^32 - 1], and the decoder refills the least significant bits from the bitstream when needed:

if (x < (1 << 16)) x = (x << 16) + read16bits()
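To make the C and D formulas above concrete, here is a minimal big-integer rANS sketch in Python (names like f, cdf and symbol are only illustrative, and the streaming renormalization described above is omitted for brevity), with frequencies quantized to sum to 2^n = 16:

n = 4                                       # frequencies are quantized to sum to 2**n = 16
f   = {'a': 8, 'b': 6, 'c': 2}              # f[s] ~ p_s * 2**n
cdf = {'a': 0, 'b': 8, 'c': 14}             # CDF[s] = sum of f[i] for symbols i before s
mask = (1 << n) - 1

def symbol(y):                              # s such that cdf[s] <= y < cdf[s] + f[s]
    return next(s for s in f if cdf[s] <= y < cdf[s] + f[s])

def C(x, s):                                # coding step: floor(x/f[s])*2**n + (x mod f[s]) + CDF[s]
    return ((x // f[s]) << n) + (x % f[s]) + cdf[s]

def D(x):                                   # decoding step, inverse of C
    s = symbol(x & mask)
    return f[s] * (x >> n) + (x & mask) - cdf[s], s

x0 = x = 1 << n                             # initial state
for s in "abacab":                          # encode into one large natural number
    x = C(x, s)

out = []
while x != x0:                              # decode back to the initial state
    x, s = D(x)
    out.append(s)
print("".join(reversed(out)))               # "abacab" - symbols come out in reverse order
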

Tabled variant (tANS)

The tANS variant puts the entire behavior (including renormalization) for x in [L, 2L - 1] into a table, which yields a finite-state machine avoiding the need for multiplication.
Finally, the step of the decoding loop can be written as:
t = decodingTable(x);
x = t.newX + readBits(t.nbBits); // state transition
writeSymbol(t.symbol);           // decoded symbol

The step of the encoding loop:
s = ReadSymbol();
nbBits = (x + ns[s]) >> r;   // # of bits for renormalization
writeBits(x, nbBits);        // send the least significant bits to bitstream
x = encodingTable[start[s] + (x >> nbBits)];

A specific tANS coding is determined by assigning a symbol to every position in the [L, 2L - 1] range; the number of appearances of each symbol should be proportional to its assumed probability. For example one could choose the "abdacdac" assignment for the Pr(a) = 3/8, Pr(b) = 1/8, Pr(c) = 2/8, Pr(d) = 2/8 probability distribution. If symbols are assigned in ranges of lengths being powers of 2, we would get Huffman coding. For example, the a->0, b->100, c->101, d->11 prefix code would be obtained for tANS with the "aaaabcdd" symbol assignment.
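As an illustration of how such a table-driven coder can be built, the sketch below (Python, with illustrative names; the bitstream is simplified to a stack of (bits, count) chunks rather than a packed stream) constructs decoding and encoding tables for L = 8 states with the "abdacdac" spread from above, then performs a round trip:

L, R = 8, 3                                  # L = 2**R states, x in [L, 2L-1]
spread = "abdacdac"                          # symbol assigned to states 8..15
Ls = {s: spread.count(s) for s in set(spread)}   # appearances: a:3, b:1, c:2, d:2

decode_tab, encode_tab = {}, {s: {} for s in Ls}
nxt = dict(Ls)                               # next base state per symbol, starts at Ls[s]
for x in range(L, 2 * L):
    s = spread[x - L]
    base = nxt[s]; nxt[s] += 1               # base runs over [Ls[s], 2*Ls[s] - 1]
    nbBits = R - (base.bit_length() - 1)     # bits needed to bring base back into [L, 2L-1]
    decode_tab[x] = (s, base << nbBits, nbBits)
    encode_tab[s][base] = x                  # inverse direction: (s, base) -> state

def encode(msg, x=L):
    chunks = []                              # flushed low bits, used as a LIFO stack
    for s in msg:
        nbBits = 0
        while (x >> nbBits) >= 2 * Ls[s]:    # renormalize: flush bits until x >> nbBits is in [Ls, 2Ls-1]
            nbBits += 1
        chunks.append((x & ((1 << nbBits) - 1), nbBits))
        x = encode_tab[s][x >> nbBits]       # state transition
    return x, chunks

def decode(x, chunks):
    out = []
    while chunks:
        s, newX, nbBits = decode_tab[x]
        bits, _ = chunks.pop()               # read back the bits flushed at this step
        x = newX + bits                      # state transition
        out.append(s)
    return "".join(reversed(out))            # symbols are recovered in reverse order

x, chunks = encode("adacba")
print(decode(x, chunks))                     # "adacba"
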

Remarks

As for Huffman coding, modifying the probability distribution of tANS is relatively costly, hence it is mainly used in static situations, usually with some Lempel–Ziv scheme. In this case, the file is divided into blocks – for each of them symbol frequencies are independently counted, then, after approximation (quantization), written in the block header and used as a static probability distribution for tANS.
In contrast, rANS is usually used as a faster replacement for range coding. It requires multiplication, but is more memory efficient and is appropriate for dynamically adapting probability distributions.
Encoding and decoding of ANS are performed in opposite directions, making it a stack for symbols. This inconvenience is usually resolved by encoding in the backward direction, after which decoding can be done forward. For context-dependence, like a Markov model, the encoder needs to use the context from the perspective of later decoding. For adaptivity, the encoder should first go forward to find the probabilities which will later be used by the decoder and store them in a buffer, then encode in the backward direction using the buffered probabilities.
The final state of encoding is required to start decoding, hence it needs to be stored in the compressed file. This cost can be compensated by storing some information in the initial state of the encoder. For example, instead of starting with the "10000" state, start with the "1****" state, where the "*" are some additional stored bits, which can be retrieved at the end of the decoding. Alternatively, this state can be used as a checksum, by starting encoding with a fixed state and testing whether the final state of decoding is the expected one.