IEEE 754-1985


IEEE 754-1985 was an industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008, and then again in 2019 by the minor revision IEEE 754-2019. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087.
IEEE 754-1985 represents numbers in binary, providing definitions for four levels of precision, of which the two most commonly used are:
Level               Width      Range at full precision      Precision
Single precision    32 bits    ±1.18e-38 to ±3.4e38         approximately 7 decimal digits
Double precision    64 bits    ±2.23e-308 to ±1.80e308      approximately 16 decimal digits
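
These precision limits can be illustrated in Python; Python's float is a double, and the struct round-trip below simulates single precision (the helper name is ours):

```python
import struct

def to_float32(x: float) -> float:
    # Round-trip through a 32-bit float to simulate single precision.
    return struct.unpack("<f", struct.pack("<f", x))[0]

value = 0.123456789
print(to_float32(value))  # ≈ 0.12345679: only about 7 digits survive
print(value)              # 0.123456789: a double keeps all of them
```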

The standard also defines representations for positive and negative infinity, a "negative zero", five exceptions to handle exceptional cases like division by zero, special values called NaNs for representing invalid results, denormal numbers to represent numbers smaller than shown above, and four rounding modes.

Representation of numbers

Floating-point numbers in IEEE 754 format consist of three fields: a sign bit, a biased exponent, and a fraction. The following example illustrates the meaning of each.
The decimal number 0.15625₁₀ represented in binary is 0.00101₂. Analogous to scientific notation, where numbers are written to have a single non-zero digit to the left of the decimal point, we rewrite this number so it has a single 1 bit to the left of the "binary point". We simply multiply by the appropriate power of 2 to compensate for shifting the bits left by three positions:

0.00101₂ = 1.01₂ × 2⁻³
Now we can read off the fraction and the exponent: the fraction is .01₂ and the exponent is −3.
The three fields in the IEEE 754 single-precision representation of this number are:
- sign = 0, because the number is positive (1 would indicate negative)
- biased exponent = −3 + the bias; in single precision the bias is 127, so the field holds 124 = 0111 1100₂
- fraction = .01000…₂, the bits to the right of the binary point
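
A minimal Python sketch, using the struct module to expose the raw bits, recovers the same three fields:

```python
import struct

# View the raw bits of 0.15625 as a single-precision float.
bits = struct.unpack("<I", struct.pack("<f", 0.15625))[0]

sign = bits >> 31                # 0: the number is positive
exponent = (bits >> 23) & 0xFF   # 124: the biased exponent (124 - 127 = -3)
fraction = bits & 0x7FFFFF       # the 23-bit fraction field: 01 then 21 zeros

print(sign, exponent, hex(fraction))  # 0 124 0x200000
```
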
IEEE 754 adds a bias to the exponent so that numbers can in many cases be compared conveniently by the same hardware that compares signed 2's-complement integers. Using a biased exponent, the lesser of two positive floating-point numbers will come out "less than" the greater, following the same ordering as for sign-and-magnitude integers. If two floating-point numbers have different signs, the sign-and-magnitude comparison also works with biased exponents. However, if both biased-exponent floating-point numbers are negative, then the ordering must be reversed. If the exponent were represented as, say, a 2's-complement number, comparison to see which of two numbers is greater would not be as convenient.
The leading 1 bit is omitted: since all numbers except zero start with a leading 1, the leading 1 is implicit and doesn't actually need to be stored, which gives an extra bit of precision for "free."

Zero

The number zero is represented specially:
- sign = 0 for positive zero, 1 for negative zero
- biased exponent = 0 (all 0 bits)
- fraction = 0 (all 0 bits)
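
A short Python check of the two zero encodings, again using struct to view the raw bits:

```python
import struct

# The two zeros differ only in the sign bit; everything else is 0.
print(hex(struct.unpack("<I", struct.pack("<f", 0.0))[0]))   # 0x0
print(hex(struct.unpack("<I", struct.pack("<f", -0.0))[0]))  # 0x80000000
print(0.0 == -0.0)  # True: they compare as equal
```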

Denormalized numbers

The number representations described above are called normalized, meaning that the implicit leading binary digit is a 1. To reduce the loss of precision when an underflow occurs, IEEE 754 includes the ability to represent fractions smaller than are possible in the normalized representation, by making the implicit leading digit a 0. Such numbers are called denormal. They don't include as many significant digits as a normalized number, but they enable a gradual loss of precision when the result of an arithmetic operation is not exactly zero but is too close to zero to be represented by a normalized number.
A denormal number is represented with a biased exponent of all 0 bits, which represents an exponent of −126 in single precision, or −1022 in double precision. In contrast, the smallest biased exponent representing a normal number is 1.
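
As an illustration in Python, the smallest positive single-precision denormal can be constructed from its bit pattern:

```python
import struct

# Smallest positive single-precision denormal: exponent bits all 0,
# fraction = 1, giving a value of 2**-23 * 2**-126 = 2**-149.
tiny = struct.unpack("<f", struct.pack("<I", 1))[0]
print(tiny)             # ≈ 1.4e-45
print(tiny == 2**-149)  # True
```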

Representation of non-numbers

The biased-exponent field is filled with all 1 bits to indicate either infinity or an invalid result of a computation.

Positive and negative infinity

Positive and negative infinity are represented thus:
- sign = 0 for positive infinity, 1 for negative infinity
- biased exponent = all 1 bits
- fraction = all 0 bits

NaN

Some operations of floating-point arithmetic are invalid, such as taking the square root of a negative number. The act of reaching an invalid result is called a floating-point exception. An exceptional result is represented by a special code called a NaN, for "Not a Number". All NaNs in IEEE 754-1985 have this format:
- sign = either 0 or 1
- biased exponent = all 1 bits
- fraction = anything except all 0 bits (since all 0 bits represents infinity)
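
A brief Python illustration of both special encodings; note in particular that a NaN compares unequal even to itself:

```python
import math
import struct

# Infinity: exponent all 1s, fraction all 0s.
print(hex(struct.unpack("<I", struct.pack("<f", math.inf))[0]))  # 0x7f800000

# NaN: exponent all 1s, fraction nonzero. A NaN is unequal to everything,
# including itself.
print(math.nan == math.nan)  # False
print(math.isnan(math.nan))  # True
```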

Range and precision

Precision is defined as the minimum difference between two successive mantissa representations; thus it is a function only of the mantissa, while the gap is defined as the difference between two successive numbers.
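
In Python, whose floats are IEEE 754 doubles, the gap can be inspected directly with math.ulp (Python 3.9+); the values agree with the double-precision table below:

```python
import math

# math.ulp reports the gap from a double to the next representable
# number; the gap doubles each time the exponent grows by one.
print(math.ulp(1.0))     # ≈ 2.22045e-16 (2**-52)
print(math.ulp(2.0))     # ≈ 4.44089e-16 (2**-51)
print(math.ulp(1024.0))  # ≈ 2.27374e-13 (2**-42)
```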

Single precision

Single-precision numbers occupy 32 bits. In single precision:
- the sign bit is 1 bit wide
- the biased exponent is 8 bits wide (bias 127)
- the fraction is 23 bits wide; with the implicit leading 1, the significand carries 24 bits of precision
Some example range and gap values for given exponents in single precision:
Actual exponent    Exp (biased)    Minimum         Maximum             Gap
−1                 126             0.5             ≈ 0.999999940395    ≈ 5.96046e-8
0                  127             1               ≈ 1.999999880791    ≈ 1.19209e-7
1                  128             2               ≈ 3.999999761581    ≈ 2.38419e-7
2                  129             4               ≈ 7.999999523163    ≈ 4.76837e-7
10                 137             1024            ≈ 2047.999877930    ≈ 1.22070e-4
11                 138             2048            ≈ 4095.999755859    ≈ 2.44141e-4
23                 150             8388608         16777215            1
24                 151             16777216        33554430            2
127                254             ≈ 1.70141e38    ≈ 3.40282e38        ≈ 2.02824e31

As an example, 16,777,217 cannot be encoded as a 32-bit float: it is rounded to 16,777,216. This illustrates why floating-point arithmetic is unsuitable for accounting software, where exact integer values are required. However, all integers within the representable range that are a power of 2 can be stored in a 32-bit float without rounding.
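
This rounding is easy to reproduce in Python by round-tripping through single precision (the helper name is ours):

```python
import struct

def to_float32(x: float) -> float:
    # Round to the nearest single-precision value.
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(to_float32(16777216.0))  # 16777216.0: 2**24 is exactly representable
print(to_float32(16777217.0))  # 16777216.0: 2**24 + 1 rounds down (tie to even)
```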

Double precision

Double-precision numbers occupy 64 bits. In double precision:
- the sign bit is 1 bit wide
- the biased exponent is 11 bits wide (bias 1023)
- the fraction is 52 bits wide; with the implicit leading 1, the significand carries 53 bits of precision
Some example range and gap values for given exponents in double precision:
Actual exponent    Exp (biased)    Minimum             Maximum                      Gap
−1                 1022            0.5                 ≈ 0.999999999999999888978    ≈ 1.11022e-16
0                  1023            1                   ≈ 1.999999999999999777955    ≈ 2.22045e-16
1                  1024            2                   ≈ 3.999999999999999555911    ≈ 4.44089e-16
2                  1025            4                   ≈ 7.999999999999999111822    ≈ 8.88178e-16
10                 1033            1024                ≈ 2047.999999999999772626    ≈ 2.27374e-13
11                 1034            2048                ≈ 4095.999999999999545253    ≈ 4.54747e-13
52                 1075            4503599627370496    9007199254740991             1
53                 1076            9007199254740992    18014398509481982            2
1023               2046            ≈ 8.98847e307       ≈ 1.79769e308                ≈ 1.99584e292
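
Because Python's float is an IEEE 754 double, the 53-bit significand rows of this table can be checked directly:

```python
# Above 2**53 the gap between consecutive doubles exceeds 1, so not
# every integer is representable.
print(2.0**53)        # 9007199254740992.0
print(2.0**53 + 1.0)  # 9007199254740992.0: 2**53 + 1 is not representable
print(2.0**53 + 2.0)  # 9007199254740994.0: the next double is 2 away
```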

Extended formats

The standard also recommends an extended format to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors; the standard only specifies minimum precision and exponent requirements for such formats. The x87 80-bit extended format is the most commonly implemented extended format that meets these requirements.

Examples

Here are some examples of single-precision IEEE 754 representations:
- 0 01111111 00000000000000000000000 = 1
- 0 01111100 01000000000000000000000 = 0.15625 (the example above)
- 0 00000000 00000000000000000000000 = +0
- 1 00000000 00000000000000000000000 = −0
- 0 11111111 00000000000000000000000 = +infinity
- 1 11111111 00000000000000000000000 = −infinity
- 0 00000000 00000000000000000000001 = 2⁻¹⁴⁹ (smallest positive denormal)
- 0 11111111 10000000000000000000000 = NaN (one of many encodings)
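
These encodings can be verified with a small Python sketch (the decode32 helper is ours; underscores in the binary literals mark the field boundaries):

```python
import struct

def decode32(bits: int) -> float:
    # Interpret a 32-bit pattern as a single-precision float.
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(decode32(0b0_01111111_00000000000000000000000))  # 1.0
print(decode32(0b0_01111100_01000000000000000000000))  # 0.15625
print(decode32(0b1_00000000_00000000000000000000000))  # -0.0
print(decode32(0b0_11111111_00000000000000000000000))  # inf
print(decode32(0b0_00000000_00000000000000000000001))  # ≈ 1.4e-45
```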

Comparing floating-point numbers

Every possible bit combination is either a NaN or a number with a unique value in the affinely extended real number system with its associated order, except for the two bit combinations for negative zero and positive zero, which sometimes require special attention. The binary representation has the special property that, excluding NaNs, any two numbers can be compared as sign-and-magnitude integers. When comparing as 2's-complement integers: if the sign bits differ, the negative number precedes the positive number, so 2's complement gives the correct result; if both values are positive, the 2's-complement comparison again gives the correct result; otherwise (both values are negative), the correct FP ordering is the opposite of the 2's-complement ordering.
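
A sketch of this trick in Python: map each 32-bit pattern to an integer key whose natural ordering matches the floating-point ordering (NaNs excluded; the function name is ours):

```python
import struct

def ordered_key(x: float) -> int:
    # Map a single-precision bit pattern to an unsigned integer whose
    # natural order matches the floating-point order.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    if bits & 0x80000000:      # negative: flip all bits to reverse order
        return bits ^ 0xFFFFFFFF
    return bits | 0x80000000   # positive: set the sign bit

values = [1.0, -2.5, 0.0, -0.0, 16777216.0]
print(sorted(values, key=ordered_key))
# [-2.5, -0.0, 0.0, 1.0, 16777216.0]
```
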
Rounding errors inherent to floating-point calculations may limit the use of comparisons for checking the exact equality of results. Choosing an acceptable range is a complex topic. A common technique is to use a comparison epsilon value to perform approximate comparisons. Depending on how lenient the comparisons are, common values include 1e-6 or 1e-5 for single precision, and 1e-14 for double precision. Another common technique is ULP (unit in the last place) comparison, which measures the difference in the last-place digits, effectively counting how many representable values apart the two numbers are.
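
A Python sketch of both techniques; the tolerance and helper names are illustrative, and the ULP distance is computed by stepping with math.nextafter (Python 3.9+):

```python
import math

def approx_equal(a: float, b: float, eps: float = 1e-14) -> bool:
    # Absolute-epsilon comparison; the tolerance is illustrative only.
    return abs(a - b) <= eps

def ulp_distance(a: float, b: float) -> int:
    # Count the representable doubles between a and b by stepping;
    # fine for small gaps.
    steps = 0
    while a != b:
        a = math.nextafter(a, b)
        steps += 1
    return steps

print(0.1 + 0.2 == 0.3)              # False: rounding error
print(approx_equal(0.1 + 0.2, 0.3))  # True
print(ulp_distance(0.1 + 0.2, 0.3))  # 1: the results are adjacent doubles
```
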
Although negative zero and positive zero are generally considered equal for comparison purposes, some programming language relational operators and similar constructs treat them as distinct. According to the Java Language Specification, comparison and equality operators treat them as equal, but Math.min and Math.max distinguish them, as do the comparison methods equals, compareTo and even compare of classes Float and Double.
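
Python exhibits a similar split, illustrated below: equality ignores the sign of zero, while sign-aware operations observe it (and Python's min, unlike Java's Math.min, simply returns its first argument on ties):

```python
import math

print(0.0 == -0.0)               # True: the zeros compare equal
print(math.copysign(1.0, -0.0))  # -1.0: the sign bit is preserved
print(min(0.0, -0.0))            # 0.0: no special-casing of the zeros
```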

Rounding floating-point numbers

The IEEE standard has four different rounding modes; the first is the default, and the others are called directed roundings:
- Round to nearest even: rounds to the nearest value; if the number falls midway, it is rounded to the nearest value with an even (zero) low-order significand bit
- Round toward 0: directed rounding toward zero, i.e. truncation
- Round toward +∞: directed rounding toward positive infinity
- Round toward −∞: directed rounding toward negative infinity
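
The default mode can be observed in Python doubles; the example below relies on 1 + 2⁻⁵³ falling exactly halfway between 1.0 and the next representable double:

```python
# Round-to-nearest-even at work: 1 + 2**-53 lies exactly halfway between
# 1.0 and 1 + 2**-52, and the tie is broken toward the value with an
# even last significand bit (1.0 itself).
print(1.0 + 2**-53 == 1.0)            # True: the tie rounds to even
print(1.0 + (2**-53 + 2**-54) > 1.0)  # True: past the midpoint, rounds up
```
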
The IEEE standard employs the affinely extended real number system, with separate positive and negative infinities. During drafting, there was a proposal for the standard to incorporate the projectively extended real number system, with a single unsigned infinity, by providing programmers with a mode selection option. In the interest of reducing the complexity of the final standard, however, the projective mode was dropped. The Intel 8087 and Intel 80287 floating-point co-processors both support this projective mode.

Functions and predicates

Standard operations

The following functions must be provided:
- add, subtract, multiply, divide
- square root
- floating-point remainder
- round to nearest integer
- comparison operations
- conversions: between floating-point formats, between floating-point and integer formats, and between binary and decimal representations
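
Several of these map directly onto Python's math module, which can serve as an illustration (math.remainder, available since Python 3.7, implements the IEEE-style remainder):

```python
import math

# The IEEE-style remainder picks the nearest multiple of the divisor,
# so it can be negative, unlike Python's % operator.
print(math.sqrt(2.0))            # 1.4142135623730951
print(math.remainder(5.0, 3.0))  # -1.0: the nearest multiple is 2*3 = 6
print(5.0 % 3.0)                 # 2.0: floored modulo, for contrast
```
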
History

In 1976 Intel began planning to produce a floating-point coprocessor. John Palmer, the manager of the effort, persuaded the company to try to develop a standard for all its floating-point operations. William Kahan, who had helped improve the accuracy of Hewlett-Packard's calculators, was hired as a consultant. Kahan initially recommended that the floating-point base be decimal, but the hardware design of the coprocessor was too far along to make that change.
The work within Intel worried other vendors, who set up a standardization effort to ensure a 'level playing field'. Kahan attended the second IEEE 754 standards working group meeting, held in November 1977. Here, he received permission from Intel to put forward a draft proposal based on the standard arithmetic part of their design for a coprocessor. The arguments over gradual underflow lasted until 1981 when an expert hired by DEC to assess it sided against the dissenters.
Even before it was approved, the draft standard had been implemented by a number of manufacturers. The Intel 8087, which was announced in 1980, was the first chip to implement the draft standard.