Normal number (computing)
In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by

$$b^{E_\text{min}},$$

where $b$ is the base (radix) of the format (commonly 2 or 10, for binary and decimal number systems) and $E_\text{min}$ depends on the size and layout of the format.
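For instance, in IEEE 754 binary64 (the usual C `double`), $b = 2$ and $E_\text{min} = -1022$, so the smallest normal magnitude is $2^{-1022}$. A minimal C sketch, assuming IEEE 754 binary64 semantics, that constructs this value with `ldexp` and checks it against `DBL_MIN` from `<float.h>`:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Smallest normal binary64 magnitude: b^Emin = 2^-1022.
       ldexp(1.0, -1022) computes 1.0 * 2^-1022 exactly. */
    double smallest = ldexp(1.0, -1022);
    printf("2^-1022 = %a\n", smallest);
    printf("DBL_MIN = %a\n", DBL_MIN);
    printf("equal: %s\n", smallest == DBL_MIN ? "yes" : "no");
    return 0;
}
```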
Similarly, the magnitude of the largest normal number in a format is given by

$$b^{E_\text{max}} \left(b - b^{1-p}\right),$$

where $p$ is the precision of the format in digits and $E_\text{max}$ is related to $E_\text{min}$ as

$$E_\text{max} = -(E_\text{min} - 1) = 1 - E_\text{min}.$$
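Taking binary64 again as the concrete case ($p = 53$, $E_\text{max} = +1023$), the sketch below, under the same IEEE 754 assumption, evaluates $b^{E_\text{max}}(b - b^{1-p})$ and compares it with `DBL_MAX`:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Largest normal binary64 magnitude: (b - b^(1-p)) * b^Emax
       = (2 - 2^-52) * 2^1023. Both factors are exact in binary64,
       and the product is DBL_MAX (no overflow, since 2 - 2^-52 < 2). */
    double largest = ldexp(2.0 - ldexp(1.0, -52), 1023);
    printf("(2 - 2^-52) * 2^1023 = %a\n", largest);
    printf("DBL_MAX              = %a\n", DBL_MAX);
    printf("equal: %s\n", largest == DBL_MAX ? "yes" : "no");
    return 0;
}
```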
In the IEEE 754 binary and decimal formats, $b$, $p$, $E_\text{min}$, and $E_\text{max}$ have the following values:

| Format | $b$ | $p$ | $E_\text{min}$ | $E_\text{max}$ |
|---|---|---|---|---|
| binary16 | 2 | 11 | −14 | +15 |
| binary32 | 2 | 24 | −126 | +127 |
| binary64 | 2 | 53 | −1022 | +1023 |
| binary128 | 2 | 113 | −16382 | +16383 |
| decimal32 | 10 | 7 | −95 | +96 |
| decimal64 | 10 | 16 | −383 | +384 |
| decimal128 | 10 | 34 | −6143 | +6144 |
For example, in the smallest decimal format in the table (decimal32), the range of positive normal numbers is $10^{-95}$ through $9.999999 \times 10^{96}$.
Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers (or denormal numbers).
Zero is considered neither normal nor subnormal.
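The C standard library exposes this three-way distinction directly through the `fpclassify` macro in `<math.h>`. A short sketch, assuming IEEE 754 binary64 and that subnormals are not flushed to zero by the compiler or hardware:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Map fpclassify's result to a readable label. */
static const char *kind(double x) {
    switch (fpclassify(x)) {
    case FP_NORMAL:    return "normal";
    case FP_SUBNORMAL: return "subnormal";
    case FP_ZERO:      return "zero";
    case FP_INFINITE:  return "infinite";
    case FP_NAN:       return "nan";
    default:           return "unknown";
    }
}

int main(void) {
    printf("DBL_MIN     -> %s\n", kind(DBL_MIN));     /* normal    */
    printf("DBL_MIN / 2 -> %s\n", kind(DBL_MIN / 2)); /* subnormal */
    printf("0.0         -> %s\n", kind(0.0));         /* zero      */
    return 0;
}
```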
See also
- Normalized number
- Half-precision floating-point format
- Single-precision floating-point format
- Double-precision floating-point format