Machine Epsilon
Goal
Understand machine epsilon $\varepsilon_{\mathrm{mach}}$ as the precision limit of floating-point arithmetic. Master its IEEE 754 values, the relationship with unit roundoff, and applications in convergence testing and approximate comparison.
Prerequisites
- IEEE 754 floating-point structure
- Concept of rounding error
1. Definition
Machine epsilon $\varepsilon_{\mathrm{mach}}$ is the fundamental quantity indicating the precision limit of a floating-point number system. Two common definitions:
Definition 1 (Gap Definition)
The smallest floating-point number $\varepsilon$ such that $\mathrm{fl}(1 + \varepsilon) > 1$.
Definition 2 (Relative Rounding Error Bound)
For any real number $x$ in range, when rounded to the nearest floating-point number $\mathrm{fl}(x)$:
$$\left|\frac{\mathrm{fl}(x) - x}{x}\right| \leq u$$
The smallest such $u$ is the unit roundoff, and $u = \varepsilon_{\mathrm{mach}} / 2$.
For a floating-point system with base $\beta$ and $p$ significand digits:
$$\varepsilon_{\mathrm{mach}} = \beta^{1-p}$$
IEEE 754 uses $\beta = 2$, so $\varepsilon_{\mathrm{mach}} = 2^{1-p}$.
2. IEEE 754 Values
| Format | Significand bits $p$ | $\varepsilon_{\mathrm{mach}} = 2^{1-p}$ | Approx. value |
|---|---|---|---|
| Half (float16) | 11 | $2^{-10}$ | $\approx 9.77 \times 10^{-4}$ |
| Single (float32) | 24 | $2^{-23}$ | $\approx 1.19 \times 10^{-7}$ |
| Double (float64) | 53 | $2^{-52}$ | $\approx 2.22 \times 10^{-16}$ |
| Quadruple (float128) | 113 | $2^{-112}$ | $\approx 1.93 \times 10^{-34}$ |
Available as constants in most languages: C/C++ `DBL_EPSILON`, Python `sys.float_info.epsilon`, MATLAB `eps`.
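As a quick check (a minimal Python sketch; only double precision is exercised, since the standard library has no float16 or float128 type), the runtime's reported epsilon matches $2^{1-p}$ from the table:

```python
import sys

# Double precision (float64) is Python's built-in float.
p = sys.float_info.mant_dig        # significand digits, p = 53
eps = sys.float_info.epsilon       # machine epsilon as reported by the runtime

assert eps == 2.0 ** (1 - p)       # eps_mach = 2^(1-p) = 2^-52
print(eps)                         # 2.220446049250313e-16
```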
3. Unit Roundoff
The unit roundoff $u$ is central to floating-point error analysis:
$$u = \frac{\varepsilon_{\mathrm{mach}}}{2} = \frac{1}{2} \beta^{1-p}$$
For IEEE 754 double precision, $u = 2^{-53} \approx 1.11 \times 10^{-16}$.
The fundamental theorem of floating-point arithmetic: for $\circ \in \{+, -, \times, /\}$,
$$\mathrm{fl}(a \circ b) = (a \circ b)(1 + \delta), \qquad |\delta| \leq u,$$
where $\mathrm{fl}(\cdot)$ denotes round-to-nearest. This property is the starting point for all error analysis.
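The bound $|\delta| \leq u$ can be verified exactly with rational arithmetic (a small sketch using Python's `fractions` module; the operands 0.1 and 0.2 are arbitrary samples):

```python
from fractions import Fraction

u = Fraction(1, 2**53)                 # unit roundoff for double precision

# Exact rational values of the doubles nearest 0.1 and 0.2
a, b = Fraction(0.1), Fraction(0.2)

# fl(a + b): the computed double-precision sum, as an exact rational
computed = Fraction(0.1 + 0.2)

delta = (computed - (a + b)) / (a + b)  # fl(a+b) = (a+b)(1 + delta)
assert abs(delta) <= u                  # |delta| <= u holds exactly
```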
4. Measurement Algorithm
Machine epsilon can be measured with a simple algorithm:
Machine Epsilon Measurement
- Initialize $\varepsilon = 1$.
- While $\mathrm{fl}(1 + \varepsilon) > 1$, set $\varepsilon \leftarrow \varepsilon / 2$.
- Return $\varepsilon_{\mathrm{mach}} = 2\varepsilon$ (twice the first value that failed the test, i.e., the last value for which $\mathrm{fl}(1 + \varepsilon) > 1$).
    function computeMachineEpsilon():
        eps = 1.0
        while (1.0 + eps) > 1.0:
            eps = eps / 2.0
        return 2.0 * eps
Note: Compiler optimizations (especially extended precision registers) may affect the result. In C/C++, use the volatile qualifier or control optimization levels.
5. Applications
Convergence Tolerance
Iterative method stopping criteria are often based on $\varepsilon_{\mathrm{mach}}$:
- Newton's method: $|f(x_n)| < \sqrt{\varepsilon_{\mathrm{mach}}}$
- Eigenvalue accuracy: $O(\varepsilon_{\mathrm{mach}} \|A\|)$
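The $\sqrt{\varepsilon_{\mathrm{mach}}}$ tolerance can be illustrated with a short sketch ($f(x) = x^2 - 2$ is an arbitrary example function chosen here, not tied to any particular library):

```python
import math
import sys

tol = math.sqrt(sys.float_info.epsilon)   # ~1.49e-8

# Newton's method for f(x) = x^2 - 2, with f'(x) = 2x
x = 1.0
for _ in range(50):
    fx = x * x - 2.0
    if abs(fx) < tol:                     # stop once |f(x)| < sqrt(eps_mach)
        break
    x -= fx / (2.0 * x)

assert abs(x - math.sqrt(2.0)) < 1e-8     # converged to sqrt(2)
```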
Approximate Floating-Point Comparison
Since equality comparison of floating-point numbers is unreliable due to rounding, use:
$$|a - b| \leq \varepsilon_{\mathrm{mach}} \cdot \max(|a|, |b|)$$
Condition Number
The relative error in a numerical solution is roughly $\kappa \cdot \varepsilon_{\mathrm{mach}}$, where $\kappa$ is the condition number. When $\kappa \cdot \varepsilon_{\mathrm{mach}} \approx 1$, the result is completely unreliable.
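Python's `math.isclose` implements a relative test of the same shape as the comparison rule above, with `rel_tol` playing the role of the tolerance (its default is $10^{-9}$, not $\varepsilon_{\mathrm{mach}}$, so the tolerance is passed explicitly here):

```python
import math
import sys

a = 0.1 + 0.2
b = 0.3

assert a != b                  # exact comparison fails due to rounding
# |a - b| <= eps_mach * max(|a|, |b|) holds, so the approximate test passes
assert math.isclose(a, b, rel_tol=sys.float_info.epsilon)
```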
6. Worked Examples
Example 1: Floating-Point Numbers Near 1
In double precision ($\varepsilon_{\mathrm{mach}} = 2^{-52}$), the next floating-point number after $1$ is:
$$1 + \varepsilon_{\mathrm{mach}} = 1 + 2^{-52} \approx 1.0000000000000002$$
No floating-point number exists between $1$ and $1 + \varepsilon_{\mathrm{mach}}$.
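This gap can be confirmed directly with `math.nextafter` (available since Python 3.9):

```python
import math

eps = 2.0 ** -52
nxt = math.nextafter(1.0, 2.0)          # the next double above 1.0

assert nxt == 1.0 + eps                 # the gap at 1.0 is exactly eps_mach
assert math.nextafter(nxt, 0.0) == 1.0  # stepping back down lands on 1.0
print(nxt)                              # 1.0000000000000002
```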
Example 2: Estimating Significant Digits
Double precision $\varepsilon_{\mathrm{mach}} \approx 2.22 \times 10^{-16}$ gives $\log_{10}(\varepsilon_{\mathrm{mach}}^{-1}) \approx 15.65$, so about 15--16 decimal significant digits are expected.
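A quick numeric check of this digit count:

```python
import math

eps = 2.0 ** -52
digits = -math.log10(eps)          # log10(1/eps) = 52 * log10(2)

assert abs(digits - 15.65) < 0.01  # about 15-16 decimal significant digits
```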
7. Pitfalls
- Inconsistent definitions: Some references use $\varepsilon_{\mathrm{mach}}$ to mean unit roundoff $u = \varepsilon_{\mathrm{mach}}/2$. Be aware of potential confusion.
- Not the smallest positive float: Machine epsilon is entirely different from the smallest positive floating-point number (the smallest positive subnormal, $2^{-1074}$ for double).
- Extended precision: On x86, the FPU may use 80-bit extended precision, causing measured values to differ from expected.
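The second pitfall can be seen directly in the standard library (`math.ulp` requires Python 3.9+):

```python
import math
import sys

eps = sys.float_info.epsilon             # spacing of doubles near 1.0
tiny = math.ulp(0.0)                     # smallest positive (subnormal) double

assert eps == 2.0 ** -52
assert tiny == 2.0 ** -1074
# sys.float_info.min is the smallest *normal* double, between the two
assert tiny < sys.float_info.min < eps
```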
8. Frequently Asked Questions
Q1. What is machine epsilon?
The smallest floating-point number $\varepsilon$ such that $\mathrm{fl}(1 + \varepsilon) > 1$. It indicates the relative precision limit of the floating-point system. For double precision, $\varepsilon_{\mathrm{mach}} \approx 2.22 \times 10^{-16}$.
Q2. What are the single and double precision values?
Single (float32): $2^{-23} \approx 1.19 \times 10^{-7}$. Double (float64): $2^{-52} \approx 2.22 \times 10^{-16}$.
Q3. What is machine epsilon used for?
As the fundamental unit for rounding error analysis: setting convergence tolerances (e.g., $\mathrm{tol} = \sqrt{\varepsilon_{\mathrm{mach}}}$) and approximate floating-point comparison.
9. References
- Wikipedia, "Machine epsilon"
- Wikipedia, "IEEE 754"
- D. Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic," ACM Computing Surveys, 23(1), 1991.
- N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., SIAM, 2002.
Implementation in calx
The concepts in this article are implemented in calx's Float class.