Round-off Error
Goal
Understand how round-off errors accumulate quantitatively over many operations, learn the trade-off with truncation error, and master reduction techniques.
Prerequisites
- Machine epsilon $\varepsilon_{\text{mach}}$ and the rounding model $\text{fl}(a \circ b) = (a \circ b)(1+\delta)$
- Significant digits and rounding errors (catastrophic cancellation, absorption)
1. Context
Round-off error arises when representing a real number in finite-precision floating-point. The relative error of a single rounding is bounded by machine epsilon $\varepsilon_{\text{mach}}$, but over many operations the accumulated error can become non-negligible.
For the definition, generation mechanism, and rounding modes, see the earlier chapters. This chapter focuses on quantitative accumulation analysis and reduction techniques.
2. Accumulation Models
Random Walk Model
Assuming round-off errors are random and independent over $n$ operations, the cumulative error grows as
$$E \sim \sqrt{n} \cdot \varepsilon_{\text{mach}}$$

(random walk). In practice, error signs tend to fluctuate somewhat randomly, so many computations exhibit behavior close to this model.
Worst Case
When errors always accumulate in the same direction:
$$E \sim n \cdot \varepsilon_{\text{mach}}$$

A naive sequential summation algorithm exhibits behavior close to this worst case.
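The worst-case trend is easy to observe directly. The sketch below (my example, assuming NumPy is available) repeatedly adds 0.1 — which is not exactly representable in binary — in float32, so every addition rounds with a consistently signed bias, and the relative error grows with $n$:

```python
# Sketch (my example, assuming NumPy): naive float32 summation of 0.1,
# which is not exactly representable, so every addition rounds.
import numpy as np

def naive_sum32(n):
    """Left-to-right float32 summation of n copies of 0.1."""
    s = np.float32(0.0)
    x = np.float32(0.1)
    for _ in range(n):
        s += x
    return float(s)

for n in (1_000, 10_000, 100_000):
    rel_err = abs(naive_sum32(n) - 0.1 * n) / (0.1 * n)
    print(f"n = {n:>7}: relative error = {rel_err:.2e}")
```

Comparing against the (far more accurate) float64 value `0.1 * n` shows the error increasing with the number of additions, as the worst-case model predicts.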
3. Summation Algorithm Comparison
When computing $\displaystyle S = \sum_{i=1}^{n} a_i$ for $n$ floating-point numbers, algorithm choice dramatically affects the accumulated error.
| Algorithm | Error Bound | Extra Memory | Notes |
|---|---|---|---|
| Naive summation | $O(n\varepsilon)$ | $O(1)$ | Simplest. Worst case. |
| Sorted summation | $O(n\varepsilon)$ | $O(1)$ | Add from smallest. Improved constant. Requires $O(n \log n)$ sort. |
| Pairwise summation | $O(\log n \cdot \varepsilon)$ | $O(\log n)$ | Recursively add halves. Used by NumPy's np.sum. |
| Kahan summation | $O(\varepsilon)$ | $O(1)$ | Compensation variable tracks errors. Independent of $n$. |
Kahan Compensated Summation
Maintains a compensation variable that captures the low-order part lost to rounding at each addition and feeds it back into the next step. See Loss of Significance for pseudocode and details.
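As a concrete sketch, here is the standard algorithm written in Python (double precision; this is the textbook formulation, not the earlier chapter's pseudocode):

```python
# A minimal sketch of Kahan compensated summation (standard algorithm).
def kahan_sum(values):
    total = 0.0
    c = 0.0                  # compensation: low-order bits lost so far
    for x in values:
        y = x - c            # re-inject the part lost previously
        t = total + y        # low-order digits of y may be lost here
        c = (t - total) - y  # algebraically zero; captures what was lost
        total = t
    return total

print(sum([0.1] * 10))        # naive: 0.9999999999999999
print(kahan_sum([0.1] * 10))  # compensated: 1.0
```

The key line is `c = (t - total) - y`: in exact arithmetic it is zero, but in floating point it recovers exactly the rounding error of the addition, which is then subtracted from the next term.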
Pairwise Summation
Recursively splits the array in half and sums each half. Only $\log_2 n$ addition levels are needed, bounding accumulated error at $O(\log n \cdot \varepsilon)$.
```
function pairwiseSum(a[], lo, hi):       // sums a[lo .. hi-1]
    if hi - lo < 16:                     // base case: naive sum
        s = 0.0
        for i = lo to hi-1:
            s += a[i]
        return s
    mid = (lo + hi) / 2                  // integer division
    return pairwiseSum(a, lo, mid) + pairwiseSum(a, mid, hi)
```
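The gap between the $O(n\varepsilon)$ and $O(\log n \cdot \varepsilon)$ bounds can be observed directly. In the sketch below (assuming NumPy; the values and array size are illustrative), the last entry of `np.cumsum` equals the naive sequential sum, `np.sum` uses NumPy's pairwise summation, and `math.fsum` provides a correctly rounded reference:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
a32 = rng.random(1_000_000).astype(np.float32)

exact = math.fsum(map(float, a32))   # correctly rounded reference
naive = float(np.cumsum(a32)[-1])    # sequential left-to-right sum
pairwise = float(np.sum(a32))        # NumPy's pairwise summation

print(f"naive    |error| = {abs(naive - exact):.3e}")
print(f"pairwise |error| = {abs(pairwise - exact):.3e}")
```

In float32 the difference is stark: the sequential sum drifts by many ulps of the final result, while the pairwise sum stays within a small multiple of machine epsilon.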
Other Techniques
- Avoiding catastrophic cancellation: Transform formulas to avoid subtraction of nearly equal values.
- Higher precision: Use quadruple or arbitrary precision when double precision is insufficient.
- Choosing stable algorithms: Prefer backward-stable algorithms.
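For the first item, the classic quadratic-formula example (my illustration, not from the text): when $b^2 \gg 4ac$ with $b > 0$, the root $(-b + \sqrt{b^2-4ac})/(2a)$ subtracts two nearly equal numbers, while multiplying by the conjugate eliminates the subtraction.

```python
import math

def small_root_naive(a, b, c):
    # Cancels catastrophically when 4ac is tiny relative to b*b (b > 0).
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def small_root_stable(a, b, c):
    # Conjugate form: no subtraction of nearly equal values.
    return (2 * c) / (-b - math.sqrt(b * b - 4 * a * c))

a, b, c = 1.0, 1e8, 1.0            # true small root is very close to -1e-8
print(small_root_naive(a, b, c))   # several digits wrong
print(small_root_stable(a, b, c))  # accurate to near full precision
```

Both expressions are algebraically identical; only the stable form keeps the relative error near machine epsilon.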
4. Trade-off with Truncation Error
Total numerical error is the sum of round-off error and truncation error. In many situations, improving accuracy causes round-off error to grow, so an optimal parameter exists.
Numerical Differentiation Example
Consider the forward difference $f'(x) \approx \dfrac{f(x+h) - f(x)}{h}$.
- Truncation error (Taylor remainder): $\dfrac{h}{2}|f''(\xi)| = O(h)$ — decreases as $h$ shrinks.
- Round-off error: $f(x+h)$ and $f(x)$ each carry $O(\varepsilon)$ rounding error. The difference $f(x+h)-f(x)$ also has $O(\varepsilon)$ error, but dividing by $h$ amplifies it to $O(\varepsilon/h)$.
Minimizing the total error $E(h) \sim C_1 h + C_2 \varepsilon/h$ gives the optimal step $h^* = \sqrt{C_2\varepsilon/C_1} \sim \sqrt{\varepsilon} \approx 10^{-8}$ in double precision; a smaller $h$ actually degrades accuracy. Here $C_1 = |f''(\xi)|/2$ and $C_2 \approx 2|f(x)|$, so the exact $h^*$ depends on the function.
Plotting $E_{\text{trunc}}=C_1 h$, $E_{\text{round}}=C_2\varepsilon/h$, and the total error on a log-log scale shows the two lines crossing at the optimal $h^*$, where the total error reaches its minimum.
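The trade-off can be reproduced with a simple sweep over $h$ (a sketch using $f(x)=\sin x$ at $x=1$, an example function of my choosing, with $\cos 1$ as the exact derivative):

```python
# Sweep step sizes h for the forward difference of f(x) = sin(x) at x = 1.
import math

x = 1.0
exact = math.cos(x)   # exact derivative of sin at x

best_h, best_err = None, float("inf")
for k in range(1, 16):
    h = 10.0 ** (-k)
    err = abs((math.sin(x + h) - math.sin(x)) / h - exact)
    print(f"h = 1e-{k:02d}: error = {err:.3e}")
    if err < best_err:
        best_h, best_err = h, err

print(f"best h = {best_h:.0e}")  # typically near sqrt(eps) ~ 1e-8
```

The printed errors first shrink with $h$ (truncation-dominated) and then grow again (round-off-dominated), with the minimum near $\sqrt{\varepsilon}$.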
Two Mechanisms of Round-off Error Growth
In numerical differentiation, decreasing $h$ does not increase the number of operations. Round-off grows because dividing the $O(\varepsilon)$ rounding error in $f(x+h)-f(x)$ by a small $h$ amplifies the error (a different mechanism from §2). In numerical integration, however, smaller step sizes increase the number of subintervals $n=1/h$, causing round-off accumulation exactly as in the §2 model.
Series Truncation
Truncating the Taylor series for $e^x$ after the $N$-th term, $e^x \approx \sum_{k=0}^{N} x^k/k!$, also exhibits a trade-off.
- Truncation error: $O(|x|^{N+1}/(N+1)!)$ — decreases rapidly as $N$ increases.
- Round-off error: $N$ additions → $O(N\varepsilon)$ accumulation (§2 model).
For series like $e^x$ where terms decrease rapidly, round-off accumulation is typically not a concern at practical $N$ (a few dozen terms). The more serious issue is catastrophic cancellation when computing $e^{-x}$ ($x>0$) via $\sum (-x)^k/k!$, which requires reformulation such as $e^{-x}=1/e^x$ (though for very large $x$, $e^x$ itself overflows, requiring a different approach).
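The $e^{-x} = 1/e^x$ reformulation can be demonstrated directly (Python sketch; $x = 20$ and 60 terms are my choices of parameters):

```python
import math

def exp_taylor(x, n_terms=60):
    """Partial Taylor sum 1 + x + x^2/2! + ... (n_terms terms)."""
    s, term = 1.0, 1.0
    for k in range(1, n_terms):
        term *= x / k
        s += term
    return s

x = 20.0
direct = exp_taylor(-x)         # alternating terms up to ~4e7: cancellation
reformed = 1.0 / exp_taylor(x)  # all-positive series, then one division
exact_val = math.exp(-x)

print(f"direct    = {direct:.6e}")
print(f"reformed  = {reformed:.6e}")
print(f"math.exp  = {exact_val:.6e}")
```

The direct alternating sum loses essentially all significant digits (its intermediate terms reach $\sim 4\times 10^7$ while the answer is $\sim 2\times 10^{-9}$), whereas the reformulated version agrees with `math.exp` to many digits.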
5. Frequently Asked Questions
Q1. How does round-off error accumulate?
Over $n$ operations, if each rounding error is random, cumulative error grows as $O(\sqrt{n} \cdot \varepsilon)$. In the worst case (same-direction accumulation), it grows as $O(n \cdot \varepsilon)$. Kahan compensated summation keeps summation error at $O(\varepsilon)$ regardless of $n$.
Q2. Round-off error vs. truncation error?
Round-off error comes from finite-precision arithmetic; truncation error from replacing infinite mathematical processes with finite approximations. Total error is the sum of both; as step size decreases, truncation error decreases but round-off error increases.
Q3. Error bounds for summation algorithms?
Naive sequential: $O(n\varepsilon)$, sorted: $O(n\varepsilon)$ with improved constant, pairwise: $O(\log n \cdot \varepsilon)$, Kahan compensated: $O(\varepsilon)$.
6. References
- Wikipedia, "Round-off error"
- Wikipedia, "Kahan summation algorithm"
- Wikipedia, "Pairwise summation"
- N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., SIAM, 2002, Chapter 4 (Summation).
- D. Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic," ACM Computing Surveys, vol. 23, no. 1, pp. 5--48, 1991.