Relative Error

Goal

Understand the definition and properties of relative error, its fundamental relationship with floating-point numbers, and error propagation rules for arithmetic operations (especially the risk of catastrophic cancellation).

Prerequisites

Definition of absolute error, concept of machine epsilon

1. Definition

Relative error is the absolute error divided by the absolute value of the true value, yielding a dimensionless quantity.

$$E_{\text{rel}} = \frac{|x - \tilde{x}|}{|x|} \quad (x \neq 0)$$

Expressed as a percentage: $100 \times E_{\text{rel}}$ (%).

Properties of Relative Error

  • Dimensionless (has no units)
  • Scale-invariant: multiplying $x$ by a constant does not change the relative error
  • Undefined when the true value is zero
  • Well-suited for floating-point arithmetic

Relative error indicates what fraction of the true value the error represents, making it suitable for comparing quantities at different scales. For example, relative error should be used when comparing the measurement precision of weight and height.
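A minimal sketch in plain Python (the function name `relative_error` is illustrative, not from any library) makes the definition and its scale invariance concrete:

```python
def relative_error(true_value, approx):
    """Relative error |x - x~| / |x|; undefined when the true value is zero."""
    if true_value == 0:
        raise ValueError("relative error is undefined for a zero true value")
    return abs(true_value - approx) / abs(true_value)

# Scale invariance: the same 5 % error at two very different scales
e_small = relative_error(2.0, 2.1)        # ~0.05
e_large = relative_error(2000.0, 2100.0)  # ~0.05
```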

2. Relationship with Floating-Point Numbers

IEEE 754 floating-point numbers represent values in the form $\pm m \times 2^e$ ($m$: significand, $e$: exponent). The essential characteristic of this representation is that precision scales with the magnitude of the number.

When rounding any nonzero real $x$ to the nearest floating-point number $\text{fl}(x)$,

$$\text{fl}(x) = x(1 + \delta), \quad |\delta| \le \varepsilon_{\text{mach}}$$

holds, where $\varepsilon_{\text{mach}}$ is the machine epsilon in the unit-roundoff sense ($2^{-53} \approx 1.11 \times 10^{-16}$ for double precision).

This means the relative error of floating-point representation is always bounded by machine epsilon -- the fundamental reason relative error is well-suited for floating-point numbers.
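This bound can be checked empirically with exact rational arithmetic. The sketch below assumes Python's standard `fractions` module and random rationals well inside the normal range of doubles, where the round-to-nearest bound applies:

```python
import random
from fractions import Fraction

U = Fraction(1, 2 ** 53)  # unit roundoff for IEEE 754 double precision

random.seed(0)
for _ in range(1000):
    # A random rational x well inside the normal range of doubles
    x = Fraction(random.randint(1, 10 ** 12), random.randint(1, 10 ** 12))
    fl_x = Fraction(float(x))   # float(x) rounds x to the nearest double
    delta = abs(fl_x - x) / x   # exact relative representation error
    assert delta <= U           # fl(x) = x(1 + delta), |delta| <= eps_mach
```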

3. Propagation Rules for Relative Error

Let $\tilde{x} = x(1 + \epsilon_x)$, $\tilde{y} = y(1 + \epsilon_y)$ ($\epsilon_x$, $\epsilon_y$: relative errors).

Multiplication

$$\tilde{x}\tilde{y} = xy(1 + \epsilon_x)(1 + \epsilon_y) \approx xy(1 + \epsilon_x + \epsilon_y)$$

Thus, the relative error of multiplication is approximately the sum of the relative errors of the factors.

Division

$$\frac{\tilde{x}}{\tilde{y}} \approx \frac{x}{y}(1 + \epsilon_x - \epsilon_y)$$

Division also propagates relative error additively.

Addition and Subtraction

$$\tilde{x} + \tilde{y} = (x + y) + x\epsilon_x + y\epsilon_y$$

Therefore the relative error of the result is

$$\epsilon_{x+y} \approx \frac{x}{x+y}\epsilon_x + \frac{y}{x+y}\epsilon_y$$

When $x \approx -y$ (subtraction of nearly equal values), $|x+y| \ll |x|$, causing explosive growth of relative error. This is catastrophic cancellation.
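Both behaviors can be observed numerically. The sketch below uses hand-picked values and perturbation sizes (all illustrative) to contrast the two regimes:

```python
x, y = 1.0001, 1.0            # nearly equal values
ex, ey = 3e-10, 1e-10         # known relative errors in the inputs

xt = x * (1 + ex)
yt = y * (1 + ey)

# Multiplication: result's relative error ~ ex + ey (gentle, additive growth)
mul_err = abs(xt * yt - x * y) / abs(x * y)

# Subtraction of nearly equal values: the same input errors are amplified
# by the factor |x| / |x - y| ~ 10^4 here
sub_err = abs((xt - yt) - (x - y)) / abs(x - y)
```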

Figure 1. Overview of relative error propagation in arithmetic. Multiplication and division propagate relative error additively (safe: error grows gradually), while subtraction of nearly equal values loses significant digits and risks catastrophic cancellation (relative error grows explosively).

4. Relationship with Significant Digits

Having $n$ significant digits in an approximation $\tilde{x}$ is roughly equivalent to the relative error being at most $5 \times 10^{-n}$ (half a unit in the $n$-th significant digit, in the worst case of a leading digit of 1).

$$E_{\text{rel}} \le 5 \times 10^{-n} \quad \Leftrightarrow \quad \text{approx. } n \text{ significant digits}$$

Double precision floating-point (53-bit significand) corresponds to approximately 15--16 significant digits.
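As a rough illustration, assuming the rule-of-thumb convention that a relative error of at most $5 \times 10^{-n}$ corresponds to about $n$ significant digits (the function name is an illustrative choice):

```python
import math

def significant_digits(rel_err):
    """Largest n with rel_err <= 5 * 10**(-n) (a common rule of thumb)."""
    return math.floor(math.log10(5.0 / rel_err))

n_double = significant_digits(2.0 ** -53)  # double precision round-off
n_coarse = significant_digits(6.92e-4)     # a ~0.07 % approximation
```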

5. Caveats of Relative Error

  • True value is zero: When $x = 0$, relative error involves division by zero and is undefined. Use absolute error or a modified definition such as $|x - \tilde{x}| / \max(|x|, \delta)$.
  • True value unknown: When the true value is unknown, an approximate relative error $|x_n - x_{n-1}| / |x_n|$ using the best approximation as a reference is commonly used.
  • Asymmetry: $|x - \tilde{x}|/|x|$ and $|x - \tilde{x}|/|\tilde{x}|$ generally differ, since the denominators differ. The former is called the true relative error, the latter the approximate relative error.
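The first two caveats can be sketched as small helpers (the names and the default floor are illustrative choices, not standard APIs):

```python
def safe_relative_error(true_value, approx, floor=1e-12):
    """Relative error with a floor in the denominator, so a zero (or tiny)
    true value degrades gracefully instead of dividing by zero."""
    return abs(true_value - approx) / max(abs(true_value), floor)

def iterate_relative_error(x_new, x_old):
    """Approximate relative error between successive iterates, used when
    the true value is unknown: |x_n - x_{n-1}| / |x_n|."""
    return abs(x_new - x_old) / abs(x_new)
```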

6. Computation Examples

Example 1: Approximation of a Physical Constant

Relative error of the approximation $\tilde{c} = 3.0 \times 10^8\;\text{m/s}$ for the speed of light $c = 299\,792\,458\;\text{m/s}$:

$$E_{\text{rel}} = \frac{|299\,792\,458 - 300\,000\,000|}{299\,792\,458} = \frac{207\,542}{299\,792\,458} \approx 6.92 \times 10^{-4} \approx 0.069\%$$
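Checking the arithmetic directly:

```python
c_true = 299_792_458.0   # speed of light in m/s (exact by definition)
c_approx = 3.0e8

rel = abs(c_true - c_approx) / abs(c_true)
percent = 100 * rel
# rel ~ 6.92e-4, i.e. about 0.069 %
```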

Example 2: Relative Error Amplification by Cancellation

Let $x = 1.000\,000\,1$ and $y = 1.000\,000\,0$, and compute the difference $x - y = 1.0 \times 10^{-7}$.

Assume each value has relative error $\epsilon = 10^{-15}$ (double precision round-off).

The maximum absolute error of the result is $|x|\epsilon + |y|\epsilon \approx 2 \times 10^{-15}$, so the relative error of the result is

$$\frac{2 \times 10^{-15}}{10^{-7}} = 2 \times 10^{-8}$$

This is 7 orders of magnitude worse than the original $10^{-15}$. This is a classic example of catastrophic cancellation.
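The same effect is visible in an actual double-precision computation. In this sketch the exact size of the amplified error depends on how the literals round to binary, so only its order of magnitude is asserted:

```python
x = 1.0000001          # stored with relative round-off error ~1e-16
y = 1.0000000
exact_diff = 1.0e-7    # the mathematically exact difference

diff = x - y           # the subtraction itself is exact (Sterbenz lemma),
                       # but the tiny round-off already present in x dominates
rel = abs(diff - exact_diff) / exact_diff
# rel is many orders of magnitude larger than the ~1e-16 input errors
```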

7. Frequently Asked Questions

Q1. What is relative error?

A dimensionless quantity obtained by dividing the absolute error by the absolute value of the true value, representing the proportion of error relative to the true value. Multiply by 100 for a percentage. Suitable for comparing precision across different scales.

Q2. Why is relative error well-suited for floating-point numbers?

Floating-point numbers represent values using a significand and exponent, so precision scales with magnitude. Therefore, the relative error from rounding is always bounded by machine epsilon.

Q3. How is relative error handled when the true value is zero?

When $x = 0$, relative error involves division by zero and is undefined. Use absolute error or a modified definition such as $|x - \tilde{x}| / \max(|x|, \delta)$.

8. References

  • Wikipedia, "Error" (Japanese)
  • Wikipedia, "Approximation error" (English)
  • Wikipedia, "Relative change and difference" (English)
  • D. Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic," ACM Computing Surveys, vol. 23, no. 1, pp. 5--48, 1991.
  • N. J. Higham, Accuracy and Stability of Numerical Algorithms, 2nd ed., SIAM, 2002.