Euler's Method

Goal of This Page

Understand the recurrence of the forward Euler method and its geometric meaning as a tangent-line approximation, and learn the concepts of error order, stability, and implicit methods.

1. Problem Setting

Consider the initial value problem (IVP):

\begin{equation} \frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0 \label{eq:ivp} \end{equation}

Integrating both sides from $x_n$ to $x_{n+1}$ yields

$$y(x_{n+1}) = y(x_n) + \int_{x_n}^{x_{n+1}} f\bigl(x,\, y(x)\bigr)\, dx$$

If the integral on the right-hand side could be evaluated exactly, one would obtain the exact solution. In general, however, the unknown function $y(x)$ appears inside $f$, making analytical computation impossible. Approximating this integral by some means is therefore the fundamental idea behind numerical methods.

When the solution cannot be obtained analytically, it is necessary to construct an approximate solution numerically. Divide the interval $[x_0, x_0 + L]$ into equally spaced subintervals:

$$x_n = x_0 + nh, \quad n = 0, 1, 2, \dots, N, \qquad h = \frac{L}{N}$$

and compute the approximation $y_n$ to $y(x_n)$ at each grid point $x_n$ successively.

Figure 1: The interval $[x_0, x_0 + L]$ is divided into $N$ equal parts of width $h$, placing grid points $x_0, x_1, \dots, x_N$

2. Forward Euler Method (Explicit Euler Method)

Definition: Forward Euler Method

Starting from the initial value $y_0$, compute $y_1, y_2, \dots$ successively by the following recurrence:

$$y_{n+1} = y_n + h \cdot f(x_n, y_n)$$
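In code, the recurrence is a single loop. A minimal sketch in Python (the function name `euler_forward` and its signature are illustrative, not from a particular library):

```python
def euler_forward(f, x0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)  # advance along the tangent at (x_n, y_n)
        x = x + h
        xs.append(x)
        ys.append(y)
    return xs, ys
```

For $y' = y$, $y(0) = 1$, $h = 0.5$, four steps reproduce the values of the worked example in Section 2.3 ($y_4 = 5.0625$).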

2.1 Derivation

The method can be derived from the Taylor expansion. For the exact solution $y(x)$,

$$y(x_{n+1}) = y(x_n) + h \, y'(x_n) + \frac{h^2}{2} y''(x_n) + O(h^3) = y(x_n) + h \, f(x_n, y(x_n)) + O(h^2)$$

Dropping the $O(h^2)$ terms yields the forward Euler recurrence.

The same result can also be obtained from the integral perspective. In the integral form from Section 1, approximating the integrand $f(x, y(x))$ on the interval $[x_n, x_{n+1}]$ by its value at the left endpoint $f(x_n, y_n)$ (the left-rectangle rule) gives

$$\int_{x_n}^{x_{n+1}} f(x,\, y(x))\, dx \approx h \cdot f(x_n, y_n)$$

which yields the forward Euler recurrence. In other words, the forward Euler method amounts to approximating $f$ by a step function on each subinterval and integrating.

Figure 2: Step approximation by the left-rectangle rule — on each subinterval $[x_n, x_{n+1}]$, $f$ is approximated by the left-endpoint value $f(x_n, y_n)$, and the integral is approximated by the area of the orange rectangles

2.2 Geometric Interpretation

The integral perspective above can be rephrased geometrically as follows. From equation $\eqref{eq:ivp}$, $f(x, y) = dy/dx$, so $f(x_n, y_n)$ is the slope of the solution curve at the point $(x_n, y_n)$, that is, the slope of the tangent line itself. Thus, "approximating $f$ by its value at the left endpoint" amounts to replacing the solution curve by its tangent line and proceeding along it.

In other words, the forward Euler method advances along the tangent line at the point $(x_n, y_n)$ by the step size $h$, and takes the resulting point as the next approximation $y_{n+1}$. Advancing $h$ in the $x$-direction produces a change of $h \cdot f(x_n, y_n)$ in the $y$-direction.

Figure 3: Tangent approximation of the forward Euler method ($y' = y$, $y(0)=1$, $h=0.5$) — advancing along the tangent at each point, but errors accumulate at every step

An important observation regarding Figure 3: the slope of each blue line segment is not the slope of the exact solution (green curve), but the slope $f(x_n, y_n)$ computed from the approximate value $y_n$. In the first interval $[x_0, x_1]$, we have $y_0 = y(x_0)$, so the two slopes coincide. From the second interval onward, however, the approximate value $y_n$ deviates from the exact value $y(x_n)$, causing the blue segments to have a different slope from the exact solution. This "advancing with an incorrect slope" is precisely what causes errors to accumulate.

2.3 Worked Example

Solve $y' = y$, $y(0) = 1$ (exact solution $y = e^x$) with $h = 0.5$ up to $x = 2$.

| $n$ | $x_n$ | $y_n$ (Euler) | $y(x_n)$ (Exact) | Error |
|---|---|---|---|---|
| 0 | 0.0 | 1.0000 | 1.0000 | 0.0000 |
| 1 | 0.5 | 1.5000 | 1.6487 | 0.1487 |
| 2 | 1.0 | 2.2500 | 2.7183 | 0.4683 |
| 3 | 1.5 | 3.3750 | 4.4817 | 1.1067 |
| 4 | 2.0 | 5.0625 | 7.3891 | 2.3266 |
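The table can be reproduced in a few lines of Python (a sketch; variable names are illustrative):

```python
import math

h = 0.5
rows = []
y = 1.0
for n in range(5):
    x = n * h
    exact = math.exp(x)
    rows.append((n, x, y, exact, exact - y))
    y += h * y  # forward Euler update: y' = y, so f(x, y) = y
for n, x, y_n, ex, err in rows:
    print(f"{n}  {x:.1f}  {y_n:.4f}  {ex:.4f}  {err:.4f}")
```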

With $h = 0.5$ the error is large, but reducing $h$ improves accuracy. Comparing the error at $x = 2$ for different step sizes:

Relationship between step size $h$ and error at $x = 2$:

| $h$ | Steps | $y_N$ (Euler) | Error |
|---|---|---|---|
| 0.5 | 4 | 5.0625 | 2.3266 |
| 0.1 | 20 | 6.7275 | $6.6 \times 10^{-1}$ |
| 0.01 | 200 | 7.3160 | $7.3 \times 10^{-2}$ |
| 0.001 | 2,000 | 7.3817 | $7.4 \times 10^{-3}$ |
| 0.0001 | 20,000 | 7.3883 | $7.4 \times 10^{-4}$ |

Reducing $h$ by a factor of 10 reduces the error by roughly a factor of 10. This reflects the fact that the Euler method is first-order accurate ($O(h)$).
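The factor-of-ten behaviour can be checked numerically (a sketch of the experiment, assuming the same test problem $y' = y$ on $[0, 2]$):

```python
import math

errors = []
for h in [0.5, 0.1, 0.01, 0.001]:
    n_steps = round(2.0 / h)
    y = 1.0
    for _ in range(n_steps):
        y += h * y  # forward Euler for y' = y
    errors.append(math.exp(2.0) - y)
    print(f"h = {h:<6} error = {errors[-1]:.4e}")
```

In the asymptotic regime (small $h$), each tenfold reduction of $h$ reduces the error by close to a factor of ten, consistent with $O(h)$ accuracy.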

Is smaller $h$ always better? From the viewpoint of truncation error, smaller $h$ yields better accuracy, but there are practical limits.

  • Accumulation of roundoff errors: If $h$ is extremely small, the number of steps becomes enormous and floating-point roundoff errors accumulate. At some point, the decrease in truncation error and the increase in roundoff error balance out, and reducing $h$ further does not improve accuracy.
  • Computational cost: Halving $h$ doubles the computational cost. Balancing accuracy and computational cost is essential.

When high accuracy is required, rather than making $h$ arbitrarily small, it is more efficient to use a higher-order method such as the Runge-Kutta method.

3. Error Analysis

3.1 Local Truncation Error

The local truncation error is the error committed in a single step, defined under the assumption that the step starts from the exact solution:

$$\tau_n = y(x_{n+1}) - \bigl[y(x_n) + h f(x_n, y(x_n))\bigr] = \frac{h^2}{2} y''(\xi_n) = O(h^2)$$

Since the $O(h^2)$ term in the Taylor expansion is truncated, the local truncation error is $O(h^2)$.

3.2 Global Error

The global error is defined as $e_n = y(x_n) - y_n$. Due to the accumulation over $N = L/h$ steps,

$$|e_N| \le C \cdot h$$

where the constant $C$ depends on $f$ and the interval length $L$ but not on $h$.

That is, the global error is $O(h)$. This is why the Euler method is said to be a first-order method. Intuitively, the local error of $O(h^2)$ accumulates over $N = O(1/h)$ steps, giving $O(h^2) \times O(1/h) = O(h)$.

To achieve $p$-th order accuracy, the local truncation error per step must be $O(h^{p+1})$.

4. Backward Euler Method (Implicit Euler Method)

Definition: Backward Euler Method

$$y_{n+1} = y_n + h \cdot f(x_{n+1}, y_{n+1})$$

From the integral perspective, whereas the forward Euler method corresponds to the left-rectangle rule (approximating $f$ by its value at the left endpoint of the interval), the backward Euler method corresponds to the right-rectangle rule (approximating $f$ by its value at the right endpoint $f(x_{n+1}, y_{n+1})$):

$$\int_{x_n}^{x_{n+1}} f(x,\, y(x))\, dx \approx h \cdot f(x_{n+1}, y_{n+1})$$
Figure 4: Step approximation by the right-rectangle rule — on each subinterval $[x_n, x_{n+1}]$, $f$ is approximated by the right-endpoint value $f(x_{n+1}, y_{n+1})$, and the integral is approximated by the area of the purple rectangles

Since the unknown $y_{n+1}$ appears on the right-hand side, a nonlinear equation must be solved at each step (typically using Newton's method). The computational cost is higher than the forward Euler method, but the stability is superior.
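A minimal sketch of one backward Euler step in Python, solving the scalar nonlinear equation by Newton's method with a finite-difference derivative (the helper name, the initial guess, and the tolerances are all illustrative choices):

```python
def euler_backward_step(f, x_next, y_n, h, tol=1e-12, max_iter=50):
    """One backward Euler step: solve g(y) = y - y_n - h*f(x_next, y) = 0."""
    y = y_n + h * f(x_next, y_n)  # forward Euler as the initial guess
    for _ in range(max_iter):
        g = y - y_n - h * f(x_next, y)
        eps = 1e-7
        # g'(y) = 1 - h * df/dy, approximated by a finite difference
        dg = 1.0 - h * (f(x_next, y + eps) - f(x_next, y)) / eps
        y_new = y - g / dg
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

For the linear test problem $y' = -10y$ with $h = 0.3$, $g$ is linear and Newton converges immediately to $y_1 = y_0/(1 - h\lambda) = 1/4$.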

4.1 Why Is the Stability Superior?

The intuitive reason can be seen with the decaying solution $y' = \lambda y$ ($\lambda < 0$). The exact solution decays as $y = e^{\lambda x} \to 0$.

The forward Euler method determines the next value based on "old information" $y_n$:

$$y_{n+1} = (1 + h\lambda)\, y_n$$

When $\lambda < 0$, if $h$ is too large, $|1 + h\lambda| > 1$ and the solution, which should be decaying, oscillates and diverges. For example, with $\lambda = -10$ and $h = 0.3$, $1 + h\lambda = 1 - 3 = -2$, and $|y_n|$ doubles at every step.

The backward Euler method determines the value using "information at the destination" $y_{n+1}$:

$$y_{n+1} = y_n + h\lambda\, y_{n+1} \implies y_{n+1} = \frac{y_n}{1 - h\lambda}$$

When $\lambda < 0$ and $h > 0$, $1 - h\lambda = 1 + h|\lambda| > 1$, so $|y_{n+1}| < |y_n|$ always holds. No matter how large $h$ is, the solution decays and never diverges. This is the essence of the stability of the backward Euler method.

In other words, the forward Euler method overshoots in rapidly changing situations by using the "old slope," whereas the backward Euler method automatically applies a brake via the "slope at the destination." The stiff equations for which this property is particularly important are discussed in Section 5.3.
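The $\lambda = -10$, $h = 0.3$ example above can be verified directly (a sketch):

```python
lam, h = -10.0, 0.3
yf = yb = 1.0
for _ in range(8):
    yf = (1 + h * lam) * yf  # forward Euler: amplification factor 1 - 3 = -2
    yb = yb / (1 - h * lam)  # backward Euler: amplification factor 1/4
print(yf, yb)  # forward grows to (-2)^8 = 256; backward decays to 4^-8
```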

5. Stability Analysis

To investigate stability, we use the test equation $y' = \lambda y$ ($\mathrm{Re}(\lambda) < 0$). The exact solution is $y = e^{\lambda x} \to 0$ ($x \to \infty$), and it is desirable for the numerical solution to reproduce this decay.

5.1 Forward Euler Method

Since $y_{n+1} = (1 + h\lambda) y_n$, the amplification factor is $R(z) = 1 + z$ with $z = h\lambda$. For $|y_n| \to 0$ as $n \to \infty$, the condition

$$|1 + z| < 1$$

must hold. This corresponds to the interior of a circle centered at $(-1, 0)$ with radius $1$ in the complex plane, known as the stability region. When $\lambda$ is a large negative real number, the step size restriction $h < 2/|\lambda|$ applies.

5.2 Backward Euler Method

Since $y_{n+1} = \frac{1}{1 - z} y_n$, the stability condition is $|1/(1-z)| < 1$, i.e., $|1 - z| > 1$. When $\mathrm{Re}(\lambda) < 0$, this condition is satisfied for all $h > 0$, meaning the backward Euler method is A-stable (stable for any step size).

5.3 Stiff Equations and Advantages of the Backward Euler Method

Equations that contain components with vastly different time constants are called stiff equations. For example, with $y' = -1000y$, we have $\lambda = -1000$, and the forward Euler method diverges unless $h < 2/|\lambda| = 0.002$. The backward Euler method, on the other hand, remains stable for any $h$. Although the cost per step is higher, the ability to take much larger step sizes greatly reduces the total number of steps, resulting in better overall efficiency. This is the advantage of the backward Euler method (and implicit methods in general) for stiff equations.
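The threshold $h < 2/|\lambda| = 0.002$ for $y' = -1000y$ can be demonstrated numerically (a sketch; the function name is illustrative):

```python
lam = -1000.0

def forward_euler_final(h, n_steps):
    """|y| after n_steps of forward Euler on y' = lam * y, y(0) = 1."""
    y = 1.0
    for _ in range(n_steps):
        y = (1 + h * lam) * y
    return abs(y)

print(forward_euler_final(0.0015, 100))  # h < 0.002: factor -0.5, decays
print(forward_euler_final(0.01, 100))    # h > 0.002: factor -9, blows up
```

By contrast, the backward Euler amplification factor at $h = 0.01$ is $1/(1 - h\lambda) = 1/11$, so the numerical solution decays regardless of the step size.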

6. Improved Euler Method (Heun's Method)

Definition: Improved Euler Method (Heun's Method)

Compute $y_{n+1}$ in the following two stages:

$$\tilde{y}_{n+1} = y_n + h \, f(x_n, y_n) \qquad \text{(predictor: forward Euler)}$$ $$y_{n+1} = y_n + \frac{h}{2} \bigl[f(x_n, y_n) + f(x_{n+1}, \tilde{y}_{n+1})\bigr] \qquad \text{(corrector: trapezoidal rule)}$$

From the integral perspective, this corresponds to the trapezoidal rule, which approximates $f$ by the average of its values at the left and right endpoints:

$$\int_{x_n}^{x_{n+1}} f(x,\, y(x))\, dx \approx \frac{h}{2}\bigl[f(x_n, y_n) + f(x_{n+1}, y_{n+1})\bigr]$$
Figure 5: Trapezoidal approximation — on each subinterval $[x_n, x_{n+1}]$, $f$ is approximated by the average of the left and right endpoint values, and the integral is approximated by the area of the green trapezoids

Since the trapezoidal approximation follows the curve more closely than the rectangular approximation, accuracy is improved:

  • Local truncation error: $O(h^3)$
  • Global error: $O(h^2)$ (second-order accuracy)

The improved Euler method is a type of two-stage, second-order Runge-Kutta method (RK2). Although explicit, it achieves twice the order of accuracy of the forward Euler method. The computational cost per step is two evaluations of $f$ (twice that of the forward Euler method), but since the step size required to achieve the same accuracy can be made considerably larger, the improved Euler method is often more efficient overall.
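A sketch of Heun's method in Python (function and variable names are illustrative), applied to the same test problem as Section 2.3:

```python
import math

def heun_step(f, x, y, h):
    """One step of the improved Euler (Heun) method."""
    k1 = f(x, y)                     # slope at the left endpoint
    k2 = f(x + h, y + h * k1)        # slope at the forward-Euler predictor
    return y + 0.5 * h * (k1 + k2)   # trapezoidal corrector

# y' = y, y(0) = 1, h = 0.5, integrated up to x = 2
y, h = 1.0, 0.5
for n in range(4):
    y = heun_step(lambda x, y: y, n * h, y, h)
print(y, math.exp(2.0) - y)
```

With the same step size $h = 0.5$, the error at $x = 2$ is about $0.42$, versus $2.33$ for the forward Euler method.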

7. Comparison of Methods

Comparing the characteristics of the three methods described here (forward Euler, backward Euler, and improved Euler/Heun) with the classical Runge-Kutta method discussed on a separate page gives the following table.

| Method | Local Error | Global Error | $f$ evals/step | A-stable |
|---|---|---|---|---|
| Forward Euler | $O(h^2)$ | $O(h)$ | 1 | No |
| Backward Euler | $O(h^2)$ | $O(h)$ | 1 + nonlinear solve | Yes |
| Improved Euler (Heun) | $O(h^3)$ | $O(h^2)$ | 2 | No |
| Classical Runge-Kutta (RK4) | $O(h^5)$ | $O(h^4)$ | 4 | No |

Comparing the relationship between step size $h$ and global error on a logarithmic scale for the four methods in the table above makes the difference in each method's order of accuracy clearly visible.

Figure 6: Relationship between step size $h$ and global error ($y' = y$, $y(0)=1$, error measured at $x=2$) — forward and backward Euler methods are both first-order $O(h)$ with similar slopes, the improved Euler method is second-order $O(h^2)$, and the Runge-Kutta method is fourth-order $O(h^4)$ with substantially improved accuracy

Since this figure uses $y' = y$ ($\lambda = 1$, a growing solution), the accuracy of the forward and backward Euler methods appears nearly identical. The true advantage of the backward Euler method lies not in accuracy but in stability, and it demonstrates its effectiveness with stiff equations (see Section 5.3).


Summary

Key Takeaways

  • Forward Euler method: $y_{n+1} = y_n + hf(x_n, y_n)$ (tangent approximation)
  • Local truncation error $O(h^2)$, global error $O(h)$ (first-order accuracy)
  • Backward Euler method (implicit): requires solving an equation at each step but is A-stable
  • Stability is analyzed using the test equation $y' = \lambda y$; the stability region is $|R(z)| < 1$
  • Improved Euler method (Heun's method): second-order accuracy, more accurate than the forward Euler method