Bisection Method

Basics

Published: 2026-03-30

1. Principle

The bisection method is a root-finding algorithm based on the Intermediate Value Theorem.

Intermediate Value Theorem

If $f$ is continuous on $[a,b]$ and $f(a)\cdot f(b) < 0$, then there exists at least one $c \in (a,b)$ such that $f(c) = 0$.

By applying this theorem iteratively, the interval is halved at each step, bracketing the root ever more tightly.

2. Algorithm

Bisection Method

  1. Choose an initial interval $[a,b]$ such that $f$ is continuous and $f(a)\cdot f(b) < 0$.
  2. Compute the midpoint $c = (a+b)/2$.
  3. If $f(c) = 0$, then $c$ is a root; terminate.
  4. If $f(a)\cdot f(c) < 0$, set $b \leftarrow c$; otherwise set $a \leftarrow c$.
  5. Repeat steps 2--4 until a stopping criterion is satisfied.
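The steps above can be sketched in a few lines of Python (a minimal illustration; the function name `bisect` and the default tolerances are choices made here, not taken from any particular library):

```python
import math

def bisect(f, a, b, eps=1e-8, n_max=100):
    """Root of f in [a, b] by bisection; assumes f(a) * f(b) < 0."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(n_max):                 # step 5: repeat until done
        c = (a + b) / 2                    # step 2: midpoint
        fc = f(c)
        if fc == 0 or (b - a) / 2 < eps:   # step 3 + interval-width stop
            return c
        if fa * fc < 0:                    # step 4: root lies in [a, c]
            b = c
        else:                              # step 4: root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2

print(bisect(math.sin, 2.5, 3.5))  # converges to pi
```

Note that only the sign of $f$ is used, never its magnitude or derivative, which is why the method works for any continuous sign-changing $f$.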
Figure 1. Iteration process of the bisection method ($f(x) = \sin x$ on $[2.5, 3.5]$, root $x^* = \pi$). Starting from $f(a_0) > 0$ and $f(b_0) < 0$, the sign of $f$ at the midpoint $c_n$ determines which half of the interval to keep, halving the interval at each step.

3. Convergence

After $n$ iterations, the width of the interval containing the root is

$$|b_n - a_n| = \frac{b_0 - a_0}{2^n}$$

and the error of the midpoint $c_n$ satisfies

$$|c_n - x^*| \leq \frac{b_0 - a_0}{2^{n+1}}$$

This is linear convergence: each iteration yields approximately one additional bit of accuracy. To obtain $k$ decimal digits of precision, approximately $k \times \log_2 10 \approx 3.32k$ iterations are required.
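The error bound above can be checked numerically. A quick sketch with the setup of Figure 1 ($f(x) = \sin x$ on $[2.5, 3.5]$, root $\pi$), verifying the bound at every step:

```python
import math

# Check the bound |c_n - x*| <= (b0 - a0) / 2^(n+1) for f(x) = sin x
a, b = 2.5, 3.5                # f(a) > 0, f(b) < 0, root x* = pi
width0 = b - a
for n in range(20):
    c = (a + b) / 2
    assert abs(c - math.pi) <= width0 / 2 ** (n + 1)
    if math.sin(a) * math.sin(c) < 0:
        b = c                  # root lies in [a, c]
    else:
        a = c                  # root lies in [c, b]
```

After these 20 iterations the bracket has width $2^{-20} \approx 10^{-6}$, i.e. roughly six correct decimal digits, consistent with the $3.32k$ estimate.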

Figure 2. Convergence speed comparison ($f(x) = e^x - 3$, root $x^* = \ln 3$, initial interval $[0, 3]$, measured values). Newton's method exhibits quadratic convergence (reaching machine precision in 7 iterations), the secant method shows superlinear convergence (9 iterations), while the bisection method converges linearly with oscillation, requiring 25 iterations.
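The ranking in Figure 2 is easy to reproduce in spirit with a short experiment (a sketch: the starting values chosen here, $x_0 = 3$ for Newton and $x_0 = 0$, $x_1 = 3$ for the secant method, are assumptions and may differ from those used for the figure):

```python
import math

f = lambda x: math.exp(x) - 3
root = math.log(3)
tol = 1e-12

# Bisection on [0, 3]: iterations until the midpoint is within tol
a, b = 0.0, 3.0
n_bis = 0
while abs((a + b) / 2 - root) > tol:
    c = (a + b) / 2
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c
    n_bis += 1

# Newton's method: x <- x - f(x)/f'(x), with f'(x) = e^x
x = 3.0
n_newton = 0
while abs(x - root) > tol:
    x -= (math.exp(x) - 3) / math.exp(x)
    n_newton += 1

# Secant method: derivative replaced by a finite-difference slope
x0, x1 = 0.0, 3.0
n_sec = 0
while abs(x1 - root) > tol:
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    n_sec += 1

print(n_newton, n_sec, n_bis)  # quadratic < superlinear < linear
```

The exact counts depend on the starting values and tolerance, but the ordering (Newton fastest, then secant, then bisection) is robust.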

4. Stopping Criteria

Iteration is terminated when any of the following conditions is satisfied. Here $\varepsilon$ and $\delta$ are user-specified tolerances (e.g., $10^{-8}$), and $N_{\max}$ is the maximum number of iterations (to prevent infinite loops).

  • Interval width: $|b_n - a_n| < \varepsilon$ (the root is located within precision $\varepsilon$)
  • Function value: $|f(c_n)| < \delta$ ($c_n$ is sufficiently close to a root)
  • Iteration count: $n \geq N_{\max}$ (computation time limit)

The interval width criterion is the most reliable, and the required number of iterations can be determined in advance: $|b_n - a_n| < \varepsilon$ holds once $n \geq \lceil\log_2((b_0-a_0)/\varepsilon)\rceil$, and the midpoint error bound $|c_n - x^*| \leq \varepsilon$ is already met one iteration earlier, at $n \geq \lceil\log_2((b_0-a_0)/\varepsilon)\rceil - 1$.
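This a priori count is one line of arithmetic (a sketch):

```python
import math

def iterations_needed(a0, b0, eps):
    # Smallest n with (b0 - a0) / 2**n <= eps, i.e. ceil(log2((b0 - a0)/eps))
    return math.ceil(math.log2((b0 - a0) / eps))

print(iterations_needed(1, 2, 1e-8))  # 27 halvings locate the root to 1e-8
```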

5. Advantages and Disadvantages

| Advantages | Disadvantages |
| --- | --- |
| Convergence is guaranteed for continuous functions | Slow convergence (linear) |
| No derivative $f'(x)$ required | Requires opposite signs at endpoints of initial interval |
| Extremely simple to implement | Difficulty finding repeated roots (no sign change) |
| Number of iterations is predictable in advance | Difficult to extend to multiple dimensions |

The fact that no derivative is required is a significant practical advantage. Newton's method converges quadratically but requires the computation of $f'(x)$ at each iteration. When an analytical expression for $f'(x)$ is unavailable, or when $f$ itself is the output of a numerical simulation, Newton's method becomes impractical. The secant method does not require derivatives either, but may diverge depending on the choice of initial values. The bisection method is "slow but reliable" and is also useful as a preprocessing step to provide initial values for other methods.
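One way to combine the two (a sketch, not a standard library routine): a handful of bisection steps shrink the bracket, and the resulting midpoint serves as a safe starting point for Newton's method. Here $f(x) = x^2 - 2$ is used as the example.

```python
import math

def f(x):
    return x * x - 2      # root sqrt(2)

# Five bisection steps shrink [1, 2] to a bracket of width 1/32...
a, b = 1.0, 2.0
for _ in range(5):
    c = (a + b) / 2
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c

# ...and Newton's method, started from the midpoint, finishes quickly.
x = (a + b) / 2
for _ in range(4):
    x -= f(x) / (2 * x)   # f'(x) = 2x

print(x)  # ~ 1.41421356...
```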

The reason that extension to multiple dimensions is difficult lies in the essence of the bisection method: bracketing the root between two points with opposite signs. Whether the bisection method is applicable depends not on the dimension of the domain but on the dimension of the codomain.

  • Scalar codomain ($f: \mathbb{R}^n \to \mathbb{R}$) -- The bisection method is applicable. By parameterizing the line segment between two points $\mathbf{a}$ and $\mathbf{b}$ in $n$-dimensional space as $g(t) = f(\mathbf{a} + t(\mathbf{b} - \mathbf{a}))$, the standard one-dimensional bisection method can be applied to $g: [0,1] \to \mathbb{R}$.
  • Vector codomain ($\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^n$) -- The bisection method is not applicable. Since the output is a vector, there is no notion of "positive or negative," and it is impossible to determine which side contains the root.
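The scalar-codomain case can be sketched directly (the function name `bisect_along_segment` is illustrative, not a library routine):

```python
import math

def bisect_along_segment(f, a_vec, b_vec, eps=1e-10):
    """Bisection for f: R^n -> R restricted to the segment a_vec -> b_vec.

    Works on g(t) = f(a + t(b - a)) for t in [0, 1]; assumes g(0) * g(1) < 0.
    """
    def point(t):
        return [ai + t * (bi - ai) for ai, bi in zip(a_vec, b_vec)]

    def g(t):
        return f(point(t))

    lo, hi = 0.0, 1.0
    g_lo = g(lo)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        g_mid = g(mid)
        if g_lo * g_mid < 0:
            hi = mid                # root parameter lies in [lo, mid]
        else:
            lo, g_lo = mid, g_mid   # root parameter lies in [mid, hi]
    return point((lo + hi) / 2)

# f(x, y) = x^2 + y^2 - 1 changes sign along the segment (0,0) -> (2,2);
# the zero crossing is at (1/sqrt(2), 1/sqrt(2)) on the unit circle.
p = bisect_along_segment(lambda v: v[0] ** 2 + v[1] ** 2 - 1,
                         [0.0, 0.0], [2.0, 2.0])
print(p)
```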

6. Worked Example

Find the root $x^* = \sqrt{2} \approx 1.4142$ of $f(x) = x^2 - 2$ on the interval $[1, 2]$. We have $f(1)=-1 < 0$ and $f(2)=2 > 0$.

| $n$ | $a_n$ | $b_n$ | $c_n$ | $f(c_n)$ | Selection |
| --- | --- | --- | --- | --- | --- |
| 0 | 1.0000 | 2.0000 | 1.5000 | $+0.2500$ | Left half |
| 1 | 1.0000 | 1.5000 | 1.2500 | $-0.4375$ | Right half |
| 2 | 1.2500 | 1.5000 | 1.3750 | $-0.1094$ | Right half |
| 3 | 1.3750 | 1.5000 | 1.4375 | $+0.0664$ | Left half |
| 4 | 1.3750 | 1.4375 | 1.4063 | $-0.0225$ | Right half |
| 5 | 1.4063 | 1.4375 | 1.4219 | $+0.0217$ | Left half |

The sign of $f(c_n)$ alternates as $+, -, -, +, -, +$, while the interval width decreases as $1.0 \to 0.5 \to 0.25 \to \cdots$, halving at each step and converging to the root $x^* = \sqrt{2}$.
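The table can be reproduced in a few lines (a sketch; the printed values agree with the table up to rounding of the last displayed digit):

```python
def f(x):
    return x * x - 2

a, b = 1.0, 2.0
rows = []
for n in range(6):
    c = (a + b) / 2
    rows.append((n, a, b, c, f(c)))
    print(f"{n}  {a:.4f}  {b:.4f}  {c:.4f}  {f(c):+.4f}")
    if f(a) * f(c) < 0:
        b = c          # keep the left half [a, c]
    else:
        a = c          # keep the right half [c, b]
```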

Figure 3. Midpoints $c_n$ at each iteration ($f(x) = x^2 - 2$). Dashed lines from each $a_n$ and $b_n$ to the curve indicate the sign of $f$. The interval alternates between left and right halves, converging to the root $x^* = \sqrt{2}$.

7. Frequently Asked Questions

Q1. What is the bisection method?

Given a continuous function $f$ with $f(a)f(b)<0$, the bisection method repeatedly halves the interval and selects the subinterval containing the root. Convergence is guaranteed by the Intermediate Value Theorem.

Q2. What is the convergence rate of the bisection method?

It exhibits linear (first-order) convergence: each iteration halves the interval width. To obtain $k$ decimal digits of precision, approximately $3.32k$ iterations are required.

Q3. What are the disadvantages of the bisection method?

The main disadvantages are its slow convergence, the requirement that $f$ change sign over the initial interval, and the difficulty of extending it to multiple dimensions.

8. References

  • Wikipedia, "Bisection method" (English)
  • Wikipedia, "二分法" (Japanese)
  • R. L. Burden & J. D. Faires, Numerical Analysis, 10th ed., Cengage, 2016.
  • W. H. Press et al., Numerical Recipes, 3rd ed., Cambridge, 2007.