Bisection Method
Basics
Published: 2026-03-30
1. Principle
The bisection method is a root-finding algorithm based on the Intermediate Value Theorem.
Intermediate Value Theorem
If $f$ is continuous on $[a,b]$ and $f(a)\cdot f(b) < 0$, then there exists at least one $c \in (a,b)$ such that $f(c) = 0$.
By applying this theorem iteratively, the interval is halved at each step, bracketing the root ever more tightly.
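The IVT precondition is easy to check numerically. A minimal sketch (the function $x^3 - x - 2$ and the interval $[1,2]$ are our illustrative choices, not from the text above):

```python
# Check the IVT precondition on [1, 2] for f(x) = x^3 - x - 2:
# f is continuous, f(1) = -2 < 0 and f(2) = 4 > 0, so f(1)*f(2) < 0
# and a root is guaranteed somewhere in (1, 2).
f = lambda x: x**3 - x - 2
a, b = 1.0, 2.0
print(f(a) * f(b) < 0)  # True: the interval brackets a root
```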
2. Algorithm
Bisection Method
1. Choose an initial interval $[a,b]$ such that $f$ is continuous and $f(a)\cdot f(b) < 0$.
2. Compute the midpoint $c = (a+b)/2$.
3. If $f(c) = 0$, then $c$ is a root; terminate.
4. If $f(a)\cdot f(c) < 0$, set $b \leftarrow c$; otherwise set $a \leftarrow c$.
5. Repeat steps 2--4 until a stopping criterion is satisfied.
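The steps above can be sketched in Python. This is a minimal illustration (the function name `bisect` and its defaults are our choices, not a particular library's API):

```python
def bisect(f, a, b, eps=1e-8, max_iter=100):
    """Find a root of f on [a, b] by repeated halving.

    Assumes f is continuous and f(a), f(b) have opposite signs.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < eps:  # exact root or interval small enough
            return c
        if fa * fc < 0:                   # sign change on [a, c]: keep left half
            b, fb = c, fc
        else:                             # sign change on [c, b]: keep right half
            a, fa = c, fc
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 1.0, 2.0)
print(root)  # ≈ 1.41421356
```

Note that the sign test uses the stored value `fa` rather than re-evaluating `f(a)`, so each iteration costs only one function evaluation.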
3. Convergence
After $n$ iterations, the width of the interval containing the root is
$$|b_n - a_n| = \frac{b_0 - a_0}{2^n},$$
and the error of the midpoint $c_n$ satisfies
$$|c_n - x^*| \leq \frac{b_0 - a_0}{2^{n+1}}.$$
This is linear convergence: each iteration yields approximately one additional bit of accuracy. To obtain $k$ decimal digits of precision, approximately $k \times \log_2 10 \approx 3.32k$ iterations are required.
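The $3.32k$ estimate can be checked directly (a quick sketch; the starting interval is taken to have width 1):

```python
import math

# Iterations needed so that a bracketing interval of initial width 1
# shrinks below 10**-k, i.e. the smallest n with n >= k * log2(10) ≈ 3.32 k.
for k in (4, 8, 12):
    n = math.ceil(k * math.log2(10))
    print(k, n, 2.0**-n < 10.0**-k)  # the final check confirms the bound
```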
4. Stopping Criteria
Iteration is terminated when any of the following conditions is satisfied. Here $\varepsilon$ and $\delta$ are user-specified tolerances (e.g., $10^{-8}$), and $N_{\max}$ is the maximum number of iterations (to prevent infinite loops).
- Interval width: $|b_n - a_n| < \varepsilon$ (the root is located within precision $\varepsilon$)
- Function value: $|f(c_n)| < \delta$ ($c_n$ is sufficiently close to a root)
- Iteration count: $n \geq N_{\max}$ (computation time limit)
The interval width criterion is the most reliable. The required number of iterations can be determined in advance as $n \geq \lceil\log_2((b_0-a_0)/\varepsilon)\rceil$; one fewer iteration suffices if the midpoint error bound $(b_0-a_0)/2^{n+1}$ is used in place of the interval width.
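Determining the iteration count in advance amounts to solving $(b_0-a_0)/2^n < \varepsilon$ for $n$. A minimal sketch (the helper name `iterations_needed` is ours):

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n with (b - a) / 2**n < eps (interval-width criterion).

    Up to floating-point edge cases when (b - a) / eps is an exact power of 2.
    """
    return math.ceil(math.log2((b - a) / eps))

# For [1, 2] and eps = 1e-8: log2(1e8) ≈ 26.575, so 27 halvings suffice.
print(iterations_needed(1.0, 2.0, 1e-8))  # 27
```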
5. Advantages and Disadvantages
| Advantages | Disadvantages |
|---|---|
| Convergence is guaranteed for continuous functions | Slow convergence (linear) |
| No derivative $f'(x)$ required | Requires opposite signs at endpoints of initial interval |
| Extremely simple to implement | Difficulty finding repeated roots (no sign change) |
| Number of iterations is predictable in advance | Difficult to extend to multiple dimensions |
The fact that no derivative is required is a significant practical advantage. Newton's method converges quadratically but requires the computation of $f'(x)$ at each iteration. When an analytical expression for $f'(x)$ is unavailable, or when $f$ itself is the output of a numerical simulation, Newton's method becomes impractical. The secant method does not require derivatives either, but may diverge depending on the choice of initial values. The bisection method is "slow but reliable" and is also useful as a preprocessing step to provide initial values for other methods.
The reason that extension to multiple dimensions is difficult lies in the essence of the bisection method: bracketing the root between two points with opposite signs. Whether the bisection method is applicable depends not on the dimension of the domain but on the dimension of the codomain.
- Scalar codomain ($f: \mathbb{R}^n \to \mathbb{R}$) -- The bisection method is applicable. By parameterizing the line segment between two points $\mathbf{a}$ and $\mathbf{b}$ in $n$-dimensional space as $g(t) = f(\mathbf{a} + t(\mathbf{b} - \mathbf{a}))$, the standard one-dimensional bisection method can be applied to $g: [0,1] \to \mathbb{R}$.
- Vector codomain ($\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^n$) -- The bisection method is not applicable. Since the output is a vector, there is no notion of "positive or negative," and it is impossible to determine which side contains the root.
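The scalar-codomain case can be sketched directly: reduce to one dimension along the segment and run ordinary bisection on $g(t) = f(\mathbf{a} + t(\mathbf{b} - \mathbf{a}))$. The helper name and the example field $x^2 + y^2 - 1$ (whose zero set is the unit circle) are our illustrative choices:

```python
def bisect_along_segment(f, a, b, eps=1e-8, max_iter=100):
    """Root of a scalar field f: R^n -> R along the segment from a to b.

    Reduces to 1-D bisection on g(t) = f(a + t*(b - a)), t in [0, 1].
    """
    def point(t):
        return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

    def g(t):
        return f(point(t))

    lo, hi = 0.0, 1.0
    glo = g(lo)
    if glo * g(hi) > 0:
        raise ValueError("f must change sign between a and b")
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        gm = g(mid)
        if gm == 0 or (hi - lo) / 2 < eps:  # exact root or interval small enough
            lo = hi = mid
            break
        if glo * gm < 0:                    # sign change on [lo, mid]
            hi = mid
        else:                               # sign change on [mid, hi]
            lo, glo = mid, gm
    return point((lo + hi) / 2)

# Along the segment from (0, 0) to (2, 0), the field crosses zero at (1, 0).
p = bisect_along_segment(lambda v: v[0]**2 + v[1]**2 - 1, [0.0, 0.0], [2.0, 0.0])
print(p)  # [1.0, 0.0]
```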
6. Worked Example
Find the root $x^* = \sqrt{2} \approx 1.4142$ of $f(x) = x^2 - 2$ on the interval $[1, 2]$. We have $f(1)=-1 < 0$ and $f(2)=2 > 0$.
| $n$ | $a_n$ | $b_n$ | $c_n$ | $f(c_n)$ | Selection |
|---|---|---|---|---|---|
| 0 | 1.0000 | 2.0000 | 1.5000 | $+0.2500$ | Left half |
| 1 | 1.0000 | 1.5000 | 1.2500 | $-0.4375$ | Right half |
| 2 | 1.2500 | 1.5000 | 1.3750 | $-0.1094$ | Right half |
| 3 | 1.3750 | 1.5000 | 1.4375 | $+0.0664$ | Left half |
| 4 | 1.3750 | 1.4375 | 1.4063 | $-0.0225$ | Right half |
| 5 | 1.4063 | 1.4375 | 1.4219 | $+0.0217$ | Left half |
The sign of $f(c_n)$ follows the pattern $+, -, -, +, -, +$ (it need not alternate strictly), while the interval width decreases as $1.0 \to 0.5 \to 0.25 \to \cdots$, halving at each step, and the midpoints converge to the root $x^* = \sqrt{2}$.
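The table above can be reproduced in a few lines (a sketch; the printed format is ours):

```python
def f(x):
    return x * x - 2

# Reproduce the worked example: f(x) = x^2 - 2 on [1, 2], six iterations.
a, b = 1.0, 2.0
fa = f(a)
for n in range(6):
    c = (a + b) / 2
    fc = f(c)
    print(f"{n}  [{a:.4f}, {b:.4f}]  c={c:.4f}  f(c)={fc:+.4f}")
    if fa * fc < 0:      # sign change on [a, c]: keep the left half
        b = c
    else:                # sign change on [c, b]: keep the right half
        a, fa = c, fc
```

After six iterations the bracket is $[1.40625, 1.421875]$, matching the last row of the table.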
7. Frequently Asked Questions
Q1. What is the bisection method?
Given a continuous function $f$ with $f(a)f(b)<0$, the bisection method repeatedly halves the interval and selects the subinterval containing the root. Convergence is guaranteed by the Intermediate Value Theorem.
Q2. What is the convergence rate of the bisection method?
It exhibits linear (first-order) convergence: each iteration halves the interval width. To obtain $k$ decimal digits of precision, approximately $3.32k$ iterations are required.
Q3. What are the disadvantages of the bisection method?
The main disadvantages are its slow convergence, the requirement that $f$ change sign over the initial interval, and the difficulty of extending it to multiple dimensions.
8. References
- Wikipedia, "Bisection method" (English)
- Wikipedia, "二分法" (Japanese)
- R. L. Burden & J. D. Faires, Numerical Analysis, 10th ed., Cengage, 2016.
- W. H. Press et al., Numerical Recipes, 3rd ed., Cambridge, 2007.