Gaussian Elimination


Published: 2026-03-20

1. Overview

Gaussian elimination is the most fundamental direct method for solving a system of linear equations $A\mathbf{x} = \mathbf{b}$. It applies elementary row operations to the augmented matrix $[A|\mathbf{b}]$ to transform it into upper triangular form (forward elimination), then obtains the solution by back substitution.

Although named after C. F. Gauss (1777--1855), similar techniques appear in the ancient Chinese text The Nine Chapters on the Mathematical Art (Jiuzhang Suanshu), which dates to before the common era.

2. Forward Elimination

From the $n \times n$ coefficient matrix $A$ and the right-hand side vector $\mathbf{b}$, form the augmented matrix. For each column $k$ ($k = 1, 2, \ldots, n-1$), subtract a constant multiple of row $k$ from all rows below row $k$ to eliminate $a_{ik}$.

Elimination Multiplier

At step $k$, for $i > k$, define the multiplier $m_{ik}$ as

$$m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}$$

and subtract $m_{ik}$ times row $k$ from row $i$: $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$.

Figure 1. The forward elimination process. Step $k=1$ zeroes the entries below the diagonal in column 1, and step $k=2$ those in column 2, ultimately yielding an upper triangular matrix.
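As a minimal sketch in Python with NumPy (the function name is illustrative, not from any library), the update rule $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$ can be coded directly:

```python
import numpy as np

def forward_eliminate(A, b):
    """Reduce [A|b] to upper triangular form (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):              # pivot column k
        for i in range(k + 1, n):       # rows below the pivot row
            m = A[i, k] / A[k, k]       # multiplier m_ik
            A[i, k:] -= m * A[k, k:]    # row_i <- row_i - m_ik * row_k
            b[i] -= m * b[k]
    return A, b

# the 3x3 system used in the worked example below
A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
U, c = forward_eliminate(A, b)          # U is upper triangular
```

Without pivoting this sketch fails whenever a zero pivot is encountered; Section 5 addresses that.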

3. Back Substitution

Once forward elimination yields the upper triangular system $U\mathbf{x} = \mathbf{c}$, the solution is obtained starting from the last row:

$$x_n = \frac{c_n}{u_{nn}}, \qquad x_k = \frac{1}{u_{kk}}\!\left(c_k - \sum_{j=k+1}^{n} u_{kj}\, x_j\right), \quad k = n{-}1, \ldots, 1$$
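The back-substitution formula translates into a short loop; here it is applied to the upper triangular system produced in the worked example of Section 4 (illustrative code, not a library routine):

```python
import numpy as np

def back_substitute(U, c):
    """Solve Ux = c for an upper triangular matrix U."""
    n = len(c)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # x_k = (c_k - sum_{j>k} u_kj x_j) / u_kk
        x[k] = (c[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
    return x

# upper triangular system from the worked example in Section 4
U = np.array([[2.0, 1.0, -1.0], [0.0, 0.5, 0.5], [0.0, 0.0, -1.0]])
c = np.array([8.0, 1.0, 1.0])
x = back_substitute(U, c)   # expected: [2, 3, -1]
```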

4. Animated Computation Example

The following interactive widget demonstrates the complete process of forward elimination and back substitution on a concrete system of three linear equations, step by step.


Example Using Augmented Matrix

The same system of equations from the animation above is presented here using augmented matrix row operations.

Forward Elimination:

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array}\right]$$

$R_2 \leftarrow R_2 + \frac{3}{2}R_1$, $R_3 \leftarrow R_3 + R_1$:

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & \frac{1}{2} & \frac{1}{2} & 1 \\ 0 & 2 & 1 & 5 \end{array}\right]$$

$R_3 \leftarrow R_3 - 4R_2$:

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & \frac{1}{2} & \frac{1}{2} & 1 \\ 0 & 0 & -1 & 1 \end{array}\right]$$

Back Substitution: From row 3, $x_3 = -1$; substituting into row 2 gives $x_2 = 3$; substituting into row 1 gives $x_1 = 2$.
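The hand computation can be cross-checked with NumPy, whose `numpy.linalg.solve` calls LAPACK's pivoted Gaussian elimination:

```python
import numpy as np

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
x = np.linalg.solve(A, b)   # expected: [2, 3, -1], matching the hand result
```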

5. Pivot Selection

At step $k$ of forward elimination, if the pivot $a_{kk}^{(k)}$ is zero or close to zero, division by zero or large rounding errors may occur. To avoid this, pivot selection (pivoting) is performed.

5.1 Partial Pivoting

In column $k$, the element with the largest absolute value at or below row $k$ is selected, and the corresponding row swap is performed. Since swapping rows merely reorders the equations, the correspondence between variables is unaffected. In practice, partial pivoting is sufficient in most cases and is adopted by standard numerical libraries such as LAPACK.

Figure 2. Partial pivoting. In column $k=2$, the entry of largest absolute value at or below row 2 is $|7|$, so row 3 is swapped into the pivot position.
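A complete solver with partial pivoting is only a few lines longer than the unpivoted version (a sketch; names are mine). The test system deliberately has a zero in the $(1,1)$ position, which would break elimination without the row swap:

```python
import numpy as np

def solve_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # row with the largest |a_ik| at or below row k
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:                        # swap rows k and p in A and b
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
    return x

# a_11 = 0: without pivoting the first step would divide by zero
A = np.array([[0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 3.0])
x = solve_partial_pivot(A, b)   # expected: [2, 1]
```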

5.2 Complete Pivoting (Full Pivoting)

In complete pivoting, at step $k$, the element with the largest absolute value is selected from all entries at or below row $k$ and at or to the right of column $k$, and both rows and columns are swapped.

Figure 3. Complete pivoting. The entry of largest absolute value in the search region (rows $k=2$ and below, columns $k=2$ and to the right) is $|9|$, so rows 3 and 2 are swapped (row swap) and columns 3 and 2 are swapped (column swap, equivalent to interchanging $x_2$ and $x_3$).

A column swap corresponds to a reordering of variables. For example, swapping columns 2 and 3 interchanges the roles of $x_2$ and $x_3$. Therefore, after all elimination steps are complete, the column swap history must be traced in reverse order to restore the original variable ordering.
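The bookkeeping for column swaps can be done with a permutation array that is undone after back substitution, as in this sketch (illustrative code, assuming a nonsingular matrix):

```python
import numpy as np

def solve_complete_pivot(A, b):
    """Gaussian elimination with complete pivoting.

    Column swaps reorder the unknowns, so they are recorded in `perm`
    and undone at the end to restore the original variable ordering.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    perm = np.arange(n)                  # perm[j] = original index of column j
    for k in range(n - 1):
        # entry of largest absolute value in the submatrix A[k:, k:]
        sub = np.abs(A[k:, k:])
        r, c = np.unravel_index(np.argmax(sub), sub.shape)
        r += k
        c += k
        A[[k, r]] = A[[r, k]]            # row swap
        b[[k, r]] = b[[r, k]]
        A[:, [k, c]] = A[:, [c, k]]      # column swap
        perm[[k, c]] = perm[[c, k]]      # record the variable reordering
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    y = np.zeros(n)
    for k in range(n - 1, -1, -1):       # back substitution in swapped order
        y[k] = (b[k] - A[k, k + 1:] @ y[k + 1:]) / A[k, k]
    x = np.zeros(n)
    x[perm] = y                          # undo the column swaps
    return x

# the largest entry |4| sits in column 2, forcing a column swap
A = np.array([[1.0, 4.0], [2.0, 3.0]])
b = np.array([9.0, 8.0])
x = solve_complete_pivot(A, b)   # expected: [1, 2]
```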

Partial vs. Complete: Complete pivoting offers somewhat better numerical stability, but its search for the maximum element costs $O((n-k)^2)$ extra comparisons at step $k$, or $O(n^3)$ in total, whereas partial pivoting's search costs only $O(n-k)$ per step, $O(n^2)$ in total. In practice, partial pivoting is sufficient in most cases.

5.3 Pivot Selection for Rational Arithmetic

The pivoting strategies described above assume floating-point arithmetic. When performing Gaussian elimination exactly using rational numbers (fractions with arbitrary-precision integers), there is no rounding error; instead, coefficient swell becomes the concern. During elimination, the number of digits in the numerators and denominators can grow explosively, making computation extremely slow.

To mitigate this problem, in contrast to floating-point arithmetic, it is effective to choose the nonzero element with the smallest absolute value as the pivot. More precisely, by selecting the element whose height $H(p/q) = \max(|p|, |q|)$ (where $p/q$ is in lowest terms) is minimized as the pivot, the numerators and denominators of the elimination multipliers remain small, thereby suppressing coefficient swell.

Note: In rational Gaussian elimination, besides pivot selection, other techniques exist to avoid coefficient swell, such as reducing intermediate values to lowest terms at each step, or using the Bareiss algorithm, which performs elimination using only integers without introducing fractions.
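Exact elimination with the minimal-height pivot rule can be sketched with Python's standard `fractions.Fraction` (the function names are mine; this is a heuristic illustration, not a library routine):

```python
from fractions import Fraction

def height(r):
    """Height of a rational p/q in lowest terms: max(|p|, |q|)."""
    return max(abs(r.numerator), abs(r.denominator))

def solve_exact(A, b):
    """Exact Gaussian elimination over the rationals.

    At each step, the nonzero pivot of minimal height is chosen,
    the heuristic against coefficient swell described above.
    """
    n = len(b)
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    for k in range(n - 1):
        # nonzero entry of minimal height at or below row k in column k
        cand = [i for i in range(k, n) if A[i][k] != 0]
        p = min(cand, key=lambda i: height(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            A[i] = [a - m * akj for a, akj in zip(A[i], A[k])]
            b[i] -= m * b[k]
    x = [Fraction(0)] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

# the worked example of Section 4, solved exactly
x = solve_exact([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3])
# exact result: [Fraction(2), Fraction(3), Fraction(-1)]
```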

6. Computational Complexity

The computational complexity of Gaussian elimination for a system of $n$ linear equations is as follows.

  • Forward Elimination: $\frac{2}{3}n^3 + O(n^2)$ floating-point operations
  • Back Substitution: $n^2 + O(n)$ floating-point operations
  • Overall: $O(n^3)$

7. Relationship with LU Decomposition

The forward elimination process of Gaussian elimination is equivalent to decomposing the matrix $A$ into the product of a lower triangular matrix $L$ and an upper triangular matrix $U$:

$$A = LU$$

The elimination multipliers $m_{ik}$ become the subdiagonal elements of $L$, and the resulting upper triangular matrix is $U$. Once the $LU$ decomposition is computed, a solution for any different right-hand side $\mathbf{b}$ can be obtained in $O(n^2)$ operations, making it efficient when solving with the same coefficient matrix but multiple right-hand sides.
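The factor-once, solve-many pattern can be sketched as follows (Doolittle-style LU without pivoting, illustrative code rather than a library API; the multipliers $m_{ik}$ are stored in $L$):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU without pivoting: returns (L, U) with A = L @ U."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # multiplier m_ik goes into L
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution, then back substitution, O(n^2)."""
    n = len(b)
    y = np.zeros(n)
    for k in range(n):                    # Ly = b
        y[k] = b[k] - L[k, :k] @ y[:k]
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):        # Ux = y
        x[k] = (y[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
L, U = lu_decompose(A)                              # factor once: O(n^3)
x1 = lu_solve(L, U, np.array([8.0, -11.0, -3.0]))   # each solve: O(n^2)
x2 = lu_solve(L, U, np.array([1.0, 0.0, 0.0]))      # reuse the same factors
```

Production codes (e.g. LAPACK) combine this with the partial pivoting of Section 5, factoring $PA = LU$ with a row permutation $P$.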

8. Frequently Asked Questions

Q1. What is Gaussian elimination?

Gaussian elimination is a direct method that solves a system of linear equations in two stages: forward elimination and back substitution. Its computational complexity is $O(n^3)$, and it is the most fundamental algorithm in numerical linear algebra.

Q2. Why is pivoting necessary?

If the pivot element is zero or close to zero, division by zero or large rounding errors can occur. Partial pivoting selects the element with the largest absolute value in the column as the pivot, thereby improving numerical stability.

Q3. What is the relationship between Gaussian elimination and LU decomposition?

The forward elimination process is equivalent to decomposing the matrix $A$ into $L$ (lower triangular) and $U$ (upper triangular). The elimination multipliers become the elements of $L$, and the eliminated matrix becomes $U$.

9. References

  • Wikipedia, "Gaussian elimination"
  • Wikipedia, "ガウスの消去法" (Japanese)
  • G. H. Golub & C. F. Van Loan, Matrix Computations, 4th ed., Johns Hopkins, 2013.
  • L. N. Trefethen & D. Bau III, Numerical Linear Algebra, SIAM, 1997.