Gaussian elimination is the most fundamental direct method for solving a system of linear equations $A\mathbf{x} = \mathbf{b}$. It applies elementary row operations to the augmented matrix $[A|\mathbf{b}]$ to transform it into upper triangular form (forward elimination), then obtains the solution by back substitution.
Although the method is named after C. F. Gauss (1777--1855), similar techniques can be found in the ancient Chinese text The Nine Chapters on the Mathematical Art (Jiuzhang Suanshu), dating back to before the common era.
2. Forward Elimination
From the $n \times n$ coefficient matrix $A$ and the right-hand side vector $\mathbf{b}$, form the augmented matrix. For each column $k$ ($k = 1, 2, \ldots, n-1$), subtract a suitable multiple of row $k$ from each row below it to eliminate the entries $a_{ik}$ ($i > k$).
Elimination Multiplier
At step $k$, for $i > k$, define the multiplier $m_{ik}$ as
$$m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}$$
and subtract $m_{ik}$ times row $k$ from row $i$: $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$.
Figure 1. The forward elimination process. At each step, elements below the diagonal are set to 0, ultimately yielding an upper triangular matrix.
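As a minimal sketch of the step above (the function name `forward_eliminate` is ours, and no pivoting is performed, so nonzero pivots are assumed):

```python
def forward_eliminate(A, b):
    """Forward elimination without pivoting (assumes nonzero pivots).

    Transforms [A|b] in place into an upper triangular system U x = c.
    """
    n = len(A)
    for k in range(n - 1):            # eliminate column k
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]     # multiplier m_ik = a_ik / a_kk
            for j in range(k, n):     # subtract m_ik times row k from row i
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b
```

After the call, `A` is upper triangular and `b` holds the transformed right-hand side $\mathbf{c}$.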
3. Back Substitution
Once forward elimination yields the upper triangular system $U\mathbf{x} = \mathbf{c}$, the solution is obtained starting from the last row:
$$x_i = \frac{1}{u_{ii}}\left(c_i - \sum_{j=i+1}^{n} u_{ij}\, x_j\right), \qquad i = n, n-1, \ldots, 1.$$
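Back substitution can be sketched in Python as follows (the name `back_substitute` is ours; `U` is assumed upper triangular with nonzero diagonal):

```python
def back_substitute(U, c):
    """Solve U x = c for upper triangular U, starting from the last row."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-computed unknowns, then divide by the diagonal
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x
```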
5. Pivot Selection
At step $k$ of forward elimination, if the pivot $a_{kk}^{(k)}$ is zero or close to zero, division by zero or large rounding errors may occur. To avoid this, pivot selection (pivoting) is performed.
5.1 Partial Pivoting
In column $k$, the element with the largest absolute value at or below row $k$ is selected, and the corresponding row swap is performed. Since swapping rows merely reorders the equations, the correspondence between variables is unaffected. In practice, partial pivoting is sufficient in most cases and is adopted by standard numerical libraries such as LAPACK.
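A complete solver with partial pivoting might look like this sketch (the function name `solve_partial_pivot` is ours):

```python
def solve_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(A)
    A = [row[:] for row in A]         # work on copies
    b = b[:]
    for k in range(n - 1):
        # pick the row with the largest |a_ik| at or below row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]       # row swap: just reorders the equations
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

Note that the solver handles a zero leading pivot (e.g. $a_{11} = 0$) without any special casing, since the row swap moves a nonzero entry into the pivot position.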
Figure 2. Partial pivoting. The row with the element of largest absolute value in column $k$ is swapped with the pivot row.
5.2 Complete Pivoting (Full Pivoting)
In complete pivoting, at step $k$, the element with the largest absolute value is selected from all entries at or below row $k$ and at or to the right of column $k$, and both rows and columns are swapped.
Figure 3. Complete pivoting. The element with the largest absolute value in the search region (yellow) is selected, and both a row swap (red) and a column swap (blue) are performed.
A column swap corresponds to a reordering of variables. For example, swapping columns 2 and 3 interchanges the roles of $x_2$ and $x_3$. Therefore, after all elimination steps are complete, the column swap history must be traced in reverse order to restore the original variable ordering.
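The column-swap bookkeeping can be sketched as follows (the names `solve_complete_pivot` and `perm` are ours): a permutation array records each column swap, and the solution is mapped back to the original variable order at the end.

```python
def solve_complete_pivot(A, b):
    """Gaussian elimination with complete pivoting (illustrative sketch).

    Column swaps reorder the unknowns, so they are recorded in `perm`
    and undone after back substitution.
    """
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    perm = list(range(n))             # perm[j] = original index of column j
    for k in range(n - 1):
        # search rows >= k and columns >= k for the largest |a_ij|
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        A[k], A[p] = A[p], A[k]       # row swap
        b[k], b[p] = b[p], b[k]
        for row in A:                 # column swap: interchanges unknowns
            row[k], row[q] = row[q], row[k]
        perm[k], perm[q] = perm[q], perm[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    y = [0.0] * n                     # solution in the permuted ordering
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * y[j] for j in range(i + 1, n))
        y[i] = (b[i] - s) / A[i][i]
    x = [0.0] * n
    for j in range(n):
        x[perm[j]] = y[j]             # restore the original variable order
    return x
```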
Partial vs. Complete: Complete pivoting offers slightly better numerical stability, but finding the maximum element requires an additional $O(n^2)$ cost. The search cost for partial pivoting is $O(n)$, and in practice, partial pivoting is sufficient in most cases.
5.3 Pivot Selection for Rational Arithmetic
The pivoting strategies described above assume floating-point arithmetic. When performing Gaussian elimination exactly using rational numbers (fractions with arbitrary-precision integers), there is no rounding error; instead, coefficient swell becomes the concern. During elimination, the number of digits in the numerators and denominators can grow explosively, making computation extremely slow.
To mitigate this problem, in contrast to floating-point arithmetic, it is effective to choose the nonzero element with the smallest absolute value as the pivot. More precisely, by selecting the element whose height $H(p/q) = \max(|p|, |q|)$ (where $p/q$ is in lowest terms) is minimized as the pivot, the numerators and denominators of the elimination multipliers remain small, thereby suppressing coefficient swell.
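A sketch of the min-height strategy using Python's exact `Fraction` type (the names `height` and `rational_eliminate` are ours; for simplicity the search is restricted to column $k$ at or below row $k$, i.e. row swaps only):

```python
from fractions import Fraction

def height(r):
    """Height of a rational p/q in lowest terms: max(|p|, |q|)."""
    return max(abs(r.numerator), abs(r.denominator))

def rational_eliminate(A, b):
    """Exact elimination; pivot = minimal-height nonzero entry in column k."""
    A = [[Fraction(x) for x in row] for row in A]
    b = [Fraction(x) for x in b]
    n = len(A)
    for k in range(n - 1):
        rows = [i for i in range(k, n) if A[i][k] != 0]
        p = min(rows, key=lambda i: height(A[i][k]))  # min-height pivot
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]     # Fraction division is exact
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [Fraction(0)] * n             # exact back substitution
    for i in range(n - 1, -1, -1):
        s = sum((A[i][j] * x[j] for j in range(i + 1, n)), Fraction(0))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

`Fraction` keeps every intermediate value in lowest terms automatically, so no explicit reduction step is needed.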
Note: In rational Gaussian elimination, besides pivot selection, other techniques exist to avoid coefficient swell, such as reducing intermediate values to lowest terms at each step, or using the Bareiss algorithm, which performs elimination using only integers without introducing fractions.
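As an illustration of the fraction-free idea, here is a sketch of one-step Bareiss elimination computing a determinant (the name `bareiss_det` is ours; nonzero leading pivots are assumed, otherwise a row swap would be needed):

```python
def bareiss_det(M):
    """Fraction-free (Bareiss) elimination on an integer matrix.

    Every intermediate value stays an integer: the previous pivot `prev`
    always divides the 2x2-determinant update exactly.
    Returns the determinant of M.
    """
    a = [row[:] for row in M]
    n = len(a)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division -- no fractions are introduced
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a[n - 1][n - 1]
```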
6. Computational Complexity
The computational complexity of Gaussian elimination for a system of $n$ linear equations is as follows.
Forward Elimination: $\frac{2}{3}n^3 + O(n^2)$ floating-point operations
Back Substitution: $n^2 + O(n)$ floating-point operations
Overall: $O(n^3)$
7. Relationship with LU Decomposition
The forward elimination process of Gaussian elimination is equivalent to decomposing the matrix $A$ into the product of a lower triangular matrix $L$ and an upper triangular matrix $U$:
$$A = LU$$
The elimination multipliers $m_{ik}$ become the subdiagonal elements of $L$, and the resulting upper triangular matrix is $U$. Once the $LU$ decomposition is computed, a solution for any different right-hand side $\mathbf{b}$ can be obtained in $O(n^2)$ operations, making it efficient when solving with the same coefficient matrix but multiple right-hand sides.
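This correspondence can be sketched directly (the names `lu_decompose` and `lu_solve` are ours; no pivoting is performed, so nonzero pivots are assumed):

```python
def lu_decompose(A):
    """LU factorization without pivoting: the multipliers m_ik fill L."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]   # multiplier becomes L's entry
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b via L c = b (forward), then U x = c (backward): O(n^2)."""
    n = len(L)
    c = [0.0] * n
    for i in range(n):
        c[i] = b[i] - sum(L[i][j] * c[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Once `lu_decompose` has run ($O(n^3)$), `lu_solve` can be called repeatedly for different right-hand sides at $O(n^2)$ cost each.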
8. Frequently Asked Questions
Q1. What is Gaussian elimination?
Gaussian elimination is a direct method that solves a system of linear equations in two stages: forward elimination and back substitution. Its computational complexity is $O(n^3)$, and it is the most fundamental algorithm in numerical linear algebra.
Q2. Why is pivoting necessary?
If the pivot element is zero or close to zero, division by zero or large rounding errors can occur. Partial pivoting selects the element with the largest absolute value in the column as the pivot, thereby improving numerical stability.
Q3. What is the relationship between Gaussian elimination and LU decomposition?
The forward elimination process is equivalent to decomposing the matrix $A$ into $L$ (lower triangular) and $U$ (upper triangular). The elimination multipliers become the elements of $L$, and the eliminated matrix becomes $U$.