LinAlg Demo — Practical Linear Algebra Solvers

The LinAlg module provides direct and iterative solvers, eigenvalue decomposition, and matrix functions. It operates on Matrix<T> and Vector<T> from the core headers.

This page presents four demos showcasing practical uses of linear algebra solvers. All outputs below were actually computed by calx. The source code is provided at the bottom of the page, and you can reproduce the same results by building calx.

Demo 1: Linear Systems — LU / QR Unified Interface

We solve the linear system $Ax = b$ using two methods: LU decomposition and QR decomposition. lu_solve() is called directly, while solve() provides a unified interface that switches between solvers via SolverType.

$$\begin{pmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 8 \\ -11 \\ -3 \end{pmatrix}$$
=== demo_linear_solve ===
x (LU)  = [ 2 3 -1 ]
x (QR)  = [ 2 3 -1 ]
Both LU and QR decomposition yield the same solution $x = (2, 3, -1)$. QR costs roughly twice as many floating-point operations as LU (about $4n^3/3$ versus $2n^3/3$ flops) but offers better numerical stability for ill-conditioned systems.

Demo 2: Eigenvalue Decomposition of a Symmetric Matrix

We compute the eigenvalues and eigenvectors of a symmetric matrix $S$ using eigen_symmetric(). The eigenvalues of a symmetric matrix are always real, and the eigenvectors form an orthogonal set.

$$S = \begin{pmatrix} 4 & 1 & 1 \\ 1 & 3 & 0 \\ 1 & 0 & 2 \end{pmatrix}$$
=== demo_eigen_symmetric ===
eigenvalues = [ 1.38197 3 4.61803 ]
eigenvectors (columns):
  [ -0.371748 -0.707107 0.601501 ]
  [ 0.601501 0 0.371748 ]
  [ -0.371748 0.707107 0.601501 ]
The eigenvalues are returned in ascending order $(\lambda_1 \approx 1.38,\; \lambda_2 = 3,\; \lambda_3 \approx 4.62)$. The identity $\lambda_1 + \lambda_2 + \lambda_3 = 9 = \operatorname{tr}(S)$ holds.

Demo 3: Matrix Exponential — Generating a Rotation Matrix

The matrix exponential $e^A$ of a skew-symmetric matrix yields a rotation matrix. We use expm() to generate a rotation matrix from a $2 \times 2$ skew-symmetric matrix.

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad e^A = \begin{pmatrix} \cos 1 & -\sin 1 \\ \sin 1 & \cos 1 \end{pmatrix}$$
=== demo_expm ===
expm(A) =
  [ 0.540302 -0.841471 ]
  [ 0.841471 0.540302 ]
expected: [[ 0.540302 -0.841471 ] [ 0.841471 0.540302 ]]
The computed $e^A$ agrees with the rotation matrix through an angle of 1 radian, $(\cos 1, \sin 1) \approx (0.5403, 0.8415)$. This is precisely the exponential map of the Lie group $\mathrm{SO}(2)$.

Demo 4: Conjugate Gradient — Iterative Solver

For symmetric positive definite (SPD) matrices, conjugate_gradient() iteratively finds the solution. Unlike direct methods, it solves the system using only matrix-vector products without explicitly factoring the matrix. This makes it suitable for large sparse matrices.

We construct a $5 \times 5$ diagonally dominant symmetric matrix (tridiagonal):

$$A = \begin{pmatrix} 10 & -1 & & & \\ -1 & 10 & -1 & & \\ & -1 & 10 & -1 & \\ & & -1 & 10 & -1 \\ & & & -1 & 10 \end{pmatrix}, \quad b = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}$$
=== demo_iterative_solver ===
x (CG) = [ 0.112371 0.123711 0.124742 0.123711 0.112371 ]
||Ax - b|| = 1.11022e-15
The residual norm $\|Ax - b\| \approx 10^{-15}$ shows convergence to machine precision. The symmetry of the solution $x_1 = x_5$, $x_2 = x_4$ reflects the symmetry of the matrix and vector.

Source Code and How to Run

All outputs on this page were generated from the following C++ program. You can reproduce the same results by building calx.

example_linalg.cpp
// example_linalg.cpp
// Demonstrates linear algebra features from the calx API reference.
//
// Build:
//   cd build && cmake --build . --config Release --target example-linalg

#include <math/core/matrix.hpp>
#include <math/core/vector.hpp>
#include <math/linalg/solvers.hpp>
#include <math/linalg/eigenvalues.hpp>
#include <math/linalg/matrix_functions.hpp>
#include <iostream>
#include <cmath>

using namespace calx;
using namespace calx::algorithms;

// ---------------------------------------------------------------------------
// Snippet 1: Solve a linear system  Ax = b  (LU + unified interface)
// ---------------------------------------------------------------------------
void demo_linear_solve() {
    std::cout << "=== demo_linear_solve ===" << std::endl;

    Matrix<double> A({{2, 1, -1},
                       {-3, -1, 2},
                       {-2, 1, 2}});
    Vector<double> b({8, -11, -3});

    // LU solver
    auto x = lu_solve(A, b);
    std::cout << "x (LU)  = [";
    for (std::size_t i = 0; i < x.size(); ++i)
        std::cout << " " << x[i];
    std::cout << " ]" << std::endl;

    // Unified interface with QR
    auto x2 = solve(A, b, SolverType::QR);
    std::cout << "x (QR)  = [";
    for (std::size_t i = 0; i < x2.size(); ++i)
        std::cout << " " << x2[i];
    std::cout << " ]" << std::endl;

    std::cout << std::endl;
}

// ---------------------------------------------------------------------------
// Snippet 2: Eigenvalue decomposition of a symmetric matrix
// ---------------------------------------------------------------------------
void demo_eigen_symmetric() {
    std::cout << "=== demo_eigen_symmetric ===" << std::endl;

    Matrix<double> S({{4, 1, 1},
                       {1, 3, 0},
                       {1, 0, 2}});

    auto [eigenvalues, eigenvectors] = eigen_symmetric(S);

    std::cout << "eigenvalues = [";
    for (std::size_t i = 0; i < eigenvalues.size(); ++i)
        std::cout << " " << eigenvalues[i];
    std::cout << " ]" << std::endl;

    std::cout << "eigenvectors (columns):" << std::endl;
    for (std::size_t i = 0; i < eigenvectors.rows(); ++i) {
        std::cout << "  [";
        for (std::size_t j = 0; j < eigenvectors.cols(); ++j)
            std::cout << " " << eigenvectors(i, j);
        std::cout << " ]" << std::endl;
    }

    std::cout << std::endl;
}

// ---------------------------------------------------------------------------
// Snippet 3: Matrix exponential  (rotation matrix from skew-symmetric)
// ---------------------------------------------------------------------------
void demo_expm() {
    std::cout << "=== demo_expm ===" << std::endl;

    Matrix<double> A({{0, -1},
                       {1,  0}});

    auto E = expm(A);

    std::cout << "expm(A) =" << std::endl;
    for (std::size_t i = 0; i < E.rows(); ++i) {
        std::cout << "  [";
        for (std::size_t j = 0; j < E.cols(); ++j)
            std::cout << " " << E(i, j);
        std::cout << " ]" << std::endl;
    }
    std::cout << "expected: [[ " << std::cos(1.0) << " " << -std::sin(1.0)
              << " ] [ " << std::sin(1.0) << " " << std::cos(1.0) << " ]]"
              << std::endl;

    std::cout << std::endl;
}

// ---------------------------------------------------------------------------
// Snippet 4: Iterative solver (CG on a symmetric positive-definite matrix)
// ---------------------------------------------------------------------------
void demo_iterative_solver() {
    std::cout << "=== demo_iterative_solver ===" << std::endl;

    const int n = 5;
    Matrix<double> A(n, n, 0.0);
    for (int i = 0; i < n; ++i) {
        A(i, i) = 10.0;
        if (i > 0) { A(i, i - 1) = -1.0; A(i - 1, i) = -1.0; }
    }

    Vector<double> b(n, 1.0);

    auto x = conjugate_gradient(A, b, 1e-12, 5000);

    std::cout << "x (CG) = [";
    for (std::size_t i = 0; i < x.size(); ++i)
        std::cout << " " << x[i];
    std::cout << " ]" << std::endl;

    // Verify residual
    auto r = A * x;
    double res = 0.0;
    for (std::size_t i = 0; i < b.size(); ++i)
        res += (r[i] - b[i]) * (r[i] - b[i]);
    std::cout << "||Ax - b|| = " << std::sqrt(res) << std::endl;

    std::cout << std::endl;
}

// ---------------------------------------------------------------------------
int main() {
    demo_linear_solve();
    demo_eigen_symmetric();
    demo_expm();
    demo_iterative_solver();
    return 0;
}

For API details, see the LinAlg API Reference.

Build and Run

cd build && cmake --build . --config Release --target example-linalg