Matrix Demo — Practical Matrix Operations

Matrix<T> is a dynamically-sized matrix class. Combined with linear algebra algorithms such as LU decomposition, SVD, and Gaussian elimination, it can solve linear systems and perform low-rank approximations. By specifying Rational as the template argument, exact computation without rounding errors is also possible.

This page presents three demos showcasing practical applications of matrix operations. All outputs below were actually computed by calx. The source code is provided at the bottom of the page, and you can reproduce the same results by building calx.

Demo 1: Solving Linear Systems — LU Decomposition vs Gaussian Elimination

We solve the same linear system using two methods: LU decomposition and Gaussian elimination. The results of partial pivoting and full pivoting are also compared.

$$\begin{pmatrix} 2 & 1 & -1 \\ -3 & -1 & 2 \\ -2 & 1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 8 \\ -11 \\ -3 \end{pmatrix}$$
LU solve:       x = {2, 3, -1}
Gauss partial:  x = {2, 3, -1}
Gauss full:     x = {2, 3, -1}
Residual norm:  0
All three methods yield the same solution $x = (2, 3, -1)$. Full pivoting is computationally more expensive but offers better numerical stability for ill-conditioned matrices.

Demo 2: Low-Rank Approximation with SVD

We use singular value decomposition (SVD) $A = U \Sigma V^\top$ to compute a low-rank approximation of a matrix. By retaining only the top $k$ singular values, $A_k = U_k \Sigma_k V_k^\top$ approximates the original matrix. This technique is fundamental to image compression and dimensionality reduction (PCA).

We decompose the following $5 \times 4$ matrix (rank-2 structure + small noise):

$$A = \begin{pmatrix} 3.01 & 0 & 6 & 3 \\ 10 & 2 & 11.98 & 8 \\ 4 & 2 & 0 & 2 \\ 13 & 2.02 & 18 & 11 \\ 11 & 4 & 6 & 6.99 \end{pmatrix}$$
Singular values:
  sigma[0] = 34.4036
  sigma[1] = 6.7744
  sigma[2] = 0.0130
  sigma[3] = 0.0088

Rank-2 approximation error (Frobenius): 1.57e-02
The two largest singular values $\sigma_0 = 34.4$ and $\sigma_1 = 6.77$ dominate, while $\sigma_2, \sigma_3$ are on the order of $0.01$. The Frobenius error of the rank-2 approximation is $1.57 \times 10^{-2}$, showing that nearly all information in this matrix is concentrated in the top two singular values.

Demo 3: Precision Showdown — double vs Float vs Rational

The Hilbert matrix $H_{ij} = 1/(i+j+1)$ (0-based indices) is a classic example of a severely ill-conditioned matrix. We set the true solution to $\mathbf{x}^\star = (1, 1, \ldots, 1)$ and construct the right-hand side $\mathbf{b} = H \mathbf{x}^\star$. We then solve $H\mathbf{x} = \mathbf{b}$ using double, Float (200 digits), and Rational, comparing the maximum componentwise error $\max_i |x_i - 1|$.

 n    double     Float (200 digits)    Rational
 1    0          0                     0 (exact)
 2    6.7e-16    0                     0 (exact)
 3    1.0e-14    1.8e-201              0 (exact)
 4    6.1e-13    1.9e-199              0 (exact)
 5    6.2e-13    6.7e-199              0 (exact)
 6    5.3e-10    6.4e-197              0 (exact)
 7    2.6e-08    2.9e-195              0 (exact)
 8    4.2e-07    3.0e-194              0 (exact)
 9    2.0e-05    1.1e-192              0 (exact)
10    3.2e-04    2.1e-191              0 (exact)
11    9.7e-03    9.0e-190              0 (exact)
12    0.36       2.1e-188              0 (exact)
13    FAILED     5.7e-187              0 (exact)
14    FAILED     5.2e-185              0 (exact)
15    FAILED     8.2e-184              0 (exact)
16    FAILED     2.9e-182              0 (exact)
17    FAILED     2.6e-181              0 (exact)
18    FAILED     3.1e-179              0 (exact)
19    FAILED     7.6e-178              0 (exact)
20    FAILED     4.6e-176              0 (exact)
  • double: Precision degrades by roughly two digits per step. At n=12 no digits are correct (error 0.36), and at n=13 the matrix is deemed singular.
  • Float (200 digits): Even at n=20 the error is only $4.6 \times 10^{-176}$, i.e. about 176 correct digits.
  • Rational: Always yields the exact solution (zero error).

Bonus: Exact Inverse of H(5)

It is well known that the inverse of the Hilbert matrix has all integer entries.

H^{-1}:
    25    -300    1050    -1400     630
  -300    4800  -18900    26880  -12600
  1050  -18900   79380  -117600   56700
 -1400   26880 -117600   179200  -88200
   630  -12600   56700   -88200   44100
H * H^{-1} == I (exact): true

Source Code and How to Run

All outputs on this page were generated from the following C++ programs. You can reproduce the same results by building calx.

Demo 1: Linear Systems
#include <iostream>
#include <math/core/matrix.hpp>
#include <math/core/vector.hpp>
#include <math/linalg/decomposition.hpp>

using namespace calx;
using namespace calx::algorithms;

int main() {
    // 3x3 linear system  Ax = b
    Matrix<double> A = {{2, 1, -1}, {-3, -1, 2}, {-2, 1, 2}};
    Vector<double> b = {8, -11, -3};

    // Method 1: LU decomposition
    auto x1 = lu_solve(A, b);
    std::cout << "LU solve:       x = {";
    for (std::size_t i = 0; i < x1.size(); ++i)
        std::cout << (i ? ", " : "") << x1[i];
    std::cout << "}" << std::endl;

    // Method 2: Gaussian elimination (partial pivoting)
    auto x2 = gaussian_elimination(A, b, PivotStrategy::Partial);
    std::cout << "Gauss partial:  x = {";
    for (std::size_t i = 0; i < x2.size(); ++i)
        std::cout << (i ? ", " : "") << x2[i];
    std::cout << "}" << std::endl;

    // Method 3: Gaussian elimination (full pivoting)
    auto x3 = gaussian_elimination(A, b, PivotStrategy::Full);
    std::cout << "Gauss full:     x = {";
    for (std::size_t i = 0; i < x3.size(); ++i)
        std::cout << (i ? ", " : "") << x3[i];
    std::cout << "}" << std::endl;

    // Verify: Ax = b
    Vector<double> r = A * x1 - b;
    std::cout << "Residual norm:  " << r.norm() << std::endl;
}
Demo 2: SVD Low-Rank Approximation
#include <iostream>
#include <iomanip>
#include <cmath>
#include <math/core/matrix.hpp>
#include <math/linalg/decomposition.hpp>

using namespace calx;
using namespace calx::algorithms;

int main() {
    // 5x4 matrix (rank-2 structure + small noise)
    // A ≈ 3*u1*v1^T + 2*u2*v2^T
    Matrix<double> A = {
        { 3.01,  0.00,  6.00,  3.00},
        {10.00,  2.00, 11.98,  8.00},
        { 4.00,  2.00,  0.00,  2.00},
        {13.00,  2.02, 18.00, 11.00},
        {11.00,  4.00,  6.00,  6.99}
    };

    auto [U, sigma, Vt] = svd_decomposition(A);

    std::cout << "Singular values:" << std::endl;
    for (std::size_t i = 0; i < sigma.size(); i++)
        std::cout << "  sigma[" << i << "] = " << std::fixed
                  << std::setprecision(4) << sigma[i] << std::endl;

    // Rank-2 approximation: retain only the top 2 singular values
    std::size_t k = 2;
    Vector<double> s_trunc(sigma.size(), 0.0);
    for (std::size_t i = 0; i < k; i++) s_trunc[i] = sigma[i];
    auto A_approx = reconstruct_matrix_from_svd(U, s_trunc, Vt);

    // Approximation error
    double err = 0;
    for (std::size_t i = 0; i < A.rows(); i++)
        for (std::size_t j = 0; j < A.cols(); j++)
            err += (A(i,j) - A_approx(i,j)) * (A(i,j) - A_approx(i,j));
    std::cout << "\nRank-2 approximation error (Frobenius): "
              << std::scientific << std::setprecision(2) << std::sqrt(err) << std::endl;
    std::cout << "Compression: " << A.rows() * A.cols() << " -> "
              << k * (A.rows() + A.cols() + 1) << " values" << std::endl;
}
Demo 3: Precision Showdown
// example_precision_showdown.cpp
// Build:
//   cd build && cmake --build . --config Release --target example-precision-showdown

#include <math/core/matrix.hpp>
#include <math/core/vector.hpp>
#include <math/core/mp/Float.hpp>
#include <math/core/mp/Rational.hpp>
#include <math/linalg/decomposition.hpp>
#include <iostream>
#include <iomanip>
#include <cmath>
#include <cstdio>    // snprintf
#include <limits>
#include <string>

using namespace calx;
using namespace calx::algorithms;

// Hilbert matrix H(i,j) = 1/(i+j+1)
template<typename T>
Matrix<T> hilbert(int n) {
    Matrix<T> H(n, n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            H(i, j) = T(1) / T(i + j + 1);
    return H;
}

Matrix<Rational> hilbert_rational(int n) {
    Matrix<Rational> H(n, n);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            H(i, j) = Rational(1, i + j + 1);
    return H;
}

int main() {
    std::cout << std::setw(4) << "n"
              << std::setw(18) << "double"
              << std::setw(18) << "Float(200d)"
              << std::setw(18) << "Rational" << std::endl;

    for (int n = 1; n <= 20; n++) {
        // true solution: x1 = (1, 1, ..., 1)

        // --- double ---
        double err_d = 0;
        {
            auto H = hilbert<double>(n);
            Vector<double> x1(n, 1.0);
            Vector<double> b = H * x1;      // b = H * x1
            try {
                auto x = lu_solve(H, b);     // solve H x = b
                for (int i = 0; i < n; i++)
                    err_d = std::max(err_d, std::abs(x[i] - 1.0));
            } catch (...) { err_d = std::numeric_limits<double>::infinity(); }
        }

        // --- Float (200 digits) ---
        double err_f = 0;
        {
            Float::setDefaultPrecision(200);
            auto H = hilbert<Float>(n);
            Vector<Float> x1(n, Float(1));
            Vector<Float> b = H * x1;       // b = H * x1
            try {
                auto x = lu_solve(H, b);     // solve H x = b
                for (int i = 0; i < n; i++) {
                    Float diff = calx::abs(x[i] - Float(1));
                    err_f = std::max(err_f, diff.toDouble());
                }
            } catch (...) { err_f = std::numeric_limits<double>::infinity(); }
        }

        // --- Rational (exact) ---
        double err_r = 0;
        {
            auto H = hilbert_rational(n);
            Vector<Rational> x1(n, Rational(1));
            Vector<Rational> b = H * x1;    // b = H * x1
            auto x = gaussian_elimination(H, b, PivotStrategy::Partial);
            for (int i = 0; i < n; i++)
                if (x[i] != Rational(1)) err_r = 1.0;
        }

        // output
        auto fmt = [](double e) -> std::string {
            char buf[32];
            if (e == 0) return "0 (exact)";
            if (std::isinf(e)) return "SINGULAR";  // solver threw: matrix deemed singular
            if (e >= 1.0) return "FAILED";         // no correct digits left
            std::snprintf(buf, sizeof(buf), "%.2e", e);
            return buf;
        };
        std::cout << std::setw(4) << n
                  << std::setw(18) << fmt(err_d)
                  << std::setw(18) << fmt(err_f)
                  << std::setw(18) << fmt(err_r) << std::endl;
    }
}

For API details, see the Matrix API Reference and the LinAlg API Reference.

Build and Run

cl /std:c++latest /EHsc /O2 /utf-8 /I<calx>/include demo_matrix.cpp ^
   /link calx_int.lib calx_float.lib calx_rational.lib