Int Primality Testing

Overview

Primality testing of arbitrary-precision integers is a cornerstone of cryptography and number-theoretic computation. However, proving primality deterministically is extremely costly and often infeasible within practical time limits.

calx provides two approaches:

  • isProbablePrime() — Miller-Rabin probabilistic primality test. With the default 25 rounds, the false positive probability is $4^{-25} = 2^{-50} \approx 8.9 \times 10^{-16}$. Fast.
  • isProvablePrime() — APRCL deterministic primality proof. Returns a mathematically rigorous proof. Computationally expensive but practical for numbers up to several hundred digits.

As a preprocessing step, trial division by a small-prime table is performed to eliminate the vast majority of composite numbers at low cost.

For usage details, see Int API Reference — Primality Testing.

Small-Prime Preprocessing

Since each round of Miller-Rabin requires a modular exponentiation, running it immediately on every input is inefficient. A small-prime table is used for trial division as a preprocessing step.

Specifically, $n$ is divided by small primes $p_1, p_2, \ldots, p_k$; if $n \bmod p_i = 0$ and $n \neq p_i$, then $n$ is immediately declared composite. This modular operation requires only a single-word division and is extremely lightweight.

The effectiveness of small-prime preprocessing is significant. For instance, trial division by just 2, 3, and 5 already eliminates about 73% of all integers ($1 - \frac{1}{2} \cdot \frac{2}{3} \cdot \frac{4}{5} = \frac{11}{15} \approx 0.733$), and adding 7 raises this to about 77%. Extending the table to several hundred primes eliminates the vast majority of composites at this stage.

When the input is small enough (at most the square of the largest prime in the table), trial division alone suffices for a complete primality test, and Miller-Rabin need not be invoked.
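The preprocessing pipeline above can be sketched as follows. This is a minimal illustration, not calx's actual implementation; the table name SMALL_PRIMES and the string return convention are assumptions, and a real table would contain several hundred primes.

```python
# Minimal sketch of small-prime preprocessing (not calx's actual code).
# SMALL_PRIMES is an illustrative table; a real implementation uses
# several hundred primes.
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def trial_division(n):
    """Return 'prime', 'composite', or 'unknown' (Miller-Rabin still needed)."""
    if n < 2:
        return "composite"
    for p in SMALL_PRIMES:
        if n % p == 0:
            # n is divisible by p; it is prime only if n == p itself.
            return "prime" if n == p else "composite"
    # Any composite n <= p_max^2 has a prime factor <= p_max, which the
    # loop above would have found, so trial division alone is conclusive.
    if n <= SMALL_PRIMES[-1] ** 2:
        return "prime"
    return "unknown"
```

For inputs above $37^2 = 1369$ the sketch returns "unknown" and defers to Miller-Rabin, mirroring the two-stage pipeline described above.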

Miller-Rabin Primality Test

Miller-Rabin is a probabilistic primality test that strengthens Fermat's little theorem.

Fermat's Little Theorem

If $p$ is prime, then for all $a$ with $\gcd(a, p) = 1$:

$$a^{p-1} \equiv 1 \pmod{p}$$

By contrapositive, if $a^{n-1} \not\equiv 1 \pmod{n}$, then $n$ is composite. However, the converse does not hold: there exist composite numbers $n$ satisfying $a^{n-1} \equiv 1 \pmod{n}$ (pseudoprimes).

The Miller-Rabin Strengthening

Miller-Rabin strengthens the test based on the following observation. Factor $n - 1$ as $n - 1 = 2^s \cdot d$ (where $d$ is odd). If $n$ is prime, then for any base $a$ ($1 < a < n$), one of the following holds:

$$a^d \equiv 1 \pmod{n}$$

Or there exists some $r$ ($0 \leq r < s$) such that:

$$a^{2^r \cdot d} \equiv -1 \pmod{n}$$

This condition follows from the fact that, modulo a prime, the only square roots of $1$ are $\pm 1$: if $n$ is prime and $x^2 \equiv 1 \pmod{n}$, then $x \equiv \pm 1 \pmod{n}$, so in the repeated squaring of $a^d$ the value immediately before first reaching $1$ must be $-1$.
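A tiny numeric illustration of this property (an independent example, not tied to calx): modulo the prime 13 the only square roots of 1 are $\pm 1$, while modulo the composite 15 nontrivial roots exist.

```python
# Square roots of 1 modulo a prime versus a composite modulus.
roots_mod_13 = [x for x in range(13) if x * x % 13 == 1]  # prime modulus
roots_mod_15 = [x for x in range(15) if x * x % 15 == 1]  # composite modulus

print(roots_mod_13)  # [1, 12]          only +1 and -1 (12 = -1 mod 13)
print(roots_mod_15)  # [1, 4, 11, 14]   nontrivial roots 4 and 11
```

Detecting such a nontrivial square root of 1 during the squaring chain is exactly how Miller-Rabin proves compositeness.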

Algorithm

Input: odd integer $n \geq 3$, number of rounds $k$

  1. Factor $n - 1 = 2^s \cdot d$ (where $d$ is odd).
  2. Repeat $k$ times:
    1. Choose a random base $a$ ($2 \leq a \leq n - 2$).
    2. Compute $x \leftarrow a^d \bmod n$ (modular exponentiation).
    3. If $x = 1$ or $x = n - 1$, this round passes. Proceed to the next round.
    4. Repeat $s - 1$ times:
      1. $x \leftarrow x^2 \bmod n$
      2. If $x = n - 1$, this round passes. Proceed to the next round.
    5. If we reach this point, $n$ is composite ($a$ is a witness).
  3. If all $k$ rounds pass, declare $n$ "probably prime".
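The steps above can be sketched in Python (an illustrative translation of the algorithm, not calx's implementation; Python's built-in pow performs the modular exponentiation):

```python
import random

def is_probable_prime(n, k=25):
    """Miller-Rabin test following the steps above (sketch, not calx's code)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Step 1: factor n - 1 = 2^s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    # Step 2: k independent rounds.
    for _ in range(k):
        a = random.randrange(2, n - 1)   # random base in [2, n - 2]
        x = pow(a, d, n)                 # step 2.2: modular exponentiation
        if x == 1 or x == n - 1:
            continue                     # step 2.3: round passes
        for _ in range(s - 1):
            x = pow(x, 2, n)             # step 2.4: repeated squaring
            if x == n - 1:
                break                    # round passes
        else:
            return False                 # step 2.5: a is a witness, composite
    return True                          # step 3: probably prime
```

A single witness proves compositeness with certainty; only the "probably prime" verdict carries the $4^{-k}$ error bound.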

False Positive Probability

Rabin (1980) proved that for any composite $n$, the fraction of bases $a$ that fail to be witnesses is at most $1/4$. Therefore:

  • False positive probability per round: $\leq 1/4$
  • False positive probability after $k$ rounds: $\leq 4^{-k}$

With calx's default $k = 25$:

$$4^{-25} = 2^{-50} \approx 8.9 \times 10^{-16}$$

This is a false positive probability of less than one in a quadrillion, a level that can be considered "certain" for all practical purposes.

Complexity

The dominant cost per round is the modular exponentiation $a^d \bmod n$. Binary exponentiation requires $O(\log n)$ modular multiplications, and with a cost of $M(n)$ per modular multiplication:

$$\text{Cost per round} = O(\log n \cdot M(n))$$

The total cost for $k$ rounds is $O(k \cdot \log n \cdot M(n))$.

Fermat Primality Test

The Fermat primality test directly applies Fermat's little theorem. It is simpler than Miller-Rabin but weaker.

For a base $a$, compute $a^{n-1} \bmod n$; if the result is not $1$, then $n$ is composite; if it is $1$, declare $n$ "probably prime".

$$a^{n-1} \equiv 1 \pmod{n} \quad \Rightarrow \quad n \text{ is probably prime}$$

The critical weakness of the Fermat test is the existence of Carmichael numbers. A Carmichael number is a composite number satisfying $a^{n-1} \equiv 1 \pmod{n}$ for all bases $a$ with $\gcd(a, n) = 1$. The smallest Carmichael number is $561 = 3 \times 11 \times 17$, and it has been proven that infinitely many Carmichael numbers exist (Alford-Granville-Pomerance, 1994).

Miller-Rabin strengthens the Fermat test and can correctly identify Carmichael numbers as composite.
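The contrast can be demonstrated numerically (an independent illustration, not calx code): every base coprime to the Carmichael number 561 passes the Fermat test, yet a single Miller-Rabin round with base 2 exposes it as composite.

```python
from math import gcd

n = 561                          # smallest Carmichael number: 3 * 11 * 17
# Every base coprime to 561 passes the Fermat test.
assert all(pow(a, n - 1, n) == 1
           for a in range(2, n - 1) if gcd(a, n) == 1)

# One Miller-Rabin round with base 2: 560 = 2^4 * 35, so s = 4, d = 35.
x = pow(2, 35, n)
seq = [x]
for _ in range(4 - 1):
    x = pow(x, 2, n)             # repeated squaring
    seq.append(x)
print(seq)  # [263, 166, 67, 1] -- never -1 (i.e., 560) before reaching 1,
            # so 2 is a witness and 561 is proven composite.
```

Note that $67^2 \equiv 1 \pmod{561}$ with $67 \not\equiv \pm 1$: precisely the nontrivial square root of 1 that Miller-Rabin exploits and the Fermat test cannot see.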

calx provides the Fermat test as IntPrime::isFermatPrime(), with a default of $k = 5$ rounds. Each round costs about the same as a Miller-Rabin round (a single modular exponentiation), so the Fermat test offers no real performance advantage and should not be used when high reliability is required.

Next Prime / Previous Prime

IntPrime::nextPrime(n) returns the smallest prime greater than $n$. IntPrime::prevPrime(n) returns the largest prime less than $n$.

Algorithm

The basic approach is straightforward: increment (or decrement) from $n$ by 1, testing each candidate for primality, and return the first candidate that tests prime.

Since even numbers are obviously not prime, only odd candidates are tested (except for the case of 2).
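This search can be sketched as follows (illustrative only; for compactness the candidate test here is a fixed-base Miller-Rabin using the first 12 primes, which is known to be deterministic for $n < 3.3 \times 10^{24}$, rather than the trial-division-plus-random-rounds pipeline calx uses):

```python
def _is_prime(n):
    """Fixed-base Miller-Rabin; deterministic for n below about 3.3e24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    """Smallest prime strictly greater than n (sketch, not calx's nextPrime)."""
    if n < 2:
        return 2
    candidate = n + 1 + (n % 2)      # next odd number after n (3 when n == 2)
    while not _is_prime(candidate):
        candidate += 2               # skip even candidates
    return candidate
```

prevPrime follows the same pattern with decrements, stopping at 2 as the smallest prime.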

Expected Number of Candidates

By the Prime Number Theorem, the average gap between primes near $n$ is $\ln n$. Therefore, the expected number of candidates to examine before finding the next prime is $O(\ln n)$.

For a $b$-bit integer, $\ln n \approx b \cdot \ln 2 \approx 0.693\,b$; for example, near a 1024-bit integer the average prime gap is about 710, so roughly 355 odd candidates are examined on average. Each candidate is tested with Miller-Rabin, but the vast majority are eliminated by small-prime trial division, so the full Miller-Rabin test actually runs only a handful of times.

Design Decisions

Why Miller-Rabin Was Chosen

Multiple primality testing algorithms exist. The reasons calx chose Miller-Rabin as its primary algorithm are explained by comparison with alternatives.

BPSW (Baillie-PSW) Test

BPSW combines Miller-Rabin (base $a = 2$) with a strong Lucas test. As of 2026, no BPSW pseudoprime (counterexample) has been found. However:

  • An additional Lucas test implementation is required, increasing code complexity
  • The nonexistence of counterexamples is not proven (it remains a conjecture)
  • Miller-Rabin with $k = 25$ already provides a sufficiently small error probability of $2^{-50}$

AKS (Agrawal-Kayal-Saxena) Test

AKS is a deterministic polynomial-time primality test published in 2002, a landmark result in theoretical computer science. However:

  • Its complexity is $\tilde{O}(\log^6 n)$, making it extremely slow for arbitrary-precision integers
  • It runs at practical speed only for small numbers
  • Since the error probability of probabilistic methods is negligibly small, the practical value of determinism is limited

APRCL Deterministic Primality Proof (Implemented in calx)

In addition to Miller-Rabin, calx also provides the APRCL (Adleman-Pomerance-Rumely-Cohen-Lenstra) deterministic primality proof via isProvablePrime(). APRCL is practically faster than AKS and runs in reasonable time for primes up to several hundred digits.

Usage guidelines:

  • isProbablePrime(): For general use. Fast Miller-Rabin with $k = 25$
  • isProvablePrime(): When a mathematically rigorous proof is required. Higher computational cost

Rationale for Default $k = 25$

$k = 25$ yields a false positive probability of $2^{-50}$. This choice is justified by several observations:

  • Hardware errors (memory bit flips, etc.) corrupt computations with far higher probability
  • Cryptographic standards typically recommend $k = 20$–$40$ rounds (e.g., FIPS 186-4)
  • Increasing $k$ raises cost only linearly, so building in a safety margin is cheap

References

  • Miller, G. L. (1976). “Riemann’s Hypothesis and Tests for Primality”. Journal of Computer and System Sciences, 13 (3), 300–317.
  • Rabin, M. O. (1980). “Probabilistic algorithm for testing primality”. Journal of Number Theory, 12 (1), 128–138.
  • Alford, W. R., Granville, A. and Pomerance, C. (1994). “There are Infinitely Many Carmichael Numbers”. Annals of Mathematics, 139 (3), 703–722.
  • Agrawal, M., Kayal, N. and Saxena, N. (2004). “PRIMES is in P”. Annals of Mathematics, 160 (2), 781–793.