The Mathematical Pillars of Post-Quantum Cryptography
Benjamin Skuse
Society entered a new era on August 13, 2024: the quantum era. This was the day when the National Institute of Standards and Technology (NIST), the US federal technology agency, released final versions of its first post-quantum cryptography standards. Ever since, NIST has been urging organizations to begin applying these quantum-resistant standards post-haste to secure their electronic information. As NIST emphasizes in bold on its website, these standards “can and should be put into use now.”
RSA Today
Currently, industry, academia, governments, the public – essentially everyone – rely on encryption algorithms designed to be next-to-impossible for even the most powerful supercomputers to break. For example, the RSA algorithm – named after 2002 ACM A.M. Turing Award recipients Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977 – is built around the idea that multiplying two large prime numbers is easy, but the reverse, factoring the product, is hard. It relies on the difficulty of finding which two prime numbers were multiplied to create a specific number \(N\).

In more detail, if you want to send a message to a friend securely using RSA, you pick two prime numbers, say \(p = 13\) and \(q = 19\), and multiply them: \(13 \times 19 = 247\). \(13\) and \(19\) are private; \(247\) is public and can be shared with anyone. You then choose a second, smaller public number \(e\) based on a calculation from your original primes:
\[(13 - 1) \times (19 - 1) = 12 \times 18 = 216.\]
\(e\) must be smaller than \(216\) and share no factors with it, so in this case you can choose it to be \(7\). If your message is just “5”, say, you can use your public numbers to encrypt it:
\[\frac{5^{7}}{247} = \frac{78,125}{247} = 316 \enspace \mathrm{remainder} \enspace 73. \]
From all this effort, you just send \(73\).
On face value, this is gibberish to your friend, but they hold your public key \((247,7)\) and the private key \(d\), which satisfies \((d \times e) \bmod 216 = 1\) – here \(d = 31\). (In practice, it is the recipient who generates the key pair and keeps \(d\) secret.) Decryption is then simple for them, as they just need to calculate the remainder of one more division:
\[\frac{73^{31}}{247} = \mathrm{some} \enspace \mathrm{ridiculous} \enspace \mathrm{number} \enspace \mathrm{remainder} \enspace 5,\]
i.e. the original message.
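The whole exchange can be replayed in a few lines of Python – a toy sketch using the article’s tiny numbers, whereas real RSA uses primes hundreds of digits long:

```python
# Toy RSA with the numbers from the text -- NOT secure, illustration only.
p, q = 13, 19
N = p * q                          # 247: the public modulus
phi = (p - 1) * (q - 1)            # 216
e = 7                              # public exponent; shares no factors with 216
d = pow(e, -1, phi)                # private exponent: (d * e) mod 216 == 1, so d = 31

message = 5
ciphertext = pow(message, e, N)    # 5^7 mod 247 = 73
recovered = pow(ciphertext, d, N)  # 73^31 mod 247 = 5

print(ciphertext, recovered)       # -> 73 5
```

Python’s three-argument `pow` performs modular exponentiation directly, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse, so the “ridiculous number” \(73^{31}\) never has to be written out in full.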
A hacker might have access to the public key \((247,7)\) too, but without the private key \(d\) derived from the original prime numbers, they are left to figure out \(p = 13\) and \(q = 19\) essentially by brute force. In this example, with the small \(N = 247\), the primes can be found quickly. But in practice \(N\) is huge, with each prime hundreds of digits long, and even for the best known classical factoring methods running on the most advanced supercomputers in the world, the time required to factor \(N\) explodes with its size. It would likely take current supercomputers trillions of years to crack a single standard 2048-bit RSA key.

Shor’s Algorithm
Yet quantum computing threatens this security. In a post-quantum world, RSA is not merely weakened, it is broken. Whereas today’s computers and supercomputers must search for prime factors essentially one by one, quantum computers of sufficient power and scale can exploit quantum phenomena to arrive at an answer rapidly, using the completely different logic of Shor’s (factoring) algorithm, named after 1998 Nevanlinna Prize recipient Peter Shor.

Instead of factoring the number \(N\) by guessing divisors, Shor’s algorithm takes a radically different approach. To begin, a random number \(a\) smaller than \(N\) (and sharing no factor with it) is picked, and the algorithm uses quantum superposition to evaluate the function \(a^{x} \bmod N\) for all \(x\) simultaneously – a periodic sequence of unknown period \(r\). It then uses the quantum Fourier transform to collapse most of the possibilities and leave behind a value related to the period \(r\). The algorithm then switches back to classical computation to find the factors:
\[p,q \approx \gcd(a^{r/2} \pm 1, N),\]
and break the RSA key.
A simple example could be helpful. If we want to factor \(N = 15\), we choose a random smaller number \(a\), say \(a = 7\). Then \(7^{x} \bmod 15\) leaves the following remainders:
\[7^{1} \longrightarrow 7\]
\[7^{2} \longrightarrow 4\]
\[7^{3} \longrightarrow 13\]
\[7^{4} \longrightarrow 1\]
\[7^{5} \longrightarrow 7\]
\[7^{6} \longrightarrow 4.\]
Therefore, period \(r = 4\). Hence, we calculate \(p,q \approx \gcd(a^{r/2} \pm 1, N)\):
\[7^{2} - 1 = 49 - 1 = \mathbf{48}\]
\[7^{2} + 1 = 49 + 1 = \mathbf{50}.\]
The greatest common divisor between these numbers and \(N = 15\) is then:
\[\gcd(48, 15) = \mathbf{3}\]
\[\gcd(50, 15) = \mathbf{5}.\]
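Everything except the period-finding step is classical, so the whole example can be replayed in a few lines of Python. Here the period is found by brute force – precisely the step a quantum computer would perform efficiently:

```python
from math import gcd

N, a = 15, 7

# Find the period r of a^x mod 15 by brute force
# (this is the step Shor's algorithm does on quantum hardware).
r = 1
while pow(a, r, N) != 1:
    r += 1
# r = 4, since 7^4 mod 15 = 1

# Classical post-processing: gcd(a^(r/2) +/- 1, N) yields the factors.
p = gcd(a ** (r // 2) - 1, N)   # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, N)   # gcd(50, 15) = 5
print(r, p, q)                  # -> 4 3 5
```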
Though this machinery is ludicrously heavyweight for a number we can all factor in our heads instantly, the crucial point is that classical factoring approaches scale exponentially (or close to it) with the size of \(N\), whereas Shor’s algorithm scales polynomially. This means that, given a quantum computer with enough stable, error-corrected qubits, Shor’s algorithm could crack RSA and other modern encryption schemes with ease. And this is a serious cause for concern. From state secrets to bank account details, all digital information would suddenly be exposed and at risk. Modern society itself would be under threat.
Lattice Solutions
As far as the public has been told, even the most advanced quantum computers are nowhere near achieving the required order of millions of stable, error-corrected qubits needed for Shor’s algorithm to pose a threat. Nevertheless, our defences are living on borrowed time. Sooner or later, “cryptographically relevant” quantum computers will be built. This is why NIST is releasing its post-quantum cryptography standards now, so that the world can plan ahead and build defences before attackers get the chance to wield Shor’s algorithm, and other quantum algorithms, in the field.

The three initial standards aim to provide solutions for different situations, employ varied approaches to encryption, and offer more than one algorithm for each kind of application in case one proves vulnerable. Two of them (FIPS 203, known as ML-KEM, and FIPS 204, known as ML-DSA) are based on a family of mathematical problems called structured lattices: the first establishes a shared secret key between two parties over a public channel, and can be used for securing websites, messaging apps, etc; the second generates and verifies digital signatures for document signing, identity verification, etc.
FIPS 203 and FIPS 204 share the same mathematical foundation: the inherent difficulty of the module learning with errors (MLWE) problem. This is a variant of the LWE problem – a mathematical problem commonly used for encryption, in which secret information is hidden by small errors introduced into a set of equations – optimized for computational efficiency. Whereas standard LWE operates over large matrices of integers, MLWE operates over modules: structures in which the integers are replaced by polynomials. This gives what were huge, messy matrices of integers some structure through the rules of polynomial arithmetic, shrinking public keys and accelerating multiplication.
What makes FIPS 203 and FIPS 204 secure is the difficulty of distinguishing a noisy linear relationship from a truly random one in the large lattice structure. Without the noise, this would be trivial to crack: just solve the system of linear equations via Gaussian elimination. But with noise, MLWE becomes a hard lattice problem, closely related to the well-known closest vector problem (CVP), for which no known quantum algorithm provides solutions in polynomial time.
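A toy sketch of the plain LWE idea – with made-up, cryptographically meaningless parameters – shows where the noise enters:

```python
import random

random.seed(0)                 # reproducible toy example
q, n, m = 97, 4, 8             # hypothetical tiny parameters -- far too small for real use

s = [random.randrange(q) for _ in range(n)]                      # secret vector
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # public matrix
e = [random.choice([-1, 0, 1]) for _ in range(m)]                # small random errors

# Public LWE samples: b = A.s + e (mod q). Recovering s from (A, b) is the LWE problem.
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# Without e, (A, b) is a plain linear system and Gaussian elimination reveals s at once;
# with e, each equation is only approximately right, and that is what makes LWE hard.
print(b)
```

MLWE replaces these integer entries with polynomials, but the noisy-equations principle is the same.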
Hashing It Out
The final algorithm standard, FIPS 205 (known as SLH-DSA and derived from the SPHINCS+ scheme), is a high-security back-up to its lattice-based counterpart for digital signatures, and uses a much older and simpler mathematical concept: the one-way hash function, which maps an input of any size to a fixed-length output in a way that is practically impossible to reverse. A hash function can, for example, take a plaintext data input and use a mathematical algorithm to generate an unreadable, fixed-size digest. Because each underlying one-time signature key can only be used once, FIPS 205 is structured in hierarchical layers to allow millions of signatures without losing security. However, its large signature size and slow signing speed make it ill-suited to everyday web use, and better suited to securing firmware updates or signing important archival documents.
Security rests on the fact that, for a sufficiently large hash (say 256-bit, i.e. a \(2^{256}\) search space), finding an input that produces a given output is practically impossible for any classical computer. The only known quantum threat is Grover’s algorithm.
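The one-way property is easy to see with an off-the-shelf hash such as SHA-256 from Python’s standard library:

```python
import hashlib

# SHA-256 maps any input to a fixed 256-bit (64 hex character) digest.
d1 = hashlib.sha256(b"post-quantum cryptography").hexdigest()
d2 = hashlib.sha256(b"post-quantum cryptographY").hexdigest()  # one character changed

print(len(d1) * 4)   # -> 256 bits, regardless of input size
print(d1 != d2)      # -> True: a tiny input change scrambles the whole digest
```

Computing a digest is instant; recovering an input from a digest means searching a space of \(2^{256}\) possibilities.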
Introduced by Lov Grover in 1997, the algorithm can be thought of as a tool for speeding up the search for a name in a phone book when all you have is a telephone number. Searching by brute force, you would have to look through half the phone book, on average, to find the number. In other words, if \(N\) is the total number of entries, you would have to check about \(N/2\) entries before finding it.
Grover’s algorithm speeds up the process, allowing you to find the number in approximately \(\sqrt{N}\) checks, with a success probability that depends on the size of the search space (the phone book) and the number of entries matching your search criteria. But Grover’s algorithm provides only a quadratic speedup, not an exponential one. The fix is therefore simple: increase the size of the hash. Since Grover effectively halves the number of bits of security, doubling the hash length restores the original margin.
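The arithmetic behind this fix is straightforward, treating Grover’s speedup in idealized terms:

```python
import math

# Classical brute-force preimage search on an n-bit hash needs ~2^(n-1) guesses
# on average; Grover's algorithm needs only ~sqrt(2^n) = 2^(n/2) iterations.
n = 256
classical_guesses = 2 ** n // 2          # 2^255
grover_iterations = math.isqrt(2 ** n)   # exactly 2^128

print(math.log2(classical_guesses))      # -> 255.0
print(math.log2(grover_iterations))      # -> 128.0

# Doubling the hash to 512 bits pushes Grover back up to 2^256 iterations,
# restoring the security margin the quadratic speedup had eroded.
print(math.log2(math.isqrt(2 ** 512)))   # -> 256.0
```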
With a fourth standard, based on the Falcon scheme, in the works, these initial standards will likely be iterated on and overtaken by others as time progresses and practical quantum computers start to become a reality. But they do mark an important moment in the quantum era: the moment society built the essential mathematical pillars required to ensure the integrity of our digital information.
The post The Mathematical Pillars of Post-Quantum Cryptography originally appeared on the HLFF SciLogs blog.