I've found that most explanations of quantum computing algorithms for laymen are really bad—often actively misleading. In fact I've never seen a good, succinct one, so I decided to write one.
The goal of this essay is to provide an accessible, succinct primer on how and why quantum computing algorithms work, with an emphasis on intuition over rigor.
I cover: why superposition alone isn't what makes quantum computers powerful, how interference is (walking through Shor's algorithm as an example), and where useful quantum computing actually stands today.
You may have heard that quantum computers (QCs) will be super fast because qubits (the bits in a QC) can be 0 and 1 at the same time (a superposition), so you can do multiple operations at the same time. That's not quite right. While a qubit can have amplitudes (values that are sort of like probabilities) for 0 and 1 at once while in superposition, actually reading its value collapses that superposition and you only get one out—so that's not a useful property on its own.
In fact, classical bits have "superpositions" too! Imagine a random number generator with a 60% probability of outputting 0 and 40% of outputting 1. That's a 0/1 superposition with 60%/40% coefficients, just like a qubit.
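That classical analogy takes only a few lines of Python to sketch (a toy illustration; `read_bit` is a made-up name, not a real API):

```python
import random

random.seed(0)  # seeded only so the run is reproducible

# A classical "superposition": a bit that reads 0 with probability 60%
# and 1 with probability 40%. Until you call it, it's "both at once."
def read_bit():
    return 0 if random.random() < 0.6 else 1

samples = [read_bit() for _ in range(100_000)]
print(samples.count(0) / len(samples))   # ~0.6
```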
So the interesting part isn't the superposition itself. It's that, for qubits, those coefficients (in a quantum context, amplitudes) can interact with each other before you read the value out. And, since they're complex numbers, they can either reinforce each other or cancel each other out. This is called interference.
Think of an amplitude as a vector (or an arrow) in a 2D coordinate plane, starting at the origin. Its angle is called its phase and its length is called its magnitude. You take the absolute square of the amplitude of a state to get its probability. The phase disappears when you do this, though it comes into play for interference while in superposition.
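In code, an amplitude is just a complex number. Here is a minimal Python sketch (the specific numbers are made up for illustration) showing that the absolute square gives a probability, and that the phase drops out when you take it:

```python
import cmath

# A toy amplitude: magnitude 0.8, phase 45 degrees (pi/4 radians).
amp = 0.8 * cmath.exp(1j * cmath.pi / 4)

magnitude = abs(amp)          # length of the arrow, ~0.8
phase = cmath.phase(amp)      # angle of the arrow, ~pi/4 radians
probability = abs(amp) ** 2   # absolute square -> probability, ~0.64

# Any amplitude with the same magnitude but a different phase would give
# the same probability: the phase disappears in the absolute square.
print(magnitude, phase, probability)
```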
So the superposition (and thus the QC) is not useful unless you've managed to design your algorithm so that the amplitudes of the wrong answers "cancel out" and you're left with the right answer before you read it out.
Sounds like magic, right? How hard is it to do that? Let's take an example.
For Shor's Algorithm (a quantum algo that factors numbers to break many cryptographic systems), you first reduce the problem (classically) to finding the period of a certain function. The period is the regular interval at which the function's values repeat. For the large numbers used in cryptography, finding that period classically is believed to be intractable, which is what motivated this algorithm.
The function is modular exponentiation: f(x) = a^x mod N, where N is the number we're trying to factor and a is coprime (shares no factor aside from 1) with N. The period r is the smallest positive x for which a^x mod N = 1. Why does the period help factor N? Because a^r leaves a remainder of 1 when divided by N, (a^r - 1) is divisible by N. And when r is even, we can split it as (a^(r/2) - 1)(a^(r/2) + 1); taking the greatest common divisor of each piece with N then (usually) extracts N's two component factors.
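Here is a hedged sketch of that classical bookkeeping in Python, with a brute-force loop standing in for the quantum step (the function names are illustrative, and the tiny example N = 15 is chosen so the loop is instant):

```python
from math import gcd

def period(a, N):
    """Brute-force the period r of f(x) = a^x mod N.
    (This is the step a quantum computer speeds up; here we just loop.)"""
    x, value = 1, a % N
    while value != 1:
        x += 1
        value = (value * a) % N
    return x

N, a = 15, 7          # toy example: factor 15 using a = 7 (coprime to 15)
r = period(a, N)      # r = 4, since 7^4 = 2401 = 160*15 + 1
# a^r - 1 is divisible by N; since r is even we can split it:
# a^r - 1 = (a^(r/2) - 1) * (a^(r/2) + 1)
half = pow(a, r // 2, N)
factors = gcd(half - 1, N), gcd(half + 1, N)
print(r, factors)     # 4 (3, 5)
```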
Now the hard part, finding the period, which requires quantum computing!
Remember Fourier transforms? They take a function and output a description of the frequencies present in it (in other words, they convert it to the frequency domain).
We can do a quantum Fourier transform (QFT) while in superposition. We then get a superposition over the frequencies of the possible outputs of the modular exponentiation function. Each frequency has an amplitude.
Amplitudes' phases are naturally periodic as well (after all, you can think of a phase as an angle). That's really convenient when we model the outputs of the periodic function with periodic quantum amplitudes.
Modular exponentiation gives the same value at x and at x plus any multiple of r. In the QFT, the amplitudes for those inputs also share the same phase, because the phase, like modular exponentiation, is periodic. When you add two amplitudes with the same phase, their magnitudes add while the phase stays the same: constructive interference.
Imagine this as two arrows starting at the origin pointing in the same direction: you add them together to get a single, longer arrow. As a result, at frequencies that match the period, the contributing amplitudes share a phase and constructively reinforce. At other frequencies, they arrive with a mixture of phases, partly cancel, and are largely drowned out.
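The arrow picture can be sketched directly. Below is a toy Python snippet (the name `total_magnitude` is made up for illustration) adding unit-length amplitudes with aligned phases versus evenly spread phases:

```python
import cmath

def total_magnitude(phases):
    """Add unit-magnitude amplitudes with the given phases and
    return the magnitude of the sum."""
    return abs(sum(cmath.exp(1j * p) for p in phases))

# Four amplitudes with the same phase: the arrows line up and reinforce.
aligned = total_magnitude([0.3, 0.3, 0.3, 0.3])                    # ~4.0

# Four amplitudes with evenly spread phases: the arrows cancel out.
spread = total_magnitude([0, cmath.pi/2, cmath.pi, 3*cmath.pi/2])  # ~0.0

print(aligned, spread)
```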
So, when we read out the QFT result, we get a frequency close to a multiple of the inverse of the period of the modular exponentiation function; a bit of classical post-processing recovers the period itself. Period and frequency are inverses because if something happens more often in a given time, the time between occurrences must be shorter.
As a result, we now have the answer that we wanted the quantum algorithm to give us: the period of the modular exponentiation function. (In reality, we get something probably close to it, since the result from the QFT is probabilistic, not perfect—but that's good enough.)
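To make this concrete, here is a hedged toy simulation in Python. A classical discrete Fourier transform stands in for the QFT (a real QFT runs in superposition over all inputs at once; this just reproduces the arithmetic), using the toy numbers N = 15 and a = 7, whose period is 4:

```python
import cmath

N, a, Q = 15, 7, 64   # toy setup: factor 15 with a = 7, over Q sample points
r = 4                 # the period of 7^x mod 15 (what we want to recover)

# After measuring f(x), the x register is left in a superposition over the
# x's that share one value of f -- here, the x's with 7^x mod 15 == 1.
support = [x for x in range(Q) if pow(a, x, N) == 1]

# A classical DFT standing in for the QFT: amplitude at frequency k.
def amp(k):
    return sum(cmath.exp(2j * cmath.pi * k * x / Q) for x in support)

# The frequencies with the largest magnitudes are multiples of Q / r = 16:
# these are where the contributing phases align and reinforce.
peaks = sorted(range(Q), key=lambda k: -abs(amp(k)))[:4]
print(sorted(peaks))   # [0, 16, 32, 48] -- i.e., multiples of Q/r
```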
Hopefully that gave some intuition for how quantum algorithms exploit periodic structures in problems, and why we can only achieve quantum advantage in very particular cases where that structure exists.
You may have also heard that QCs will be faster than classical computers. They're usually not! They're only faster (more precisely, they scale better with problem size) for certain problems where we have good quantum algorithms, as explained above.
Well what do we have good quantum algorithms for, then, aside from testing weird bombs? Pretty much just simulating the physical world (physics/chemistry) and breaking cryptography. That's great for biopharma + materials scientists and depressing for cryptographers. As Scott Aaronson (a notable quantum computing researcher) noted, "all of the other use cases that people talk about are either more marginal, more speculative, or both."
There used to be hope that the QCs we have now or soon—and this is called the noisy intermediate-scale quantum era, or NISQ—would be useful for some important, high-value things.
That's now generally regarded as incorrect, unfortunately. Useful quantum computing probably requires millions (or at the very least thousands) of logical qubits, where a logical qubit is an error-corrected qubit built out of many redundant physical qubits. A seminal demonstration by QuEra Computing sported a grand total of 48 logical qubits. And that's not even getting into issues like decoherence time and post-selection for error correction.
Suffice it to say, we are a very long way off from useful quantum computing.
Thanks a lot to Jonathan Lu for spot-checking the science here and suggesting the analogy to classical "superpositions," and to Tom Walpo for general feedback.