Insider Brief
- Despite hardware progress, quantum computing still lacks widely accepted applications, prompting a call for theorists to focus on discovering useful quantum algorithms.
- A Caltech doctoral student argues that the ideal quantum algorithm should be provably correct, outperform classical methods on average-case inputs, and produce results that are verifiable or repeatable.
- Researchers should adopt a pragmatic, exploratory mindset, emphasizing that even small advances in algorithm design can meaningfully shape the future of the field.
After decades of progress and billions in funding, quantum computing stands on the brink of maturity. But one persistent question looms over the industry: Are quantum computers actually good for anything?
That question is at the heart of a new post published by Robbie King, a doctoral student at Caltech, on Quantum Frontiers, the blog of the Institute for Quantum Information and Matter at Caltech. Despite significant engineering momentum — with platforms at Harvard, Yale, and Google reporting error rates low enough to support fault-tolerant computing — King argues that theory, not hardware, may ultimately decide the field’s fate.
In contrast to nuclear fusion, which has a clear goal in clean energy production, King writes that quantum computing lacks an application that justifies its cost at scale. That uncertainty, he argues, is not a death knell but an opening for theorists to step up.
“For theorists like me, this is an opportunity, a call to action,” King writes.
Technological Momentum, Theoretical Drift
There is little doubt that quantum hardware is advancing. According to King, it’s conceivable that today’s devices could scale to around 100 logical qubits and a million gates — a range sometimes called the “megaquop” era.
“If mankind spends $100 billion over the next few decades,” he writes, “it’s likely we could build a quantum computer.”
But whether society will choose to do so depends not just on what’s technically possible, but on whether the return on investment becomes compelling.
To get there, theorists must identify quantum algorithms that not only work in principle but promise practical value. King sees this as the missing link between today’s rapid hardware development and a future quantum economy: to sustain the momentum, he writes, growth in investment and hardware progress must be matched by algorithmic capability.
The comparison to artificial intelligence is also telling. Decades ago, AI was a theoretical domain. But once computing resources became abundant, empirical methods began to dominate, pushing theorists to the sidelines. Today’s quantum landscape is the reverse: it is the theorists who have leverage.
Rethinking What Makes a “Good” Algorithm
Traditionally, the ideal quantum algorithm is defined by three criteria: provable correctness, classical hardness and practical utility. Shor’s algorithm for factoring integers comes close. But insisting on these strict standards might actually stall progress, King argues.
For example, proving classical hardness — that is, showing an algorithm is difficult or impossible to replicate with a regular computer — is often infeasible without resolving some of the deepest problems in computer science, like P vs NP. Instead, King proposes a more pragmatic benchmark: a super-quadratic speedup over the best known classical algorithm in the average case across a given input distribution.
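In rough complexity-theoretic terms (a sketch of that benchmark, not notation from King’s post): if the best known classical algorithm takes average-case time $T_c(n)$ on inputs of size $n$ drawn from the chosen distribution, the quantum algorithm’s runtime $T_q(n)$ should satisfy

$$T_q(n) = o\!\left(\sqrt{T_c(n)}\right),$$

that is, strictly better than the quadratic speedup of Grover search, for which $T_q(n) \approx \sqrt{T_c(n)}$. The bar sits above quadratic because speedups of that size are widely expected to be swallowed by the overheads of quantum error correction.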
“Emphasizing provable classical hardness might inadvertently impede the discovery of new quantum algorithms, since a truly novel quantum algorithm could potentially introduce a new classical hardness assumption that differs fundamentally from established ones,” writes King. “The back-and-forth process of proposing and breaking new assumptions is a productive direction that helps us triangulate where quantum advantage lies.”
Ultimately, this flexibility could help theorists discover problems where quantum computers really shine, according to King.
Equally important is the question of utility. King emphasizes that quantum results must be at least repeatable — if not classically verifiable, then reproducible across quantum machines.
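Factoring makes the verifiability half of this criterion concrete: finding the factors of a large number is believed to be classically hard, yet checking a claimed answer is trivial. A minimal sketch (the function name is illustrative, not from King’s post):

```python
# Classical verification of a factoring output: whatever device produced
# the factors, an ordinary computer can confirm them instantly.
from math import prod

def verify_factorization(n, factors):
    """Accept the output only if the claimed factors are nontrivial
    and actually multiply back to n."""
    return all(1 < f < n for f in factors) and prod(factors) == n

print(verify_factorization(15, [3, 5]))   # True
print(verify_factorization(15, [1, 15]))  # False: trivial factorization
```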
He adds a fourth, often overlooked criterion: a working algorithm must not just be valid in theory — it must also be implementable with a realistic distribution of inputs.
“If given a quantum computer tomorrow, could you implement your quantum algorithm?” King asks.
Finding the Right Nails for the Sledgehammer
Quantum computing is sometimes criticized as a solution in search of a problem. King reframes this through a more constructive lens: the field must first identify fundamental tasks that quantum computers perform well, then map those tasks to real-world uses.
One promising category is Hamiltonian simulation — the ability to model quantum systems from physics or chemistry directly. Nature effortlessly computes certain properties that classical computers cannot, suggesting that quantum machines may be well suited to these problems.
Still, examples remain isolated, and King calls for new ensembles of simulation problems where quantum advantage is more clearly demonstrable.
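To see why such problems favor quantum hardware, it helps to look at what brute-force classical simulation of time evolution costs. The sketch below uses an arbitrary toy Hamiltonian of my own choosing (not an example from King’s post); each added qubit doubles the state vector, so the 2^n cost quickly becomes prohibitive classically, while a quantum device represents the same state natively:

```python
# Brute-force classical simulation of time evolution under a small,
# arbitrary toy Hamiltonian (an illustrative choice, not from King's post).
import numpy as np
from scipy.linalg import expm

# Single-qubit Pauli operators
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Toy 3-qubit transverse-field Ising Hamiltonian:
# H = -sum_i Z_i Z_{i+1} - sum_i X_i
n = 3
H = np.zeros((2**n, 2**n), dtype=complex)
for i in range(n - 1):
    H -= kron_all([Z if j in (i, i + 1) else I for j in range(n)])
for i in range(n):
    H -= kron_all([X if j == i else I for j in range(n)])

# Evolve the all-zeros state for time t under exp(-iHt). The state
# vector has 2^n entries, so each extra qubit doubles the cost of
# this classical approach.
t = 1.0
psi0 = np.zeros(2**n, dtype=complex)
psi0[0] = 1.0
psi_t = expm(-1j * H * t) @ psi0
print("survival probability:", abs(psi_t[0]) ** 2)
```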
Similarly, sampling — generating outcomes from a complex distribution — is often considered a poor candidate for utility because its results are neither classically verifiable nor repeatable.
However, while such algorithms are often dismissed as impractical due to their randomness, they could become significantly more useful if their outputs contain extractable information tied to hard problems. As King explains: “If a collection of quantum algorithms generated samples containing meaningful signals from which one could extract classically hard-to-compute values, those algorithms would effectively transition into the compute a value category.”
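As a toy illustration of that transition (the sampler and all names below are placeholders, not anything from King’s post), the pattern is: draw samples, then estimate a statistic from them; if that statistic were classically hard to compute, the sampler would effectively land in the “compute a value” category:

```python
# Toy sketch of the "sampling to compute-a-value" transition.
# draw_samples is a classical placeholder standing in for a quantum
# sampler, so that the example runs; nothing here is hard to compute.
import random

def draw_samples(num_samples, num_bits=8):
    """Placeholder for a quantum device returning bitstring samples."""
    return [tuple(random.randint(0, 1) for _ in range(num_bits))
            for _ in range(num_samples)]

def extract_signal(samples):
    """Estimate a statistic of the sampled distribution, here the
    average parity. If this statistic were classically hard to
    compute, the sampler would effectively compute a value."""
    parities = [sum(bits) % 2 for bits in samples]
    return sum(parities) / len(samples)

samples = draw_samples(10_000)
print("estimated average parity:", extract_signal(samples))
```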
A Field in Need of Bold Ideas
Despite the high stakes, King notes that only a small fraction of papers at major quantum conferences propose new algorithms. One explanation is that the field is too difficult. But King believes that even small ideas can matter.
Theoretical work remains the bottleneck. While platforms mature and funding surges, the field still lacks a broad set of algorithms that can justify the hardware race. Bridging this gap will require more than rigorous logic — it will demand creative risk-taking.
King urges his fellow theoreticians forward: “In between blind optimism and resigned pessimism, embracing a mission-driven mindset can propel our field forward. We should allow ourselves to adopt a more exploratory, scrappier approach: We can hunt for quantum advantages in yet-unstudied problems or subtle signals in the third decimal place. The bar for meaningful progress is lower than it might seem, and even incremental advances are valuable. Don’t be too afraid!”