Insider Brief
- A new study from IBM and Pasqal proposes a testable, criteria-based framework for defining quantum advantage, emphasizing verifiable outputs and measurable improvements over classical computing.
- The authors identify three problem types — sampling, variational algorithms, and expectation value calculations — as the most promising paths to near-term quantum advantage.
- The paper underscores that progress depends more on error management than algorithm design, highlighting hybrid quantum–classical systems and hardware platforms like superconducting circuits and neutral atoms as key enablers.
A new study from IBM and Pasqal outlines a rigorous framework for defining and demonstrating “quantum advantage” — generally understood as the point at which quantum computers perform useful tasks more efficiently or accurately than classical systems. The paper doesn’t claim this milestone has been reached but lays out a practical and testable roadmap to get there.
Quantum advantage has been a moving target for years, according to the researchers involved in the study. The term is often used inconsistently: Some use it to describe narrowly scoped experiments, others to suggest broad disruption. The new framework, released on the pre-print server arXiv by researchers from IBM’s Quantum division and Pasqal, argues that quantum advantage must satisfy two specific conditions: the output must be verifiably correct, and the quantum device must show a measurable improvement over classical alternatives in efficiency, cost, or accuracy.
The study stakes out a middle ground between hype and skepticism. It doesn’t assume that classical computers are doomed, nor does it dismiss near-term quantum progress. Instead, it identifies a handful of problem classes — and specific algorithmic techniques — that could yield early wins.
Focusing on Verifiability
A major challenge in proving quantum advantage is trust. Quantum computers are designed to solve problems too large for classical systems to simulate, which makes their outputs hard to check.
“Establishing trust in the outcome of a computation is one of the foremost tasks in establishing any new computational method,” the researchers write. “This is particularly critical for a quantum computation, since quantum computers are poised to solve problems that are not efficiently solvable by established classical methods, making a direct comparison challenging.”
The authors break this trust dilemma into three strategies.
First is the gold standard: compute error bars using fault-tolerant circuits. These offer mathematical guarantees that the output is correct, but they require thousands of physical qubits to simulate one logical qubit, putting them out of reach for today’s machines.
Second is leveraging problem types that allow for efficient classical verification of results. For example, factoring large integers or certain sampling tasks produce outputs that are easy to check once you have them, even if generating them classically is infeasible.
Third is variational problems, where quantum algorithms output a number (such as the estimated ground-state energy of a molecule) that can be ranked against known values. While such numbers might not be provably correct, they can still outperform classical estimates.
These three strategies provide flexible options for benchmarking progress, depending on the nature of the quantum algorithm and the limits of available hardware.
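As a toy illustration of the second strategy, the snippet below checks a claimed factorization with a single multiplication. The tiny integers are placeholders for illustration only, not a classically hard instance, and the helper name is ours rather than anything from the paper.

```python
def verify_factoring(n: int, factors: list[int]) -> bool:
    """Check a claimed factorization by multiplying the factors back together.

    Verification costs one product and one comparison, even though finding
    the factors in the first place is believed to be classically hard for
    large semiprimes.
    """
    product = 1
    for f in factors:
        if f <= 1:  # reject trivial factors such as 1 or negatives
            return False
        product *= f
    return product == n


# Toy usage: tiny numbers stand in for a cryptographically sized modulus.
print(verify_factoring(15, [3, 5]))  # True  -> claim accepted
print(verify_factoring(15, [2, 7]))  # False -> claim rejected
```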
Which Problems Are Ripe for Advantage?
The study identifies three classes of problems most likely to deliver early quantum wins:
- Sampling problems rely on generating bitstring outputs from quantum circuits: sequences of 0s and 1s produced when a quantum computer measures its qubits. In some cases, such as “peaked random circuits,” researchers can design distributions where one specific output occurs with unusually high probability, a structure that might be easier to verify and harder to simulate. However, most random sampling tasks are still difficult to validate, so their use as a proof point remains limited.
- Variational algorithms, such as the variational quantum eigensolver (VQE) and the newer sample-based quantum diagonalization (SQD), estimate quantities like energy levels. These methods are particularly useful in chemistry and materials science. Some versions, like SQD, produce classical outputs that are robust to quantum noise, making them easier to validate.
- Expectation value calculations support a wide range of applications — from quantum simulations to machine learning — and are central to many quantum workflows. Here, the challenge is minimizing error from noisy circuits, since the results are based on averages over repeated measurements (a minimal sketch of this averaging appears just after this list).
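Here is a minimal Python sketch of how such an average is formed from repeated shots. The run_shots function is a purely classical stand-in for a quantum device, and the probability used is arbitrary; the sketch only illustrates that the expectation value of a Pauli-Z observable is estimated by averaging many single-shot outcomes, with statistical error shrinking roughly as one over the square root of the shot count.

```python
import numpy as np

rng = np.random.default_rng(seed=7)


def run_shots(p_one: float, shots: int) -> np.ndarray:
    """Classical stand-in for a quantum processor: each shot yields 0 or 1."""
    return (rng.random(shots) < p_one).astype(int)


def estimate_z(outcomes: np.ndarray) -> float:
    """Estimate <Z> = P(0) - P(1) from measured bit outcomes."""
    p_one = outcomes.mean()
    return 1.0 - 2.0 * p_one


outcomes = run_shots(p_one=0.3, shots=10_000)
print(f"estimated <Z> = {estimate_z(outcomes):.3f} (exact value here: 0.400)")
```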
The researchers note that different problems carry different levels of risk and promise. Sampling tasks might yield big wins but are harder to validate. Variational and observable-based problems are more modest in impact but easier to certify and test against classical methods.
Error Handling is the Bottleneck
The biggest obstacle to quantum advantage is not algorithm design; it is error. Today’s quantum processors are noisy, and their computations degrade quickly.
The framework breaks error-handling methods into three tiers:
- Error correction is the most rigorous but also the most demanding. It requires encoding information across many qubits and maintaining coherence over long circuit depths. This won’t be practical in the short term.
- Error mitigation uses statistical methods to reduce bias in quantum results without the overhead of full error correction. Techniques like zero-noise extrapolation or probabilistic error cancellation can push quantum performance past classical baselines, but at high sampling costs.
- Error detection involves identifying and discarding corrupted circuit runs. This approach uses fewer extra qubits than correction and enables verification in real time, making it more practical for current hardware.
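The zero-noise extrapolation mentioned above can be caricatured in a few lines: run the same circuit at several artificially amplified noise levels, then fit the measured expectation values and extrapolate back to the zero-noise limit. The numbers below are invented for illustration; real workflows use many shots per noise level and more careful extrapolation models.

```python
import numpy as np

# Noise amplification factors (1.0 = the device's native noise level).
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])

# Illustrative, made-up expectation values measured at each noise level.
measured = np.array([0.72, 0.65, 0.58, 0.46])

# Fit a simple linear model and evaluate it at noise factor 0.
coeffs = np.polyfit(noise_factors, measured, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"extrapolated zero-noise expectation value: {zero_noise_estimate:.3f}")
```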
The researchers argue that hybrid strategies — combining quantum circuits with classical post-processing and high-performance computing — will be key. As error rates decline and classical orchestration improves, mitigation and detection could enable useful computation even before fully fault-tolerant machines arrive.
Hardware Still Limits the Field
On the hardware front, perhaps unsurprisingly, the paper singles out superconducting circuits and neutral atoms as the most likely platforms to achieve early advantage. IBM’s 156-qubit Heron processor, which can perform more than 250,000 circuit layers per second, is cited as a viable candidate. It is already equipped with real-time feedback loops and dynamic circuit capabilities, according to the paper.
Pasqal’s neutral-atom devices, which trap atoms using optical tweezers and excite them to interact via Rydberg states, are highlighted for their ability to operate in both analog and digital modes. The researchers write that these analog capabilities are well suited for simulating physical systems like spin glasses, while their scalability makes them a strong contender for fault-tolerant development.
Both companies have deployed hardware into high-performance computing (HPC) environments, such as GENCI in France and Jülich in Germany. These “quantum-centric supercomputers” treat quantum processors as specialized accelerators, akin to GPUs, and can be integrated using standard scheduling software like SLURM.
This tight integration is already paying off in workflows like SQD, where quantum sampling is paired with classical diagonalization to estimate molecular energies beyond what brute-force methods can handle.
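A toy caricature of that sample-then-diagonalize idea is sketched below. Both the Hamiltonian and the sampled basis states are random stand-ins chosen so the script runs on its own; in a genuine SQD workflow the bitstrings come from a quantum circuit that concentrates weight on physically relevant states, and the matrix is a molecular Hamiltonian. The point is only that classical diagonalization of the small projected matrix yields a variational (upper-bound) energy estimate.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_qubits = 6
dim = 2 ** n_qubits

# Stand-in Hamiltonian: a random symmetric matrix over the 2^n basis states.
h = rng.normal(size=(dim, dim))
hamiltonian = (h + h.T) / 2.0

# Stand-in for quantum sampling: pick a handful of computational-basis states.
sampled_states = rng.choice(dim, size=12, replace=False)

# Classical post-processing: project the Hamiltonian onto the sampled subspace
# and diagonalize the resulting small matrix exactly.
sub_h = hamiltonian[np.ix_(sampled_states, sampled_states)]
subspace_energy = np.linalg.eigvalsh(sub_h).min()

# Exact ground-state energy, feasible here only because the toy problem is tiny.
exact_ground = np.linalg.eigvalsh(hamiltonian).min()

print(f"subspace estimate: {subspace_energy:.3f}  exact ground state: {exact_ground:.3f}")
```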
Looking Ahead
The authors are cautious about declaring any one architecture or algorithm the winner. Instead, they urge the field to adopt clear, testable benchmarks and emphasize that any claim of quantum advantage should be framed as a falsifiable hypothesis.
They also stress that even if a classical method eventually outperforms a quantum benchmark, it’s not a failure — it’s a sign that the field is working. To use a sports analogy: a great coach builds a stronger team, and the competition soon raises its level of play to match.
What’s likely to come next are not sweeping declarations, but incremental wins: a quantum algorithm outperforming a classical solver for a single problem instance, a noise-mitigation technique pushing accuracy past a known limit, or a hybrid system reaching a lower energy estimate than any classical counterpart.
The researchers expect that conversations — and arguments — about quantum advantage will intensify in the near term.
They write: “We expect that credible evidence of quantum advantage is likely to emerge in one of the areas highlighted here within the next two years, enabled by sustained coordination between the high-performance-computing and quantum-computing communities.”