Insider Brief

  • A new panel summary published on arXiv reveals points of agreement — and contention — among top quantum scientists over how to measure progress in quantum computing, particularly as the field moves from theory to applications like simulation and machine learning.
  • While hardware advances such as high-fidelity logical qubits suggest fault-tolerant quantum computing is nearing viability, panelists agreed that algorithmic breakthroughs comparable to Shor’s or Grover’s remain elusive.
  • The panel highlighted a growing divide between those who insist on provable speed-ups over classical methods and those who argue that practical usefulness, even without theoretical guarantees, should also count as meaningful progress.

A panel of leading quantum computing researchers has offered a frank — and at times divisive — assessment of where the field stands today, revealing deep disagreements over how to measure progress and what the future holds for applications like quantum simulation and machine learning.

The discussion, summarized in a recent arXiv article titled “Future of Quantum Computing” and authored by Barry Sanders and panelists Scott Aaronson, Andrew Childs, Edward Farhi and Aram Harrow, took place during the 8th International Conference on Quantum Techniques in Machine Learning, hosted by the University of Melbourne last year. The panel served as a rare forum for leading figures to publicly debate the merits of different approaches, with a sharp focus on the tension between theoretical guarantees and empirical results.

Fault Tolerance Is Within Reach

One of the most concrete signals of progress came from Aaronson, of the University of Texas at Austin, who pointed to experimental results showing that logical qubits — error-corrected qubits that function reliably over time — are now beginning to outperform their underlying physical qubits. This represents a key milestone toward building scalable quantum computers, which must maintain fragile quantum states for long periods in the face of constant noise and errors.

Advances in physical gate fidelity, particularly in trapped ion systems, have brought error rates near or at the threshold required for fault tolerance. That threshold marks the point where adding more qubits and error correction can reliably reduce errors rather than compound them. Researchers now see it as plausible that quantum computers will be able to carry out useful scientific simulations in fields like chemistry and materials science within the next decade.

Aaronson said: “The year 2024 has just seen a genuine logical qubit that can outperform the underlying physical qubits and could be a building block of a future scalable system. Two-qubit physical gates with 99.9% fidelity in trapped ions and other systems have been achieved, so we are close to or already at the threshold for fault tolerance, which was not true before.”
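
To make the threshold effect concrete, here is a minimal sketch (not from the paper) of the standard heuristic scaling for surface-code-style error correction, in which the logical error rate falls roughly as (p/p_th)^((d+1)/2) with code distance d once the physical error rate p drops below the threshold p_th. The threshold value, prefactor, and distances below are illustrative assumptions, not measured numbers.

```python
# Illustrative sketch of the fault-tolerance threshold effect, assuming
# surface-code-style scaling; the constants below are hypothetical.
# Below threshold, growing the code distance d suppresses logical errors;
# above threshold, adding more qubits makes them worse.

def logical_error_rate(p_phys: float, distance: int,
                       p_threshold: float = 0.01,
                       prefactor: float = 0.1) -> float:
    """Heuristic logical error rate ~ A * (p / p_th) ** ((d + 1) / 2)."""
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) / 2)

for p in (0.02, 0.005, 0.001):  # above, below, and well below threshold
    rates = {d: logical_error_rate(p, d) for d in (3, 5, 7)}
    trend = "worse" if rates[7] > rates[3] else "better"
    print(f"p_phys={p}: d=3 -> {rates[3]:.2e}, d=7 -> {rates[7]:.2e} "
          f"({trend} with more qubits)")
```

Run with these assumed numbers, the sketch shows logical error rates growing with code distance above threshold and shrinking below it, which is the qualitative behavior the recent logical-qubit experiments are beginning to demonstrate.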

Still, Aaronson and others emphasized that despite significant hardware advances, there remains no consensus on which quantum architectures — whether trapped ions, neutral atoms, superconducting circuits, or photonic systems — will ultimately prove best for scaling up.

No Breakthroughs on the Algorithmic Frontier

While hardware has made clear strides, algorithmic progress remains slower. Panelists acknowledged that no algorithm developed since the 1990s has matched the importance of Shor’s algorithm for factoring large numbers or Grover’s algorithm for search problems. These foundational results showed how quantum computers could, in principle, outperform classical ones in specific tasks.

Recent efforts have focused on optimization problems and machine learning, including hybrid quantum-classical approaches like the Quantum Approximate Optimization Algorithm (QAOA). Yet the field has yet to produce definitive proof that these algorithms can consistently outperform classical counterparts in real-world scenarios.
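
For readers unfamiliar with QAOA's shape, here is a minimal, self-contained sketch (an illustrative toy, not code from the panel or paper) of depth-1 QAOA for MaxCut on a small ring graph, simulated with NumPy: a cost-phase layer, a mixer layer, and a classical search over the two variational parameters. The graph and parameter grid are assumptions chosen for demonstration.

```python
# Toy depth-1 QAOA for MaxCut on a 4-node ring, simulated with NumPy
# state vectors. Graph and parameter grid are hypothetical examples.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-node ring graph
n = 4
dim = 2 ** n

# Cost of each computational basis state: number of cut edges.
costs = np.array([sum((b >> i & 1) != (b >> j & 1) for i, j in edges)
                  for b in range(dim)], dtype=float)

def apply_rx_all(state: np.ndarray, beta: float) -> np.ndarray:
    """Apply the mixer exp(-i * beta * X) to every qubit."""
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):
        psi = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
        state = (c * psi + s * psi[:, ::-1, :]).reshape(dim)
    return state

def qaoa_expectation(gamma: float, beta: float) -> float:
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+...+> start
    state = np.exp(-1j * gamma * costs) * state            # cost-phase layer
    state = apply_rx_all(state, beta)                      # mixer layer
    return float(np.real(np.vdot(state, costs * state)))  # expected cut value

# Coarse classical grid search over the two variational parameters.
grid = np.linspace(0, np.pi, 25)
best = max((qaoa_expectation(g, b), g, b) for g in grid for b in grid)
print(f"best <C> = {best[0]:.3f} of max cut {costs.max():.0f} "
      f"at gamma={best[1]:.2f}, beta={best[2]:.2f}")
```

The classical outer loop is where the "hybrid" in hybrid quantum-classical lives: the quantum device (here, a simulator) only evaluates the cost expectation for candidate parameters, while a classical optimizer steers the search.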

Childs, of the University of Maryland, and Harrow, of MIT, both noted that for quantum computers to deliver a significant speed-up, they must exploit special structure in a problem. Quantum methods offer no advantage for general-purpose computing tasks, they said, unless the problem has properties that allow quantum effects like superposition or entanglement to accelerate computation.

Still, Childs said that this very open-endedness is one reason quantum computing is so interesting.

Childs said: “Quantum computing is exciting because there are many things to try. We may eventually find quantum speed-ups for problems that we have not yet envisioned as good targets for quantum computers. Hopefully, we will discover surprising applications as we get larger-scale devices. Experimental advances have been very impressive in recent years, and it will be exciting to see how things change as we get larger devices to try things out on. But for now, it is unclear what the applications of quantum computers will be.”

The Heuristic Divide

Much of the panel discussion centered on a philosophical divide: should the quantum community prioritize proving theoretical speed-ups, or is it acceptable to pursue algorithms that simply work well in practice, even without performance guarantees?

Aaronson argued that the field has a responsibility to be clear about its claims, especially given investor and public enthusiasm for quantum computing. He cautioned against what he called the “stone soup” effect: progress that appears to come from quantum algorithms, but is actually the result of intense effort applied to a problem that could also have advanced through classical means.

Farhi, of Google Quantum AI, countered that quantum algorithms should be judged not only by whether they beat classical ones, but also by the insights they offer and their practical value. He pointed to results involving QAOA that revealed new patterns in how the algorithm behaves, including the emergence of universal curves for optimization parameters. These patterns help guide algorithm design even if they do not translate directly into superior performance.

The exchange underscored a broader tension in the field. Some researchers feel that proving a quantum speed-up is essential for validating the technology, while others see value in empirical results that advance understanding or enable new capabilities, even if those capabilities are not superior by traditional metrics.

Machine Learning Remains Unproven Territory

Quantum machine learning, a growing area of interest, came under scrutiny as well. Harrow and Aaronson both expressed skepticism about assumptions underlying many studies in this domain, particularly those that rely on quantum random access memory (qRAM), a technology that does not yet exist at scale and may never be practical.

They emphasized the need for fair comparisons between quantum and classical methods. Without careful benchmarking, results may be misleading, especially when researchers compare small quantum systems to unoptimized classical baselines.

One challenge is that many machine learning tasks are inherently heuristic, meaning they rely on approximations and do not have formal proofs of correctness. This makes it difficult to evaluate whether quantum approaches offer a meaningful advantage, or simply replicate what classical systems already do well.

Some in the audience, including researchers like Maria Schuld, of Xanadu Quantum Technologies, argued for a broader view of success in quantum machine learning. Rather than focusing exclusively on speed or accuracy, Schuld suggested valuing quantum routines that reveal novel features or enhance generalization, even in ways that are not fully understood.

Schuld told the panel: “To give an example, in machine learning, we could show that a reasonably unique quantum routine can reveal features that are just interesting for generalization, and we can show this is interesting. That would be an absolutely powerful argument that I do not see in any papers at the moment and that I think many more people should talk about and think about.”

Simulation May Be the First Real-World Application

Among all applications discussed, quantum simulation remains the most promising, according to the panel. Simulating quantum systems — such as molecules, materials, or exotic states of matter — is notoriously difficult for classical computers, due to the exponential growth of possible configurations.
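
To see why classical simulation gets hard so quickly, a quick back-of-the-envelope sketch: storing the full quantum state of n qubits requires 2^n complex amplitudes. The figures below assume 16 bytes per amplitude (one complex128 value), a common choice in state-vector simulators.

```python
# Memory required to store a full n-qubit state vector on a classical
# machine, assuming one complex128 amplitude (16 bytes) per basis state.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30
    print(f"{n:2d} qubits: 2^{n} = {amplitudes:.3e} amplitudes ~ {gib:,.1f} GiB")
```

By around 50 qubits the state vector no longer fits in any existing memory system, which is why directly simulating quantum matter is widely seen as a natural first target for quantum hardware.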

Quantum computers, by contrast, operate according to the same physical laws as the systems they aim to simulate, making them natural tools for such tasks. But even here, the panel warned against overpromising. Classical methods in computational chemistry are improving rapidly, and proving a clear quantum advantage remains difficult.

Nonetheless, several panelists expressed optimism that future quantum simulations could lead to meaningful breakthroughs, such as the discovery of new materials or catalysts. These use cases may not require exponential speed-ups to be valuable, as even modest gains in performance or accuracy could lead to real-world impact.

Honesty as a Guiding Principle

One recurring theme was the need for transparency in quantum computing research. Aaronson, in particular, stressed that researchers should not only avoid making false claims, but should actively anticipate how their results could be misinterpreted by non-experts.

He and others called for clearer framing in academic papers, especially when algorithms work only on narrowly defined problems or structured instances. In a field where hype can outpace substance, the burden falls on scientists to ensure that results are communicated accurately and responsibly.

Farhi pushed back against this view, stating that he sees no obligation to manage public perception and instead focuses on developing high-quality algorithms and results. But other panelists maintained that broader awareness of how claims are received is essential, particularly when commercial and policy decisions may be influenced by the perceived state of the science.

Debates Remain

The panel concluded with no unified vision for the future of quantum computing, but a shared sense of the stakes involved. As researchers continue to explore the boundaries of what quantum computers can do, the field must balance curiosity-driven exploration with rigorous evaluation.

The panelists suggested that future debates over understanding, usefulness and provability are likely to shape the next phase of quantum development.

The paper concludes: “As for the main question, ‘what is the future of quantum computing?’, the healthy debate and, at times, discord, illustrate the deep questions facing the quantum computing community and the excitement in dealing with profound issues during the march towards scalable quantum computing.”

Because the debate was wide-ranging and covered a spectrum of topics, please read the entire paper on arXiv for a deeper dive than this summary story can provide.

