Insider Brief
- PsiQuantum’s study evaluates photonic fusion-based quantum computing designs, identifying adaptive, encoded schemes that significantly improve tolerance to photon loss.
- The analysis finds that advanced techniques such as exposure-based adaptivity and larger encoded resource states can raise loss thresholds as high as 18.8%, while weighing resource cost against performance.
- Using detailed modeling, the study maps tradeoffs between resource state size, preparation overhead and error tolerance, guiding scalable optical quantum architecture development.
A new study from a team of PsiQuantum researchers lays out a blueprint for building loss-tolerant quantum computers using photons, showing that carefully engineered resource states and adaptive measurements could push photonic systems into the realm of fault-tolerant computing.
The research, posted to arXiv recently, compares a wide range of design schemes for a quantum computing architecture known as fusion-based quantum computing (FBQC). The analysis focuses on one of the biggest hurdles for photonic qubits: photon loss. Using simulations and theoretical comparisons, PsiQuantum researchers evaluate how different strategies fare under realistic conditions and which designs offer the best tradeoff between error tolerance and hardware cost.
Fusion-based computing relies on entangling operations — called fusions — between small, pre-prepared resource states. These resources are stitched together to form larger structures capable of running algorithms. But in photonic systems, each qubit is represented by a single photon, making the system vulnerable. Simply put, if you lose the photon, the quantum information disappears.
PsiQuantum’s paper evaluates nearly a dozen different resource configurations and encoding strategies designed to counteract this. It shows that by combining specific error-correcting codes with measurement adaptivity — where the system adjusts future operations based on past measurement outcomes — photonic quantum systems can tolerate loss rates that would otherwise be catastrophic.
Loss Tolerance and Resource Cost
At the center of the study is a metric called the Loss Per Photon Threshold (LPPT), which measures how much photon loss a system can endure before errors accumulate beyond control. In the most basic designs, loss tolerance is extremely limited. For example, traditional “boosted” fusion networks without any encoding or adaptivity manage an LPPT below 1%.
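To see why per-photon loss is so punishing in unencoded schemes, consider a toy calculation (an illustration assuming independent photon loss, not the paper's model): if each photon survives with probability 1 - p, the chance that an n-photon resource state arrives fully intact shrinks exponentially with n.

```python
# Toy illustration (not the paper's model): probability that an n-photon
# resource state suffers no photon loss, assuming each photon is lost
# independently with probability p.
def intact_probability(p: float, n: int) -> float:
    return (1 - p) ** n

for p in (0.01, 0.027, 0.188):   # 1%, plus the 2.7% and 18.8% LPPT values quoted in this article
    for n in (4, 24, 224):       # resource-state sizes mentioned in this article
        print(f"loss per photon {p:.1%}, {n} photons: "
              f"{intact_probability(p, n):.1%} chance all photons survive")
```

The encoded, adaptive schemes described below exist precisely so that the computation does not need every photon to survive; the LPPT quantifies how much per-photon loss the scheme as a whole can absorb.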
PsiQuantum’s team shows that introducing encoding — essentially spreading quantum information across multiple photons in a structured way — significantly boosts resilience. Using a resource state called the 6-ring network with a {2,2} Shor code, the researchers reach an LPPT of 2.7%. Incorporating adaptivity, where measurements are adjusted on the fly based on outcomes, raises the threshold further. A four-qubit code with adaptivity pushes LPPT to 5.7%.
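A hedged toy Monte Carlo shows the flavor of how spreading information across photons helps. The recovery rule here (a logical qubit survives if every block keeps at least one photon) is chosen purely for illustration and is not the paper's {2,2} Shor-encoded fusion procedure.

```python
import random

def logical_loss_rate(p: float, blocks: int, photons_per_block: int,
                      trials: int = 100_000) -> float:
    """Toy model: the logical qubit is lost if any block loses all of its photons.
    Each photon is lost independently with probability p. Illustrative only."""
    failures = 0
    for _ in range(trials):
        for _ in range(blocks):
            if all(random.random() < p for _ in range(photons_per_block)):
                failures += 1
                break
    return failures / trials

p = 0.05  # 5% loss per photon
print("unencoded (1 photon):       ", logical_loss_rate(p, blocks=1, photons_per_block=1))
print("2 blocks of 2 photons each: ", logical_loss_rate(p, blocks=2, photons_per_block=2))
```

In this toy model, a 5% per-photon loss rate turns into a logical loss rate of roughly 0.5%. The paper's encoded fusion schemes are more subtle, since the code must also leave the required fusion measurements physically implementable, but the qualitative benefit of redundancy is the same.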
In more advanced designs, particularly those using “exposure-based adaptivity,” the LPPT reaches as high as 17.4% with a 168-qubit resource state. A newer geometry called the “loopy diamond” network — using 224 qubits and a {7,4} encoding — delivers even higher loss tolerance, hitting 18.8%.
However, the study emphasizes that resilience comes at a cost: higher thresholds generally require larger and more complex resource states. These are expensive to prepare, especially when constructed from basic three-photon building blocks known as 3GHZ states. For instance, a 24-qubit 6-ring state requires more than 1,500 3GHZ states to assemble, while a 224-qubit loopy diamond network demands over 52,000.
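Those two figures alone make the scaling concrete. A back-of-the-envelope calculation using only the counts quoted above (not a figure from the paper itself) shows how the per-qubit cost grows for the larger, more loss-tolerant state:

```python
# Back-of-the-envelope comparison using the counts quoted in this article.
configs = {
    "24-qubit 6-ring":          1_500,   # "more than 1,500" 3GHZ states
    "224-qubit loopy diamond": 52_000,   # "over 52,000" 3GHZ states
}
for name, ghz_states in configs.items():
    qubits = int(name.split("-")[0])    # leading number in the label
    print(f"{name}: ~{ghz_states:,} 3GHZ states, "
          f"~{ghz_states / qubits:.0f} per resource-state qubit")
```

Even under this optimistic accounting, the loopy diamond state costs roughly four times as many 3GHZ states per qubit as the 6-ring, which is exactly the kind of tradeoff the next section maps out.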
In practical terms, this means that while such photonic quantum computations are theoretically possible, they remain impractical with current technology because of these extreme resource requirements.
Tradeoffs Between Size and Performance
Rather than chase the highest possible thresholds, the paper focuses on mapping out the tradeoff space — how much performance gain each additional photon delivers and when the cost becomes prohibitive. For example, PsiQuantum’s modeling suggests that a 32-qubit loopy diamond resource state — a cluster of photons arranged for reliability even when some photons are lost — offers better loss tolerance than a 24-qubit 6-ring while also being cheaper to build.
To further illustrate these tradeoffs, the team plots LPPT against resource size for dozens of schemes. While the theoretical maximum LPPT for adaptive systems approaches 50%, achieving this would require impractically large resource states. The best-performing small-to-medium scale systems top out at about 15%–19% LPPT, depending on geometry and adaptivity.
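The handful of data points quoted in this article already hint at the shape of that curve. A minimal plotting sketch using only those numbers (the 5.7% four-qubit-code result is omitted because its resource-state size is not given here) might look like this:

```python
import matplotlib.pyplot as plt

# (resource-state qubits, LPPT in %): only the examples quoted in this article.
points = {
    "6-ring, {2,2} Shor code":       (24, 2.7),
    "exposure-based adaptivity":     (168, 17.4),
    "loopy diamond, {7,4} encoding": (224, 18.8),
}
sizes = [s for s, _ in points.values()]
lppts = [t for _, t in points.values()]

plt.scatter(sizes, lppts)
for label, (s, t) in points.items():
    plt.annotate(label, (s, t), textcoords="offset points", xytext=(5, 5), fontsize=8)
plt.axhline(50, linestyle="--", label="theoretical adaptive limit (~50%)")
plt.xlabel("resource-state size (qubits)")
plt.ylabel("loss per photon threshold (%)")
plt.legend()
plt.show()
```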
These results help identify “sweet spots” — designs that balance loss tolerance and hardware complexity. The authors suggest that, for near-term implementations, focusing on small resource states with smart adaptivity yields the best return.
Adaptive Fusion and Geometry Selection
The PsiQuantum team classifies adaptivity into two main types: local and global. Local adaptivity involves adjusting fusions within a small cluster of photons, while global adaptivity modifies the entire fusion network based on aggregate outcomes. The most effective technique analyzed — exposure-based adaptivity — selectively chooses which measurements to perform and in which order, prioritizing the parts of the system most vulnerable to error buildup.
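As a purely hypothetical sketch of the idea behind exposure-based ordering (the priority rule below is invented for illustration and is not the paper's algorithm), one can picture a greedy scheduler that always performs the most at-risk fusion next:

```python
import heapq

def greedy_measurement_order(exposure_scores: dict[str, float]) -> list[str]:
    """Hypothetical sketch: repeatedly pick the pending fusion with the highest
    'exposure' score, i.e. the one judged most vulnerable to accumulating loss.
    In a real adaptive scheme the scores would come from the architecture and
    decoder, and would be updated as measurement outcomes arrive."""
    heap = [(-score, name) for name, score in exposure_scores.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

# Hypothetical fusions with made-up exposure scores.
print(greedy_measurement_order({"fusion_A": 0.9, "fusion_B": 0.2, "fusion_C": 0.6}))
```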
On top of encoding and adaptivity, geometry plays a critical role. The team compares 4-star, 6-ring, and 8-loopy-diamond network topologies. Each configuration dictates how photons are entangled and measured, with some layouts offering better loss tolerance or simpler resource construction.
The study also introduces cost models for evaluating how many elementary operations are required to build each resource state. Using optimistic assumptions, such as perfect fusion success and no photon loss during assembly, they estimate preparation overhead in terms of the number of 3GHZ states needed. Even under these ideal conditions, resource costs rise steeply with encoding size.
Implications for Fault-Tolerant Quantum Computing
While fault-tolerant photonic quantum computing remains a long-term goal, the researchers use this investigation to lay out a concrete map for getting there. It shows that with clever use of error-correcting codes, adaptive measurements and optimized network geometries, photon loss can be tamed to workable levels.
The results are especially important for companies like PsiQuantum that are betting on photons over other qubit types, such as trapped ions or superconducting circuits. Photons offer advantages like room-temperature operation and easy transmission over fiber, but they suffer from unique challenges — chief among them, fragility.
By framing the problem in terms of LPPT and resource cost, the PsiQuantum team provides a way to benchmark progress. New schemes can be compared on equal footing, and system architects can prioritize configurations that strike the right balance.
Limitations and Future Work
The study acknowledges several limitations. First, its cost metrics are based on simplified assumptions — such as perfect switching and no losses in the assembly stage — that may not hold in practice. Second, the study focuses on theoretical loss thresholds; a full assessment of end-to-end system performance would also have to account for decoherence, gate errors and environmental noise.
It’s likely that, as resource states grow, the complexity of managing measurement adaptivity also increases. Implementing dynamic fusion strategies in real time will require advances in classical control systems, fast switching networks and low-latency feedback loops, along with other technological innovations.
Future work could involve refining cost models with real-world data from photonic devices, testing these adaptive strategies experimentally, and integrating them into full-stack architectures. The study also hints at further gains from leveraging “scrap” information — residual quantum states that survive partial photon loss — a technique that could push non-adaptive systems beyond current limits.
The paper on arXiv goes into far greater technical depth than this summary, so reviewing the study directly is recommended for more exact technical detail. ArXiv is a pre-print server, meaning the work has not yet been officially peer-reviewed, a key step of the scientific method.
The PsiQuantum team of researchers included: Sara Bartolucci, Tom Bell, Hector Bombin, Patrick Birchall, Jacob Bulmer, Christopher Dawson, Terry Farrelly, Samuel Gartenstein, Mercedes Gimeno-Segovia, Daniel Litinski, Yehua Liu, Robert Knegjens, Naomi Nickerson, Andrea Olivo, Mihir Pant, Ashlesha Patil, Sam Roberts, Terry Rudolph, Chris Sparrow, David Tuckett and Andrzej Veitia.