
Insider Brief

  • IBM has developed a new algorithm, Relay-BP, that significantly improves error detection and correction in quantum memory, marking progress toward scalable, fault-tolerant quantum computing.
  • Relay-BP demonstrated up to a tenfold accuracy improvement over prior methods while reducing resource demands, making it suitable for real-time use on compact hardware like FPGAs.
  • The algorithm introduces adjustable memory parameters to enhance belief propagation, enabling faster and more reliable decoding across a wide range of quantum error-correcting codes.

IBM researchers have developed a new decoder algorithm that outperforms all known alternatives in identifying and correcting errors in quantum memory, which the team suggests marks another step toward scalable, fault-tolerant quantum computing.

The algorithm, known as Relay-BP and discussed in this paper on the pre-print server arXiv, significantly improves how quantum systems detect and fix errors in real time. In testing, it showed up to a tenfold increase in accuracy over previous leading methods, while also reducing the computing resources required to implement it. IBM says this innovation addresses a persistent bottleneck in the quest to build reliable quantum computers and could lead to experimental deployments within the next few years.

Quantum computers are notoriously sensitive to errors because the devices’ building blocks — qubits — are fragile, easily disturbed by environmental noise or imperfections in control. Without error correction, a practical quantum computer capable of solving large-scale problems would be impossible. IBM’s new decoder helps solve this by rapidly interpreting data from quantum systems and determining which errors occurred and how to correct them, all without directly disturbing the qubits themselves.

The decoder works by analyzing syndromes — indirect measurements of quantum states — that provide clues about where something has gone wrong, according to the post. Classical algorithms then use this information to infer the most likely errors and propose fixes. Relay-BP, built on an improved version of a classical technique called belief propagation (BP), is the most compact, fast, and accurate implementation yet for decoding quantum low-density parity-check (qLDPC) codes, according to IBM researchers who published a detailed technical paper last month.
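
To make the syndrome idea concrete, here is a minimal, hypothetical sketch in Python. It uses a tiny classical three-bit repetition code and a brute-force lookup table rather than the qLDPC codes and decoder IBM describes, but the principle is the same: parity checks flag where something went wrong, and a classical routine infers the most likely error without reading the encoded data directly.

```python
import numpy as np

# Toy example only: a classical 3-bit repetition code, not one of the qLDPC
# codes Relay-BP targets. Valid codewords are 000 and 111, and the two
# parity checks compare neighbouring bits.
H = np.array([[1, 1, 0],
              [0, 1, 1]])  # parity-check matrix

# Brute-force lookup: map each syndrome to its lowest-weight (most likely) error.
syndrome_table = {}
for bits in range(2 ** H.shape[1]):
    error = np.array([(bits >> i) & 1 for i in range(H.shape[1])])
    key = tuple((H @ error % 2).tolist())
    if key not in syndrome_table or error.sum() < syndrome_table[key].sum():
        syndrome_table[key] = error

codeword = np.array([1, 1, 1])             # logical "1" spread over 3 bits
received = codeword ^ np.array([0, 1, 0])  # one bit flips in transit
syndrome = tuple((H @ received % 2).tolist())  # checks fire near the flip
corrected = received ^ syndrome_table[syndrome]
print(syndrome, corrected)                 # -> (1, 1) [1 1 1]
```

Lookup tables like this grow exponentially with code size, which is why scalable codes need message-passing decoders such as the belief-propagation family that Relay-BP improves on.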

Why Decoding Matters

Error correction is fundamental to fault tolerance, which refers to a quantum computer’s ability to operate reliably despite inevitable errors. To achieve this, physical qubits are combined into logical qubits using error-correcting codes. These logical qubits can retain quantum information longer and with greater stability, provided errors are detected and corrected efficiently.

But decoding that information isn’t easy, the team writes. Traditional decoders tend to be either accurate but slow, or fast but less accurate. Many existing systems also require substantial computing power, making them difficult to deploy in real-time scenarios or on hardware with strict performance and energy constraints.

IBM’s Relay-BP decoder is designed to overcome these trade-offs, according to the post. It’s fast enough to keep up with quantum error rates, compact enough to run on field-programmable gate arrays (FPGAs), and flexible enough to adapt to a wide range of qLDPC codes. IBM says these four characteristics — speed, compactness, flexibility, and accuracy — are critical for real-world error correction.

The team writes: “This means that, to our knowledge, Relay-BP is the only real-time qLDPC decoder that hits all four nails on the head. It is not only flexible and compact, but also faster and more accurate than all known alternative methods.”

The Technology Behind Relay-BP

Relay-BP builds on belief propagation, a technique used in both classical and quantum systems to find likely causes of observed behavior. The analogy used by IBM researchers likens BP to a group of people passing messages to determine who in their group committed an unseen mistake. Each person updates their beliefs based on what they hear, and ideally, the group reaches consensus.
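
As a rough illustration of that message-passing picture, the sketch below implements a bare-bones "min-sum" variant of belief propagation in Python, conditioned on an observed syndrome. It is a generic, textbook-style BP loop on a toy check matrix, not code from IBM’s paper; the function name and the small example matrix are illustrative assumptions.

```python
import numpy as np

def min_sum_bp(H, syndrome, prior_llr, iterations=30):
    """Toy min-sum belief propagation: infer an error pattern from a syndrome.

    H          -- binary check matrix (rows = checks, columns = possible errors)
    syndrome   -- observed outcome of each check (0 or 1)
    prior_llr  -- log((1 - p) / p) for each error mechanism's probability p
    Returns a hard-decision error estimate (1 = "this error happened").
    """
    m, n = H.shape
    check_to_bit = np.zeros((m, n))             # messages from checks to bits
    sign_flip = (-1.0) ** np.asarray(syndrome)  # a fired check flips the sign
    error = np.zeros(n, dtype=bool)
    for _ in range(iterations):
        # Bit-to-check: each bit passes on its prior plus everything it has
        # heard, minus the message coming back along the edge being updated.
        total = prior_llr + check_to_bit.sum(axis=0)
        bit_to_check = H * (total - check_to_bit)
        # Check-to-bit: each check reports the sign consensus and the weakest
        # (minimum-magnitude) opinion among the *other* bits it touches.
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = bit_to_check[i, idx]
            signs = np.where(msgs >= 0, 1.0, -1.0)
            mags = np.abs(msgs)
            for k, j in enumerate(idx):
                others = [t for t in range(len(idx)) if t != k]
                check_to_bit[i, j] = (sign_flip[i]
                                      * signs[others].prod()
                                      * mags[others].min())
        error = (prior_llr + check_to_bit.sum(axis=0)) < 0
        if np.array_equal(H @ error % 2, np.asarray(syndrome) % 2):
            break                               # syndrome explained; stop early
    return error.astype(int)

# Tiny worked example: two checks over three possible bit-flip errors.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
prior = np.full(3, np.log(0.9 / 0.1))           # each error is unlikely a priori
print(min_sum_bp(H, syndrome=[1, 1], prior_llr=prior))   # -> [0 1 0]
```

On a real qLDPC code the check matrix has thousands of columns, and a loop like this can fail to settle, which is where the BP+OSD workaround and Relay-BP’s memory trick, described below, come in.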

However, standard BP can fall short because it sometimes fails to settle on a clear answer or oscillates between options. To improve performance, researchers in the past combined BP with another algorithm called ordered statistics decoding (OSD). While accurate, the BP+OSD combination is computationally expensive and hard to implement efficiently.

Relay-BP solves this by adding a new twist to BP. Rather than having each computational node treat every message equally, Relay-BP allows each node to weigh information differently based on its own internal “memory strength.” Some nodes remember past messages more strongly than others, and some even forget previous beliefs entirely. These knobs—called memory parameters—help the algorithm avoid traps and converge more decisively.
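
One way to picture that knob in code, continuing the toy decoder above: give every bit a memory coefficient and blend its posterior from the previous sweep back into its prior. This is only an illustrative sketch of the idea as the article describes it; the gamma values, the blending rule, and the function name are assumptions, and the precise Relay-BP update rule is specified in IBM’s arXiv paper.

```python
import numpy as np

def memory_damped_bp(H, syndrome, prior_llr, gamma, iterations=30):
    """Toy min-sum BP with a per-bit "memory strength" gamma[j].

    Illustrative only: gamma[j] > 0 lets bit j remember its posterior from
    the previous sweep, gamma[j] = 0 recovers plain BP, and gamma[j] < 0
    makes it actively forget (push against) its old belief. The real
    Relay-BP update rule is defined in IBM's paper.
    """
    m, n = H.shape
    check_to_bit = np.zeros((m, n))
    previous_posterior = np.zeros(n)            # what each bit believed last sweep
    sign_flip = (-1.0) ** np.asarray(syndrome)
    error = np.zeros(n, dtype=bool)
    for _ in range(iterations):
        # The only change from plain BP: mix last sweep's posterior into the
        # prior, with a different weight for every bit.
        effective_prior = prior_llr + gamma * previous_posterior
        total = effective_prior + check_to_bit.sum(axis=0)
        bit_to_check = H * (total - check_to_bit)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            msgs = bit_to_check[i, idx]
            signs = np.where(msgs >= 0, 1.0, -1.0)
            mags = np.abs(msgs)
            for k, j in enumerate(idx):
                others = [t for t in range(len(idx)) if t != k]
                check_to_bit[i, j] = (sign_flip[i]
                                      * signs[others].prod()
                                      * mags[others].min())
        previous_posterior = prior_llr + check_to_bit.sum(axis=0)
        error = previous_posterior < 0
        if np.array_equal(H @ error % 2, np.asarray(syndrome) % 2):
            break
    return error.astype(int)

# Disordered memory strengths: most bits remember a little, a few "forget".
rng = np.random.default_rng(7)
gamma = rng.uniform(-0.2, 0.8, size=3)
H = np.array([[1, 1, 0], [0, 1, 1]])
prior = np.full(3, np.log(0.9 / 0.1))
print(memory_damped_bp(H, [1, 1], prior, gamma))   # -> [0 1 0]
```

The negative gamma values in this sketch correspond to the "actively forgetting" nodes the article mentions, the setting the team found could actually improve performance.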

This adjustment gives Relay-BP a structural edge by avoiding what researchers call “trapping sets,” scenarios where algorithms get stuck in indecision. And it does so with fewer resources and in less time. According to IBM, Relay-BP is the only known decoder that excels across all four key performance dimensions—something previous decoders had failed to achieve simultaneously.

Interdisciplinary Effort

The lead researcher behind Relay-BP, Tristan Müller, began his career at IBM as a firmware developer and transitioned into quantum error correction. Drawing on his background in many-body physics, Müller noticed parallels between physical systems and optimization strategies. Memory tuning, a known tool in physics, became central to the algorithm’s success. In one case, a bug in the code led to negative memory strengths, a scenario in which nodes actively “forget” previous messages—and it turned out to improve performance.

This insight reflects the broader interdisciplinary nature of the project. IBM’s team combined expertise from firmware engineering, condensed matter physics, software development, and mathematics. The ability to integrate different fields proved essential, especially for building an algorithm that is both technically sound and practically deployable.

“I’m doing fundamental research, but then seeing it turned into a product with my colleagues sitting in the room next to me. It’s a kind of luxury that not many people get to experience. It’s almost like a composer hearing their symphony performed by an orchestra for the first time,” said Müller.

IBM credits this cross-functional approach as a cultural strength of its quantum program. The company has emphasized that building useful quantum computers will require not only hardware and software innovation but also flexible teams willing to cross disciplinary boundaries.

Toward Real-Time Quantum Processing

Relay-BP currently focuses on decoding for quantum memory — in other words, keeping quantum states stable over time. This is an essential milestone but still short of full quantum processing, which involves manipulating logical qubits through long sequences of operations.

To get there, the decoding must become even faster and smaller. Real-time processing puts heavier demands on decoding speed and hardware integration. According to IBM, the decoding systems available today are not yet compact enough for real-time quantum computation involving logical operations. However, work is underway to further shrink and optimize Relay-BP for this purpose.

IBM plans to begin experimental testing of the decoder in 2026 on Kookaburra, an upcoming system designed to explore fault-tolerant quantum memory. Relay-BP is expected to play a central role in that demonstration, acting as a testbed for scaling quantum error correction into full system-level deployments.

This fits into the company’s broader fault-tolerance roadmap, which includes intermediate systems like Heron and Flamingo, and a long-term vision of achieving quantum advantage with large, error-corrected machines such as IBM’s planned Starling architecture.

Looking Ahead

While Relay-BP may not be the final decoder IBM uses in future hardware, the team considers it a vital piece of the puzzle. The algorithm pushes the limits of what can be done with classical resources to stabilize quantum systems. It also offers a new tool for researchers looking to bridge the gap between experimental qubits and reliable quantum logic.

As IBM researchers continue refining the system, their hope is to eventually demonstrate a complete, efficient, real-time decoding implementation capable of handling the complexity of quantum logic circuits. The path to fault-tolerant quantum computing remains long, but with Relay-BP, the industry now has a clearer signal in the noise.

For a deeper dive into the technology, please review the paper on arXiv. Note that arXiv is a pre-print server, meaning IBM’s findings have not yet been peer-reviewed.

