Fusion-based quantum computation
S. Bartolucci, P. Birchall, et al.
This work introduces fusion-based quantum computation (FBQC), a model of universal, fault-tolerant quantum computing built from two primitives: generation of small, constant-sized entangled resource states and projective entangling measurements (fusions). The motivation is to realize fault-tolerant topological quantum computation tailored to photonic architectures, where measurements are fast and qubit noise is low, but measurements are destructive and deterministic two-qubit entangling gates are unnatural. FBQC lies between circuit-based surface-code approaches (which build entanglement with gates and extract syndromes via non-destructive multi-qubit measurements) and measurement-based quantum computation (which uses destructive single-qubit measurements on extensively entangled cluster states). In FBQC, the same destructive entangling measurements on finite resource states both create long-range entanglement and supply the error information needed for fault tolerance, potentially improving performance for photonic systems. The central principle is to build a fusion network from resource states and fusions; algorithms are implemented by choosing measurement bases for selected fusions, and appropriate combinations of fusion outcomes provide computational outputs.
The paper situates FBQC with respect to prior photonic and measurement-based approaches. It builds on ideas from KLM (Knill, Laflamme, Milburn) linear optical quantum computing and subsequent resource-efficient LOQC and cluster-state MBQC schemes, which rely on nondeterministic entangling operations to first create large entangled states and then perform single-qubit measurements for computation. Prior work explored percolation-based, ballistic photonic architectures, with error thresholds derived primarily for single-qubit measurements on pre-entangled lattices. FBQC differs by unifying entanglement creation and error correction via the same entangling fusion measurements and by explicitly incorporating errors from both resource state generation and fusion operations into a stabilizer-coded, topological framework compatible with standard decoders. The authors also compare two specific networks (6-ring and 4-star), the latter corresponding closely to the best prior architectures, and draw on boosting of linear-optical Bell measurements, encoded fusions, and lattice-surgery-style logic.
- Fusion network and stabilizer framework: The computation is built from repeated small stabilizer resource states and two-qubit projective fusion measurements. A stabilizer description uses: (1) the resource state stabilizer group R, generated by the stabilizers of all resource states, and (2) the fusion group F, the Pauli subgroup generated by the fusion measurements. The check group C = R ∩ F comprises resource-state stabilizers whose eigenvalues can be reconstructed from fusion outcomes; redundancy among fusion outcomes makes C nontrivial and produces a degenerate linear code enabling error correction (a toy numerical example follows this list).
- 6-ring fusion network: Each resource state is a six-qubit ring graph state with stabilizer generators S1..S6 (each qubit has an X on itself and Z on neighbors). Two resource states occupy opposite corners of a unit cell. Two-qubit fusions connect qubits that share a face or edge, using projective XX and ZZ measurements (M1 = X1X2, M2 = Z1Z2). Resource states form planes perpendicular to the (1,1,1) direction; each state fuses to three above and three below.
- Syndrome graph: The redundancy is represented by primal and dual syndrome graphs. Each edge corresponds to a fusion measurement outcome bit; each node is a check (generator of C). Flipping a fusion outcome flips parities at incident checks; erasing an outcome merges adjacent checks. In the 6-ring network the syndrome graphs are identical in the bulk (cubic with added diagonals), with each vertex having 12 incident edges. Each Bell fusion contributes one bit to each of the primal and dual graphs. Standard decoders (minimum-weight perfect matching, union-find) are applicable.
- Error models and thresholds:
- Hardware-agnostic fusion error model: Each fusion outcome (XX and ZZ per fusion) is independently erased with probability P_erasure and flipped with probability P_error. Thresholds are mapped by setting P_erasure = (1-β) x and P_error = β x and finding x_th(β) via Monte Carlo sampling and MWPM decoding, producing a correctable region in (P_erasure, P_error).
- Linear optical error model: Dual-rail qubit fusions are nondeterministic even without loss: success heralds both XX and ZZ; failure yields separable single-qubit measurements in a chosen basis (X1, X2 or Z1, Z2), allowing recovery of one of the two intended observables while the other is treated as erased. With photon loss (each photon lost independently with probability P_loss), any missing photon causes both XX and ZZ to be erased. Fusions can be "boosted" using ancillary entangled photons following Grice: P_fail = 1/2^n with 2^n - 2 extra photons; adding photons raises loss-induced erasures. If N photons participate, the no-loss probability is η^N with η = 1 - P_loss. The resulting fusion erasure model gives a per-measurement erasure probability P0 = 1 - (1 - P_fail/2) η^{1/P_fail}. Encoded fusions further suppress erasures by replacing each physical qubit with an encoded qubit and performing fusions transversally; the paper considers the (2,2)-Shor [[4,1,2]] code.
- Networks compared: 6-ring and 4-star (FBQC variant using 4-qubit GHZ resource states). Encoded 6-ring with (2,2)-Shor code is also analyzed via mapping to the hardware-agnostic thresholds.
- Logic implementation: Fault-tolerant computation uses local modifications of the bulk by changing fusion bases or substituting some fusions with single-qubit measurements to create boundaries (primal/dual) and other topological features (twists), enabling lattice surgery and Clifford gates; state injection and magic state distillation enable T and small-angle rotations. The physical layout example uses a 2D array of resource state generators (RSGs) producing 6-ring states each clock cycle, routing qubits to fusion devices (with configurable settings), including one-cycle delays to connect across time steps.
- Simulations: Monte Carlo error sampling with MWPM decoder to estimate threshold curves for 6-ring and 4-star under the hardware-agnostic model; linear-optical thresholds derived via analytical mapping from P_fail and P_loss to P_erasure using the fusion erasure model.
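To make the check group C = R ∩ F concrete, here is a minimal sketch (our illustration, not code from the paper) on a deliberately simplified fusion network: four Bell-pair resource states fused in a closed loop, rather than the paper's 3D 6-ring network. Paulis are represented as binary symplectic vectors, and GF(2) elimination (the helper names `pauli` and `in_span_gf2` are ours) verifies that the product of all four XX fusion operators is a resource-state stabilizer, so its combined outcome is a deterministic parity check, while a single fusion operator is not.

```python
import numpy as np

def pauli(n, xs=(), zs=()):
    """A Pauli on n qubits in binary symplectic form, (x | z) in GF(2)^(2n)."""
    v = np.zeros(2 * n, dtype=np.uint8)
    for q in xs:
        v[q] ^= 1
    for q in zs:
        v[n + q] ^= 1
    return v

def in_span_gf2(rows, v):
    """Test membership of v in the GF(2) row space of `rows` by elimination."""
    M = np.array(rows, dtype=np.uint8)
    v = v.copy()
    r = 0
    for c in range(M.shape[1]):
        hits = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not hits:
            continue
        M[[r, hits[0]]] = M[[hits[0], r]]      # move a pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                   # clear column c elsewhere
        if v[c]:
            v ^= M[r]                          # reduce v by the pivot row
        r += 1
    return not v.any()

n = 8  # four Bell pairs on qubit pairs (0,1), (2,3), (4,5), (6,7)
R = []  # resource-state stabilizer generators: XX and ZZ on each Bell pair
for a in range(0, n, 2):
    R.append(pauli(n, xs=(a, a + 1)))
    R.append(pauli(n, zs=(a, a + 1)))

# Fusions close a loop across the pairs: (1,2), (3,4), (5,6), (7,0).
xx_fusions = [pauli(n, xs=p) for p in [(1, 2), (3, 4), (5, 6), (7, 0)]]

# The product (XOR in symplectic form) of the four XX fusion operators is a
# resource-state stabilizer, i.e. an element of the check group C = R ∩ F:
# the product of the four XX outcomes is deterministically +1.
loop = np.bitwise_xor.reduce(xx_fusions)
print("XX fusion loop is a check:  ", in_span_gf2(R, loop))           # True
print("single XX fusion is a check:", in_span_gf2(R, xx_fusions[0]))  # False
```

Flipping any single fusion outcome flips the parity of this check, which is exactly the syndrome-graph picture above: each fusion outcome is an edge and each reconstructable stabilizer a node.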
- Under the hardware-agnostic fusion error model:
- 6-ring fusion network has a larger correctable region than the 4-star network.
- Marginal erasure-only threshold (P_error = 0): 6-ring ~11.98%; 4-star ~6.90%.
- Marginal measurement-error-only threshold (P_erasure = 0): 6-ring ~1.07%; 4-star ~0.75%.
- Linear optical error model and boosting:
- Fusions can be randomized to fail in X or Z single-qubit bases, allowing one intended two-qubit observable to be recovered and interpreting the other as erased.
- Boosting reduces intrinsic P_fail at the cost of more photons, which increases loss-induced erasures, so an optimal P_fail exists for each network (see the numerical sketch after these results).
- With (2,2)-Shor encoded 6-ring, the marginal fusion-failure threshold (no loss) increases to 43.2%.
- With Bell-pair-boosted fusions (P_fail = 1/4), the architecture tolerates 2.7% photon loss per photon. Equivalently, the probability that at least one photon is lost in a fusion can be 10.4% while remaining in the correctable region.
- Architectural properties:
- FBQC roughly doubles thresholds compared to prior photonic MBQC-like schemes: the 4-star network corresponds closely to the best prior architectures, and its correctable region is strictly contained within the 6-ring's.
- The model is modular with reduced classical processing compared to earlier photonic architectures; decoding uses standard surface-code-style syndrome graphs with local structure (12-valent vertices).
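The boosting trade-off above can be reproduced directly from the per-measurement erasure formula P0 = 1 - (1 - P_fail/2) η^{1/P_fail} quoted in the error-model section. The sketch below (our illustration; the paper's thresholds come from full decoder simulations, and the loss rate here is an arbitrary example) scans boosting levels n with P_fail = 1/2^n:

```python
def erasure_prob(p_fail: float, p_loss: float) -> float:
    """Per-measurement erasure probability for a boosted linear-optical fusion.

    Model quoted above: on failure (probability p_fail, randomized basis) one
    of the two measurements is erased; losing any of the 1/p_fail
    participating photons erases both, so the no-loss probability is
    (1 - p_loss) ** (1 / p_fail).
    """
    eta = 1.0 - p_loss
    return 1.0 - (1.0 - p_fail / 2.0) * eta ** (1.0 / p_fail)

P_LOSS = 0.005  # illustrative per-photon loss rate (not a value from the paper)
for n in range(1, 6):  # boosting level: P_fail = 1/2**n, 2**n - 2 ancilla photons
    p_fail = 0.5 ** n
    print(f"n={n}  P_fail={p_fail:.4f}  photons={2 ** n}  "
          f"P0={erasure_prob(p_fail, P_LOSS):.4f}")
```

At this illustrative loss rate the minimum erasure probability occurs at an intermediate boosting level: boosting beyond it adds more loss exposure than it removes in failure erasures. Comparing the resulting P0 against a network's correctable region yields loss thresholds like those reported above.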
The results demonstrate that using small resource states with entangling fusion measurements to both create long-range entanglement and extract error information yields improved fault-tolerance thresholds relevant to photonic platforms. The 6-ring network's larger correctable region compared to the 4-star (GHZ-based) network indicates that the choice of resource state and fusion connectivity significantly impacts error tolerance. Mapping linear-optical imperfections (probabilistic fusions and photon loss) to erasures within FBQC directly links device-level error processes to topological decoding. Boosting strategies and encoded fusions effectively trade more photons for reduced fusion failure, and the analysis identifies optimal operating points under loss. The ability to achieve 2.7% per-photon loss tolerance (10.4% per fusion) with Bell-boosted fusions, and to push failure thresholds to 43.2% with Shor-encoded fusions, shows that FBQC can exceed previously reported photonic thresholds. Moreover, the framework supports standard topological logic via local measurement-basis modifications (boundaries, twists), enabling lattice surgery and magic state injection, and offers architectural advantages: qubits need only short lifetimes, components are reusable, and classical control complexity is reduced.
This paper introduces fusion-based quantum computation as a universal, topologically fault-tolerant framework tailored to photonic primitives: small stabilizer resource states and projective entangling fusions. It formalizes fusion networks via stabilizer groups (R, F, and checks C), provides a syndrome-graph description enabling standard decoding, and designs a high-performing 6-ring network. Simulations under a hardware-agnostic error model show superior thresholds for the 6-ring over the 4-star, with marginal thresholds of ~11.98% erasure and ~1.07% measurement error. In a linear-optical model, an analysis combining boosting and encoded fusions identifies optimal operating regimes; with Bell boosting, the scheme tolerates 2.7% loss per photon (10.4% per fusion), and with (2,2)-Shor encoding, the failure threshold reaches 43.2% (no loss). Physically, the architecture supports low-depth, modular implementations with resource state generators and configurable fusion devices; logic follows from local basis changes enabling boundaries, twists, and state injection. Future work includes detailed, architecture-specific circuit-level error modeling, decoding that exploits correlated fusion errors, optimization of resource-state generation and routing, exploration of alternative resource states and fusion layouts, and experimental demonstrations of FBQC components and end-to-end prototypes.
- Thresholds are derived under simplified error models: independent erasures and flips per fusion outcome (hardware-agnostic) and an analytical mapping of linear-optical loss and boosted fusion failure to erasures. Real devices may exhibit error biases, temporal/spatial correlations, and component-specific behaviors.
- Exact performance depends on detailed architectural choices (resource state generation fidelity, routing losses, detector characteristics, timing), which were not exhaustively modeled.
- Decoding treated primal and dual outcomes independently and did not exploit potential correlations between the two outcomes of a fusion; incorporating such correlations could improve thresholds.
- Logic operations introduce localized regions (boundaries, twists) with potentially different error models and finite-size effects that may affect below-threshold scaling, even if the bulk threshold remains indicative of overall fault tolerance.