Experimental demonstration of continuous quantum error correction

W. P. Livingston, M. S. Blok, et al.

Discover how William P. Livingston and colleagues advance quantum error correction (QEC) with a resource-efficient protocol that detects and corrects errors in real time. Their approach uses direct, continuous parity measurements to stabilize a multi-qubit code and extend the logical qubit's relaxation time.

Introduction
Quantum systems are inherently susceptible to continuous noise processes that cause errors during computation. Quantum error correction (QEC) mitigates such errors by redundantly encoding information and detecting errors via stabilizer measurements, but standard discrete QEC requires rounds of entangling gates and projective measurements on ancilla qubits that can introduce additional errors. In recent years, discrete QEC has been realized in ion traps, diamond defects, and superconducting circuits. However, stabilizer measurements implemented via ancillas and entangling gates often dominate the error budget. Continuous measurement provides an alternative viewpoint in which measurement occurs over a finite duration and has been used to stabilize single-qubit trajectories with feedback, as well as to generate entanglement via direct parity measurements in multi-qubit systems. Continuous QEC leverages continuous stabilizer monitoring to eliminate discrete error-correction cycles and the need for ancillas and entangling gates. Here, the authors implement a continuous error-correction protocol using direct continuous ZZ parity measurements in a three-qubit repetition code to correct bit-flip errors while maintaining logical coherence, and they characterize logical errors and dephasing in this setting.
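To make the encoding concrete, the two stabilizers of the three-qubit bit-flip code assign a unique signature to each single-qubit flip. The short Python sketch below shows this textbook bookkeeping; it is not code from the experiment, and for simplicity it uses the even-even representative state (0, 0, 0), whereas the paper selects the odd-odd subspace as its codespace; the syndrome logic is identical for any reference pattern.

```python
# Textbook bookkeeping for the three-qubit bit-flip code (illustration only,
# not code from the experiment): each single-qubit flip changes a unique
# pattern of the two ZZ stabilizer eigenvalues, so the pair of parity
# outcomes identifies which qubit to correct.

def parities(bits):
    """Return the (Z0Z1, Z1Z2) eigenvalues for a computational-basis state."""
    z = [1 - 2 * b for b in bits]        # bit 0 -> +1, bit 1 -> -1
    return (z[0] * z[1], z[1] * z[2])

codeword = (0, 0, 0)                     # representative state (even-even subspace)
reference = parities(codeword)           # stabilizer pattern of the chosen codespace

for flipped in range(3):
    bits = list(codeword)
    bits[flipped] ^= 1                   # apply a single bit flip
    change = tuple(p * r for p, r in zip(parities(bits), reference))
    print(f"flip on qubit {flipped}: stabilizers change by {change}")
# qubit 0 -> (-1, +1), qubit 1 -> (-1, -1), qubit 2 -> (+1, -1)
```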
Literature Review
The paper situates its work within: (1) prior demonstrations of discrete QEC across multiple platforms (ion traps, NV centers, superconducting circuits), where ancilla-based, gate-mediated stabilizer measurements have been a dominant error source; (2) studies of continuous measurement and feedback used to stabilize single-qubit dynamics and to generate entanglement via direct parity measurements in multi-qubit systems; and (3) theoretical proposals for continuous QEC using continuous stabilizer monitoring that remove the need for ancillas and entangling gates. Prior work has also investigated optimal filtering (e.g., Bayesian filtering) for extracting information from continuous measurement records and schemes to measure noncommuting operators continuously for fully protected codes.
Methodology
Hardware and code architecture: The experiment uses a planar superconducting circuit with three transmon qubits arranged linearly and coupled via two joint readout resonators that implement direct ZZ parity measurements on qubit pairs (Z0Z1 and Z1Z2). Each resonator is dispersively coupled to its associated qubits with equal dispersive shifts χi (2.02 MHz and 2.34 MHz for the two resonators) and linewidths κi (636 kHz and 810 kHz) chosen smaller than χi to approximate full parity measurements. Parity probe tones are centered on the odd-parity resonances, yielding nearly indistinguishable responses for the two even-parity states. Reflected signals are amplified by phase-sensitive Josephson parametric amplifiers aligned to the informational quadrature.

Logical code: A three-qubit repetition code is implemented with stabilizers Z0Z1 and Z1Z2. A specific parity subspace (odd-odd, i.e., −1, −1) is selected as the codespace; the other stabilizer outcomes identify single-qubit error subspaces. Continuous homodyne monitoring provides noisy voltage traces correlated with the stabilizer eigenvalues; the logical subspaces act as invariant attractors under the stochastic evolution induced by finite-rate measurement.

Error detection and feedback: Incoming readout voltages are first demodulated and filtered to V_DC, then further filtered on the FPGA with a 1536 ns exponential filter to form Vi(t), normalized so that a mean of +1 corresponds to odd parity and −1 to even parity for each resonator. A thresholding scheme classifies bit-flip events: starting from the monitored codespace (both Vi near +1), an outer-qubit flip is detected when one Vi falls below θ1 = 0.50 while the other remains above θ2 = 0.72, and a central-qubit flip is detected when both Vi fall below θ3 = 0.39 (a schematic software sketch of this logic appears at the end of this section). Thresholds were numerically optimized on experimental trajectories to maximize detection efficiency while minimizing dark counts and misclassifications. Upon detection, the FPGA issues a corrective π pulse to the inferred qubit and resets its internal filtered-voltage memory to reflect the updated state; to avoid interpreting the corrective pulse as a new error, the FPGA inverts the sign of the stored filtered signal around the detection time and accounts for the known correction propagation delay.

Characterization procedures: Single induced flips are applied after 4 μs of steady-state readout within 16 μs sequences to measure detection efficiency, latency distributions, and dark-count rates. Successive induced flips on different qubits with controlled inter-pulse delays probe the controller's dead time (the interval after an error during which a second error cannot be reliably corrected). Logical T1 is measured by preparing logical excited states in chosen codespaces, enabling continuous parity monitoring and feedback, and fitting the decay of codespace populations to steady-state values; steady-state codespace occupancy is also quantified. Dephasing mechanisms are dissected into: (i) measurement-induced dephasing due to imperfect indistinguishability of the even-parity states at finite χ/κ, characterized by the distinguishability D of state pairs and the readout measurement rate Γm with efficiency η; (ii) dephasing from transitions between odd and even parity (resonator ring-down), parameterized by the photon numbers ni in the odd-parity steady state; and (iii) dephasing from static ZZ couplings βij between neighboring qubits, which cause unknown phases to accrue during the random detection latency between an error and its correction.
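As a rough illustration of mechanism (iii), the coherence lost per corrected error can be estimated by averaging the unknown accrued phase over the latency distribution. The Python sketch below is illustrative only: it assumes an exponential latency distribution with a 3 μs mean instead of the measured histograms, and it takes the accrued phase to be φ ≈ 2πβτ (the exact prefactor depends on the ZZ convention).

```python
# Illustrative estimate (not the paper's analysis): dephasing from a static ZZ
# coupling beta during the random latency tau between an error and its correction.
# The unknown phase is taken as phi ~ 2*pi*beta*tau (prefactor depends on the ZZ
# convention), and the coherence surviving one corrected error is |<exp(i*phi)>|
# averaged over the latency distribution.  An exponential latency distribution
# with a 3 us mean is assumed here; the experiment uses measured histograms.

import numpy as np

rng = np.random.default_rng(0)
tau_us = rng.exponential(scale=3.0, size=100_000)     # assumed latencies (us)

for beta_mhz in (0.49, 1.05):                          # measured ZZ couplings (MHz)
    phi = 2 * np.pi * beta_mhz * tau_us                # MHz * us -> radians
    coherence = abs(np.mean(np.exp(1j * phi)))
    print(f"beta = {beta_mhz} MHz: coherence surviving one corrected error ~ {coherence:.2f}")
```

In this toy model the phase wraps many times during a few-microsecond latency, which is why a static ZZ coupling of order 1 MHz can contribute a significant dephasing channel even when the flip itself is corrected efficiently.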
Photon numbers (n0 ≈ 7, n1 ≈ 6) are estimated from the measurement rate and quantum efficiency; distinguishability ratios are compared to theory; and ZZ couplings are measured via conditional Ramsey experiments (β01 = 0.49 MHz, β12 = 1.05 MHz, β02 < 2 kHz) to determine the energy splittings of the parity subspaces and predict phase-accrual distributions using the measured latency histograms.

Fabrication and setup: Devices are niobium-on-silicon with Al/AlOx/Al junctions (bridge-free Manhattan style) and bandaid galvanic connections; the outer qubits are flux-tunable (tuning ranges 260 MHz and 220 MHz) and the middle qubit is fixed-frequency. Readout chains use JPAs operated phase-sensitively; signals are further amplified by cryogenic HEMTs, and a TWPA is inserted in one output chain. Infrared filters are used on the input lines, and the outer qubits are flux-tuned by off-chip coils. An external AWG generates cavity tones and JPA pumps; an FPGA (Innovative Integration X6-1000M) controls the qubits and demodulates the readout with on-board filters (a 32 ns low-pass and a 1536 ns accumulator), manages instruction queues for pulses, applies virtual Z rotations via phase increments, and performs continuous thresholding for feedback. Filter thresholds were optimized by constructing confusion matrices from labeled single-flip/no-flip data and minimizing the squared classification error.
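The FPGA filtering-and-thresholding logic described above can be sketched in software. The Python sketch below is a schematic reconstruction, not the firmware: the filter time constant and the three thresholds are the values quoted in this section, while the sample spacing, the starting condition, and all function and variable names are assumptions made for illustration.

```python
# Schematic software model of the FPGA filtering-and-thresholding logic
# described above (illustrative reconstruction, not the actual firmware).
# v0, v1 are demodulated, normalized voltage streams from the two parity
# readouts: mean +1 <-> odd parity, -1 <-> even parity.

DT_NS = 32            # assumed sample spacing after the 32 ns low-pass stage
TAU_NS = 1536         # exponential filter time constant from the experiment
ALPHA = DT_NS / TAU_NS

THETA_1 = 0.50        # one filtered voltage drops below this ...
THETA_2 = 0.72        # ... while the other stays above this  -> outer-qubit flip
THETA_3 = 0.39        # both filtered voltages below this     -> central-qubit flip

def classify(v0, v1):
    """Return (sample index, inferred flipped qubit) or None.

    Assumes the system starts in the monitored codespace, so both filtered
    voltages begin near +1 (both stabilizers satisfied).
    """
    f0 = f1 = 1.0                       # filtered-voltage memory
    for k, (x0, x1) in enumerate(zip(v0, v1)):
        f0 += ALPHA * (x0 - f0)         # single-pole exponential filters
        f1 += ALPHA * (x1 - f1)
        if f0 < THETA_3 and f1 < THETA_3:
            return k, 1                 # central qubit flips both parities
        if f0 < THETA_1 and f1 > THETA_2:
            return k, 0                 # outer qubit on resonator 0
        if f1 < THETA_1 and f0 > THETA_2:
            return k, 2                 # outer qubit on resonator 1
    return None                         # no flip detected
```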
Key Findings
- Implemented continuous quantum bit-flip correction using two direct, continuous ZZ parity measurements in a three-qubit repetition code, without ancillas or entangling gates.
- Bit-flip detection efficiencies: Q0 = 90%, Q1 = 86%, Q2 = 91%; average detection/correction latency of 3.1–3.4 μs after an error.
- Dark-count rates for spurious flip detections (ms⁻¹): (Q0, Q1, Q2) = (3.4, 1.0, 4.0); estimated thermal excitation rates of (1.8, 1.0, 2.0) ms⁻¹.
- The dominant logical-error mechanism is closely spaced two-qubit flips within the measurement time: measured dead times (the delay at which the probability of a logical error equals that of a correct correction sequence) range from 1.6 to 2.6 μs depending on the flip pair and order.
- Logical T1 enhancement: in the odd-odd codespace, the excited-state population decays with T1 ≈ 66 μs versus bare-qubit T1 ≈ 20–24 μs, a 2.7× improvement; the shortest logical codespace T1 is ≈ 32 μs (even-even).
- Under continuous feedback, the steady-state population within the selected codespace reaches 87–99.6%.
- Measurement parameters: measurement rate Γ ≈ 0.40 MHz; resonator linewidths κ ≈ (636, 810) kHz; dispersive shifts χ ≈ (2.02, 2.34) MHz. The odd-versus-even distinguishability is predicted to exceed the distinguishability between the two even states by a factor of ≈ 40 (resonator 0) and ≈ 33 (resonator 1), consistent with measurements (a simple model reproducing these ratios is sketched after this list).
- Estimated measurement-induced dephasing rates while in even-parity states: ≈ 0.05 μs⁻¹ (R0 pair) and ≈ 0.07 μs⁻¹ (R1 pair); excess dephasing from odd→even transitions due to resonator ring-down ≈ 0.06 μs⁻¹ given n ≈ 7 and 6 photons; ZZ-induced dephasing for the odd-odd codespace estimated at ≈ 0.3 μs⁻¹, with observed coherence oscillations during correction consistent with β01 = 0.49 MHz and β12 = 1.05 MHz.
- The FPGA thresholding approach achieved performance comparable in practice to Bayesian filtering while remaining resource-efficient for real-time implementation.
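The ≈ 40 and ≈ 33 distinguishability ratios quoted above can be reproduced from the listed χ and κ with a simple steady-state pointer-state model of a parity-probed dispersive resonator. The Python sketch below is illustrative rather than the paper's derivation: it assumes the cavity settles to a coherent state α = −iε/(iΔ + κ/2) for a basis state detuned by Δ from the drive, and it defines distinguishability as the squared separation of pointer states.

```python
# Illustrative pointer-state model (not the paper's derivation): a readout
# resonator driven at the odd-parity resonance.  Odd states (01, 10) are
# undetuned; even states (00, 11) are dispersively detuned by +/- 2*chi.
# Distinguishability of two basis states is taken as the squared separation
# of their steady-state coherent pointer states.

def pointer_state(detuning_mhz, kappa_mhz, drive=1.0):
    """Steady-state cavity amplitude for a drive at fixed detuning (assumed model)."""
    return -1j * drive / (1j * detuning_mhz + kappa_mhz / 2)

for chi, kappa in ((2.02, 0.636), (2.34, 0.810)):      # measured values (MHz)
    alpha_odd = pointer_state(0.0, kappa)               # |01>, |10>
    alpha_00 = pointer_state(+2 * chi, kappa)           # even states detuned by +/- 2*chi
    alpha_11 = pointer_state(-2 * chi, kappa)
    d_odd_even = abs(alpha_odd - alpha_11) ** 2          # odd vs even: large
    d_even_even = abs(alpha_00 - alpha_11) ** 2          # even vs even: small
    print(f"chi = {chi} MHz, kappa = {kappa} MHz: ratio ~ {d_odd_even / d_even_even:.1f}")
# prints ~40.6 and ~33.6, close to the quoted ~40 and ~33; in this model the
# ratio is approximately (2*chi/kappa)**2, which is why larger chi/kappa makes
# the two even states harder to tell apart.
```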
Discussion
Continuous monitoring of two multipartite stabilizers with active FPGA feedback enables real-time identification and correction of bit-flip errors while maintaining logical coherence and extending logical-state lifetime beyond bare T1 limits. By avoiding ancillas and entangling gates, the approach reduces overhead and associated error channels, addressing a common bottleneck in discrete QEC implementations. The observed logical-error dead time highlights a fundamental trade-off in continuous schemes between measurement rate, filtering latency, and misclassification during closely spaced events; increasing measurement rate and optimizing filters can mitigate this. The dephasing analysis isolates three implementation-induced channels—measurement-induced dephasing from imperfect even-state indistinguishability (finite χ/κ), transient dephasing during odd→even transitions due to resonator ring-down, and static ZZ coupling leading to random phase accrual during detection latency. These are not intrinsic to continuous QEC and can be reduced by increasing χ/κ, optimizing κ for fixed measurement rate, engineering couplings to suppress ZZ, and applying additional feedback to cancel measurement-induced dephasing. The planar superconducting implementation is compatible with existing architectures and can be combined with gate-based phase-flip protection or extended to continuous XX measurements to realize fully protected logical encodings.
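To put rough numbers on that trade-off (illustrative only, using representative rates from this summary rather than the paper's error budget): if, after a first flip, each of the other two qubits can flip at a rate comparable to 1/T1, the chance that a second flip lands inside the dead time sets the scale of the logical error probability per corrected event, as in the sketch below.

```python
# Back-of-the-envelope illustration (not the paper's analysis) of why closely
# spaced flips dominate the logical error rate: probability that a second,
# uncorrectable flip occurs within the controller dead time after a first flip.
# The per-qubit flip rate below is a representative 1/T1 value from this
# summary; the true rates depend on which qubits are excited and on thermal
# excitation, so this gives only an order of magnitude.

import math

t1_us = 22.0                      # representative bare-qubit T1 (us)
gamma_per_us = 1.0 / t1_us        # assumed flip rate per remaining qubit

for dead_time_us in (1.6, 2.6):   # measured dead-time range
    p_second = 1.0 - math.exp(-2 * gamma_per_us * dead_time_us)   # two other qubits
    print(f"dead time {dead_time_us} us: ~{100 * p_second:.0f}% chance of a second flip")
```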
Conclusion
The work experimentally demonstrates resource-efficient continuous quantum error correction using direct parity measurements to correct bit-flip errors in a three-qubit code, achieving up to 91% detection efficiency, reducing logical errors outside dead times, and extending logical T1 by 2.7× over bare qubits. It validates continuous stabilizer monitoring with real-time feedback as a viable path to fault tolerance without ancillas or entangling gates. Future directions include: increasing χ/κ and tailoring κ to reduce measurement-induced and ring-down dephasing; engineering reduced static ZZ coupling (e.g., multi-path coupling) to suppress latency-induced dephasing; implementing additional feedback to undo measurement-induced dephasing; integrating continuous XX measurements to protect against phase errors and stabilize fully protected logical states; and scaling to larger codes within planar superconducting architectures.
Limitations
- The demonstrated code protects only against bit flips; phase-flip errors are not corrected, and coherence can be degraded by induced dephasing.
- Detection efficiency is limited by the finite measurement rate and by T1 decay that can return excitations to the ground state before detection.
- A dead time of ~1.6–2.6 μs after an error leaves the code vulnerable to closely spaced multi-qubit errors, leading to logical errors.
- Implementation-induced dephasing arises from: (i) imperfect indistinguishability of the even-parity states at finite χ/κ (measurement-induced dephasing); (ii) resonator ring-down during odd→even parity transitions; and (iii) static ZZ couplings causing random phase accrual due to uncertain detection latencies.
- Thresholding, while hardware-efficient, can misclassify events compared to optimal Bayesian filters; dark counts are present at rates comparable to thermal excitation for some qubits.
- The approach relies on specific hardware parameters (JPA operation, resonator couplings, photon numbers) that may limit generalizability without careful optimization.