Overcoming the coherence time barrier in quantum machine learning on temporal data

Engineering and Technology

F. Hu, S. A. Khan, et al.

Discover how NISQRC, a groundbreaking machine learning algorithm developed by Fangjun Hu, Saeed A. Khan, Nicholas T. Bronn, Gerasimos Angelatos, Graham E. Rowlands, Guilhem J. Ribeill, and Hakan E. Türeci, allows qubit-based quantum systems to perform inference on temporal data without being hindered by coherence time. This innovative approach leverages mid-circuit measurements to maintain persistent temporal memory, showcasing its prowess through successful experiments on a 7-qubit quantum processor.

Introduction
The study addresses the challenge of performing online machine learning on temporal data using current noisy intermediate-scale quantum (NISQ) hardware. Traditional quantum machine learning methods are hampered by quantum sampling noise (QSN), finite coherence times, and training difficulties such as barren plateaus. Moreover, repeated measurements required for processing streaming data induce backaction and information scrambling, limiting persistent memory even on fault-tolerant devices. The authors aim to establish conditions under which a quantum system can maintain a persistent, fading memory suitable for online inference over arbitrarily long input sequences. They propose a framework grounded in a quantum generalization of Volterra Series theory to analyze memory properties in monitored quantum systems and introduce an algorithm (NISQRC) that uses mid-circuit measurement and deterministic reset to overcome decoherence and scrambling constraints.
Literature Review
Prior work in temporal and sequential learning has leveraged classical architectures such as recurrent neural networks and transformers, and physical neural networks including reservoir computing (RC), which performs learning with a fixed nonlinear dynamical system and a trainable linear readout. Physical reservoir computing has been demonstrated across diverse platforms with efficient convex optimization. Quantum systems offer potentially exponential state spaces, suggesting advantages for scalable machine learning; however, practical QML has largely focused on static data due to QSN, barren plateaus, and limitations from finite coherence. Repeated measurements in quantum systems can cause information scrambling and thermalization, challenging long-term memory retention even in idealized coherent systems. Classical RC relies on fading memory and Volterra Series to characterize input-output maps, but a comparable comprehensive theory for temporally monitored quantum systems has been lacking. Recent monitored-circuit studies highlight measurement-induced transitions and thermalization effects, and some quantum RC approaches have employed repeated measurements or ancilla-mediated readout, but without guaranteeing persistent memory.
Methodology
The authors develop Quantum Volterra Theory (QVT) to generalize the classical Volterra Series to monitored quantum systems, explicitly incorporating measurement backaction. They propose the NISQ Reservoir Computing (NISQRC) algorithm: a qubit register is partitioned into M memory qubits and R readout qubits (L = M + R). At each time step n, the input u_n is encoded via a parameterized quantum channel U(u_n) acting on all qubits, with no monitoring during encoding. Then only the R readout qubits are measured and deterministically reset to |0⟩ regardless of measurement outcome, while the memory qubits retain conditional state information. This iterated encode–measure–reset scheme generates a time series of measurement outcomes b(n). Features x(n) are the probabilities of readout outcomes at each step, estimated by empirical frequencies X(n) from S shots; a linear projector with time-independent weights w is trained to map features to targets y(n). The approach reduces runtime from O(N^2 S) in prior RC schemes (which re-encode the full input history) to O(NS) by avoiding re-encoding. QVT formalizes the existence of a time-invariant input-output map with fading memory by analyzing the null-input single-step map P_0 acting on the memory subsystem. The spectrum of P_0 determines the dimensionless memory time η_m = −1/ln|λ_2|, where λ_2 is the second-largest eigenvalue magnitude. QVT shows that without reset the map is unital, driving the memory to the fully mixed state and yielding trivial Volterra kernels (no persistent memory); deterministic reset breaks unitality and enables nontrivial kernels and persistent memory. The theory guides design choices for the encoding duration τ, the qubit partitioning M/R, and the connectivity, to match task-specific correlation times and nonlinearity order. The authors also present an efficient method to simulate deep circuits with partial measurements: they compute infinite-shot probabilities x_j(n) and then draw i.i.d. samples to emulate finite-shot behavior, avoiding explicit quantum-trajectory branching. Two encoding ansätze are used: (1) a Hamiltonian encoding H(u) = H_0 + u H_1 evolved with dissipative dynamics for duration τ (with qubit lifetime T1), and (2) a gate-based circuit ansatz suited to superconducting devices, U(u_n) = [W(J) R_x(θ + δ u_n) R_z(θ)] repeated n_1 times on a 7-qubit linear chain, interleaved with mid-circuit measurement and reset of the readout qubits. Training minimizes the cross-entropy of a linear readout over features from 100 messages (N = 100) in simulation, or from shorter sequences on hardware due to buffer limits. Volterra kernel computation and spectral analysis inform hyperparameter choices to achieve a memory time commensurate with the channel equalization task.
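The two computational ingredients above — the memory time set by the spectrum of the null-input map P_0, and the i.i.d. finite-shot emulation — can be illustrated numerically. The sketch below uses a toy amplitude-damping-style channel as an illustrative non-unital P_0 (not the paper's actual encoding), extracts |λ_2|, evaluates η_m = −1/ln|λ_2|, and then draws finite-shot features from given outcome probabilities; all parameter values are assumptions for the demo.

```python
import numpy as np

# Toy null-input single-step map P_0 on one memory qubit, written as a 4x4
# superoperator acting on the vectorized (column-stacked) density matrix.
# The channel is an illustrative amplitude-damping example, not the paper's
# actual encoding ansatz.

def super_op(A, B):
    # Superoperator for rho -> A @ rho @ B^dagger:
    # vec(A rho B^dag) = (conj(B) kron A) vec(rho)  (column-stacking convention)
    return np.kron(B.conj(), A)

p = 0.3  # illustrative per-step damping strength
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - p)]])
K1 = np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])
P0 = super_op(K0, K0) + super_op(K1, K1)

# Spectrum: lambda_1 = 1 for a trace-preserving map; |lambda_2| < 1 sets the
# dimensionless fading-memory time eta_m = -1 / ln|lambda_2|.
eigs = np.sort(np.abs(np.linalg.eigvals(P0)))[::-1]
lam2 = eigs[1]
eta_m = -1.0 / np.log(lam2)
print(f"|lambda_2| = {lam2:.4f}, eta_m = {eta_m:.2f} steps")

# Finite-shot emulation: given infinite-shot outcome probabilities x(n),
# draw S i.i.d. samples and form empirical frequencies X(n), instead of
# branching over quantum trajectories.
rng = np.random.default_rng(0)
S = 1000                       # illustrative shot budget
x = np.array([0.7, 0.3])       # example single-readout-qubit probabilities
X = rng.multinomial(S, x) / S  # empirical feature estimate
print("empirical features:", X)
```

Increasing p shortens η_m, mirroring how the encoding duration τ tunes the memory time in the paper's analysis.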
Key Findings
• QVT proves that deterministic post-measurement reset is necessary to avoid unital dynamics and information thermalization; without reset, all Volterra kernels vanish and the reservoir is amnesiac.
• Memory time is governed by the spectrum of the null-input map P_0; the second-largest eigenvalue sets η_m = −1/ln|λ_2|, defining the fading-memory envelope of the Volterra kernels. Increasing the encoding duration τ tunes η_m systematically across orders of magnitude.
• Runtime improvement: partial measurement with reset enables online processing with O(NS) runtime, versus O(N^2 S) when re-encoding full histories.
• Robustness to dissipation: if T1/τ exceeds the lossless memory time η_0, then η_m ≈ η_0 and is essentially independent of T1, so the total run time T_run can far exceed qubit lifetimes.
• Simulations on channel equalization (CE) with a (2+4)-qubit reservoir show NISQRC significantly outperforming logistic regression and approaching a theoretical inverse-filter bound as S → ∞. With finite sampling S up to 10^5, performance degrades predictably due to QSN but remains superior to linear baselines. Connected reservoirs (effective Hilbert space dimension 2^L) outperform split reservoirs with identical measured feature counts, reflecting higher Jacobian rank and expressivity.
• Persistence over long sequences: accurate inference is maintained for test lengths N_s up to 5000 symbols at SNR = 20 dB in simulation, with circuit duration roughly 500× individual qubit lifetimes, confirming that performance is not constrained by coherence time when memory is matched to task correlations.
• Experimental validation on a 7-qubit IBM Quantum device (ibm_algiers) with a (3+4)-qubit linear chain demonstrates CE at SNR = 20 dB and message length N = 20. Average coherence times were T1 ≈ 124 μs and T2 ≈ 91 μs, while T_run = 117 μs. Performance closely matches simulations assuming infinite coherence, and introducing realistic decay in simulation leaves performance nearly unchanged.
• Necessity of reset: removing resets causes performance to collapse to random guessing, empirically confirming the theoretical requirement for non-unital maps. A QND ancilla-readout scheme with ancilla resets but no system resets also fails, mirroring the no-reset behavior.
• Connectivity matters: severing a coupling to form two smaller chains degrades performance in both simulation and experiment, consistent with lower functional independence (lower Jacobian rank).
• Practical constraints: increasing T_run beyond T1 with inserted delays did not degrade performance; experimental discrepancies from simulation are attributed to the maturity of mid-circuit measurement, buffer limits, shot batching over hours, and parameter drifts.
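The necessity-of-reset finding can be reproduced in miniature: iterate a unital single-qubit channel (no reset) and a non-unital measure-and-reset channel, and compare the states they relax to. Both channels below are toy stand-ins chosen for illustration, not the experimental circuit.

```python
import numpy as np

# Toy illustration of the reset requirement: a unital channel relaxes to the
# fully mixed state (all input memory erased), while a non-unital
# measure-and-reset channel has a pure fixed point.

def apply_kraus(rho, kraus):
    # One application of a channel given by its Kraus operators.
    return sum(K @ rho @ K.conj().T for K in kraus)

X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

channels = {
    # No reset: apply a unitary with probability 1/2 (a unital channel);
    # its fixed point reachable from |0> is the fully mixed state I/2.
    "no reset (unital)": [np.sqrt(0.5) * np.eye(2), np.sqrt(0.5) * (H @ X)],
    # With reset: measure in Z, then re-prepare |0> regardless of outcome
    # (non-unital); its fixed point is |0><0|.
    "with reset (non-unital)": [
        np.diag([1, 0]).astype(complex),            # outcome 0: already |0>
        np.array([[0, 1], [0, 0]], dtype=complex),  # outcome 1: |0><1|, reset
    ],
}

pops = {}
for name, kraus in channels.items():
    rho = np.diag([1.0, 0.0]).astype(complex)  # start in |0>
    for _ in range(50):
        rho = apply_kraus(rho, kraus)
    pops[name] = np.real(np.diag(rho))
    print(name, "-> steady-state populations", pops[name].round(3))
```

The unital channel forgets its initial state entirely, while the reset channel pins the register to a definite reference state — the single-qubit analogue of why reset is required for persistent memory.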
Discussion
The findings directly address whether quantum systems can process streaming temporal data beyond coherence limits. By formulating QVT, the authors identify deterministic post-measurement reset as the key mechanism to prevent measurement-induced thermalization and to ensure a time-invariant input-output map with fading memory. This enables online inference on arbitrarily long sequences using a fixed-depth iterative circuit, with performance governed by designable memory times rather than hardware coherence. The results demonstrate both in simulation and experiment that, once the reservoir memory η_m matches the task’s correlation time, inference accuracy remains stable for very long signals and is largely insensitive to T1/T2 within broad ranges. The framework generalizes to arbitrary POVMs and finite-dimensional systems and can extend to continuous variables. Design principles emerge: choose M/R, τ, drive strengths, and connectivity to match memory capacity and nonlinearity order to task demands; avoid measuring all qubits; and ensure encodings are non-unital with reset to preserve nontrivial kernels. These insights can guide future quantum temporal learning architectures and motivate tighter integration of mid-circuit measurements, dynamic circuits, and classical post-processing.
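The classical post-processing referenced here is deliberately lightweight: only the linear readout weights w are trained, a convex problem. A minimal sketch of that stage (the paper minimizes cross-entropy; ordinary ridge regression on synthetic features and targets is used below as a simpler illustrative stand-in):

```python
import numpy as np

# Minimal sketch of the trainable stage: fit time-independent linear weights w
# mapping per-step features X(n) to targets y(n). Ridge regression is an
# illustrative stand-in for the paper's cross-entropy training; all data here
# is synthetic.
rng = np.random.default_rng(1)
N, F = 200, 8                      # time steps and feature count (illustrative)
X = rng.random((N, F))             # stand-in for empirical feature estimates X(n)
w_true = rng.standard_normal(F)
y = X @ w_true                     # synthetic targets for the demo

lam = 1e-6                         # small ridge regularizer for stability
A = X.T @ X + lam * np.eye(F)
w = np.linalg.solve(A, X.T @ y)    # convex problem: unique closed-form solution

print("max |w - w_true| =", np.abs(w - w_true).max())
```

Because the reservoir itself is fixed, this convex fit is the only optimization performed — avoiding the barren-plateau issues of training the quantum circuit itself.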
Conclusion
This work introduces Quantum Volterra Theory to analyze monitored quantum systems under temporal inputs and presents the NISQRC algorithm, which leverages mid-circuit measurements and deterministic resets to achieve persistent fading memory and online processing with O(NS) runtime. Simulations and experiments on a 7-qubit superconducting processor validate that inference on arbitrarily long sequences is achievable without being constrained by qubit coherence times, provided the reservoir memory is matched to task correlations. The approach outperforms linear baselines in a channel equalization task and is robust to realistic sampling noise and dissipation. Future directions include optimizing circuit ansätze using QVT-derived kernels for specific tasks, exploring qudits and continuous-variable systems, employing continuous weak measurements within the same framework, enhancing connectivity while maintaining fidelity, and developing dynamic control to adapt memory and nonlinearity to nonstationary data.
Limitations
Experimental runs were constrained by early-stage mid-circuit measurement and reset implementations, including classical memory buffer limits, batched shot acquisition over long intervals, and hardware parameter drifts, leading to quantitative deviations from simulations. Connectivity was limited to a nearest-neighbor linear chain, with deeper circuits needed for nonlocal couplings, which can impact fidelity. Training and testing datasets were smaller than ideal due to resource constraints. Certain encoding choices, even with reset, can yield unital maps and zero persistent memory if not carefully designed. While robust to typical T1/T2 values observed here, substantially shorter lifetimes could degrade performance. The work focuses on a single benchmark task (channel equalization) and modest system sizes; broader benchmarks and scaling studies remain to be conducted.