Long-time simulations for fixed input states on quantum hardware



J. Gibbs, K. Gili, et al.

Joe Gibbs, Kaitlin Gili, Zoë Holmes, and coauthors demonstrate high-fidelity, long-time quantum simulations on Rigetti and IBM quantum computers using the fixed state Variational Fast Forwarding (fsVFF) algorithm, which learns the dynamics only on the subspace relevant to a fixed initial state.

Introduction
The study of quantum dynamical simulation is valuable in science and technology, but current noisy intermediate-scale quantum (NISQ) devices have short coherence times and limited qubit counts, making deep, fault-tolerant-oriented algorithms impractical. Traditional Hamiltonian simulation techniques such as Trotterization, qubitization, and truncated Taylor series require circuit depths beyond current capabilities. Variational quantum algorithms offer NISQ-suitable approaches by optimizing parameterized circuits with classical feedback. Variational Fast Forwarding (VFF) previously enabled long-time simulation via fixed-depth circuits by approximately diagonalizing a short-time unitary U, allowing evolution to be “fast-forwarded.” However, VFF requires learning a full diagonalization over the entire Hilbert space and uses 2n qubits for the cost evaluation, which can be resource intensive and challenging to train. The present work introduces fixed state Variational Fast Forwarding (fsVFF), which focuses on the evolution of a specific initial state of interest. By only learning the action of U on the subspace spanned by the initial state and its future evolution, fsVFF reduces qubit requirements to n, allows simpler ansätze with fewer parameters, and improves trainability and practicality on current hardware. The paper demonstrates fsVFF experimentally on IBM and Rigetti devices for a 2-qubit XY model and supports scalability via 4-qubit noisy XY and 8-qubit noiseless Fermi-Hubbard simulations.
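To see concretely why fixed-depth fast-forwarding matters, the following numpy sketch iterates a first-order Trotter step for a toy 2-qubit XY chain in a non-uniform transverse field (illustrative couplings only, not the paper's exact model) and tracks how the splitting error accumulates with the number of steps:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U_of(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# Toy 2-qubit XY chain; the field is non-uniform so the two terms do not commute.
H_hop = np.kron(X, X) + np.kron(Y, Y)
H_field = 1.0 * np.kron(Z, I2) + 0.3 * np.kron(I2, Z)
H = H_hop + H_field

dt = 0.05
U_exact = U_of(H, dt)
U_trot = U_of(H_hop, dt) @ U_of(H_field, dt)  # one first-order Trotter step

psi0 = np.zeros(4, dtype=complex)
psi0[1] = 1.0  # |01>

def fidelity(N):
    """Overlap fidelity of N iterated Trotter steps against exact evolution."""
    ref = np.linalg.matrix_power(U_exact, N) @ psi0
    sim = np.linalg.matrix_power(U_trot, N) @ psi0
    return abs(ref.conj() @ sim) ** 2
```

Each iterated step compounds the O(Δt²) splitting error, which is precisely the growth fsVFF sidesteps by keeping the circuit depth fixed for all simulation times.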
Literature Review
Classical methods for quantum simulation often scale poorly, while quantum algorithms tailored for fault-tolerant regimes (e.g., Trotterization, qubitization, Taylor-series methods) are too deep for NISQ devices. Variational approaches have been proposed to simulate dynamics using shallow, trainable circuits, including time-dependent variational principles and subspace methods such as the Subspace Variational Quantum Simulator for low-energy sectors. VFF demonstrated fixed-depth long-time simulation by variationally diagonalizing the short-time unitary, but at the expense of full-Hilbert-space expressivity and doubled qubit requirements for its cost function. The fsVFF approach draws from quantum machine learning, using short-time evolution as training data to predict long-time dynamics. It leverages No Free Lunch theorems for quantum learning to determine required training data within the relevant subspace and incorporates results on noise-resilient cost functions (optimal parameter resilience). Compared to related methods, fsVFF targets a fixed initial-state subspace, enabling shallower ansätze, reduced measurement overhead, and improved practical performance on NISQ hardware.
Methodology
Overview: fsVFF seeks a diagonal compilation of a short-time Trotterized unitary U(Δt) that reproduces its action on the subspace spanned by a fixed initial state |ψ0⟩ and its time-evolved states. The ansatz has the structure V(α,Δt) = W(θ) D(γ,Δt) W(θ)†, where W approximately maps the energy eigenvectors overlapped by |ψ0⟩ to computational basis states, and D is a diagonal circuit encoding the phase evolution in that subspace. After training, long-time evolution for T = NΔt is implemented as W(θopt) D(γopt, NΔt) W(θopt)†, reusing the same circuit depth while scaling the phases by N.

Cost function: To learn U on the subspace Sψ spanned by the energy eigenstates with non-zero overlap with |ψ0⟩ (dimension n_eig), fsVFF uses the training states |ψ_k⟩ = U^k |ψ0⟩ for k = 0,…,n_eig−1. The cost C_fsVFF = 1 − (1/n_eig) ∑_{k=0}^{n_eig−1} |⟨ψ_k| V† U |ψ_k⟩|² is evaluated via Loschmidt-echo-type overlap circuits on n qubits. In the limit of negligible leakage due to Trotterization, C_fsVFF is faithful: it vanishes if and only if the fast-forwarded simulation achieves unit fidelity for all times. The cost also inherits noise resilience (optimal parameter resilience) for broad classes of incoherent noise.

Determining n_eig via a Krylov/Gramian method: n_eig equals the number of distinct energy eigenvalues overlapped by |ψ0⟩, which in turn equals the dimension of the Krylov subspace spanned by the states |ψ_j⟩ = U^j |ψ0⟩ before they first become linearly dependent. One builds the Gramian G(k) with entries ⟨ψ_i|ψ_j⟩, measured by Hadamard-test circuits; the smallest k such that det G(k) = 0 gives n_eig = k. Because ⟨ψ_i|ψ_j⟩ = ⟨ψ0|U^{j−i}|ψ0⟩ depends only on j − i, only the first row of the Gramian needs to be measured, reducing the number of circuit evaluations. Alternatively, one can increment the number of training states and monitor observable convergence without explicitly computing n_eig.
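The Gramian procedure can be prototyped classically. In the sketch below the overlaps ⟨ψ0|U^m|ψ0⟩ are computed exactly rather than estimated with Hadamard tests, and G(k) is taken as the Gram matrix of the first k Krylov vectors, so the first singular G(k) appears at k = n_eig + 1 (a small diagonal example where |ψ0⟩ overlaps exactly two eigenstates):

```python
import numpy as np

def n_eig_from_overlaps(U, psi0, k_max=8, tol=1e-6):
    # The first row suffices: <psi_i|psi_j> = <psi0|U^(j-i)|psi0> depends only on j - i.
    row = [psi0.conj() @ np.linalg.matrix_power(U, m) @ psi0 for m in range(k_max)]
    for k in range(1, k_max + 1):
        # Gram matrix of the first k Krylov vectors {|psi0>, U|psi0>, ..., U^(k-1)|psi0>}
        G = np.array([[row[j - i] if j >= i else np.conj(row[i - j])
                       for j in range(k)] for i in range(k)])
        if abs(np.linalg.det(G)) < tol:
            return k - 1  # G(k) is the first singular Gramian, so n_eig = k - 1 here
    return k_max

# |psi0> overlaps exactly two eigenstates of a diagonal H, so n_eig should be 2.
E = np.array([0.0, 1.0, 2.5, 4.0])
U = np.diag(np.exp(-1j * E * 0.3))
psi0 = np.zeros(4, dtype=complex)
psi0[0] = psi0[2] = 1 / np.sqrt(2)
print(n_eig_from_overlaps(U, psi0))  # -> 2
```

On hardware the determinant threshold must account for shot noise, which is why the experiments compare det G(k) against a noise-calibrated cutoff rather than exact zero.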
Randomized (mini-batch) training: To avoid scaling the number of circuits linearly with n_eig, fsVFF can sample random time steps r in [−1,1] up to a maximum time τ_max and minimize a stochastic cost of the form C̃ = 1 − |⟨ψ0| V(−r τ_max) U(r τ_max) |ψ0⟩|, drawing r from a distribution biased toward the interval edges to maintain gradient magnitudes. This approach does not require an a priori n_eig and can reduce circuit depths, since powers of the short-time unitary U(Δt) need not be implemented as repeated Trotter steps.

Training and optimization: Gradients are computed using parameter-shift rules with circuits of the same depth as the cost evaluation. Classical optimizers (e.g., gradient descent with momentum, stochastic optimization) update θ and γ. To improve trainability and minimize depth, fsVFF uses problem-informed, compact, or adaptive ansätze that preserve relevant symmetries (e.g., particle-number conservation); gate pruning can remove redundancies.

Ansatz details: D(γ,Δt) is chosen as a product of diagonal Z-type exponentials, with the number of terms kept polynomial in practice. W(θ) is a hardware-efficient or adaptive circuit; symmetry-preserving gates reduce parameters and noise. For the 2-qubit XY hardware runs, the optimal W used one CNOT and two single-qubit rotations, and D used a single Rz rotation. For larger systems (e.g., the 4-qubit XY model), an adaptive, particle-number-preserving W was learned.

Execution: After training, long-time fast-forwarding is realized by scaling the diagonal phases in D by N while keeping the circuit depth fixed. Fidelity is assessed against exact or high-accuracy reference evolutions via state tomography or overlap metrics.

Energy estimation from fsVFF: The learned diagonalization enables eigenstate identification by measuring W†|ψ0⟩ in the computational basis and mapping the observed outcomes |v_k⟩ back with W to obtain approximate eigenstates |E_k⟩ = W|v_k⟩. Eigenvalue differences can be extracted from D when feasible.
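The execution step can be illustrated with an idealized diagonalization: below, W and the eigenphases are obtained classically by exact diagonalization of a toy Hamiltonian, standing in for the variationally trained W(θopt) and D(γopt). A single fixed-depth circuit with its phases scaled by N then reproduces N repeated short-time steps:

```python
import numpy as np

# Toy Hermitian H; in fsVFF, W and the phases in D are learned on hardware.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

E, W = np.linalg.eigh(H)  # W stands in for the trained diagonalizer W(theta_opt)
dt = 0.2

def D(t):
    """Diagonal phase block D(t) = diag(exp(-i E_k t))."""
    return np.diag(np.exp(-1j * E * t))

U = W @ D(dt) @ W.conj().T  # short-time unitary that fsVFF compiles

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
N = 500
ref = np.linalg.matrix_power(U, N) @ psi0  # N repeated short-time circuits
ff = W @ D(N * dt) @ W.conj().T @ psi0     # one fixed-depth circuit, phases x N
```

The point of the sketch is the depth accounting: `ff` applies W, D, and W† once regardless of N, whereas the reference evolution repeats the short-time circuit N times.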
fsVFF also reduces the depth of QPE/QEE by replacing controlled-U with controlled-D in the learned subspace and obviating eigenstate preparation in favor of W†-mapped basis inputs.
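To see why the controlled-D replacement shortens QPE, note that once the evolution is diagonal, each controlled-U^{2^j} reduces to a controlled phase gate, so the ancilla register's state can be written down directly. The numpy sketch below runs textbook phase estimation for the 3-bit phase 001 (φ = 1/8, matching the demo's target; the setup is purely illustrative):

```python
import numpy as np

t = 3          # number of ancilla (readout) qubits
phi = 1 / 8    # target 3-bit phase 0.001 in binary

# With a diagonal D, the controlled stage leaves the ancilla register in
# (1/sqrt(2^t)) * sum_k exp(2*pi*i*phi*k) |k>, built from cheap phase gates.
amps = np.exp(2j * np.pi * phi * np.arange(2 ** t)) / np.sqrt(2 ** t)

# Inverse quantum Fourier transform on the ancilla register.
m, k = np.meshgrid(np.arange(2 ** t), np.arange(2 ** t), indexing="ij")
iqft = np.exp(-2j * np.pi * m * k / 2 ** t) / np.sqrt(2 ** t)

probs = np.abs(iqft @ amps) ** 2
print(np.argmax(probs))  # -> 1, i.e. the bitstring 001
```

Because φ has an exact 3-bit expansion, the outcome distribution is deterministic here; for generic phases the peak broadens in the usual QPE fashion.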
Key Findings
- Hardware fast-forwarding on the 2-qubit XY model: Using fsVFF trained on IBM (ibmq_toronto) and Rigetti (Aspen-8) devices, the fast-forwarded simulation on ibmq_rome maintained fidelity F > 0.9 up to 625 time steps and F > 0.8 up to 1275 steps. Iterated Trotter fell below F = 0.9 after 4 steps and below F = 0.8 after 8 steps. The fast-forwarding ratios R (simulation time before dropping below threshold, fsVFF vs Trotter) were ~156 (at 0.9) and ~159 (at 0.8).
- Gramian method for n_eig: On a 2-qubit XY chain, measuring det G(k) on Honeywell hardware accurately indicated n_eig for different initial states (n_eig = 1, 2, 3) via the first k for which det G(k) ≈ 0, matching classical simulation.
- Noise resilience: Across experiments, the measured (noisy) cost saturated around 1e−1 while the corresponding noise-free cost reached ~1e−3 at the same parameters, indicating the cost's resilience to incoherent noise and validating trainability on current hardware.
- Noisy 4-qubit XY simulation: With a symmetry-preserving adaptive ansatz (the final W had 50 CNOTs), fsVFF maintained fidelity above 0.8 for ~700 steps under a realistic IBM-like noise model, while iterated Trotter fell below 0.8 after ~8 steps, a fast-forwarding ratio of R ≈ 87.5.
- 8-qubit Fermi-Hubbard (L=4, J=1, U=2, half-filling, n_eig=5): Under a trapped-ion-like noisy simulator, different diagonalization qualities exhibited a depth–noise trade-off. A high-quality diagonalization with final cost ~1.1×10^−5 maintained F > 0.8 up to T < 800, whereas iterated Trotter dropped below 0.8 at T > 4.6, giving R ≈ 174.
- Randomized training scalability: Batched training using only 2 training states per evaluation minimized the cost to ~1e−5 for 5- and 6-qubit XY cases (n_eig = 9 and 12, respectively), achieving noiseless fast-forwarding error < 1e−2 for over 100 steps.
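The fast-forwarding ratios quoted above are the ratio of how long each method stays above a fidelity threshold; a small helper with synthetic fidelity curves (not the experimental data) makes the bookkeeping explicit:

```python
import numpy as np

def steps_above(fidelities, threshold):
    """Number of steps before the fidelity first drops below the threshold."""
    below = np.nonzero(np.asarray(fidelities) < threshold)[0]
    return int(below[0]) if below.size else len(fidelities)

def ff_ratio(fid_ff, fid_trotter, threshold):
    """Fast-forwarding ratio R = N_fsVFF / N_Trotter at a given threshold."""
    return steps_above(fid_ff, threshold) / steps_above(fid_trotter, threshold)

# Synthetic exponential decays, loosely mimicking the shape of the 2-qubit results.
steps = np.arange(2000)
fid_ff = np.exp(-steps / 6000.0)      # slow decay (fixed-depth fast-forwarding)
fid_trotter = np.exp(-steps / 40.0)   # fast decay (iterated Trotter)
print(ff_ratio(fid_ff, fid_trotter, 0.9))
```

Real fidelity curves are neither smooth nor monotone, so the experimental R values are read off from the first threshold crossing in each measured trace.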
- fsVFF vs VFF trainability: For an XXZ Hamiltonian with a number-preserving hardware-efficient ansatz, an 8-layer fsVFF ansatz achieved C_fsVFF ≈ 1e−6 and successful fast-forwarding, whereas VFF with a deeper W (12 layers) plateaued at ~1e−2 and failed to simulate for any significant time.
- Observable convergence with training states: For a 5-qubit XY model with |ψ0⟩ = |10000⟩ (n_eig = 5), increasing the number of training states k led to convergence of Z2 expectation values. Convergence occurred by k ≥ 3, fewer than n_eig, attributed to small amplitudes on two of the eigencomponents.
- QPE/QEE enhancement: fsVFF-enhanced QPE on hardware (ibmq_boeblingen) yielded a variation distance of 0.578 vs 0.917 for standard QPE (target 3-bit phase 001). In noisy simulation, fsVFF-enhanced QPE had variation distance 0.233 vs 0.394 for VFF-enhanced QPE, despite VFF having a slightly lower noise-free cost. QEE with fsVFF on a 3-qubit XY model estimated eigenvalues with a mean-squared error of 4.37×10^−3 after removing a global phase.
- Resource reductions: fsVFF halves the qubit requirement for the cost (n vs 2n in VFF), avoids generating 2^n Bell pairs and numerous non-local CNOT/SWAP gates, and permits shallower diagonalizing circuits tailored to the n_eig-dimensional subspace.
Discussion
The results demonstrate that focusing on a fixed initial-state subspace enables practical long-time quantum simulations on NISQ devices. By learning a diagonalization only on the n_eig-dimensional subspace relevant to the chosen |ψ0⟩, fsVFF reduces width, depth, and parameter counts, improving both hardware feasibility and optimization. Experimental demonstrations showed order-of-magnitude improvements in simulation horizons over iterated Trotter, validating that fixed-depth fast-forwarding can overcome coherence constraints. The cost function’s noise resilience allowed effective training even when noisy and noise-free costs diverged by orders of magnitude, and the Gramian-based Krylov approach provided a hardware-friendly method to estimate n_eig. Comparisons against VFF and Trotterization emphasize fsVFF’s advantage in resource use and trainability, enabling higher fidelity at longer times. Furthermore, leveraging the learned diagonalization to reduce QPE/QEE depth shows fsVFF’s broader utility for spectral estimation tasks on NISQ hardware. Overall, fsVFF addresses the central challenge posed in the introduction: enabling reliable, long-time dynamical simulations on today’s quantum computers by sacrificing universality in favor of a fixed initial state, thereby substantially reducing resource demands while maintaining accuracy.
Conclusion
The paper introduces the fixed state Variational Fast Forwarding (fsVFF) algorithm, which enables long-time, high-fidelity simulation of quantum dynamics on NISQ hardware by diagonalizing only the subspace relevant to a fixed initial state. Experimentally, fsVFF achieved F > 0.9 for over 600 steps in a 2-qubit XY model—about 150× longer than iterated Trotter—and provided large fast-forwarding ratios in noisy 4-qubit XY and noiseless 8-qubit Fermi-Hubbard simulations. fsVFF halves the qubit requirement for cost evaluation relative to VFF, reduces circuit depth and parameter counts through compact or adaptive ansätze, and exhibits cost-function noise resilience. The learned diagonalization also enables QPE/QEE depth reductions and accurate eigenvalue estimation. Future directions include mitigating Trotter errors via fixed-state adaptations of Hamiltonian diagonalization methods, employing error mitigation to cope with larger n_eig, designing problem-inspired symmetry-preserving ansätze to reduce scaling, and exploring fixed-observable variants to further economize resources while retaining practical utility.
Limitations
- Dependence on Trotterization: fsVFF inherits Trotter approximation error from using U(Δt) unless U is generated more accurately (e.g., via Hamiltonian diagonalization). This error can cause leakage outside the target subspace if Δt or the Trotter order is insufficient.
- Subspace size (n_eig): The approach is efficient only when the initial state overlaps a non-exponential number of eigenstates. Evaluating C_fsVFF scales with n_eig, and implementing U^{n_eig} (or sufficient training coverage) must be feasible within coherence limits.
- Trainability of the global cost: The global cost can exhibit barren plateaus for larger systems; local-cost variants are proposed to mitigate this but may introduce additional design complexity.
- Hardware noise and depth trade-offs: Although fsVFF reduces depth, practical performance still depends on noise levels; deeper ansätze for higher-quality diagonalization can degrade performance under noise, requiring careful depth–accuracy trade-offs.
- Estimation of n_eig: While the Gramian method can determine n_eig, it requires additional circuits and is sensitive to noise; alternatively, convergence-based strategies may be used at the cost of extra training runs.
- Spectral phase ambiguity: Eigenvalue information extracted from D determines energies only up to a global phase (i.e., energy differences); additional procedures are needed for absolute energies.