Introduction
Determining the ground state of interacting quantum many-body systems is a long-standing challenge in condensed-matter and quantum physics, particularly for complex, large-scale two-dimensional systems. Existing numerical methods each face limitations: exact diagonalization suffers from the curse of dimensionality, quantum Monte Carlo from the sign problem, and tensor network methods from the growth of entanglement complexity. A prime example of such complexity is the putative quantum-spin-liquid (QSL) phase in frustrated magnets, where the nature of many presumed QSLs, such as those in J₁-J₂ Heisenberg magnets on square and triangular lattices, remains debated despite extensive numerical effort. Neural quantum states (NQSs), which use artificial neural networks to encode many-body wavefunctions, present a promising alternative. While NQSs have shown progress, they are hindered by the limitations of current optimization algorithms, particularly the high computational cost of stochastic reconfiguration (SR) for deep networks. This cost forces the use of shallow networks with limited parameter counts, preventing the full expressive power of deep neural networks from being exploited.
Literature Review
Significant progress has been made using NQSs to study quantum spin liquids. However, the standard optimization method, stochastic reconfiguration (SR), requires constructing and solving a linear system in the quantum metric, whose cost grows at least quadratically with the number of parameters (and cubically if the metric is inverted directly), limiting the size and depth of the networks that can be trained. This has restricted NQS applications to relatively shallow architectures, such as restricted Boltzmann machines (RBMs) and shallow convolutional neural networks (CNNs). Various attempts to overcome this limitation include iterative solvers, approximate optimizers, and high-performance computing resources, yet the cost of SR remains the major bottleneck preventing deep NQSs from realizing their full potential on complex physics problems. This paper addresses that limitation by proposing a new optimization algorithm.
Methodology
This paper introduces the minimum-step stochastic reconfiguration (MinSR) algorithm as a low-cost alternative to standard SR for training NQSs, significantly reducing the computational cost while maintaining accuracy. In SR, each update minimizes the quantum distance between the new variational state and the exact imaginary-time-evolved state. This minimization can be cast as a least-squares problem whose normal equations involve the Nθ × Nθ quantum metric, where Nθ is the number of network parameters; traditional SR must invert this matrix, at O(Nθ³) cost. MinSR instead works with the neural tangent kernel, an Ns × Ns matrix (where Ns is the number of Monte Carlo samples) that shares its non-zero eigenvalues with the quantum metric. Inverting this much smaller matrix reduces the cost of an update to O(Ns²Nθ + Ns³), which for a fixed sample budget with Ns << Nθ is essentially linear in the number of parameters. The effectiveness of MinSR is demonstrated by training deep neural networks with up to 64 layers and on the order of 10⁶ parameters. The study employs two different ResNet architectures (ResNet1 and ResNet2) to variationally learn the ground states of the spin-1/2 J₁-J₂ Heisenberg model on a square lattice, H = J₁ Σ_⟨i,j⟩ Sᵢ·Sⱼ + J₂ Σ_⟨⟨i,j⟩⟩ Sᵢ·Sⱼ (sums over nearest- and next-nearest-neighbor pairs, respectively), focusing on J₂/J₁ = 0 (non-frustrated) and J₂/J₁ = 0.5 (strongly frustrated). Zero-variance extrapolation is used to improve the accuracy of the estimated ground-state energies, especially in the frustrated case. The energy gap between the ground state and the first excited state is also computed to probe the nature of the QSL phase, and finite-size scaling is performed to extrapolate it to the thermodynamic limit.
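To make the linear algebra concrete, here is a minimal NumPy sketch of the two routes to the same parameter update. The arrays O and eps are random placeholders standing in for the Monte Carlo estimates of the centered log-derivatives and local-energy residuals; the sizes, variable names, and the use of a plain pseudo-inverse are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, Ntheta = 100, 2000  # illustrative sizes in the regime Ns << Ntheta

# Placeholder Monte Carlo data. In a real NQS calculation,
# O[s, k] would be the centered log-derivative of ln(psi) at sample s
# with respect to parameter theta_k, and eps[s] a residual built from
# the centered local energies and the imaginary-time step.
O = rng.normal(size=(Ns, Ntheta)) / np.sqrt(Ns)
eps = rng.normal(size=Ns) / np.sqrt(Ns)

# Standard SR: solve the normal equations with the Ntheta x Ntheta
# quantum metric S = O^dag O, which costs O(Ntheta^3) to invert.
S = O.conj().T @ O
dtheta_sr = np.linalg.pinv(S) @ (O.conj().T @ eps)

# MinSR: invert the Ns x Ns neural tangent kernel T = O O^dag instead,
# which costs only O(Ns^3); pinv plays the role of a regularized inverse.
T = O @ O.conj().T
dtheta_minsr = O.conj().T @ (np.linalg.pinv(T) @ eps)

# Both are the minimum-norm least-squares solution of O @ dtheta = eps,
# via the identity pinv(O) = pinv(O^dag O) @ O^dag = O^dag @ pinv(O O^dag).
print(np.allclose(dtheta_sr, dtheta_minsr))  # True
```

The two updates coincide because the pseudo-inverse of O can be evaluated through either Gram matrix; MinSR simply picks the far smaller of the two when samples are scarce relative to parameters.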
Key Findings
MinSR allows training of significantly larger and deeper NQSs than traditional SR. For the non-frustrated Heisenberg model on a 10×10 square lattice, the deep NQS trained with MinSR achieves unprecedented accuracy, outperforming other variational methods with variational energies approaching machine precision. For the strongly frustrated J₁-J₂ model (J₂/J₁ = 0.5) on a 10×10 square lattice, MinSR again improves on existing results, reaching variational energies below those attainable by other numerical schemes. The resulting variational energies (e.g., -0.4976921(4) per site for the 10×10 lattice) allow a precise zero-variance extrapolation of the ground-state energy (-0.497715(9)). The superior performance persists on larger lattices (16×16), yielding the best variational energy to date. Crucially, the energy gap between the ground state and the first excited state, extrapolated to the thermodynamic limit via finite-size scaling, strongly suggests a vanishing gap, providing strong evidence for a gapless quantum spin liquid phase in the square-lattice J₁-J₂ model. This contrasts with some previous studies that suggested a gapped QSL.
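Both extrapolations quoted above reduce, at heart, to linear fits. The following sketch shows the idea on synthetic numbers (invented for illustration; none of them are the paper's data): the ground-state energy is read off as the variance → 0 intercept, and the gap as the 1/L → 0 intercept.

```python
import numpy as np

# Synthetic stand-in numbers, invented for illustration only.

# 1) Zero-variance extrapolation: for good variational states the energy
#    per site is approximately linear in the energy-variance density, so
#    the ground-state energy is estimated as the variance -> 0 intercept.
var_density = np.array([4e-5, 2e-5, 1e-5, 5e-6])
energy = np.array([-0.49762, -0.49766, -0.49769, -0.49770])
_, e0 = np.polyfit(var_density, energy, 1)
print(f"zero-variance estimate of E0: {e0:.6f}")

# 2) Finite-size scaling of the gap: fit gaps from L x L lattices against
#    1/L; an intercept consistent with zero at 1/L -> 0 supports a
#    gapless QSL in the thermodynamic limit.
L = np.array([6, 8, 10, 16])
gap = np.array([0.41, 0.30, 0.24, 0.15])
_, gap_inf = np.polyfit(1.0 / L, gap, 1)
print(f"extrapolated gap at 1/L -> 0: {gap_inf:.3f}")
```

The 1/L fitting form is one common choice for a gap closing linearly with inverse system size; the appropriate scaling ansatz in practice depends on the assumed low-energy physics.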
Discussion
The results demonstrate the power of combining deep NQSs with the efficient MinSR optimization algorithm. The method significantly improves the accuracy of variational calculations, especially for challenging strongly correlated systems such as frustrated magnets. The accuracy achieved, particularly for large lattice sizes, enables the reliable extraction of physically meaningful information, such as the identification of gapless QSL phases, addressing a long-standing open question in the field. The success in the J₁-J₂ model suggests broad applicability of MinSR to other frustrated quantum magnets and beyond, and the algorithm's generality makes it applicable to other variational wavefunctions, potentially enhancing established methods such as tensor networks.
Conclusion
This work introduces MinSR, a highly efficient optimization algorithm for training deep NQSs. Its application to frustrated spin models demonstrates its remarkable ability to achieve high accuracy and to provide valuable insights into the nature of QSL phases. Future research directions include extending the approach to fermionic systems (Hubbard model, ab initio quantum chemistry) and exploring MinSR's potential in other machine-learning domains where efficient optimization of deep networks is crucial.
Limitations
While the MinSR algorithm significantly improves the efficiency and scalability of NQS training, limitations remain. The accuracy of the results depends on the choice of neural network architecture and on the number of Monte Carlo samples used during optimization. Moreover, the computational cost, although greatly reduced compared to traditional SR, can still be substantial for extremely large systems. Finally, the reported ground-state energies and gaps rest on extrapolations (zero-variance extrapolation of the energy and finite-size scaling of the gap) whose validity depends on assumptions about how these quantities behave.