Deep quantum neural networks on a superconducting processor

Physics

X. Pan, Z. Lu, et al.

This research by Xiaoxuan Pan and colleagues demonstrates the training of deep quantum neural networks on a six-qubit superconducting processor, achieving mean fidelities of up to 96.0% in learning quantum channels and up to 93.3% accuracy in estimating the ground-state energy of molecular hydrogen. The findings mark a concrete step toward practical quantum machine learning on near-term hardware.

Introduction
Deep learning's success in diverse fields, from game playing to protein structure prediction, stems from deep neural networks' ability to extract high-level features from data using multiple hidden layers. The backpropagation (BP) algorithm efficiently calculates gradients for parameter updates via gradient descent. Quantum machine learning leverages quantum mechanics (superposition and entanglement) for potential advantages over classical methods. Recent progress includes quantum speedups in classification and generative models, and experimental demonstrations of quantum convolutional neural networks and adversarial learning. This paper focuses on deep quantum neural networks (DQNNs), which have a layer-by-layer architecture with multiple hidden layers, trainable via a quantum analog of the BP algorithm. The 'deep' refers to multiple hidden layers, not necessarily circuit depth. This work experimentally demonstrates training DQNNs using BP on a superconducting processor, showcasing their potential for various applications.
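For reference, the classical update rule that the quantum analog mirrors is plain gradient descent, written below; here $\theta$ collects the trainable parameters, $\mathcal{L}$ is the loss, and $\eta$ is the learning rate (notation introduced for illustration; the paper's own symbols may differ):

```latex
\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} \mathcal{L}(\theta_t)
```

In the DQNN setting, $\mathcal{L}$ is the infidelity to the target channel output, or the energy estimate to be minimized.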
Literature Review
The authors review existing work on quantum machine learning, highlighting rigorous quantum speedups demonstrated in classification and generative models. They also cite previous experimental work on quantum neural networks, including implementations of quantum convolutional neural networks and quantum adversarial learning on superconducting quantum processors. The concept of deep quantum neural networks (DQNNs) with layer-by-layer architectures and training via quantum backpropagation is introduced, referencing relevant prior research. The paper positions its work within this existing body of knowledge, emphasizing the novelty of its experimental demonstration of training DQNNs on a superconducting processor.
Methodology
The experiment uses a six-qubit programmable superconducting processor with frequency-tunable transmon qubits. The DQNN has a layer-by-layer architecture built from quantum perceptrons, each consisting of two single-qubit rotation gates and a controlled-phase gate. The forward pass of the backpropagation algorithm runs on the quantum processor, while the backward pass is simulated classically.

Two learning tasks are studied. For learning a quantum channel, the goal is to maximize the mean fidelity between the DQNN output and the target channel output; for learning a ground-state energy, the goal is to minimize the energy estimate. Training proceeds by initializing the parameters, performing the forward pass to obtain the quantum states, performing the classical backward pass to calculate gradients, updating the parameters accordingly, and iterating until convergence.

The experimental setup details the processor's components (qubits, control lines, readout resonators) and the characterized gate fidelities. Numerical simulations account for qubit decoherence and residual ZZ interactions between qubits to assess their influence on training performance. Different DQNN architectures (three-layer and six-layer) are used for learning two-qubit and one-qubit quantum channels, respectively, and quantum state tomography extracts the quantum states of the hidden and output layers. Minimal sketches of one perceptron and of the training loop follow.
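To make the perceptron concrete, here is a minimal Qiskit sketch of one perceptron acting on an input qubit and an output qubit. The paper specifies only two single-qubit rotations plus a controlled-phase gate per perceptron; the particular rotation axes (Ry/Rz) and the gate ordering below are illustrative assumptions.

```python
# Hypothetical sketch of one quantum perceptron: two trainable
# single-qubit rotations on the output qubit plus a controlled-phase
# gate coupling it to an input qubit. Axes/ordering are assumptions.
from qiskit import QuantumCircuit

def perceptron(theta1: float, theta2: float, phi: float) -> QuantumCircuit:
    qc = QuantumCircuit(2)   # qubit 0: input layer, qubit 1: output layer
    qc.ry(theta1, 1)         # first trainable single-qubit rotation
    qc.cp(phi, 0, 1)         # controlled-phase entangles the two layers
    qc.rz(theta2, 1)         # second trainable single-qubit rotation
    return qc

print(perceptron(0.3, 0.7, 1.2).draw())
```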
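The training loop itself follows the familiar hybrid pattern described above. Below is a self-contained NumPy toy in which a single parameterized rotation stands in for the full DQNN: initialize the parameter, run the forward pass, compute the gradient classically (central finite differences here, playing the role of the classically simulated backward pass), and update by gradient descent. All names and values are hypothetical.

```python
import numpy as np

def forward(theta: float) -> np.ndarray:
    """Toy forward pass: one Ry rotation on |0>, standing in for the DQNN."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def fidelity(psi: np.ndarray, phi: np.ndarray) -> float:
    """Pure-state fidelity |<psi|phi>|^2."""
    return abs(np.vdot(psi, phi)) ** 2

target = np.array([np.cos(0.8), np.sin(0.8)])        # hypothetical target state
theta = np.random.default_rng(0).uniform(0, np.pi)   # random initialization
lr, eps = 0.5, 1e-4                                  # learning rate, FD step

for step in range(200):
    # Loss = infidelity; its gradient is estimated by finite differences,
    # standing in for the classically simulated backward pass.
    loss = lambda t: 1.0 - fidelity(forward(t), target)
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad                               # gradient-descent update

print(f"final fidelity: {fidelity(forward(theta), target):.4f}")
```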
Key Findings
The three-layer DQNN (DQNN1) achieved a mean fidelity of up to 96.0% in learning a two-qubit quantum channel and an accuracy of up to 93.3% in learning the ground-state energy of molecular hydrogen. The six-layer DQNN (DQNN2) reached a mean fidelity of up to 94.8% in learning a one-qubit quantum channel. These results demonstrate the efficacy of the quantum backpropagation algorithm and the feasibility of training deep quantum neural networks on near-term quantum hardware. Classical simulations help isolate the effects of experimental imperfections such as decoherence and residual ZZ interactions; they show that the residual ZZ interaction strength affects the accuracy of the energy estimate more strongly than the coherence time does. The distribution of converged fidelities and energy estimates across multiple training runs with different initial parameters is presented and analyzed. When tested on a set of randomly generated quantum states, the trained DQNNs achieve significantly higher fidelity than untrained ones (see the sketch below). Notably, because qubits are reused across layers, the number of qubits required for training does not scale with network depth, a significant finding for scalability.
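As a rough illustration of how such a test works, the sketch below estimates mean fidelity over Haar-random input states for a hypothetical target unitary and a learned approximation of it. Both maps are stand-ins chosen for illustration; the paper's channels and trained DQNNs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(dim: int) -> np.ndarray:
    """Haar-random pure state: normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def ry(theta: float) -> np.ndarray:
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

target = ry(1.20)    # hypothetical target channel (a unitary here)
learned = ry(1.17)   # hypothetical trained approximation of it

# Mean fidelity between target and learned outputs over random test states.
fids = [abs(np.vdot(target @ psi, learned @ psi)) ** 2
        for psi in (random_state(2) for _ in range(1000))]
print(f"mean test fidelity: {np.mean(fids):.4f}")
```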
Discussion
The experimental results validate the potential of DQNNs for quantum machine learning tasks. The high fidelities and accuracy achieved in learning quantum channels and ground-state energies demonstrate the effectiveness of the quantum backpropagation algorithm. Because the number of qubits needed does not grow with network depth, the approach promises to scale to larger and more complex quantum machine learning models. Comparing experimental and numerical results quantifies the effects of noise and imperfections on the training process, providing valuable guidance for future improvements in quantum hardware and algorithms. Overall, the study advances quantum machine learning by demonstrating that deep quantum neural networks can be trained on currently available noisy quantum computers.
Conclusion
This work experimentally demonstrates the successful training of deep quantum neural networks using a quantum analog of the backpropagation algorithm on a six-qubit superconducting processor. The high fidelities and accuracy achieved in learning both quantum channels and ground state energy showcase the potential of this approach. The finding that the number of qubits required does not scale with depth is significant for scalability. Future research could explore larger-scale DQNNs with enhanced hardware and algorithms, investigating the potential for quantum advantage in more complex machine learning tasks. Further optimization of the quantum hardware and the incorporation of error mitigation techniques would improve training performance and expand the scope of possible applications.
Limitations
The study is limited by the size and coherence time of the currently available superconducting quantum processor. The backward pass of the backpropagation algorithm was classically simulated, rather than implemented on the quantum device, potentially limiting the overall speed and efficiency. The choice of specific quantum perceptrons and the ansatz for the target quantum channels might affect the generalizability of the results. Experimental imperfections such as residual ZZ interactions and qubit decoherence impacted the accuracy of the results.