A PRACTICAL OVERVIEW OF QUANTUM COMPUTING: IS EXASCALE POSSIBLE?

Computer Science


J. H. Davenport, J. R. Jones, et al.

This paper, by James H. Davenport, Jessica R. Jones, and Matthew Thomason, examines hybrid quantum-classical systems, algorithms such as Shor's and Grover's, and the infrastructure needed for production-scale quantum computing.

Introduction
The paper examines what production quantum computers might look like and whether an exa-op quantum system could be realized. Drawing parallels with classical HPC’s trajectory toward exascale, the authors argue that while a hybrid quantum-classical machine may be feasible perhaps by mid-century, large-scale, general-purpose quantum computing remains distant. They foreground constraints unique to quantum devices—limited parallelism relative to classical HPC, error sensitivity, and the need for new toolchains and training—while noting that many HPC skills will remain relevant. The central question is whether exascale (in the sense of exa-operations per second) quantum computation is achievable and what practical barriers stand in the way.
Literature Review
The work synthesizes literature on quantum algorithms, error correction, and quantum-HPC integration. It references foundational algorithms such as Grover’s search and Shor’s factoring, as well as analyses of their practical resource requirements. It surveys error mitigation and correction advances (e.g., zero-noise extrapolation, probabilistic error cancellation, randomized compiling, surface codes, logical qubits) and recent critiques regarding experimental demonstrations of Shor’s algorithm. It also covers classical factoring via GNFS and associated linear algebra (block Wiedemann), as well as proposals for hybrid schemes that offload subproblems (e.g., smoothness tests in sieving) to quantum devices. The review extends to software and tooling needs for quantum programming, including simulators (Qiskit, qsim) and control/OS layers, and operational considerations like energy use, maintenance, and reliability gleaned from HPC practice and early quantum deployments.
Methodology
The paper presents a conceptual and comparative analysis rather than an empirical study. Its methods include:
- Parallelism analysis: an illustrative calculation using AES-128 key search contrasts classical parallel brute force with quantum Grover search, showing why naively subdividing the search space does not yield linear quantum speedups. Grover over a 2^128 key space requires ~2^64 oracle calls; splitting the space across 1,000 machines leaves a per-machine search size of ~2^118, implying ~2^59 oracle calls and ~16 years at 1 ns per oracle call, so roughly 10^6 machines are needed to approach half a year (a worked version of this arithmetic appears after this list).
- Error correction and noise: discussion of NISQ-era limitations, sensitivity to gate and coherence errors, and the scaling overheads of error mitigation and correction (surface codes, zero-noise extrapolation, probabilistic error cancellation, randomized compiling). The paper emphasizes the impracticality of checkpoint/restart in quantum computations and the resulting need for higher reliability.
- Factoring case study: a detailed outline of the classical GNFS steps (setup, sieving for B-smooth relations on the integer and algebraic sides, sparse linear algebra via block Wiedemann), with the conjectured complexity exp(((64/9)^{1/3} + o(1)) (log N)^{1/3} (log log N)^{2/3}).
- Shor's algorithm resource analysis: the theoretical O(poly(n)) running time versus practical resource needs. For RSA-2048, roughly 4100 error-free logical qubits, circuit depth O(n^3) in the bit length n, and ~10^10 error-free cycles are required. Published mappings vary widely: about 20 million noisy qubits for ~8 hours under certain assumptions; 177 days with 13,436 qubits plus a multimode memory; and a simulator-based estimate of 10,241 qubits, 2.22×10^12 gates, and depth 1.79×10^12.
- Hybrid pragmatic approach: Grover is used to accelerate GNFS sieving by dividing the search into tiles each expected to contain exactly one B-smooth pair (a, b), reducing the L_{1/3} exponent from ~1.923 to ~1.387, i.e. exp(((8/3)^{1/3} + o(1)) (log N)^{1/3} (log log N)^{2/3}). Qubit requirements are O(n^{2/3}), with a refined bound of 8n^{2/3} + 4n^{2/3} log(1.44 n^{2/3}). The paper analyzes the ambiguity when a tile fails (no solution, multiple solutions that affect Grover, or a computation error) using Poisson statistics: P(exactly one solution) = 1/e ≈ 0.368, P(two) = 1/(2e) ≈ 0.184, P(more than two) ≈ 0.08, and notes that returned pairs must be verified classically (the complexity and tile-statistics sketches after this list reproduce these numbers).
- Comparative summary: GNFS, Shor, and the hybrid approach are contrasted in asymptotic time and qubit counts, including the crossover at which the hybrid becomes asymptotically qubit-cheaper than Shor (around n ≈ 4×10^5 bits).
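To make the parallelism argument concrete, the following small Python sketch (an illustration based only on the figures quoted above, not code from the paper) reproduces the back-of-the-envelope numbers: Grover over a key space of size K needs roughly sqrt(K) oracle calls, so splitting the space over M machines only reduces per-machine work to sqrt(K/M), a factor of sqrt(M) rather than M.

import math

NS_PER_YEAR = 365 * 24 * 3600 * 1e9   # nanoseconds in one year
KEY_SPACE = 2 ** 128                  # AES-128 key space

def grover_years(machines, ns_per_oracle=1.0):
    # Per-machine wall-clock time, assuming ~sqrt(K/M) Grover oracle calls
    # at ns_per_oracle nanoseconds each (illustrative model only).
    oracle_calls = math.isqrt(KEY_SPACE // machines)
    return oracle_calls * ns_per_oracle / NS_PER_YEAR

for m in (1, 1_000, 1_000_000):
    print(f"{m:>9,} machines: ~{grover_years(m):,.1f} years each")
# Prints roughly 585, 18 and 0.6 years per machine, the same order of
# magnitude as the ~500-year, ~16-year and ~0.5-year figures quoted above.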
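A quick way to see what reducing the exponent from (64/9)^{1/3} ≈ 1.923 to (8/3)^{1/3} ≈ 1.387 means in practice is to evaluate the L-notation heuristic for a 2048-bit modulus with the o(1) term dropped. This is illustrative arithmetic under that simplifying assumption, not a result stated in the paper.

import math

def L_one_third(bits, c):
    # Heuristic L_N[1/3, c] = exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)),
    # with the o(1) term dropped (illustration only).
    ln_n = bits * math.log(2)            # ln N for an n-bit modulus
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

c_gnfs = (64 / 9) ** (1 / 3)     # ~1.923, classical GNFS
c_hybrid = (8 / 3) ** (1 / 3)    # ~1.387, Grover-accelerated sieving

for name, c in (("GNFS", c_gnfs), ("hybrid", c_hybrid)):
    print(f"{name:6s} c={c:.3f}  L ~ 10^{math.log10(L_one_third(2048, c)):.0f}")
# GNFS   c=1.923  L ~ 10^35
# hybrid c=1.387  L ~ 10^25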
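The tile statistics quoted above follow from a Poisson model with a mean of one B-smooth pair per tile; the short sketch below (again an illustration, not the paper's code) reproduces the probabilities.

import math

def poisson_pmf(k, lam=1.0):
    # P[X = k] for a Poisson random variable with mean lam.
    return math.exp(-lam) * lam ** k / math.factorial(k)

p0, p1, p2 = (poisson_pmf(k) for k in range(3))
print(f"empty tile        : {p0:.3f}")                 # ~0.368, Grover finds nothing
print(f"exactly one pair  : {p1:.3f}")                 # ~0.368, the intended case
print(f"exactly two pairs : {p2:.3f}")                 # ~0.184, Grover behavior changes
print(f"more than two     : {1 - p0 - p1 - p2:.3f}")   # ~0.080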
Key Findings
- Early production systems will likely be hybrid quantum-classical architectures; compilers, libraries, and training must be rethought, but core HPC skills will remain valuable.
- Parallelism does not translate straightforwardly: Grover speedups cannot be linearly parallelized across many devices as in classical HPC. For AES-128, Grover requires ~2^64 oracle calls; at 1 ns per call this is ~500 years, splitting across 1,000 devices still implies ~16 years per device, and roughly 10^6 devices are needed to approach half a year.
- Error correction remains a major obstacle. NISQ devices are limited to shallow circuits, constraining practical workloads. Quantum checkpoint/restart is not available, and reliability and maintenance uncertainties are high.
- Classical GNFS factoring has conjectured complexity exp(((64/9)^{1/3} + o(1)) (log N)^{1/3} (log log N)^{2/3}), with (64/9)^{1/3} ≈ 1.923.
- Shor's algorithm is polynomial-time but practically demanding: for RSA-2048, about 4100 error-free logical qubits, ~10^10 error-free cycles, and depth O(n^3) in the bit length n. Resource mappings vary widely: 20 million noisy qubits for ~8 hours under certain assumptions; 177 days with 13,436 qubits and multimode memory; simulator estimates of 10,241 qubits, 2.22×10^12 gates, and depth 1.79×10^12.
- A hybrid GNFS+Grover approach reduces the L_{1/3} exponent to (8/3)^{1/3} ≈ 1.387, with qubit usage O(n^{2/3}) (refined bound 8n^{2/3} + 4n^{2/3} log(1.44 n^{2/3})). Practical gains, however, depend on engineering challenges (e.g., tile ambiguity and error rates), and the hybrid uses fewer qubits than Shor only for extremely large moduli (n ≈ 4×10^5 bits).
- Probability modeling of the tiling shows only ~36.8% of tiles contain exactly one solution, ~18.4% contain two, and ~8% contain more than two, affecting Grover convergence and requiring careful handling plus classical verification.
- Energy and operations: Frontier consumes ~21 MW for 1.102 exaFLOPS. Contemporary quantum systems devote ~20-30 kW to cooling plus ~1-2 kW during runs; how power scales with qubit count and maintenance cycles is unknown. Cooling and warm-up cycles take days, affecting availability and energy use (a rough comparison of these figures follows this list).
- "Green" advantages are unproven for early quantum systems once full infrastructure and maintenance are included; in the long term, higher FLOPS/watt may be achievable as systems mature and software adapts.
- Exa-op quantum capability might be plausible by late century, with more modest hybrid systems possible by around 2050, though initially at scales below general-purpose needs.
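For context on the energy figures above, the following illustrative arithmetic uses only the numbers quoted in this summary (Frontier's power and performance, and the quoted quantum cooling and run-time draw); it is a rough comparison, not data from the paper.

# Illustrative arithmetic from the figures quoted above (assumptions, not new data).
frontier_flops = 1.102e18        # Frontier: ~1.102 exaFLOPS
frontier_watts = 21e6            # ~21 MW
print(f"Frontier: ~{frontier_flops / frontier_watts / 1e9:.0f} GFLOPS per watt")

# A contemporary quantum system: ~20-30 kW of cooling plus ~1-2 kW during runs.
quantum_watts = (25e3, 1.5e3)    # midpoints of the quoted ranges
print(f"Quantum system draw: ~{sum(quantum_watts) / 1e3:.1f} kW total")
# How quantum throughput per watt compares is exactly the open question the
# paper raises: power scaling with qubit count is unknown.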
Discussion
The analysis suggests that while exascale-like quantum performance may eventually be achievable, significant obstacles separate current capabilities from that vision. Quantum parallelism is fundamentally constrained compared with classical data-parallel scaling, limiting near-term throughput gains. Error correction and mitigation remain the gating factors for depth and reliability; without robust fault tolerance, algorithms like Shor's cannot be run at meaningful scales. The hybrid GNFS+Grover approach provides a pragmatic path to near- and mid-term quantum acceleration of specific subroutines, yielding a substantial asymptotic reduction in the L_{1/3} exponent compared with purely classical GNFS. However, practical engineering issues (error rates, ambiguity in tile outcomes, the need for classical verification) and the large qubit counts required for impactful problem sizes temper expectations. Operational realities such as energy use, the lack of checkpointing, and maintenance-induced downtime further complicate deployment in supercomputing centers. Consequently, early quantum deployment will likely focus on specialized hybrid acceleration of well-chosen workloads, leveraging existing HPC expertise while new toolchains and training mature. The answer to the central question is therefore a qualified one: exascale quantum computation may ultimately be possible, but the immediate future is hybrid, incremental, and domain-targeted rather than a wholesale replacement of classical exascale.
Conclusion
Quantum computing differs fundamentally from classical HPC both at small and large scales. Near-term systems will be hybrid, with quantum devices acting as accelerators. Programming models, toolchains, and operator practices must evolve, and workforce development is critical. Error correction remains the key technical bottleneck; until robust fault tolerance is realized, large-scale algorithms like Shor’s will not be reliably practical. A hybrid approach that accelerates specific GNFS subroutines via Grover can yield meaningful asymptotic improvements with moderate qubit counts, but significant engineering challenges persist. Operationally, quantum systems will demand substantial energy and careful maintenance planning, likely resulting in high initial OPEX and uncertain “green” benefits. In the long run, however, quantum-enabled supercomputers may offer superior performance per watt and broaden the set of tractable problems. Future work should prioritize fault-tolerant architectures, scalable error mitigation/correction, robust software stacks (compilers, debuggers, profilers), and realistic performance/energy models to inform data center integration.
Limitations
The work is a conceptual overview without experimental validation. Many quantitative estimates (qubit counts, runtimes, error rates, energy) depend on assumptions that vary widely across technologies and are based on simulations or theoretical mappings rather than operational systems. Error correction overheads and reliability in production environments are uncertain, as are maintenance schedules and energy scaling with system size. The practicality of the hybrid GNFS+Grover approach depends on unresolved engineering issues (e.g., handling multiple-solution tiles and quantum errors) and its benefits are asymptotic, with crossover points far beyond current cryptographic sizes.