Open access · Posted Content

Simulating the Sycamore quantum supremacy circuits

Abstract: We propose a general tensor network method for simulating quantum circuits. The method is massively more efficient than existing methods at computing a large number of correlated bitstring amplitudes and probabilities. As an application, we study the sampling problem of Google's Sycamore circuits, which are believed to be beyond the reach of classical supercomputers and have been used to demonstrate quantum supremacy. Using our method on a small computational cluster containing 60 graphics processing units (GPUs), we have generated one million correlated bitstrings with some entries fixed from the Sycamore circuit with 53 qubits and 20 cycles, with a linear cross-entropy benchmark (XEB) fidelity equal to 0.739, much higher than those in Google's quantum supremacy experiments.
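For readers unfamiliar with the metric, the linear XEB fidelity is computed from the ideal circuit probabilities of the sampled bitstrings as $F_{\mathrm{XEB}} = 2^{n}\langle P(s)\rangle_s - 1$. A minimal sketch of that formula (illustrative only, not the authors' code):

```python
import numpy as np

def linear_xeb(ideal_probs, n_qubits):
    """Linear cross-entropy benchmark fidelity.

    ideal_probs: ideal circuit probabilities P(s) of the sampled
    bitstrings s (one entry per sample).
    Returns 2^n * <P(s)> - 1, which is ~1 for perfect sampling
    and ~0 for uniformly random bitstrings.
    """
    return 2**n_qubits * np.mean(ideal_probs) - 1

# Toy check: uniform random sampling over n qubits gives F_XEB ~ 0.
n = 10
uniform_probs = np.full(1000, 1 / 2**n)
print(linear_xeb(uniform_probs, n))  # 0.0
```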


Topics: Qubit (53%)
Citations

22 results found


Open access · Journal Article · DOI: 10.1016/J.SCIB.2021.10.017
Qingling Zhu, Sirui Cao, Fusheng Chen, Ming-Cheng Chen +49 more · Institutions (2)
Abstract: To ensure a long-term quantum computational advantage, the quantum hardware should be upgraded to withstand the competition of continuously improving classical algorithms and hardware. Here, we demonstrate a superconducting quantum computing system, Zuchongzhi 2.1, which has 66 qubits in a two-dimensional array in a tunable coupler architecture. The readout fidelity of Zuchongzhi 2.1 is considerably improved to an average of 97.74%. The more powerful quantum processor enables us to achieve larger-scale random quantum circuit sampling, with a system scale of up to 60 qubits and 24 cycles, and a fidelity of $F_{\mathrm{XEB}} = (3.66 \pm 0.345) \times 10^{-4}$. The achieved sampling task is about 6 orders of magnitude more difficult than that of Sycamore [Nature 574, 505 (2019)] in classical simulation, and 3 orders of magnitude more difficult than the sampling task on Zuchongzhi 2.0 [arXiv:2106.14734 (2021)]. The time consumption of classically simulating the random circuit sampling experiment using state-of-the-art classical algorithms and supercomputers is extended to tens of thousands of years (about $4.8 \times 10^{4}$ years), while Zuchongzhi 2.1 takes only about 4.2 h, thereby significantly enhancing the quantum computational advantage.
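As a quick sanity check, the abstract's own figures imply a quantum-to-classical speedup factor of roughly $10^8$; a few lines of arithmetic (values taken directly from the abstract):

```python
# Speedup implied by the figures quoted in the abstract above.
classical_hours = 4.8e4 * 365 * 24   # ~4.8e4 years converted to hours
quantum_hours = 4.2                  # Zuchongzhi 2.1 runtime
print(f"advantage factor ~ {classical_hours / quantum_hours:.1e}")  # ~1.0e+08
```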


Topics: Quantum computer (63%), Qubit (61%), Superconducting quantum computing (60%)

7 Citations


Open access · Posted Content
Abstract: We study large-scale applications using a GPU-accelerated version of the massively parallel Jülich universal quantum computer simulator (JUQCS-G). First, we benchmark JUWELS Booster, a GPU cluster with 3744 NVIDIA A100 Tensor Core GPUs. Then, we use JUQCS-G to study the relation between quantum annealing (QA) and the quantum approximate optimization algorithm (QAOA). We find that a very coarsely discretized version of QA, termed approximate quantum annealing (AQA), performs surprisingly well in comparison to the QAOA. It can either be used to initialize the QAOA, or to avoid the costly optimization procedure altogether. Furthermore, we study the scaling of the success probability when using AQA for problems with 30 to 40 qubits. We find that the case with the largest discretization error performs most favorably, surpassing the best result obtained from the QAOA.
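A minimal sketch of the AQA idea as described: discretize a linear annealing schedule into a handful of QAOA-style angle pairs. The function and schedule below are illustrative assumptions, not JUQCS-G's actual interface:

```python
import numpy as np

def aqa_angles(p, total_time):
    """Discretize a linear annealing schedule s(t) = t/T into p
    QAOA-like steps: gamma_k weights the cost Hamiltonian, beta_k
    the mixer. Coarse p (a large time step) is the 'AQA' regime."""
    dt = total_time / p
    k = np.arange(1, p + 1)
    s = (k - 0.5) / p            # schedule evaluated mid-step
    gammas = s * dt              # cost-Hamiltonian angles
    betas = (1 - s) * dt         # mixer angles
    return gammas, betas

# Example: a very coarse 3-step discretization of a T = 6 anneal,
# usable as an initial point for QAOA parameter optimization.
print(aqa_angles(3, 6.0))
```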


Topics: Quantum annealing (64%), Quantum Turing machine (56%), GPU cluster (55%)

5 Citations


Open access · Posted Content
Abstract: My 2018 lecture at the ICA workshop in Singapore dealt with quantum computation as a meeting point of the laws of computation and the laws of quantum mechanics. We described a computational complexity argument against the feasibility of quantum computers: we identified a very low-level complexity class of probability distributions described by noisy intermediate-scale quantum computers, and explained why it would allow neither good-quality quantum error-correction nor a demonstration of "quantum supremacy," namely, the ability of quantum computers to make computations that are impossible or extremely hard for classical computers. We went on to describe general predictions arising from the argument and proposed general laws that manifest the failure of quantum computers. In October 2019, "Nature" published a paper describing an experimental work that took place at Google. The paper claims to demonstrate quantum (computational) supremacy on a 53-qubit quantum computer, thus clearly challenging my theory. In this paper, I will explain and discuss my work in the perspective of Google's supremacy claims.


Topics: Quantum computer (65%), Computation (55%), Quantum (51%)

3 Citations


Open access · Posted Content
Abstract: We study the problem of generating independent samples from the output distribution of Google's Sycamore quantum circuits with a target fidelity, which is believed to be beyond the reach of classical supercomputers and has been used to demonstrate quantum supremacy. We propose a new method to classically solve this problem by contracting the corresponding tensor network just once; the method is massively more efficient than existing methods at obtaining a large number of uncorrelated samples with a target fidelity. For the Sycamore quantum supremacy circuit with $53$ qubits and $20$ cycles, we have generated one million uncorrelated bitstrings $\{\mathbf s\}$ which are sampled from a distribution $\widehat P(\mathbf s)=|\widehat \psi(\mathbf s)|^2$, where the approximate state $\widehat \psi$ has fidelity $F\approx 0.0037$. The whole computation cost about $15$ hours on a computational cluster with $512$ GPUs. The obtained one million samples, the contraction code, and the contraction order are made public. If our algorithm could be implemented with high efficiency on a modern supercomputer with ExaFLOPS performance, we estimate that ideally the simulation would cost a few dozen seconds, which is faster than Google's quantum hardware.
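The final sampling step, once amplitudes are in hand, can be sketched as below; the tensor-network contraction that produces the amplitudes is the expensive part and is not reproduced here. Names and the toy amplitudes are illustrative, not the authors' published code:

```python
import numpy as np

def sample_bitstrings(amplitudes, n_samples, rng=None):
    """Draw bitstring indices from P(s) = |psi(s)|^2, given
    amplitudes for a (sub)set of bitstrings. Probabilities are
    renormalized over the computed batch, so this only approximates
    the full distribution."""
    rng = rng or np.random.default_rng()
    probs = np.abs(amplitudes)**2
    probs /= probs.sum()
    return rng.choice(len(amplitudes), size=n_samples, p=probs)

# Toy example with 8 random amplitudes standing in for the
# tensor-network output (the real computation is the hard part).
amps = np.random.randn(8) + 1j * np.random.randn(8)
print(sample_bitstrings(amps, 5))
```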


Topics: Qubit (51%)

3 Citations


Open access · Posted Content
Abstract: A major challenge in the development of near-term quantum computers is to characterize the underlying quantum noise with minimal hardware requirements. We show that random circuit sampling (RCS) is a powerful benchmarking primitive that can be used to efficiently extract the total amount of quantum noise of a many-qubit system by creating an exponential decay of fidelity. Compared with randomized benchmarking (RB) and its scalable variants, RCS benchmarking has the unique advantage of being flexible with the gate set, as the fidelity decay in RCS benchmarking comes from scrambling properties of random quantum circuits, which hold for generic gate sets and do not rely on any group structure. First, under a first-order approximation in the noise rate, we rigorously prove the exponential decay of average fidelity of low-depth noisy random circuits, using a technique that maps random quantum circuits to a classical spin model. Second, we use the unbiased linear cross entropy as a sample-efficient estimator of fidelity and numerically verify its correctness by simulating up to 20 qubits with different noise models. Third, we develop a theoretical model of the total variance of RCS benchmarking, which generalizes previous statistical analysis of cross entropy by considering the variance across different circuits. Finally, we experimentally demonstrate RCS benchmarking on IBM Quantum hardware with up to 20 superconducting qubits, which verifies our theoretical analysis. We expect RCS benchmarking to be widely applicable across different hardware platforms due to its flexibility with the gate set and architecture.
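The headline quantity, an exponential decay of fidelity with circuit depth, can be extracted by a simple fit. A sketch on synthetic data (the function name and data are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fit_noise_rate(depths, fidelities):
    """Fit F(d) = A * exp(-lambda * d) by linear regression on
    log F; lambda estimates the total noise per circuit layer."""
    slope, intercept = np.polyfit(depths, np.log(fidelities), 1)
    return -slope, np.exp(intercept)

# Synthetic fidelity-vs-depth data with lambda = 0.05 per cycle.
depths = np.arange(2, 22, 2)
fids = 0.98 * np.exp(-0.05 * depths)
lam, amp = fit_noise_rate(depths, fids)
print(f"lambda ~ {lam:.3f}, A ~ {amp:.3f}")
```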


Topics: Quantum computer (58%), Qubit (55%), Quantum noise (54%)

3 Citations


References

20 results found


Journal Article · DOI: 10.1103/REVMODPHYS.53.385
T. A. Brody, Jorge Flores, J. B. French, Pier A. Mello +2 more · Institutions (3)
Abstract: It now appears that the general nature of the deviations from uniformity in the spectrum of a complicated nucleus is essentially the same in all regions of the spectrum and over the entire Periodic Table. This behavior, moreover, is describable in terms of standard Hamiltonian ensembles which could be generated on the basis of simple information-theory concepts, and which give also a good account of fluctuation phenomena of other kinds and, apparently, in other many-body systems besides nuclei. The main departures from simple behavior are ascribable to the moderation of the level repulsion by effects due to symmetries and collectivities, for the description of which more complicated ensembles are called for. One purpose of this review is to give a self-contained account of the theory, using methods, sometimes approximate, which are consonant with the usual theory of stochastic processes. Another purpose is to give a proper foundation for the use of ensemble theory, to make clear the origin of the simplicities in the observable fluctuations, and to derive other general fluctuation results. In comparing theory and experiment, the authors give an analysis of much of the nuclear-energy-level data, as well as an extended discussion of observable effects in nuclear transitions and reactions and in the low-temperature thermodynamics of aggregates of small metallic particles.
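The level repulsion discussed here is easy to illustrate numerically: nearest-neighbor eigenvalue spacings of a random real symmetric (GOE) matrix are well approximated by the Wigner surmise $P(s) = (\pi/2)\, s\, e^{-\pi s^2/4}$. A minimal sketch, with a crude stand-in for spectral unfolding:

```python
import numpy as np

def goe_spacings(n, rng=None):
    """Nearest-neighbor level spacings of one n x n GOE matrix,
    normalized to unit mean (level repulsion: P(s) -> 0 as s -> 0)."""
    rng = rng or np.random.default_rng()
    a = rng.standard_normal((n, n))
    h = (a + a.T) / 2                      # real symmetric (GOE)
    levels = np.linalg.eigvalsh(h)
    mid = levels[n // 4 : 3 * n // 4]      # central levels, roughly flat density
    s = np.diff(mid)
    return s / s.mean()

# The Wigner surmise approximates the GOE spacing distribution;
# its mean is 1 and its median is ~0.94, unlike an exponential.
s = goe_spacings(1000)
print(s.mean(), np.median(s))
```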


Topics: Statistical mechanics (52%), Random matrix (52%), Observable (51%)

1,510 Citations


Journal Article · DOI: 10.1103/PHYSREV.104.483
Charles E. Porter, R. G. Thomas · Institutions (2)
15 Oct 1956 · Physical Review
Abstract: The fluctuations of the neutron reduced widths from the resonance region of intermediate and heavy nuclei have been analyzed by a statistical procedure which is based on the method of maximum likelihood. It is found that a chi-squared distribution with one degree of freedom is quite consistent with the data while a chi-squared distribution with two degrees of freedom (an exponential distribution) is not. The former distribution corresponds to a Gaussian distribution for the reduced-width amplitude, and a plausibility argument is given for it which is based on the consideration of the matrix elements for neutron emission from the compound nucleus and of the central limit theorem of statistics. This argument also suggests that within the framework of the compound-nucleus theory all reduced-width amplitudes have Gaussian distributions, and that many of the distributions for the various channels may be independent. One consequence of the latter suggestion is that the total radiation width for a given spin state which is formed in neutron capture will be essentially constant, in agreement with some observations, because it is the sum of many partial radiation widths. The fluctuations of the provisional fission widths of ${\mathrm{U}}^{235}$ are best described by a chi-squared distribution with about 2.5 degrees of freedom, indicating that there are effectively only a few independently contributing fission channels.
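The central claim, that Gaussian reduced-width amplitudes give widths following a chi-squared distribution with one degree of freedom, is simple to verify numerically. A sketch (the reference values quoted in the print statement are standard properties of $\chi^2_1$):

```python
import numpy as np

def porter_thomas_check(n_samples=100_000, rng=None):
    """Gaussian reduced-width amplitudes g give widths w = g^2 that
    follow a chi-squared distribution with one degree of freedom;
    an exponential (chi-squared with 2 d.o.f.) has a different shape."""
    rng = rng or np.random.default_rng()
    g = rng.standard_normal(n_samples)
    w = g**2 / np.mean(g**2)          # widths, normalized to unit mean
    # chi^2(1) diverges as w -> 0, so very small widths are common:
    return np.mean(w < 0.1), np.var(w)

frac_small, var = porter_thomas_check()
print(f"P(w < 0.1) ~ {frac_small:.3f} (chi^2_1: ~0.248), var ~ {var:.2f} (chi^2_1: 2)")
```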


Topics: Neutron (55%), Neutron emission (55%), Exponential distribution (54%)

737 Citations


Open access · Journal Article · DOI: 10.1073/PNAS.1312486110
Florent Krzakala, Cristopher Moore, Elchanan Mossel, Joe Neeman +3 more · Institutions (4)
Abstract: Spectral algorithms are classic approaches to clustering and community detection in networks. However, for sparse networks the standard versions of these algorithms are suboptimal, in some cases completely failing to detect communities even when other algorithms such as belief propagation can do so. Here, we present a class of spectral algorithms based on a nonbacktracking walk on the directed edges of the graph. The spectrum of this operator is much better behaved than that of the adjacency matrix or other commonly used matrices, maintaining a strong separation between the bulk eigenvalues and the eigenvalues relevant to community structure even in the sparse case. We show that our algorithm is optimal for graphs generated by the stochastic block model, detecting communities all the way down to the theoretical limit. We also show the spectrum of the nonbacktracking operator for some real-world networks, illustrating its advantages over traditional spectral clustering.
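A minimal construction of the nonbacktracking operator described here, on the directed edges of a toy two-community graph; the dense double loop is illustrative, not an efficient implementation:

```python
import numpy as np

def nonbacktracking_matrix(edges):
    """Nonbacktracking operator B on the 2m directed edges of an
    undirected graph: B[(u->v),(v->w)] = 1 iff w != u."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    b = np.zeros((len(directed), len(directed)))
    for (u, v), i in index.items():
        for (x, w), j in index.items():
            if x == v and w != u:          # walk continues, no backtrack
                b[i, j] = 1.0
    return b, directed

# Two triangles joined by one edge: the leading eigenvalues of B
# separate from the bulk and reflect the two communities.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
b, _ = nonbacktracking_matrix(edges)
print(np.sort(np.linalg.eigvals(b).real)[-3:])
```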


Topics: Stochastic block model (61%), Cluster analysis (59%), Spectral clustering (58%)

621 Citations


Open access · Journal Article · DOI: 10.1038/S41567-018-0124-X
Sergio Boixo, Sergei V. Isakov, Vadim Smelyanskiy, Ryan Babbush +6 more · Institutions (4)
23 Apr 2018 · Nature Physics
Abstract: A critical question for quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of supercomputers. Such a demonstration of what is referred to as quantum supremacy requires a reliable evaluation of the resources required to solve tasks with classical approaches. Here, we propose the task of sampling from the output distribution of random quantum circuits as a demonstration of quantum supremacy. We extend previous results in computational complexity to argue that this sampling task must take exponential time in a classical computer. We introduce cross-entropy benchmarking to obtain the experimental fidelity of complex multiqubit dynamics. This can be estimated and extrapolated to give a success metric for a quantum supremacy demonstration. We study the computational cost of relevant classical algorithms and conclude that quantum supremacy can be achieved with circuits in a two-dimensional lattice of 7 × 7 qubits and around 40 clock cycles. This requires an error rate of around 0.5% for two-qubit gates (0.05% for one-qubit gates), and it would demonstrate the basic building blocks for a fault-tolerant quantum computer. As a benchmark for the development of a future quantum computer, sampling from random quantum circuits is suggested as a task that will lead to quantum supremacy—a calculation that cannot be carried out classically.
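The statistical basis of the cross-entropy benchmarking introduced here is that output probabilities of sufficiently deep random circuits approach the exponential (Porter-Thomas) distribution. A sketch using a Haar-random state as a stand-in for a deep random circuit:

```python
import numpy as np

def random_state_probs(n_qubits, rng=None):
    """Rescaled output probabilities 2^n * p(s) of a Haar-random
    n-qubit state are approximately exponentially distributed
    (Porter-Thomas), which cross-entropy benchmarking exploits."""
    rng = rng or np.random.default_rng()
    dim = 2**n_qubits
    psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    psi /= np.linalg.norm(psi)
    return dim * np.abs(psi)**2        # rescaled so the mean is 1

p = random_state_probs(12)
# For an exponential distribution, mean = 1 and variance = 1.
print(p.mean(), p.var())
```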


Topics: Quantum information (69%), Quantum computer (68%), Quantum simulator (65%)

567 Citations


Open access · Journal Article · DOI: 10.1137/050644756
Igor L. Markov, Yaoyun Shi · Institutions (1)
Abstract: The treewidth of a graph is a useful combinatorial measure of how close the graph is to a tree. We prove that a quantum circuit with $T$ gates whose underlying graph has a treewidth $d$ can be simulated deterministically in $T^{O(1)}\exp[O(d)]$ time, which, in particular, is polynomial in $T$ if $d=O(\log T)$. Among many implications, we show efficient simulations for log-depth circuits whose gates apply to nearby qubits only, a natural constraint satisfied by most physical implementations. We also show that one-way quantum computation of Raussendorf and Briegel (Phys. Rev. Lett., 86 (2001), pp. 5188-5191), a universal quantum computation scheme with promising physical implementations, can be efficiently simulated by a randomized algorithm if its quantum resource is derived from a small-treewidth graph with a constant maximum degree. (The requirement on the maximum degree was removed in [I. L. Markov and Y. Shi, preprint:quant-ph/0511069].)
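In practice, the treewidth bound shows up through contraction-order optimizers, which keep intermediate tensors small on low-treewidth networks. A sketch using the opt_einsum library (assumed installed; a matrix chain is a treewidth-friendly easy case):

```python
import numpy as np
import opt_einsum as oe  # assumption: opt_einsum is available

# A chain of matrices forms a tree-like (low-treewidth) tensor network,
# so a good contraction order keeps every intermediate tensor small;
# the path optimizer finds such an order automatically.
a, b, c, d = (np.random.rand(50, 50) for _ in range(4))
path, info = oe.contract_path('ij,jk,kl,lm->im', a, b, c, d)
print(path)                       # pairwise contraction order
print(info.opt_cost)              # estimated total FLOP count
print(info.largest_intermediate)  # max intermediate size, treewidth-driven
```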


Topics: Quantum algorithm (63%), Tree decomposition (62%), Degree (graph theory) (62%)

308 Citations


Performance Metrics
No. of citations received by the paper in previous years:

Year    Citations
2021    20
2020    2