Open Access · Journal Article · DOI

Quantum variational algorithms are swamped with traps

Eric R. Anschuetz, +1 more
- 11 May 2022
- Vol. 13, Iss. 1
TLDR
It is shown that a wide class of variational quantum models has only a superpolynomially small fraction of local minima within any constant energy from the global minimum, rendering them untrainable if no good initial guess of the optimal parameters is known.
Abstract
Abstract One of the most important properties of classical neural networks is how surprisingly trainable they are, though their training algorithms typically rely on optimizing complicated, nonconvex loss functions. Previous results have shown that unlike the case in classical neural networks, variational quantum models are often not trainable. The most studied phenomenon is the onset of barren plateaus in the training landscape of these quantum models, typically when the models are very deep. This focus on barren plateaus has made the phenomenon almost synonymous with the trainability of quantum models. Here, we show that barren plateaus are only a part of the story. We prove that a wide class of variational quantum models—which are shallow, and exhibit no barren plateaus—have only a superpolynomially small fraction of local minima within any constant energy from the global minimum, rendering these models untrainable if no good initial guess of the optimal parameters is known. We also study the trainability of variational quantum algorithms from a statistical query framework, and show that noisy optimization of a wide variety of quantum models is impossible with a sub-exponential number of queries. Finally, we numerically confirm our results on a variety of problem instances. Though we exclude a wide variety of quantum algorithms here, we give reason for optimism for certain classes of variational algorithms and discuss potential ways forward in showing the practical utility of such algorithms.


Citations

Synergy Between Quantum Circuits and Tensor Networks: Short-cutting the Race to Practical Quantum Advantage

TL;DR: This work introduces a scalable procedure for harnessing classical computing resources to determine task-specific initializations of parametrized quantum circuits (PQCs), effectively translating increases in classical resources into enhanced performance and speed in training quantum circuits.

Quantum algorithm for ground state energy estimation using circuit depth with exponentially improved dependence on precision

TL;DR: This work develops and analyzes ground state energy estimation algorithms that use just one auxiliary qubit and whose circuit depths scale as O(1/Δ · polylog(Δ/ε)), where Δ ≥ ε is a lower bound on the energy gap of the Hamiltonian.
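For context on why this counts as an exponential improvement in the precision dependence, compare against the textbook quantum-phase-estimation depth (the O(1/ε) baseline below is standard background, not a claim taken from this summary):

```latex
% Circuit depth to estimate the ground state energy to additive precision \epsilon,
% given a lower bound \Delta \ge \epsilon on the spectral gap:
\[
  D_{\mathrm{QPE}} = O\!\left(\frac{1}{\epsilon}\right)
  \qquad \text{vs.} \qquad
  D = O\!\left(\frac{1}{\Delta}\,\operatorname{polylog}\!\left(\frac{\Delta}{\epsilon}\right)\right),
\]
% i.e., the dependence on the precision \epsilon improves from polynomial
% to polylogarithmic, at the cost of a gap assumption.
```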
Journal Article · DOI

Theoretical Guarantees for Permutation-Equivariant Quantum Neural Networks

TL;DR: This work provides the first theoretical guarantees for equivariant quantum neural networks (QNNs), indicating the power and potential of geometric quantum machine learning (GQML).

Avoiding barren plateaus via transferability of smooth solutions in a Hamiltonian variational ansatz

TL;DR: In this paper, the authors show that iterative search schemes can efficiently prepare the ground state of paradigmatic quantum many-body models while also circumventing the barren plateau phenomenon.

Training Variational Quantum Circuits with CoVaR: Covariance Root Finding with Classical Shadows

Gregory Boyd, +1 more
- 18 Apr 2022
TL;DR: The most remarkable feature of the CoVaR approach is that it fully exploits powerful classical shadow techniques: the authors simultaneously estimate a very large number of covariances, in direct analogy to the stochastic gradient-based optimisations of central importance in classical machine learning.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
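Since several of the works above train variational circuits with this optimizer, a minimal NumPy sketch of the Adam update may help. The default hyperparameters (lr = 0.001, β₁ = 0.9, β₂ = 0.999, ε = 1e-8) follow the paper's recommendations; the toy quadratic usage example is our own.

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient g at step t >= 1."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction for the zero initialization
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([3.0, -2.0])
m = v = np.zeros_like(x)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(x)  # close to [0, 0]
```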
Posted Content

PyTorch: An Imperative Style, High-Performance Deep Learning Library

TL;DR: PyTorch is a machine learning library that provides an imperative, Pythonic programming style that makes debugging easy and stays consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs.
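A minimal example of the imperative style the summary refers to, using only core PyTorch calls (torch.randn, backward, .grad):

```python
import torch

# Tensors are created and combined eagerly, with ordinary Python control flow;
# the branch actually taken determines what autograd records.
x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum() if x.sum() > 0 else 2.0 * (x ** 2).sum()
y.backward()      # reverse-mode autodiff through the operations just executed
print(x.grad)     # 2*x or 4*x, depending on the branch taken above
```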
Journal Article · DOI

No free lunch theorems for optimization

TL;DR: A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving and a number of "no free lunch" (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
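The theorem can be checked exhaustively at toy scale. The sketch below (our illustration, not the paper's formal setup) enumerates all objective functions f: X → {0, 1} on a three-point domain and shows that two different fixed search orders need the same number of queries, on average, to first hit the maximum:

```python
from itertools import product

X = [0, 1, 2]
orders = {"left_to_right": [0, 1, 2], "right_to_left": [2, 1, 0]}

for name, order in orders.items():
    total_queries = 0
    for values in product([0, 1], repeat=len(X)):  # all 8 possible objectives
        f = dict(zip(X, values))
        best = max(values)
        # count queries until this search order first sees the maximal value
        for i, x in enumerate(order, start=1):
            if f[x] == best:
                total_queries += i
                break
    print(name, "average queries to find the max:", total_queries / 8)
```

Both orders average 1.5 queries: once performance is averaged over every possible objective, no search strategy beats another.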
Proceedings Article · DOI

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
Journal Article · DOI

Quantum Computing in the NISQ era and beyond

TL;DR: Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future; NISQ devices will be useful tools for exploring many-body quantum physics and may have other useful applications.