
Showing papers on "Quantum complexity theory published in 2002"


Journal ArticleDOI
TL;DR: It is shown that two-bit operations characterized by 4 × 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time.
Abstract: A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a significant class of these quantum computations can be simulated classically in polynomial time. In particular we show that two-bit operations characterized by 4 × 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time. This contrasts with the known universality of two-bit operations and demonstrates that efficient quantum computation of restricted classes is reconcilable with the Polynomial Time Turing Hypothesis. The techniques introduced bring the quantum computational model within the realm of algebraic complexity theory. In a manner consistent with one view of quantum physics, the wave function is simulated deterministically, and randomization arises only in the course of making measurements. The results generalize the quantum model in that they do not require the matrices to be unitary. In a different direction these techniques also yield deterministic polynomial time algorithms for the decision and parity problems for certain classes of read-twice Boolean formulae. All our results are based on the use of gates that are defined in terms of their graph matching properties.

336 citations


Posted Content
TL;DR: Aharonov and Naveh as mentioned in this paper showed that a natural extension of 3-SAT, namely local Hamiltonians, is QMA complete, based on the classical Cook-Levin proof of the NP completeness of SAT.
Abstract: We describe Kitaev's result from 1999, in which he defines the complexity class QMA, the quantum analog of the class NP, and shows that a natural extension of 3-SAT, namely local Hamiltonians, is QMA complete. The result builds upon the classical Cook-Levin proof of the NP completeness of SAT, but differs from it in several fundamental ways, which we highlight. This result raises a rich array of open problems related to quantum complexity, algorithms and entanglement, which we state at the end of this survey. This survey is the extension of lecture notes taken by Naveh for Aharonov's quantum computation course, held at Tel Aviv University, 2001.

197 citations


Journal ArticleDOI
TL;DR: It is proved that for a broad class of protocols the entangled state can enhance the efficiency of solving the problem in the quantum protocol over any classical one if and only if the state violates Bell's inequality for two qutrits.
Abstract: We formulate a two-party communication complexity problem and present its quantum solution that exploits the entanglement between two qutrits. We prove that for a broad class of protocols the entangled state can enhance the efficiency of solving the problem in the quantum protocol over any classical one if and only if the state violates Bell's inequality for two qutrits.

160 citations


Proceedings ArticleDOI
19 May 2002
TL;DR: A lower bound of Ω(n^{1/5}) for the number of queries needed by a quantum computer to solve the collision problem with bounded error probability was shown in this paper.
Abstract: The collision problem is to decide whether a function X: {1,…,n} → {1,…,n} is one-to-one or two-to-one, given that one of these is the case. We show a lower bound of Ω(n^{1/5}) on the number of queries needed by a quantum computer to solve this problem with bounded error probability. The best known upper bound is O(n^{1/3}), but obtaining any lower bound better than Ω(1) had been an open problem since 1997. Our proof uses the polynomial method augmented by some new ideas. We also give a lower bound of Ω(n^{1/7}) for the problem of deciding whether two sets are equal or disjoint on a constant fraction of elements. Finally we give implications of these results for quantum complexity theory.
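For concreteness, the classical side of this query gap can be sketched as a birthday-bound sampler: roughly √n random queries find a collision with constant probability when the function is two-to-one. The sketch below is illustrative only and is not taken from the paper; all names are ours.

```python
import random

def is_two_to_one(f, n, trials=None):
    """Distinguish a one-to-one from a two-to-one function
    f: {0,...,n-1} -> {0,...,n-1}, promised to be one of the two.
    Sampling about sqrt(n) random points finds a collision with
    constant probability when f is two-to-one (birthday bound)."""
    if trials is None:
        trials = 4 * int(n ** 0.5) + 1
    seen = {}
    for _ in range(trials):
        x = random.randrange(n)
        y = f(x)
        if y in seen and seen[y] != x:
            return True        # collision found: f is two-to-one
        seen[y] = x
    return False               # no collision: presume one-to-one

random.seed(7)
assert not is_two_to_one(lambda x: x, 1024)    # injective: never collides
assert is_two_to_one(lambda x: x // 2, 1024)   # paired-up: collision w.h.p.
```

The quantum algorithm behind the O(n^{1/3}) upper bound (Brassard-Høyer-Tapp) improves on this by combining sampling with Grover search.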

141 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study summation of sequences and integration in the quantum model of computation and develop quantum algorithms for computing the mean of sequences that satisfy a p-summability condition and for integration of functions from Lebesgue spaces.

141 citations


Journal ArticleDOI
TL;DR: Several results relating time-bounded C, CD, and CND complexity and their applications to a variety of questions in computational complexity theory are shown, including showing how to approximate the size of a set using CD complexity without using the random string as needed in Sipser's earlier proof of a similar result.
Abstract: We take a fresh look at CD complexity, where CDt(x) is the size of the smallest program that distinguishes x from all other strings in time t(|x|). We also look at CND complexity, a new nondeterministic variant of CD complexity, and time-bounded Kolmogorov complexity, denoted by C complexity. We show several results relating time-bounded C, CD, and CND complexity and their applications to a variety of questions in computational complexity theory, including the following: Showing how to approximate the size of a set using CD complexity without using the random string as needed in Sipser's earlier proof of a similar result. Also, we give a new simpler proof of this result of Sipser's. Improving these bounds for almost all strings, using extractors. A proof of the Valiant-Vazirani lemma directly from Sipser's earlier CD lemma. A relativized lower bound for CND complexity. Exact characterizations of equivalences between C, CD, and CND complexity. Showing that satisfying assignments of a satisfiable Boolean formula can be enumerated in time polynomial in the size of the output if and only if a unique assignment can be found quickly. This answers an open question of Papadimitriou. A new Kolmogorov complexity-based proof that BPP ⊆ Σ_2^p. New Kolmogorov complexity-based constructions of the following relativized worlds: There exists an infinite set in P with no sparse infinite NP subsets. EXP = NEXP but there exists a NEXP machine whose accepting paths cannot be found in exponential time. Satisfying assignments cannot be found with nonadaptive queries to SAT.

64 citations


Journal Article
TL;DR: In this article, a lower bound of Ω(√n) on the bounded-error quantum query complexity of read-once Boolean functions is established via an inductive argument, together with an extension of the lower bound method of Ambainis.
Abstract: We establish a lower bound of Ω(√n) on the bounded-error quantum query complexity of read-once Boolean functions. The result is proved via an inductive argument, together with an extension of a lower bound method of Ambainis. Ambainis' method involves viewing a quantum computation as a mapping from inputs to quantum states (unit vectors in a complex inner-product space) which changes as the computation proceeds. Initially the mapping is constant (the state is independent of the input). If the computation evaluates the function f then at the end of the computation the two states associated with any f-distinguished pair of inputs (having different f values) are nearly orthogonal. Thus the inner product of their associated states must have changed from 1 to nearly 0. For any set of f-distinguished pairs of inputs, the sum of the inner products of the corresponding pairs of states must decrease significantly during the computation. By deriving an upper bound on the decrease in such a sum, during a single step, for a carefully selected set of input pairs, one can obtain a lower bound on the number of steps. We extend Ambainis' bound by considering general weighted sums of f-distinguished pairs. We then prove our result for read-once functions by induction on the number of variables, where the induction step involves a careful choice of weights depending on f to optimize the lower bound attained.

58 citations


Journal ArticleDOI
01 May 2002
TL;DR: The theory of computational complexity has some interesting links to physics, in particular to quantum computing and statistical mechanics as mentioned in this paper, and the article contains an informal introduction to this theory and its connections to physics.
Abstract: The theory of computational complexity has some interesting links to physics, in particular to quantum computing and statistical mechanics. The article contains an informal introduction to this theory and its links to physics.

48 citations


Posted Content
TL;DR: This article considers the amount of communication that is required to transform a bipartite state into another, typically more entangled, state and obtains lower bounds in this setting by studying the Renyi entropy of the marginal density matrices of the distributed system.
Abstract: In this article we establish new bounds on the quantum communication complexity of distributed problems. Specifically, we consider the amount of communication that is required to transform a bipartite state into another, typically more entangled, state. We obtain lower bounds in this setting by studying the Renyi entropy of the marginal density matrices of the distributed system. The communication bounds on quantum state transformations also imply lower bounds for the model of communication complexity where the task consists of the distributed evaluation of a function f(x,y). Our approach encapsulates several known lower bound methods that use the log-rank or the von Neumann entropy of the density matrices involved. The technique is also effective for proving lower bounds on problems involving a promise or for which the "hard" distributions of inputs are correlated. As examples, we show how to prove a nearly tight bound on the bounded-error quantum communication complexity of the inner product function in the presence of unlimited amounts of EPR-type entanglement and a similarly strong bound on the complexity of the shifted quadratic character problem.
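The central quantity here, the Renyi entropy of a marginal density matrix, is easy to evaluate directly for small examples. The following sketch (ours, not from the paper) computes it for the reduced state of a bipartite pure state.

```python
import numpy as np

def renyi_entropy(rho, alpha=2.0):
    """Renyi-alpha entropy: S_a(rho) = log2(Tr[rho^a]) / (1 - a)."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]          # drop numerical zeros
    return float(np.log2(np.sum(eigs ** alpha)) / (1.0 - alpha))

def marginal(state, dim_a, dim_b):
    """Reduced density matrix of subsystem A of a bipartite pure state."""
    psi = np.asarray(state).reshape(dim_a, dim_b)
    return psi @ psi.conj().T

# Maximally entangled qubit pair: each marginal is I/2, so S_2 = 1 bit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(renyi_entropy(marginal(bell, 2, 2)))   # ≈ 1.0
```

For a product state the marginal is pure and every Renyi entropy is 0, which is why these entropies can witness that communication is needed to create entanglement.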

47 citations


Book ChapterDOI
14 Mar 2002
TL;DR: A basic complexity theory is developed for the parameterized analogues of classical complexity classes, giving, among other things, complete problems and logical descriptions for all these classes.
Abstract: We describe parameterized complexity classes by means of classical complexity theory and descriptive complexity theory. For every classical complexity class we introduce a parameterized analogue in a natural way. In particular, the analogue of polynomial time is the class of all fixed-parameter tractable problems. We develop a basic complexity theory for the parameterized analogues of classical complexity classes and give, among other things, complete problems and logical descriptions. We then show that most of the well-known intractable parameterized complexity classes are not analogues of classical classes. Nevertheless, for all these classes we can provide natural logical descriptions.
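The notion of fixed-parameter tractability invoked above is commonly illustrated by the bounded-search-tree algorithm for k-Vertex Cover, which runs in O(2^k · |E|) time. This is a standard textbook example, not code from the paper.

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of
    size <= k.  Bounded search tree: any edge (u, v) must have an
    endpoint in the cover, so branch on both choices.  Time O(2^k |E|):
    polynomial for every fixed k, i.e. fixed-parameter tractable."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]   # put u in the cover
    rest_v = [e for e in edges if v not in e]   # put v in the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

triangle = [(0, 1), (1, 2), (2, 0)]
assert not vertex_cover(triangle, 1)   # a triangle needs 2 vertices
assert vertex_cover(triangle, 2)
```

The exponential cost is confined to the parameter k, which is exactly what the parameterized analogue of polynomial time captures.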

45 citations


Journal ArticleDOI
TL;DR: This work considers a quantum computer consisting of n spins with an arbitrary but fixed pair-interaction Hamiltonian and describes how to simulate other pair- interactions by interspersing the natural time evolution with fast local transformations.
Abstract: We consider a quantum computer consisting of n spins with an arbitrary but fixed pair-interaction Hamiltonian and describe how to simulate other pair-interactions by interspersing the natural time evolution with fast local transformations. Calculating the minimal time overhead of such a simulation leads to a convex optimization problem. Lower and upper bounds on the minimal time overhead are derived in terms of chromatic indices of interaction graphs and spectral majorization criteria. These results classify Hamiltonians with respect to their computational power. For a specific Hamiltonian, namely σz ⊗ σz-interactions between all spins, the optimization is mathematically equivalent to a separability problem of n-qubit density matrices. We compare the complexity defined by such a quantum computer with the usual gate complexity.
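The chromatic-index bound has a concrete reading: a proper edge coloring of the interaction graph groups pair-interactions into time slices in which no spin is addressed twice. A minimal greedy sketch (illustrative only, not the paper's optimization; greedy may use up to 2Δ−1 colors, while Vizing's theorem guarantees Δ or Δ+1 exist):

```python
def greedy_edge_coloring(edges):
    """Proper edge coloring: edges sharing a vertex get distinct colors.
    Each color class is then a matching -- in the simulation picture, a
    set of pair-interactions that can run in the same time slice."""
    color = {}
    for u, v in edges:
        used = {c for f, c in color.items() if u in f or v in f}
        c = 0
        while c in used:
            c += 1
        color[(u, v)] = c
    return color

# A 4-cycle has chromatic index 2, and greedy attains it here.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
coloring = greedy_edge_coloring(cycle)
print(max(coloring.values()) + 1)   # 2
```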

Posted Content
15 Nov 2002
TL;DR: In this article, the authors propose a multivalued solution to the many-body problem, which reveals the true, complex-dynamical basis of solid-state dynamics, including the origin and internal dynamics of macroscopic quantum states.
Abstract: Any real interaction process produces many incompatible system versions, or realisations, giving rise to omnipresent dynamic randomness and universally defined complexity (arXiv:physics/9806002). Since quantum behaviour dynamically emerges as the lowest complexity level (arXiv:quant-ph/9902016), quantum interaction randomness can only be relatively strong, which reveals the causal origin of quantum indeterminacy (arXiv:quant-ph/9511037) and true quantum chaos (arXiv:quant-ph/9511035), but rigorously excludes the possibility of unitary quantum computation, even in an "ideal", noiseless system. Any real computation is an internally chaotic (multivalued) process of system complexity development occurring in different regimes. Unitary quantum machines, including their postulated "magic", cannot be realised as such because their dynamically single-valued scheme is incompatible with the irreducibly high dynamic randomness at quantum complexity levels and should be replaced by explicitly chaotic, intrinsically creative machines already realised in living organisms and providing their quite different, realistic kind of magic. The related concepts of reality-based, complex-dynamical nanotechnology, biotechnology and intelligence are outlined, together with the ensuing change in research strategy. The unreduced, dynamically multivalued solution to the many-body problem reveals the true, complex-dynamical basis of solid-state dynamics, including the origin and internal dynamics of macroscopic quantum states. The critical, "end-of-science" state of unitary knowledge and the way to positive change are causally specified within the same, universal concept of complexity.

Posted Content
TL;DR: This paper proposes a definition for (honest verifier) quantum statistical zero-knowledge interactive proof systems and studies the resulting complexity class, which is denote QSZK, and proves several facts regarding this class.
Abstract: In this paper we propose a definition for (honest verifier) quantum statistical zero-knowledge interactive proof systems and study the resulting complexity class, which we denote QSZK. We prove several facts regarding this class that establish close connections between classical statistical zero-knowledge and our definition for quantum statistical zero-knowledge, and give some insight regarding the effect of this zero-knowledge restriction on quantum interactive proof systems.

Proceedings ArticleDOI
22 Jul 2002
TL;DR: It is shown that algorithms using spatial-temporal continuous elementary instructions (α-recursive functions) represent not only a new world in computing, but also a more general type of logic inferencing.
Abstract: Computational complexity and computer complexity issues are studied in different architectural settings. Three mathematical machines are considered: the universal machine on integers (UMZ), the universal machine on reals (UMR) and the universal machine on flows (UMF). The three machines induce different kinds of computational difficulties: combinatorial, algebraic, and dynamic, respectively. After a broader overview of computational complexity issues, it is shown, following the reasoning related to the UMR, that in many cases the size is not the most important parameter related to computational complexity. Emerging new computing and computer architectures, as well as their physical implementation, suggest a new look at computational and computer complexities. The new analogic cellular array computer paradigm based on the CNN Universal Machine (generalized to the UMF), and its physical implementation in CMOS and optical technologies, proves experimentally the relevance of accuracy and of problem parameters in computational complexity, as well as the need for a rigorous definition of computational complexity for the UMF. It is also shown that the choice of spatial-temporal elementary instructions, and the accounting of area and power dissipation, inherently influence computational complexity and computer complexity, respectively. Comments on the relevance of the UMF to biology are presented in relation to complexity theory. It is shown that algorithms using spatial-temporal continuous elementary instructions (α-recursive functions) represent not only a new world in computing, but also a more general type of logic inferencing.

Journal ArticleDOI
TL;DR: It is shown that the computational mechanics approach is suitable for analyzing the dynamic complexity of molecular systems and offers new insight into the process.
Abstract: Methods for the calculation of complexity have been investigated as a possible alternative for the analysis of the dynamics of molecular systems. "Computational mechanics" is the approach chosen to describe emergent behavior in molecular systems that evolve in time. A novel algorithm has been developed for symbolization of a continuous physical trajectory of a dynamic system. A method for calculating statistical complexity has been implemented and tested on representative systems. It is shown that the computational mechanics approach is suitable for analyzing the dynamic complexity of molecular systems and offers new insight into the process.
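As a toy version of the symbolization step, one can bin a continuous trajectory into discrete symbols and estimate the entropy of short symbol blocks, one basic ingredient of statistical-complexity estimates. The paper's actual algorithm is more elaborate; everything below is an illustrative stand-in with names of our choosing.

```python
import math

def symbolize(trajectory, n_bins=2):
    """Bin a continuous trajectory into n_bins symbols between its
    observed min and max (a crude stand-in for the paper's algorithm)."""
    lo, hi = min(trajectory), max(trajectory)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((x - lo) / width), n_bins - 1) for x in trajectory]

def block_entropy(symbols, block=2):
    """Shannon entropy (bits) of length-`block` words of the sequence."""
    counts = {}
    for i in range(len(symbols) - block + 1):
        w = tuple(symbols[i:i + block])
        counts[w] = counts.get(w, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

traj = [math.sin(0.3 * t) for t in range(200)]   # stand-in "trajectory"
syms = symbolize(traj, n_bins=2)
print(block_entropy(syms, block=2))
```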

Journal ArticleDOI
TL;DR: This paper represents and analyzes two important quantum algorithms, finding the hidden subgroup and Grover search, and mentions some pieces of "Fact" and "Folklore" associated to quantum computing.
Abstract: We represent and analyze two important quantum algorithms: finding the hidden subgroup and Grover search. As the analysis goes on, we mention some pieces of "Fact" and "Folklore" associated to quantum computing.
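Grover search is compact enough to simulate directly on a state vector; the sketch below (our own illustration, not from the paper) runs about (π/4)√N rounds of oracle plus inversion-about-the-mean and ends with nearly all probability on the marked item.

```python
import numpy as np

def grover(n_qubits, marked):
    """State-vector simulation of Grover search for one marked item out
    of N = 2**n_qubits.  After ~(pi/4)*sqrt(N) iterations of
    oracle + inversion-about-the-mean, the marked amplitude dominates."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))     # uniform superposition
    for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
        state[marked] *= -1.0                # oracle: flip marked sign
        state = 2.0 * state.mean() - state   # reflect about the mean
    return state

probs = np.abs(grover(6, marked=13)) ** 2
print(int(np.argmax(probs)))   # 13
```

The quadratic speedup is visible in the loop count: √N iterations rather than the ~N/2 expected queries of classical search.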

Book ChapterDOI
25 Aug 2002
TL;DR: This paper introduces the ∃Q-operator, an abstraction of QMA, and its complement, the ∀Q-operator, which not only define Quantum NP but also build a quantum hierarchy, similar to the Meyer-Stockmeyer polynomial hierarchy, based on two-sided bounded-error quantum computation.
Abstract: The complexity class NP is quintessential and ubiquitous in theoretical computer science. Two different approaches have been made to define “Quantum NP,” the quantum analogue of NP: NQP by Adleman, DeMarrais, and Huang, and QMA by Knill, Kitaev, and Watrous. From an operator point of view, NP can be viewed as the result of the ∃-operator applied to P. Recently, Green, Homer, Moore, and Pollett proposed its quantum version, called the N-operator, which is an abstraction of NQP. This paper introduces the ∃Q-operator, which is an abstraction of QMA, and its complement, the ∀Q-operator. These operators not only define Quantum NP but also build a quantum hierarchy, similar to the Meyer-Stockmeyer polynomial hierarchy, based on two-sided bounded-error quantum computation.

Journal ArticleDOI
TL;DR: The equivalence between inverting a permutation and that of constructing polynomial size quantum networks for reflection operators about a class of quantum states is proved.
Abstract: We discuss the question of the existence of quantum one-way permutations. First, we consider the question: if a state is difficult to prepare, is the reflection operator about that state difficult to construct? By revisiting Grover's algorithm, we present the relationship between this question and the existence of quantum one-way permutations. Next, we prove the equivalence between inverting a permutation and that of constructing polynomial size quantum networks for reflection operators about a class of quantum states. We will consider both the worst case and the average case complexity scenarios for this problem. Moreover, we compare our method to Grover's algorithm and discuss possible applications of our results.
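The reflection operator about a state |ψ⟩ is simply R = 2|ψ⟩⟨ψ| − I; for the uniform superposition this is exactly Grover's diffusion step. A small numerical check of its defining properties (our illustration, not the paper's construction):

```python
import numpy as np

def reflection_about(psi):
    """Reflection operator R = 2|psi><psi| - I about the state psi."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return 2.0 * np.outer(psi, psi.conj()) - np.eye(len(psi))

psi = np.array([1, 1, 1, 1]) / 2.0          # uniform state: Grover diffusion
R = reflection_about(psi)
assert np.allclose(R @ R, np.eye(4))        # an involution (R^2 = I)
assert np.allclose(R @ psi, psi)            # fixes |psi>
phi = np.array([1, -1, 0, 0]) / np.sqrt(2)  # orthogonal to |psi>
assert np.allclose(R @ phi, -phi)           # negates the complement
```

The paper's question is whether such an R admits a polynomial-size circuit when |ψ⟩ itself is hard to prepare; the dense matrix built here is of course exponential-size in the number of qubits.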

Journal ArticleDOI
TL;DR: This paper presents quantum algorithms for some famous NP problems in graph theory and combinatorial theory that are at least quadratically faster than the classical ones.
Abstract: It is known that quantum computers are more powerful than classical computers. In this paper we present quantum algorithms for some famous NP problems in graph theory and combinatorial theory; these quantum algorithms are at least quadratically faster than the classical ones.

Journal ArticleDOI
TL;DR: This paper considers problems over the real numbers which are related to Lyapunov theory for dynamical systems and introduces a notion of reducibility among problems and shows existence of complete problems for U and for PU, a polynomial hierarchy of continuous-time problems.
Abstract: Recent years have seen an increasing interest in the study of continuous-time computational models. However, not so much has been done with respect to setting up a complexity theoretic framework for such models. The present paper takes a step in this direction. We consider problems over the real numbers which we try to relate to Lyapunov theory for dynamical systems: the global minimizers of particular energy functions are supposed to give solutions of the problem. The structure of such energy functions leads to the introduction of problem classes U and NU; for the systems we are considering they parallel the classical complexity classes P and NP. We then introduce a notion of reducibility among problems and show existence of complete problems for NU and for PU, a polynomial hierarchy of continuous-time problems. For previous work on the computational capabilities of continuous-time systems see the surveys by Cris Moore [9] and by Pekka Orponen [10]. Our paper presents a step toward creating a general framework for a complexity theory of continuous-time systems as outlined in [10]. It is closely related to work done by A. Ben-Hur, H. Siegelmann, and S. Fishman [12, 11].

Journal ArticleDOI
TL;DR: It is shown how quantum computers avoid the sign problem in some cases by reducing the complexity from exponential to polynomial, based upon the use of isomorphisms of algebras.

Posted Content
TL;DR: A representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars, and an algebraic approach to high level Quantum Languages is developed using Quantum Assembly language and Quantum C language as examples.
Abstract: We show that a representation of Quantum Computers defines Quantum Turing Machines with associated Quantum Grammars. We then create examples of Quantum Grammars. Lastly, we develop an algebraic approach to high level Quantum Languages, using Quantum Assembly language and Quantum C language as examples.

Book ChapterDOI
30 May 2002
TL;DR: The use of complexity theory to study computational aspects of learning and combinatorial optimization in the context of neural networks and the PAC model of learning is considered, emphasizing some negative results based on complexity theoretic assumptions.
Abstract: We survey some relationships between computational complexity and neural network theory. Here, only networks of binary threshold neurons are considered. We begin by presenting some contributions of neural networks in structural complexity theory. In parallel complexity, the class TC^0_k of problems solvable by feed-forward networks with k levels and a polynomial number of neurons is considered. Separation results are recalled, and the relation between TC^0 = ∪_k TC^0_k and NC^1 is analyzed. In particular, under the conjecture TC^0 ⊊ NC^1, we characterize the class of regular languages accepted by feed-forward networks with a constant number of levels and a polynomial number of neurons. We also discuss the use of complexity theory to study computational aspects of learning and combinatorial optimization in the context of neural networks. We consider the PAC model of learning, emphasizing some negative results based on complexity theoretic assumptions. Finally, we discuss some results in the realm of neural networks related to a probabilistic characterization of NP.
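A binary threshold neuron of the kind considered here outputs 1 exactly when a weighted sum of its inputs reaches a threshold; a single such gate already computes MAJORITY, the canonical function placing threshold circuits (TC^0) above AND/OR circuits of constant depth. A minimal sketch (ours, for illustration):

```python
def threshold_neuron(weights, threshold):
    """Binary threshold neuron: output 1 iff the weighted input sum
    reaches the threshold."""
    return lambda xs: int(sum(w * x for w, x in zip(weights, xs)) >= threshold)

# MAJORITY on 5 bits from a single threshold gate -- a function in TC^0
# (here depth 1) that constant-depth AND/OR circuits cannot compute.
majority5 = threshold_neuron([1] * 5, 3)
assert majority5([1, 1, 1, 0, 0]) == 1
assert majority5([1, 0, 1, 0, 0]) == 0
```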

Posted Content
TL;DR: The results may be the first non-trivial quantum zero- knowledge proofs secure even against dishonest quantum verifiers, since the protocols are non-interactive, and thus the zero-knowledge property does not depend on whether the verifier in the protocol is honest or not.
Abstract: This paper introduces quantum analogues of non-interactive perfect and statistical zero-knowledge proof systems. As in the classical cases, it is shown that sharing randomness or entanglement is necessary for non-trivial protocols of non-interactive quantum perfect and statistical zero-knowledge. It is also shown that, with EPR pairs shared a priori, the class of languages having one-sided bounded error non-interactive quantum perfect zero-knowledge proof systems has a natural complete problem. Non-triviality of such a proof system is based on the fact, proved in this paper, that the Graph Non-Automorphism problem, which is not known to be in BQP, can be reduced to our complete problem. Our results may be the first non-trivial quantum zero-knowledge proofs secure even against dishonest quantum verifiers, since our protocols are non-interactive, and thus the zero-knowledge property does not depend on whether the verifier in the protocol is honest or not. A restricted version of our complete problem yields a natural complete problem for BQP.

Journal ArticleDOI
TL;DR: This paper introduces a general method of establishing tight linear inequalities between different types of predictive complexity, namely, logarithmic complexity, which coincides with a variant of Kolmogorov complexity, and square-loss complexity,which is interesting for applications.

Journal Article
TL;DR: In this paper, the reasons for the emergence of quantum neural computation and its characteristics are discussed first; then some typical computational models are introduced in detail, and several solutions to existing problems in these models are put forward.
Abstract: Quantum neural computation is a new paradigm based on the combination of classical neural computation and quantum theory. In this paper, the reasons for the emergence of quantum neural computation and its characteristics are discussed first; then some typical computational models are introduced in detail, and several solutions to existing problems in these models are put forward. Finally, other related questions are also discussed.

Journal ArticleDOI
TL;DR: A classical theorem by Ladner for the Turing model is examined in these different frameworks, asking how the complexity of this problem might change if one considers real data together with an algebraic model of computation instead of rational inputs together with the Turing machine model.

Journal ArticleDOI
TL;DR: This paper introduces two mathematical models of realistic quantum computation such as NMR (Nuclear Magnetic Resonance) quantum computation and defines bulk quantum Turing machine (BQTM) as a model of bulk quantum computation, and shows that BQTMs are polynomially related to ordinary QTMs as long as they are used to solve decision problems.
Abstract: In this paper, we introduce two mathematical models of realistic quantum computation. First, we develop a theory of bulk quantum computation such as NMR (Nuclear Magnetic Resonance) quantum computation. For this purpose, we define the bulk quantum Turing machine (BQTM for short) as a model of bulk quantum computation. Then, we define complexity classes EBQP, BBQP and ZBQP as counterparts of the quantum complexity classes EQP, BQP and ZQP, respectively, and show that EBQP = EQP, BBQP = BQP and ZBQP = ZQP. This implies that BQTMs are polynomially related to ordinary QTMs as long as they are used to solve decision problems. We also show that these two types of QTMs are also polynomially related when they solve a function problem which has a unique solution. Furthermore, we show that BQTMs can solve certain instances of NP-complete problems efficiently. On the other hand, in the theory of quantum computation, only feed-forward quantum circuits are investigated, because a quantum circuit represents a sequence of applications of time evolution operators. But, if a quantum computer is a physical device where the gates are interactions controlled by a classical computer, such as laser pulses on trapped ions, NMR and most implementation proposals, it is natural to describe quantum circuits as ones that have feedback loops if we want to visualize the total amount of the necessary hardware. For this purpose, we introduce a quantum recurrent circuit model, which is a quantum circuit with feedback loops. Let C be a quantum recurrent circuit which solves the satisfiability problem for a blackbox Boolean function of n variables with probability at least 1/2. Let s be the size of C (i.e. the number of gates in C) and t be the number of iterations that C needs to solve the satisfiability problem. Then, we show that, for those quantum recurrent circuits, the minimum value of max(s, t) is O(n^2 2^{n/3}).

Proceedings ArticleDOI
21 May 2002
TL;DR: Several trends in the history of computational complexity are described, including: the early history of complexity; the development of NP-completeness and the structure of complexity classes; and how randomness, parallelism and quantum mechanics have forced us to reexamine our notions of efficient computation.
Abstract: Summary form only given. We describe several trends in the history of computational complexity, including: the early history of complexity; the development of NP-completeness and the structure of complexity classes; how randomness, parallelism and quantum mechanics have forced us to reexamine our notions of efficient computation and how computational complexity has responded to these new models; the meteoric rise and fall of circuit complexity; and the marriage of complexity and cryptography and how research on a cryptographic model led to limitations of approximation.

Journal ArticleDOI
TL;DR: The computational efficiency of practical procedures to extract the results of a quantum computation from the wave function representing the final state of the quantum computer is addressed.
Abstract: We discuss the impact of the physical implementation of a quantum computer on its computational efficiency, using computer simulations of physical models of quantum computer hardware. We address the computational efficiency of practical procedures to extract the results of a quantum computation from the wave function representing the final state of the quantum computer.