
Showing papers on "Quantum complexity theory published in 2006"


Journal Article
TL;DR: This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense and has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.
Abstract: Traditional quantum state tomography requires a number of measurements that grows exponentially with the number of qubits n. But using ideas from computational learning theory, we show that one can do exponentially better in a statistical setting. In particular, to predict the outcomes of most measurements drawn from an arbitrary probability distribution, one needs only a number of sample measurements that grows linearly with n. This theorem has the conceptual implication that quantum states, despite being exponentially long vectors, are nevertheless ‘reasonable’ in a learning theory sense. The theorem also has two applications to quantum computing: first, a new simulation of quantum one-way communication protocols and second, the use of trusted classical advice to verify untrusted quantum advice.

149 citations
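To give a flavor of the statistical-learning approach described above, here is a toy classical illustration (our construction, not the paper's algorithm): a single-qubit state is a Bloch vector, a two-outcome measurement along direction a accepts with probability (1 + a·r)/2, and a handful of sample measurements suffice to fit r by least squares and predict the outcomes of fresh measurements.

```python
import random
import math

# Toy sketch: a single-qubit state is a Bloch vector r with |r| <= 1.
# A two-outcome measurement along unit direction a accepts with
# probability (1 + a.r) / 2.  Given sample measurements and their
# acceptance statistics, we fit r by least squares.

def random_direction(rng):
    # Uniform direction on the sphere (Gaussian normalization trick).
    v = [rng.gauss(0, 1) for _ in range(3)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def accept_prob(a, r):
    return (1 + sum(ai * ri for ai, ri in zip(a, r))) / 2

def fit_bloch(directions, probs):
    # Solve the 3x3 normal equations (A^T A) r = A^T b with b_i = 2 p_i - 1.
    ata = [[sum(a[i] * a[j] for a in directions) for j in range(3)]
           for i in range(3)]
    atb = [sum(a[i] * (2 * p - 1) for a, p in zip(directions, probs))
           for i in range(3)]
    # Gauss-Jordan elimination on the augmented 3x4 system.
    m = [row[:] + [atb[i]] for i, row in enumerate(ata)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda k: abs(m[k][c]))
        m[c], m[piv] = m[piv], m[c]
        for k in range(3):
            if k != c:
                f = m[k][c] / m[c][c]
                m[k] = [x - f * y for x, y in zip(m[k], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

rng = random.Random(0)
true_r = [0.3, -0.5, 0.4]                        # hidden state (toy choice)
train = [random_direction(rng) for _ in range(50)]
probs = [accept_prob(a, true_r) for a in train]  # exact statistics, for clarity
est = fit_bloch(train, probs)
```

With exact statistics the fit recovers the hidden Bloch vector; the paper's point is the much stronger statement that, even with empirical statistics, a number of sample measurements linear in the number of qubits suffices for prediction.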


Posted Content
TL;DR: In this article, the complexity of a quantum analogue of the satisfiability problem is studied: a classical polynomial-time algorithm for quantum 2-SAT is presented, generalizing the well-known algorithm for classical 2-SAT, and quantum k-SAT for k >= 4 is shown to be complete for the complexity class QMA with one-sided error.
Abstract: Complexity of a quantum analogue of the satisfiability problem is studied. Quantum k-SAT is the problem of verifying whether there exists an n-qubit pure state such that its k-qubit reduced density matrices have support on prescribed subspaces. We present a classical algorithm solving quantum 2-SAT in polynomial time. It generalizes the well-known algorithm for classical 2-SAT. In addition, we show that for any k >= 4, quantum k-SAT is complete in the complexity class QMA with one-sided error.

149 citations
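The classical half of this story is concrete enough to sketch. The standard polynomial-time algorithm that the paper generalizes builds the implication graph (each clause (a ∨ b) yields ¬a → b and ¬b → a) and checks that no variable shares a strongly connected component with its negation; the sketch below uses Kosaraju's algorithm.

```python
# Classical 2-SAT in polynomial time via the implication graph:
# each clause (a or b) yields the implications ~a -> b and ~b -> a;
# the formula is satisfiable iff no variable x lies in the same
# strongly connected component as ~x.

def solve_2sat(n, clauses):
    """n variables 1..n; clauses are pairs of nonzero ints, negative = negated.
    Returns a satisfying assignment {var: bool}, or None if unsatisfiable."""
    def idx(l):  # literal v -> node 2v, literal ~v -> node 2v+1
        return 2 * abs(l) + (1 if l < 0 else 0)

    adj = [[] for _ in range(2 * n + 2)]
    radj = [[] for _ in range(2 * n + 2)]
    for a, b in clauses:
        for x, y in ((-a, b), (-b, a)):      # implication ~x -> y
            adj[idx(x)].append(idx(y))
            radj[idx(y)].append(idx(x))

    # Kosaraju pass 1: record vertices by DFS finish time (iteratively).
    order, seen = [], [False] * len(adj)
    for s in range(2, len(adj)):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    # Pass 2: label SCCs on the reverse graph; components come out in
    # topological order of the condensation.
    comp = [-1] * len(adj)
    c = 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    assign = {}
    for v in range(1, n + 1):
        if comp[2 * v] == comp[2 * v + 1]:
            return None                      # x equivalent to ~x: unsatisfiable
        assign[v] = comp[2 * v] > comp[2 * v + 1]  # pick the later literal
    return assign
```

The quantum 2-SAT algorithm of the paper replaces literals by rank-deficient two-qubit projectors but keeps this propagate-and-check structure.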


Proceedings ArticleDOI
21 May 2006
TL;DR: This paper proves that several interactive proof systems are zero-knowledge against general quantum attacks and establishes for the first time that true zero- knowledge is indeed possible in the presence of quantum information and computation.
Abstract: This paper proves that several interactive proof systems are zero-knowledge against general quantum attacks. This includes the well-known Goldreich-Micali-Wigderson classical zero-knowledge protocols for Graph Isomorphism and Graph 3-Coloring (assuming the existence of quantum computationally concealing commitment schemes in the second case). Also included is a quantum interactive protocol for a complete problem for the complexity class of problems having "honest verifier" quantum statistical zero-knowledge proofs, which therefore establishes that honest verifier and general quantum statistical zero-knowledge are equal: QSZK = QSZKHV. Previously no non-trivial proof systems were known to be zero-knowledge against quantum attacks, except in restricted settings such as the honest-verifier and common reference string models. This paper therefore establishes for the first time that true zero-knowledge is indeed possible in the presence of quantum information and computation.

117 citations


Posted Content
TL;DR: In this paper, it was shown that any quantum computation can be replaced by an additive approximation of the Jones polynomial evaluated at almost any primitive root of unity, and that exactly evaluating the Jones polynomial of the plat closure at most primitive roots of unity is #P-hard.
Abstract: We analyze relationships between quantum computation and a family of generalizations of the Jones polynomial. Extending recent work by Aharonov et al., we give efficient quantum circuits for implementing the unitary Jones-Wenzl representations of the braid group. We use these to provide new quantum algorithms for approximately evaluating a family of specializations of the HOMFLYPT two-variable polynomial of trace closures of braids. We also give algorithms for approximating the Jones polynomial of a general class of closures of braids at roots of unity. Next we provide a self-contained proof of a result of Freedman et al. that any quantum computation can be replaced by an additive approximation of the Jones polynomial, evaluated at almost any primitive root of unity. Our proof encodes two-qubit unitaries into the rectangular representation of the eight-strand braid group. We then give QCMA-complete and PSPACE-complete problems which are based on braids. We conclude with direct proofs that evaluating the Jones polynomial of the plat closure at most primitive roots of unity is a #P-hard problem, while learning its most significant bit is PP-hard, circumventing the usual route through the Tutte polynomial and graph coloring.

73 citations


Journal Article
TL;DR: This review gives a survey of numerical algorithms and software to simulate quantum computers and includes a few examples that illustrate the use of simulation software for ideal and physical models of quantum computers.
Abstract: This review gives a survey of numerical algorithms and software to simulate quantum computers. It covers the basic concepts of quantum computation and quantum algorithms and includes a few examples that illustrate the use of simulation software for ideal and physical models of quantum computers.

53 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: In this article, it was shown that for the set of all d times n binary matrices with entries 0 or 1 and d < n, MKCS exactly recovers the input for an overwhelming fraction of the matrices provided the Kolmogorov complexity of the input is O(d).
Abstract: Consider a d times n matrix A, with d < n. The problem of solving for x in y = Ax is underdetermined, and has infinitely many solutions (if there are any). Given y, the minimum Kolmogorov complexity solution (MKCS) of the input x is defined to be an input z (out of many) with minimum Kolmogorov-complexity that satisfies y = Az. One expects that if the actual input is simple enough, then MKCS will recover the input exactly. This paper presents a preliminary study of the existence and value of the complexity level up to which such a complexity-based recovery is possible. It is shown that for the set of all d times n binary matrices (with entries 0 or 1 and d < n), MKCS exactly recovers the input for an overwhelming fraction of the matrices provided the Kolmogorov complexity of the input is O(d). A weak converse that is loose by a log n factor is also established for this case. Finally, we investigate the difficulty of finding a matrix that has the property of recovering inputs with complexity of O(d) using MKCS

45 citations
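Since Kolmogorov complexity is uncomputable, MKCS cannot be implemented exactly; a toy sketch (our construction) can still convey the idea by brute-forcing all binary inputs and using the number of runs in the bit string as a crude, computable stand-in for description length.

```python
from itertools import product

# Toy sketch of minimum-complexity recovery for y = Ax with a binary x.
# True Kolmogorov complexity is uncomputable, so we use the number of
# runs in the bit string as a crude proxy for description length (our
# choice, not the paper's), and brute-force all 2^n inputs -- feasible
# only for tiny n.

def num_runs(bits):
    # A constant string has 1 run; alternating bits have n runs.
    return 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def mkcs(A, y):
    """Among all binary z with Az = y (over the integers), return one of
    minimum proxy complexity, or None if no solution exists."""
    d, n = len(A), len(A[0])
    best = None
    for z in product((0, 1), repeat=n):
        if all(sum(A[i][j] * z[j] for j in range(n)) == y[i] for i in range(d)):
            c = num_runs(z)
            if best is None or c < best[0]:
                best = (c, z)
    return best[1] if best else None
```

For a simple input such as the all-ones string, the measurements y = Ax pin it down as the unique minimum-runs solution for many matrices A, mirroring the paper's claim that inputs of complexity O(d) are recovered for an overwhelming fraction of matrices.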


Journal ArticleDOI
TL;DR: A quantum version of this theorem is proved, connecting the von Neumann entropy rate and two notions of quantum Kolmogorov complexity, both based on the shortest qubit descriptions of qubit strings that, run by a universal quantum Turing machine, reproduce them as outputs.
Abstract: In classical information theory, entropy rate and algorithmic complexity per symbol are related by a theorem of Brudno. In this paper, we prove a quantum version of this theorem, connecting the von Neumann entropy rate and two notions of quantum Kolmogorov complexity, both based on the shortest qubit descriptions of qubit strings that, run by a universal quantum Turing machine, reproduce them as outputs.

41 citations
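For reference, the classical theorem being quantized can be stated informally (our paraphrase, hedged accordingly) as:

```latex
% Classical Brudno theorem (informal): for an ergodic source
% $(\Omega, T, \mu)$ over a finite alphabet, the algorithmic complexity
% per symbol of a typical trajectory equals the entropy rate:
\[
  \lim_{n \to \infty} \frac{K\!\left(x_1 x_2 \cdots x_n\right)}{n}
  \;=\; h(\mu)
  \qquad \text{for $\mu$-almost every trajectory } x,
\]
% where $K$ is classical Kolmogorov complexity and $h(\mu)$ is the
% Kolmogorov--Sinai entropy rate.  The paper's quantum version replaces
% $K$ by quantum Kolmogorov complexity of qubit strings and $h(\mu)$ by
% the von Neumann entropy rate.
```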


Journal ArticleDOI
TL;DR: A sequence of new algorithms whose error/cost properties improve from step to step is defined; these yield new upper complexity bounds, which differ from known lower bounds by only an arbitrarily small positive parameter in the exponent and a logarithmic factor.

39 citations


Journal ArticleDOI
TL;DR: In this paper, the authors prove lower bounds on the complexity of explicitly given graphs, yielding new lower bounds for boolean functions as well as new proofs of some known lower bounds in the graph-theoretic framework.
Abstract: By the complexity of a graph we mean the minimum number of union and intersection operations needed to obtain the whole set of its edges starting from stars. This measure of graphs is related to the circuit complexity of boolean functions. We prove some lower bounds on the complexity of explicitly given graphs. This yields some new lower bounds for boolean functions, as well as new proofs of some known lower bounds in the graph-theoretic framework. We also formulate several combinatorial problems whose solution would have intriguing consequences in computational complexity.

39 citations


Journal ArticleDOI
TL;DR: This work introduces a Classically controlled Quantum Turing Machine (CQTM), which is a Turing machine with a quantum tape for acting on quantum data, and a classical transition function for formalised classical control, and proves that any classical Turing machine can be simulated by a CQTM without loss of efficiency.
Abstract: It is reasonable to assume that quantum computations take place under the control of the classical world. For modelling this standard situation, we introduce a Classically controlled Quantum Turing Machine (CQTM), which is a Turing machine with a quantum tape for acting on quantum data, and a classical transition function for formalised classical control. In a CQTM, unitary transformations and quantum measurements are allowed. We show that any classical Turing machine can be simulated by a CQTM without loss of efficiency. Furthermore, we show that any k-tape CQTM can be simulated by a 2-tape CQTM with a quadratic loss of efficiency. In order to compare CQTMs with existing models of quantum computation, we prove that any uniform family of quantum circuits (Yao 1993) is efficiently approximated by a CQTM. Moreover, we prove that any semi-uniform family of quantum circuits (Nishimura and Ozawa 2002), and any measurement calculus pattern (Danos et al. 2004) are efficiently simulated by a CQTM. Finally, we introduce a Measurement-based Quantum Turing Machine (MQTM), which is a restriction of CQTMs in which only projective measurements are allowed. We prove that any CQTM is efficiently simulated by an MQTM. In order to appreciate the similarity between programming classical Turing machines and programming CQTMs, some examples of CQTMs are given.

34 citations


Journal ArticleDOI
TL;DR: A survey of all the important aspects and results that have shaped the field of quantum computation and quantum information and their applications to the general theory of information, cryptography, algorithms, computational complexity and error-correction.
Abstract: The paper is intended to be a survey of all the important aspects and results that have shaped the field of quantum computation and quantum information. The reader is first familiarized with those features and principles of quantum mechanics providing a more efficient and secure information processing. Their applications to the general theory of information, cryptography, algorithms, computational complexity and error-correction are then discussed. Prospects for building a practical quantum computer are also analyzed.

Posted Content
TL;DR: Several BQP-complete problems are presented, including Local Hamiltonian Eigenvalue Sampling and Phase Estimation Sampling, which are closely related to the well-known quantum algorithm and quantum complexity theories.
Abstract: A central problem in quantum computing is to identify computational tasks which can be solved substantially faster on a quantum computer than on any classical computer. By studying the hardest such tasks, known as BQP-complete problems, we deepen our understanding of the power and limitations of quantum computers. We present several BQP-complete problems, including Local Hamiltonian Eigenvalue Sampling and Phase Estimation Sampling. Unlike the previously known BQP-complete problems (the Quadratically Signed Weight Enumerator problem [KL01] and the Approximation of Jones Polynomials [FKW02, FLW02, AJL06]), our problems are of a basic linear algebra nature and are closely related to well-known quantum algorithms and quantum complexity theory.
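The distribution behind textbook phase estimation is easy to compute classically for small instances. The sketch below (our toy, not the paper's exact "Phase Estimation Sampling" problem) evaluates the outcome distribution for a t-bit phase-estimation register applied to an eigenstate of eigenphase φ.

```python
import cmath

# Classical evaluation of the textbook phase-estimation measurement
# distribution: with t ancilla qubits and an eigenstate of eigenphase
# phi in [0, 1), outcome m occurs with probability
#   | (1/2^t) * sum_{j=0}^{2^t - 1} exp(2*pi*i*j*(phi - m/2^t)) |^2.
# This is a toy for small t; the paper's BQP-complete task is sampling
# from (variants of) such distributions.

def phase_estimation_distribution(phi, t):
    n = 2 ** t
    probs = []
    for m in range(n):
        amp = sum(cmath.exp(2j * cmath.pi * j * (phi - m / n))
                  for j in range(n)) / n
        probs.append(abs(amp) ** 2)
    return probs

# When phi is exactly representable in t bits, the distribution is a
# point mass on m = phi * 2^t; otherwise it concentrates on the two
# nearest t-bit approximations of phi.
```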

Journal ArticleDOI
TL;DR: This work focuses on the comparison among complexity classes for membrane systems with active membranes and the classes PSPACE, EXP, and EXPSPACE, defined within the framework of membrane systems.
Abstract: We compare various computational complexity classes defined within the framework of membrane systems, a distributed parallel computing device which is inspired from the functioning of the cell, with usual computational complexity classes for Turing machines. In particular, we focus our attention on the comparison among complexity classes for membrane systems with active membranes (where new membranes can be created by division of existing membranes) and the classes PSPACE, EXP, and EXPSPACE.

Journal ArticleDOI
TL;DR: If the rules of the P system are applied sequentially, then the accepted language class is strictly included in the class of languages accepted by one-way Turing machines with a logarithmically bounded workspace, and if the rules are applied in the maximally parallel manner, thenThe class of context-sensitive languages is obtained.
Abstract: We characterize the classes of languages over finite alphabets which may be described by P automata, i.e., accepting P systems with communication rules only. Motivated by properties of natural computing systems, and the actual behavior of P automata, we study computational complexity classes with a certain restriction on the use of the available workspace in the course of computations and relate these to the language classes described by P automata. We prove that if the rules of the P system are applied sequentially, then the accepted language class is strictly included in the class of languages accepted by one-way Turing machines with a logarithmically bounded workspace, and if the rules are applied in the maximally parallel manner, then the class of context-sensitive languages is obtained.

Journal ArticleDOI
TL;DR: This paper considers the realizability of quantum gates from the perspective of information complexity, and argues that the gate operations are irreversible if there is a difference in the accuracy associated with input and output variables.
Abstract: This paper considers the realizability of quantum gates from the perspective of information complexity. Since the gate is a physical device that must be controlled classically, it is subject to random error. We define the complexity of gate operation in terms of the difference between the entropy of the variables associated with initial and final states of the computation. We argue that the gate operations are irreversible if there is a difference in the accuracy associated with input and output variables. It is shown that under some conditions the gate operation may be associated with unbounded entropy, implying impossibility of implementation.

Proceedings ArticleDOI
30 Jul 2006
TL;DR: This paper introduces a very natural and simple model of a space-bounded quantum online machine and proves an exponential separation of classical and quantum online space complexity, in the bounded-error setting and for a total language.
Abstract: The main objective of quantum computation is to exploit the natural parallelism of quantum mechanics to solve problems using less computational resources than classical computers. Although quantum algorithms realizing an exponential time speed-up over the best known classical algorithms exist, no quantum algorithm is known performing computation using less space resources than classical algorithms. In this paper, we study, for the first time explicitly, space-bounded quantum algorithms for computational problems where the input is given not as a whole, but bit by bit. We show that there exist such problems that a quantum computer can solve using exponentially less work space than a classical computer. More precisely, we introduce a very natural and simple model of a space-bounded quantum online machine and prove an exponential separation of classical and quantum online space complexity, in the bounded-error setting and for a total language. The language we consider is inspired by a communication problem that Buhrman, Cleve and Wigderson used to show an almost quadratic separation of quantum and classical bounded-error communication complexity. We prove that, in the framework of online space complexity, the separation becomes exponential.

Posted Content
TL;DR: In this article, a semantic complexity class based on the model of quantum computing with just one pure qubit was defined and its computational power in terms of the problem of estimating the trace of a large unitary matrix was discussed.
Abstract: We define a semantic complexity class based on the model of quantum computing with just one pure qubit (as introduced by Knill and Laflamme) and discuss its computational power in terms of the problem of estimating the trace of a large unitary matrix. We show that this problem is complete for the complexity class, and derive some further fundamental features of the class. We conclude with a discussion of some associated open conjectures and new oracle separations between classes.
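The trace-estimation problem at the heart of the one-clean-qubit (DQC1) model rests on a standard identity: running H, controlled-U, H on a clean qubit attached to a maximally mixed n-qubit register gives P(0) = 1/2 + Re tr(U)/2^(n+1). The sketch below (our toy simulation, with a hand-picked unitary) verifies this classically by averaging over computational basis inputs.

```python
# One-clean-qubit (DQC1) trace estimation, simulated classically.
# Circuit: H on the clean qubit, controlled-U on a maximally mixed
# n-qubit register, H again, then measure the clean qubit.  The known
# identity P(0) = 1/2 + Re tr(U) / 2^(n+1) means repeated runs estimate
# the normalized trace of U.

def matvec(U, v):
    return [sum(U[i][j] * v[j] for j in range(len(v)))
            for i in range(len(U))]

def dqc1_prob0(U):
    """Probability of measuring the clean qubit in |0>, averaged over the
    maximally mixed register (simulated basis state by basis state)."""
    d = len(U)
    total = 0.0
    for k in range(d):
        e = [1.0 if i == k else 0.0 for i in range(d)]
        uk = matvec(U, e)
        # After the final H, the |0> branch carries (|k> + U|k>) / 2.
        amp0 = [(e[i] + uk[i]) / 2 for i in range(d)]
        total += sum(abs(a) ** 2 for a in amp0)
    return total / d

# Example (our choice): U = diag(1, i, -1, -i) has tr(U) = 0, so the
# clean qubit comes out unbiased, P(0) = 1/2.
U = [[1, 0, 0, 0],
     [0, 1j, 0, 0],
     [0, 0, -1, 0],
     [0, 0, 0, -1j]]
```

For the identity matrix the clean qubit returns 0 with certainty (tr(I) = 2^n), which is a quick consistency check on the formula.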

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper presents a constructive synthesis algorithm for any n-qubit reversible function with N input patterns that differ from their corresponding outputs, and shows that such a circuit can be synthesized by at most 2n·N (n−1)-CNOT gates and 4n²·N NOT gates.
Abstract: This paper presents a constructive synthesis algorithm for any n-qubit reversible function. Given any n-qubit reversible function, there are N distinct input patterns different from their corresponding outputs, where N ≤ 2^n, and the other (2^n − N) input patterns are the same as their outputs. We show that this circuit can be synthesized by at most 2n·N (n−1)-CNOT gates and 4n²·N NOT gates. The time complexity of our algorithm has asymptotic upper bound O(n·4^n). The space complexity of our synthesis algorithm is also O(n·2^n). The computational complexity of our synthesis algorithm is exponentially lower than that of breadth-first-search-based synthesis algorithms.

Proceedings ArticleDOI
16 Jul 2006
TL;DR: This paper introduces a new technique for removing existential quantifiers over quantum states, and shows that there is no way to pack an exponential number of bits into a polynomial-size quantum state, in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness.
Abstract: This paper introduces a new technique for removing existential quantifiers over quantum states. Using this technique, we show that there is no way to pack an exponential number of bits into a polynomial-size quantum state, in such a way that the value of any one of those bits can later be proven with the help of a polynomial-size quantum witness. We also show that any problem in QMA with polynomial-size quantum advice, is also in PSPACE with polynomial-size classical advice. This builds on our earlier result that BQP/qpoly ⊆ PP/poly, and offers an intriguing counterpoint to the recent discovery of Raz that QIP/qpoly = ALL. Finally, we show that QCMA/qpoly ⊆ PP/poly and that QMA/rpoly = QMA/poly.

Journal ArticleDOI
TL;DR: This work defines the complexity of a quantum state by means of the classical description complexity of an (abstract) experimental procedure that allows us to prepare the state with a given fidelity, and argues that this definition satisfies the intuitive idea of complexity as a measure of how difficult it is to prepare a state.
Abstract: We give a definition for the Kolmogorov complexity of a pure quantum state. In classical information theory, the algorithmic complexity of a string is a measure of the information needed by a universal machine to reproduce the string itself. We define the complexity of a quantum state by means of the classical description complexity of an (abstract) experimental procedure that allows us to prepare the state with a given fidelity. We argue that our definition satisfies the intuitive idea of complexity as a measure of "how difficult" it is to prepare a state. We apply this definition to give an upper bound on the algorithmic complexity of a number of known states. Furthermore, we establish a connection between the entanglement of a quantum state and its algorithmic complexity.

26 May 2006
TL;DR: The role of data complexity in the context of binary classification problems is investigated, and it is illustrated that a data set is best approximated by its principal subsets which are Pareto optimal with respect to the complexity and the set size.
Abstract: We investigate the role of data complexity in the context of binary classification problems. The universal data complexity is defined for a data set as the Kolmogorov complexity of the mapping enforced by the data set. It is closely related to several existing principles used in machine learning such as Occam's razor, the minimum description length, and the Bayesian approach. The data complexity can also be defined based on a learning model, which is more realistic for applications. We demonstrate the application of the data complexity in two learning problems, data decomposition and data pruning. In data decomposition, we illustrate that a data set is best approximated by its principal subsets which are Pareto optimal with respect to the complexity and the set size. In data pruning, we show that outliers usually have high complexity contributions, and propose methods for estimating the complexity contribution. Since in practice we have to approximate the ideal data complexity measures, we also discuss the impact of such approximations.
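Since the ideal data complexity is uncomputable, practical work falls back on computable proxies; a common one (our stand-in here, not the paper's exact measure) is compressed length, with a point's "complexity contribution" taken as the increase in compressed size when it is appended to the data set.

```python
import zlib

# Toy proxy for data complexity: zlib-compressed length of the data set.
# The "complexity contribution" of a point is the increase in compressed
# size when it is appended -- our stand-in for the paper's measure.
# Outliers that break the data's pattern should contribute more.

def compressed_len(items):
    blob = ",".join(items).encode()
    return len(zlib.compress(blob, 9))

def contribution(data, point):
    return compressed_len(data + [point]) - compressed_len(data)

# A highly regular data set, one pattern-conforming point, one outlier.
data = ["ab"] * 500
regular = "ab"          # fits the pattern, cheap to describe
outlier = "q7#Zp1"      # breaks the pattern, costs extra bits
```

On this toy data the outlier's contribution exceeds the regular point's, matching the paper's observation that outliers usually have high complexity contributions and can be flagged for pruning.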

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper investigates the applicability of KC as an estimator of problem difficulty for optimization in the black box scenario and concludes that high KC implies hardness; however, while easy fitness functions have low KC, the converse does not necessarily hold.
Abstract: The Kolmogorov complexity (KC) of a string is defined as the length of the shortest program that can print that string and halt. This measure of complexity is often used in optimization to indicate expected function difficulty. While it is often used, there are known counterexamples. This paper investigates the applicability of KC as an estimator of problem difficulty for optimization in the black box scenario. In particular we address the known counterexamples (e.g., pseudorandom functions, the NIAH) and explore the connection of KC to the NFLTs. We conclude that high KC implies hardness; however, while easy fitness functions have low KC, the converse does not necessarily hold.

01 Jan 2006
TL;DR: An exponential separation between one-way quantum and classical communication protocols for two partial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar-Yossef et al., is given.

Journal Article
TL;DR: In this article, a tight lower bound of Ω(√n) for the randomized one-way communication complexity of the Boolean Hidden Matching Problem was given; a similar result was obtained independently by Gavinsky, Kempe, and de Wolf.
Abstract: We give a tight lower bound of Ω(√n) for the randomized one-way communication complexity of the Boolean Hidden Matching Problem [BJK04]. Since there is a quantum one-way communication complexity protocol of O(log n) qubits for this problem, we obtain an exponential separation of quantum and classical one-way communication complexity for partial functions. A similar result was independently obtained by Gavinsky, Kempe, de Wolf [GKdW06]. Our lower bound is obtained by Fourier analysis, using the Fourier coefficients inequality of Kahn, Kalai, and Linial [KKL88].

Journal ArticleDOI
07 Feb 2006
TL;DR: The idea is that information is an extension of the concept 'algorithmic complexity' from a class of desirable and concrete processes to a more general class that can only in pragmatic terms be regarded as existing in the conception.
Abstract: We study complexity and information and introduce the idea that while complexity is relative to a given class of processes, information is process independent: Information is complexity relative to the class of all conceivable processes. In essence, the idea is that information is an extension of the concept 'algorithmic complexity' from a class of desirable and concrete processes, such as those represented by binary decision trees, to a more general class that can only in pragmatic terms be regarded as existing in the conception. It is then precisely the fact that information is defined relative to such a large class of processes that it becomes an effective tool for analyzing phenomena in a wide range of disciplines. We test these ideas on the complexity of classical states. A domain is used to specify the class of processes, and both qualitative and quantitative notions of complexity for classical states emerge. The resulting theory is used to give new proofs of fundamental results from classical information theory, to give a new characterization of entropy in quantum mechanics, to establish a rigorous connection between entanglement transformation and computation, and to derive lower bounds on algorithmic complexity. All of this is a consequence of the setting which gives rise to the fixed point theorem: The least fixed point of the copying operator above complexity is information.

Proceedings ArticleDOI
16 Jul 2006
TL;DR: In this article, the authors established a connection between (sub)exponential time complexity and parameterized complexity by proving that the so-called miniaturization mapping is a reduction preserving isomorphism between the two theories.
Abstract: We establish a close connection between (sub)exponential time complexity and parameterized complexity by proving that the so-called miniaturization mapping is a reduction preserving isomorphism between the two theories.

Book ChapterDOI
15 Aug 2006
TL;DR: In this paper, the complexity of some computational problems on finite black-box rings whose elements are encoded as strings of a given length and the ring operations are performed by a black box oracle was studied.
Abstract: We study the complexity of some computational problems on finite black-box rings whose elements are encoded as strings of a given length and the ring operations are performed by a black-box oracle. We give a polynomial-time quantum algorithm to compute a basis representation for a given black-box ring. Using this result we obtain polynomial-time quantum algorithms for several natural computational problems over black-box rings.


Journal Article
TL;DR: In this article, it was shown that even the weak version of quantum nondeterminism is strictly stronger than classical non-deterministic communication complexity, and that classical proofs can be checked more efficiently by quantum protocols than by classical ones.
Abstract: In this paper we study a weak version of quantum nondeterministic communication complexity, corresponding to the most natural generalization of classical nondeterminism, in which a classical proof has to be checked with probability one by a quantum protocol. We prove that, in the framework of communication complexity, even the weak version of quantum nondeterminism is strictly stronger than classical nondeterminism. More precisely, we show the first separation, for a total function, of quantum weakly nondeterministic and classical nondeterministic communication complexity. This separation is quadratic and shows that classical proofs can be checked more efficiently by quantum protocols than by classical ones.

Proceedings Article
01 Jan 2006
TL;DR: It is shown that both the opening book problem and the closing book problem are NP-hard.
Abstract: Origami is the centuries-old art of folding paper, and recently it has been investigated as a science. In this paper, another centuries-old paper art, the pop-up book, is studied. A model for the pop-up book design problem is given, and its complexity is investigated. We show that both the opening book problem and the closing book problem are NP-hard.