
Showing papers on "Quantum complexity theory published in 2011"


Book
21 Dec 2011
TL;DR: This is the second volume of a systematic two-volume presentation of the various areas of research in the field of structural complexity, addressed to graduate students and researchers and assumes knowledge of the topics treated in the first volume but is otherwise nearly self-contained.
Abstract: This is the second volume of a systematic two-volume presentation of the various areas of research in the field of structural complexity. The mathematical theory of computation has developed into a broad and rich discipline within which the theory of algorithmic complexity can be approached from several points of view. This volume is addressed to graduate students and researchers and assumes knowledge of the topics treated in the first volume but is otherwise nearly self-contained. Topics covered include vector machines, parallel computation, alternation, uniform circuit complexity, isomorphism, biimmunity and complexity cores, relativization and positive relativization, the low and high hierarchies, Kolmogorov complexity and probability classes. Numerous exercises and references are given.

330 citations


Proceedings ArticleDOI
22 Oct 2011
TL;DR: It is obtained that the general adversary bound characterizes the quantum query complexity of any function whatsoever, implying that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.
Abstract: State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. In the special case of function evaluation, the norm is closely related to the general adversary bound, a semi-definite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.

176 citations


Proceedings ArticleDOI
23 Jan 2011
TL;DR: It is shown that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a certain fixed, input-independent reflection with a second reflection that coherently queries the input string.
Abstract: We show that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a certain fixed, input-independent reflection with a second reflection that coherently queries the input string. Originally introduced for solving the unstructured search problem, this two-reflections structure is therefore a universal feature of quantum algorithms. Our proof goes via the general adversary bound, a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. By a quantum algorithm for evaluating span programs, this lower bound is known to be tight up to a sub-logarithmic factor. The extra factor comes from converting a continuous-time query algorithm into a discrete-query algorithm. We give a direct and simplified quantum algorithm based on the dual SDP, with a bounded-error query complexity that matches the general adversary bound. Therefore, the general adversary lower bound is tight; it is in fact an SDP for quantum query complexity. This implies that the quantum query complexity of the composition f ∘ (g, ..., g) of two boolean functions f and g matches the product of the query complexities of f and g, without a logarithmic factor for error reduction. It efficiently characterizes the quantum query complexity of a read-once formula over any finite gate set. It further shows that span programs are equivalent to quantum query algorithms.

121 citations


Proceedings ArticleDOI
06 Jun 2011
TL;DR: The question whether the same exponential separation can be achieved with a quantum protocol that uses only one round of communication is settled in the affirmative.
Abstract: In STOC 1999, Raz presented a (partial) function for which there is a quantum protocol communicating only O(log n) qubits, but for which any classical (randomized, bounded-error) protocol requires poly(n) bits of communication. That quantum protocol requires two rounds of communication. Ever since Raz's paper it was open whether the same exponential separation can be achieved with a quantum protocol that uses only one round of communication. Here we settle this question in the affirmative.

109 citations


Posted Content
TL;DR: The power of the approach is demonstrated by a quantum algorithm for the triangle problem with query complexity O(n^{35/27}), improving on the O(n^{13/10}) of the best previously known algorithm by Magniez et al.
Abstract: Besides the Hidden Subgroup Problem, the second large class of quantum speed-ups is for functions with constant-sized 1-certificates. This includes the OR function, solvable by the Grover algorithm, the distinctness, the triangle and other problems. The usual way to solve them is by quantum walk on the Johnson graph. We propose a solution for the same problems using span programs. The span program is a computational model equivalent to the quantum query algorithm in its strength, and yet very different in its outfit. We prove the power of our approach by designing a quantum algorithm for the triangle problem with query complexity $O(n^{35/27})$ that is better than $O(n^{13/10})$ of the best previously known algorithm by Magniez et al.

100 citations


Journal ArticleDOI
TL;DR: The complexity of several constraint satisfaction problems using the quantum adiabatic algorithm in its simplest implementation is determined by studying the size dependence of the gap to the first excited state of "typical" instances and it is found that, at large sizes N, the complexity increases exponentially for all models that are studied.
Abstract: We determine the complexity of several constraint satisfaction problems using the quantum adiabatic algorithm in its simplest implementation. We do so by studying the size dependence of the gap to the first excited state of "typical" instances. We find that, at large sizes N, the complexity increases exponentially for all models that we study. We also compare our results against the complexity of the analogous classical algorithm WalkSAT and show that the harder the problem is for the classical algorithm, the harder it is also for the quantum adiabatic algorithm.

91 citations


Journal Article
TL;DR: In this paper, it is shown that Valiant's theorem, that computing the permanent of an n × n matrix is #P-hard, can be proven using linear-optical quantum computing.
Abstract: One of the crown jewels of complexity theory is Valiant's theorem that computing the permanent of an n × n matrix is #P-hard. Here we show that, by using the model of linear-optical quantum computing—and in particular, a universality theorem owing to Knill, Laflamme and Milburn—one can give a different and arguably more intuitive proof of this theorem.
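For orientation, the quantity at stake can be computed directly from its definition; the brute-force sketch below (my own illustration, not from the paper) makes explicit both the signless-determinant structure of the permanent and the factorial cost that the #P-hardness result explains:

```python
from itertools import permutations
from math import prod

def permanent(matrix):
    """Permanent from its definition: a sum over all permutations of
    products of entries, with no sign factor (unlike the determinant).
    Brute force runs in O(n! * n) time."""
    n = len(matrix)
    return sum(
        prod(matrix[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n))
    )
```

For a 2 × 2 matrix [[a, b], [c, d]] this yields ad + bc, versus the determinant's ad - bc.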

72 citations


09 Aug 2011
TL;DR: The authors argue that computational complexity theory leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction and Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest.
Abstract: One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction and Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.

66 citations


Proceedings ArticleDOI
08 Jun 2011
TL;DR: Lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness are shown, and how one can 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method is described.
Abstract: We show several results related to interactive proof modes of communication complexity. First we show lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness. We describe a general method to prove lower bounds for QMA-communication complexity, and show how one can 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method. Combining a result by Vereshchagin and the pattern matrix method we find a partial function with AM-communication complexity O(log n), PP-communication complexity Ω(n^{1/3}), and QMA-communication complexity Ω(n^{1/6}). Hence in the world of communication complexity noninteractive quantum proof systems are not able to efficiently simulate co-nondeterminism or interaction. These results imply that the related questions in Turing machine complexity theory cannot be resolved by 'algebrizing' techniques. Finally we show that in MA-protocols there is an exponential gap between one-way protocols and two-way protocols for a partial function (this refers to the interaction between Alice and Bob). This is in contrast to nondeterministic, AM-, and QMA-protocols, where one-way communication is essentially optimal.

63 citations


Book
01 Jan 2011
TL;DR: This volume collects entropy and complexity analyses of D-dimensional quantum systems, including an information-theoretic analysis of the atomic shape function and related complexity measures.
Abstract: Contents: Atomic and Molecular Complexities: Their Physical and Chemical Interpretations; Uncertainty relations related to the Renyi entropy; Entropy and complexity analyses of D-dimensional quantum systems; Analyzing the atomic shape function with information theory and complexity measures; Statistical indicators of complexity and its applications; Renyi and Fisher information as complexity measures; Statistical complexity in quantum many-body systems; Structural entropy and its applications.

63 citations


Journal ArticleDOI
30 Sep 2011-Chaos
TL;DR: A geometric approach to complexity is developed, based on the principle that complexity requires interactions at different scales of description; it yields a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families.
Abstract: We develop a geometric approach to complexity based on the principle that complexity requires interactions at different scales of description. Complex systems are more than the sum of their parts of any size and not just more than the sum of their elements. Using information geometry, we therefore analyze the decomposition of a system in terms of an interaction hierarchy. In mathematical terms, we present a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families. Within our framework, previously proposed complexity measures find their natural place and gain a new interpretation.


Book ChapterDOI
01 Jul 2011
TL;DR: In this article, the authors studied the complexity of non-signaling distributions, i.e., those where Alice's marginal distribution does not depend on Bob's input, and vice versa.
Abstract: We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. By taking a geometric view of the non-signaling distributions, we introduce a simple new technique based on affine combinations of lower-complexity distributions, and we give the first general technique to apply to all these settings, with elementary proofs and very intuitive interpretations. Specifically, we introduce two complexity measures, one which gives lower bounds on classical communication, and one for quantum communication. These measures can be expressed as convex optimization problems. We show that the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are closely related to the winning probability of XOR games. Despite their apparent simplicity, these lower bounds subsume many known communication complexity lower bound methods, most notably the recent lower bounds of Linial and Shraibman for the special case of Boolean functions. We show that as in the case of Boolean functions, the gap between the quantum and classical lower bounds is at most linear in the size of the support of the distribution, and does not depend on the size of the inputs. This translates into a bound on the gap between maximal Bell and Tsirelson inequality violations, which was previously known only for the case of distributions with Boolean outcomes and uniform marginals. 
It also allows us to show that for some distributions, information theoretic methods are necessary to prove strong lower bounds. Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.

Posted Content
TL;DR: The results imply that Kitaev's celebrated Toric code construction is, in a well defined sense, optimal as a construction of Topological Order based on commuting Hamiltonians, and imply that in all such systems, the entanglement in the ground states is local.
Abstract: The local Hamiltonian problem plays the equivalent role of SAT in quantum complexity theory. Understanding the complexity of the intermediate case in which the constraints are quantum but all local terms in the Hamiltonian commute, is of importance for conceptual, physical and computational complexity reasons. Bravyi and Vyalyi showed in 2003, using a clever application of the representation theory of C*-algebras, that if the terms in the Hamiltonian are all two-local, the problem is in NP, and the entanglement in the ground states is local. The general case remained open since then. In this paper we extend the results of Bravyi and Vyalyi beyond the two-local case, to the case of three-qubit interactions. We then extend our results even further, and show that NP verification is possible for three-wise interaction between qutrits as well, as long as the interaction graph is embedded on a planar lattice, or more generally, "Nearly Euclidean" (NE). The proofs imply that in all such systems, the entanglement in the ground states is local. These extensions imply an intriguing sharp transition phenomenon in commuting Hamiltonian systems: 3-local NE systems based on qubits and qutrits cannot be used to construct Topological order, as their entanglement is local, whereas for higher dimensional qudits, or for interactions of at least 4 qudits, Topological Order is already possible, via Kitaev's Toric Code construction. We thus conclude that Kitaev's Toric Code construction is optimal for deriving topological order based on commuting Hamiltonians.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the computational complexity of commuting Hamiltonians and show that deciding whether the ground state minimizes the energy of each local term individually is in the complexity class NP.
Abstract: We consider the computational complexity of Hamiltonians which are sums of commuting terms acting on plaquettes in a square lattice of qubits, and we show that deciding whether the ground state minimizes the energy of each local term individually is in the complexity class NP. That is, if the ground state has this property, this can be proven using a classical certificate which can be efficiently verified on a classical computer. Unlike previous results on commuting Hamiltonians, our certificate proves the existence of such a state without giving instructions on how to prepare it.

Proceedings ArticleDOI
Alexander A. Sherstov
06 Jun 2011
TL;DR: This work proves that quantum communication complexity obeys an SDPT whenever the communication lower bound for a single instance is proved by the generalized discrepancy method, the strongest technique in that model.
Abstract: A strong direct product theorem (SDPT) states that solving n instances of a problem requires Ω(n) times the resources for a single instance, even to achieve success probability 2^{-Ω(n)}. We prove that quantum communication complexity obeys an SDPT whenever the communication lower bound for a single instance is proved by the generalized discrepancy method, the strongest technique in that model. We prove that quantum query complexity obeys an SDPT whenever the query lower bound for a single instance is proved by the polynomial method, one of the two main techniques in that model. In both models, we prove the corresponding XOR lemmas and threshold direct product theorems.

Journal ArticleDOI
TL;DR: This work studies the properties of other measures that arise naturally in this framework and introduces yet more notions of resource-bounded Kolmogorov complexity, to demonstrate that other complexity measures, such as branching-program size and formula size, can also be discussed in terms of Kolmogorov complexity.

Book ChapterDOI
01 Jan 2011
TL;DR: In this paper, the authors review the present knowledge about the analytic information theory of quantum systems with non-standard dimensionality in the position and momentum spaces, and apply them to general systems, then to single particle systems in central potentials and, finally, to hydrogenic systems in D-dimensions.
Abstract: This chapter briefly reviews the present knowledge about the analytic information theory of quantum systems with non-standard dimensionality in the position and momentum spaces. The main concepts of this theory are the power and entropic moments, which are very fertile largely because of their flexibility and multiple interpretations. They are used here to study the most relevant information-theoretic one-element (Fisher, Shannon, Renyi, Tsallis) and some composite two-elements (Fisher-Shannon, LMC shape and Cramer-Rao complexities) measures which describe the spreading measures of the position and momentum probability densities farther beyond the standard deviation. We first apply them to general systems, then to single particle systems in central potentials and, finally, to hydrogenic systems in D-dimensions.

Journal ArticleDOI
TL;DR: This paper surveys results obtained with quantum techniques in diverse classical (non-quantum) areas, such as coding theory, communication complexity, and polynomial approximations, along with the quantum toolbox they use.
Abstract: Alongside the development of quantum algorithms and quantum complexity theory in recent years, quantum techniques have also proved instrumental in obtaining results in diverse classical (non-quantum) areas, such as coding theory, communication complexity, and polynomial approximations. In this paper we survey these results and the quantum toolbox they use.

Journal ArticleDOI
TL;DR: The difficulty of both problems is exactly captured by #BQP, the counting version of the quantum complexity class QMA (quantum Merlin Arthur), which implies that computing the ground state degeneracy or the density of states for classical Hamiltonians is just as hard as it is for quantum Hamiltonians.
Abstract: We study the computational difficulty of computing the ground state degeneracy and the density of states for local Hamiltonians. We show that the difficulty of both problems is exactly captured by a class which we call #BQP, which is the counting version of the quantum complexity class quantum Merlin Arthur. We show that #BQP is not harder than its classical counting counterpart #P, which in turn implies that computing the ground state degeneracy or the density of states for classical Hamiltonians is just as hard as it is for quantum Hamiltonians.
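For classical Hamiltonians the counted quantity is easy to state concretely. The sketch below (my own illustration; the function names are hypothetical) counts the ground state degeneracy of a small classical Ising chain by exhaustive enumeration, which takes 2^n time, consistent with the hardness picture above:

```python
from itertools import product

def ground_state_degeneracy(n_spins, energy):
    """Enumerate all 2^n spin configurations and return
    (minimum energy, number of configurations achieving it)."""
    energies = [energy(cfg) for cfg in product((-1, +1), repeat=n_spins)]
    e0 = min(energies)
    return e0, sum(1 for e in energies if e == e0)

# Ferromagnetic Ising chain with open boundaries: E(s) = -sum_i s_i * s_{i+1}
def chain_energy(s):
    return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))
```

For a 3-spin ferromagnetic chain the ground states are all-up and all-down, so the degeneracy is 2 at energy -2.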

Journal ArticleDOI
TL;DR: In this paper, the entropy and complexity properties of potentials with one and two Dirac-delta functions are discussed in both position and momentum spaces, and the information-theoretic lengths of Fisher, Renyi and Shannon types as well as the Cramer-Rao, Fisher-Shannon and LMC shape complexities of the lowest-lying stationary states of one-and two-dimensional potentials are analyzed.
Abstract: The Dirac-delta-like quantum-mechanical potentials are frequently used to describe and interpret numerous phenomena in many scientific fields including atomic and molecular physics, condensed matter and quantum computation. The entropy and complexity properties of potentials with one and two Dirac-delta functions are here analytically calculated and numerically discussed in both position and momentum spaces. We have studied the information-theoretic lengths of Fisher, Renyi and Shannon types as well as the Cramer–Rao, Fisher–Shannon and LMC shape complexities of the lowest-lying stationary states of one-delta and twin-delta. They allow us to grasp and quantify different facets of the spreading of the charge and momentum of the system far beyond the celebrated standard deviation.

Journal ArticleDOI
TL;DR: This paper uses the detectability lemma (DL), introduced by Aharonov et al. in the context of the quantum PCP challenge, to derive simpler and more intuitive proofs of Hastings' exponential decay of correlations and his seminal one-dimensional (1D) area law.
Abstract: Quantum Hamiltonian complexity, an emerging area at the intersection of condensed matter physics and quantum complexity theory, studies the properties of local Hamiltonians and their ground states. In this paper we focus on a seemingly specialized technical tool, the detectability lemma (DL), introduced in the context of the quantum PCP challenge (Aharonov et al 2009 arXiv:0811.3412), which is a major open question in quantum Hamiltonian complexity. We show that a reformulated version of the lemma is a versatile tool that can be used in place of the celebrated Lieb-Robinson (LR) bound to prove several important results in quantum Hamiltonian complexity. The resulting proofs are much simpler, more combinatorial and provide a plausible path toward tackling some fundamental open questions in Hamiltonian complexity. We provide an alternative simpler proof of the DL that removes a key restriction in the original statement (Aharonov et al 2009 arXiv:0811.3412), making it more suitable for the broader context of quantum Hamiltonian complexity. Specifically, we first use the DL to provide a one-page proof of Hastings' result that the correlations in the ground states of gapped Hamiltonians decay exponentially with distance (Hastings 2004 Phys. Rev. B 69 104431). We then apply the DL to derive a simpler and more intuitive proof of Hastings' seminal one-dimensional (1D) area law (Hastings 2007 J. Stat. Mech. (2007) P8024) (both these proofs are restricted to frustration-free systems). Proving the area law for two and higher dimensions is one of the most important open questions in the field of Hamiltonian complexity.

Proceedings ArticleDOI
22 Oct 2011
TL;DR: In this paper, the results of Bravyi and Vyalyi are extended to three-qubit interactions, implying that Kitaev's Toric code construction is, in a well-defined sense, optimal as a construction of Topological Order based on commuting Hamiltonians.
Abstract: The local Hamiltonian problem plays the equivalent role of SAT in quantum complexity theory. Understanding the complexity of the intermediate case in which the constraints are quantum but all local terms in the Hamiltonian commute, is of importance for conceptual, physical and computational complexity reasons. Bravyi and Vyalyi showed in 2003, using a clever application of the representation theory of C*-algebras, that if the terms in the Hamiltonian are all two-local, the problem is in NP, and the entanglement in the ground states is local. The general case remained open since then. In this paper we extend this result beyond the two-local case, to the case of three-qubit interactions. We then extend our results even further, and show that NP verification is possible for three-wise interaction between qutrits as well, as long as the interaction graph is planar and also "nearly Euclidean" in some well-defined sense. The proofs imply that in all such systems, the entanglement in the ground states is local. These extensions imply an intriguing sharp transition phenomenon in commuting Hamiltonian systems: the ground spaces of 3-local "physical" systems based on qubits and qutrits are diagonalizable by a basis whose entanglement is highly local, while even slightly more involved interactions (the particle dimensionality or the locality of the interaction is larger) already exhibit an important long-range entanglement property called Topological Order. Our results thus imply that Kitaev's celebrated Toric code construction is, in a well defined sense, optimal as a construction of Topological Order based on commuting Hamiltonians.

Posted Content
TL;DR: The learning graph technique from arXiv:1105.4024 is used to give a quantum algorithm for the k-distinctness problem that runs in o(n^{3/4}) queries, for a fixed k, given some prior knowledge on the structure of the input.
Abstract: It is known that the dual of the general adversary bound can be used to build quantum query algorithms with optimal complexity. Despite this result, not many quantum algorithms have been designed this way. This paper shows another example of such an algorithm. We use the learning graph technique from arXiv:1105.4024 to give a quantum algorithm for the $k$-distinctness problem that runs in $o(n^{3/4})$ queries, for a fixed $k$, given some prior knowledge on the structure of the input. The best known quantum algorithm for the unconditional problem uses $O(n^{k/(k+1)})$ queries.

Proceedings ArticleDOI
23 May 2011
TL;DR: It is shown that ternary encoding leads to quantum circuits that have significantly fewer qudits and lower quantum costs, and in the case of serial realization of quantum computers, the ternary algorithms and circuits are also faster.
Abstract: The paper presents a generalization of the well-known Grover Algorithm to operate on ternary quantum circuits. We compare the complexity of oracles and some of their commonly used components for binary and ternary cases and various sizes and densities of colored graphs. We show that ternary encoding leads to quantum circuits that have significantly fewer qudits and lower quantum costs. In the case of serial realization of quantum computers, our ternary algorithms and circuits are also faster.
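For reference, the binary case that the paper generalizes can be simulated classically in a few lines. This sketch (my own illustration, assuming a single marked item and N a power of two, not the paper's ternary construction) shows the two steps of a Grover iteration: the oracle's phase flip and the inversion-about-the-mean diffusion:

```python
import math

def grover_search(n_qubits, marked):
    """Classically simulate Grover's algorithm on N = 2**n_qubits basis
    states with one marked item; return the most probable measurement."""
    N = 2 ** n_qubits
    amp = [1.0 / math.sqrt(N)] * N            # uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(N))
    for _ in range(iterations):
        amp[marked] = -amp[marked]            # oracle: phase-flip marked state
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]     # diffusion: invert about the mean
    return max(range(N), key=lambda i: amp[i] ** 2)
```

After roughly (π/4)√N iterations, nearly all amplitude concentrates on the marked item.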

Journal ArticleDOI
TL;DR: This work determines the complexity of several constraint-satisfaction problems using the heuristic algorithm WalkSAT, and finds that, perhaps surprisingly, the hardest model for WalkSAT is the one for which there is a polynomial time algorithm.
Abstract: We determine the complexity of several constraint-satisfaction problems using the heuristic algorithm WalkSAT. At large sizes N, the complexity increases exponentially with N in all cases. Perhaps surprisingly, out of all the models studied, the hardest for WalkSAT is the one for which there is a polynomial time algorithm.
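The WalkSAT heuristic studied above is short enough to sketch. The minimal version below is my own simplification (the "greedy" step naively recounts unsatisfied clauses rather than tracking break counts as real implementations do), but it conveys the characteristic mix of random-walk and greedy moves:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT: repeatedly pick an unsatisfied clause and flip one of its
    variables -- a random one with probability p, otherwise the variable
    whose flip leaves the fewest clauses unsatisfied.
    Literals are +v / -v for 1-indexed variable v."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]

    def sat(lit):
        v = abs(lit) - 1
        return assign[v] if lit > 0 else not assign[v]

    def num_unsat():
        return sum(1 for c in clauses if not any(sat(l) for l in c))

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                    # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            lit = rng.choice(clause)         # random-walk move
        else:
            best, lit = None, clause[0]      # greedy move
            for cand in clause:
                v = abs(cand) - 1
                assign[v] = not assign[v]    # try the flip...
                cost = num_unsat()
                assign[v] = not assign[v]    # ...and undo it
                if best is None or cost < best:
                    best, lit = cost, cand
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None                              # gave up after max_flips
```

On hard instances the number of flips needed grows exponentially with problem size, which is the complexity measure the paper studies.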

Book
30 Jun 2011
TL;DR: This book studies error and complexity in numerical methods and in error-free, parallel, and probabilistic computations.
Abstract: Contents: Chapter 1: Introduction; Chapter 2: Error: Precisely What, Why, and How; Chapter 3: Complexity: What, Why, and How; Chapter 4: Errors and Approximations in Digital Computers; Chapter 5: Error and Complexity in Numerical Methods; Chapter 6: Error and Complexity in Error-Free, Parallel, and Probabilistic Computations; Index.

Journal ArticleDOI
TL;DR: An extension of the complexity space of partial functions is constructed and it is shown that it is an appropriate mathematical tool for the complexity analysis of algorithms and for the validation of recursive definitions of programs.
Abstract: The study of the dual complexity space, introduced by S. Romaguera and M.P. Schellekens [Quasi-metric properties of complexity spaces, Topol. Appl. 98 (1999), pp. 311-322], constitutes a part of the interdisciplinary research on Computer Science and Topology. The relevance of this theory is given by the fact that it allows one to apply fixed point techniques of denotational semantics to complexity analysis. Motivated by this fact and with the intention of obtaining a mixed framework valid for both disciplines, a new complexity space formed by partial functions was recently introduced and studied by S. Romaguera and O. Valero [On the structure of the space of complexity partial functions, Int. J. Comput. Math. 85 (2008), pp. 631-640]. An application of the complexity space of partial functions to model certain processes that arise, in a natural way, in symbolic computation was given in the aforementioned reference. In this paper, we enter more deeply into the relationship between semantics and complexity analysis of programs. We construct an extension of the complexity space of partial functions and show that it is, at the same time, an appropriate mathematical tool for the complexity analysis of algorithms and for the validation of recursive definitions of programs. As applications of our complexity framework, we show the correctness of the denotational specification of the factorial function and give an alternative formal proof of the asymptotic upper bound for the average case analysis of Quicksort.
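The average-case Quicksort bound mentioned at the end can be checked numerically from the standard recurrence for the expected number of comparisons (textbook material, not the paper's fixed-point machinery): C(n) = (n - 1) + (2/n) Σ_{k<n} C(k), with C(0) = C(1) = 0. A sketch:

```python
from math import log

def quicksort_avg_comparisons(n_max):
    """Expected comparison counts of randomized Quicksort, computed from
    C(n) = (n - 1) + (2/n) * sum_{k < n} C(k), C(0) = C(1) = 0."""
    C = [0.0] * (n_max + 1)
    running = 0.0                          # running sum of C(0..n-1)
    for n in range(2, n_max + 1):
        running += C[n - 1]
        C[n] = (n - 1) + 2.0 * running / n
    return C

# The expected count stays below the classical 2 n ln n upper bound.
C = quicksort_avg_comparisons(1000)
upper_bound_holds = all(C[n] < 2 * n * log(n) for n in range(2, 1001))
```

For example C(2) = 1 and C(3) = 8/3, and the 2 n ln n asymptotic upper bound holds at every computed size.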

Dissertation
01 Jan 2011
TL;DR: This thesis presents a refined framework for computable analysis that is suitable for discussing computational complexity; the key idea is to use (a certain class of) string functions, which are more expressive than infinite sequences, as names representing objects such as real numbers, sets, and functions.
Abstract: Computable analysis studies problems involving real numbers, sets and functions from the viewpoint of computability. Elements of uncountable sets (such as real numbers) are represented through approximation and processed by Turing machines. However, application of this approach to computational complexity has been limited in generality. In this thesis, we present a refined framework that is suitable for discussing computational complexity. The key idea is to use (a certain class of) string functions as names representing these objects. These are more expressive than infinite sequences, which served as names in prior work that formulated complexity in more restricted settings. An important advantage of using string functions is that we can define their size in the way inspired by higher-type complexity theory. This enables us to talk about computation on string functions whose time or space is bounded polynomially in the input size, giving rise to more general analogues of the classes P, NP, and PSPACE. We also define NP- and PSPACE-completeness under suitable many-one reductions. Because our framework separates machine computation and semantics, it can be applied to problems on sets of interest in analysis once we specify a suitable representation (encoding). As prototype applications, we consider the complexity of several problems whose inputs and outputs are real numbers, real sets, and real functions. The latter two cannot be represented succinctly using existing approaches based on infinite sequences, so ours is the first treatment of functions on them. As an interesting example, the task of numerical algorithms for solving the initial value problem of differential equations is naturally viewed as an operator taking real functions to real functions. Because there was no complexity theory for operators, previous results could only state how complex the solution can be. 
We now reformulate them and show that the operator itself is polynomial-space complete. We survey some of such complexity results involving real numbers and cast them in our framework.

Book
29 Jun 2011
TL;DR: This book covers abstract complexity theory; P, NP, and E; quantum computation; one-way functions and pseudo-random generators; optimization problems; and tail bounds.
Abstract: Contents: Preface. 1. Preliminaries. 2. Abstract complexity theory. 3. P, NP, and E. 4. Quantum computation. 5. One-way functions, pseudo-random generators. 6. Optimization problems. A. Tail bounds. Bibliography. Index.