
Showing papers on "Quantum complexity theory" published in 2012


Journal ArticleDOI
TL;DR: This work shows how to systematically construct quantum models that break this classical bound, and that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics.
Abstract: Mathematical models are an essential component of quantitative science. They generate predictions about the future, based on information available in the present. In the spirit of "simpler is better": should two models make identical predictions, the one that requires less input is preferred. Yet, for almost all stochastic processes, even the provably optimal classical models waste information. The amount of input information they demand exceeds the amount of predictive information they output. Here we show how to systematically construct quantum models that break this classical bound, and that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics. This indicates that many observed phenomena could be significantly simpler than classically possible should quantum effects be involved.
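For orientation, in the notation standard in this line of work (the symbols are not defined in the abstract above): writing $E$ for the predictive information shared between past and future, $C_\mu$ for the internal entropy of the provably optimal classical model, and $C_q$ for that of the constructed quantum model, the result takes the form $E \le C_q \le C_\mu$, with the second inequality typically strict; that gap is the precise sense in which the classical bound is broken.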

132 citations


Proceedings ArticleDOI
20 Oct 2012
TL;DR: In this paper, a quantum algorithm for the k-distinctness problem is presented, which solves the problem using fewer queries than the previous algorithm by Ambainis.
Abstract: We present a quantum algorithm solving the k-distinctness problem using fewer queries than the previous algorithm by Ambainis. The construction uses a modified learning graph approach. Compared to the recent paper by Belovs and Lee, the algorithm doesn't require any prior information on the input, and the complexity analysis is much simpler.
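For orientation, the explicit bounds usually quoted for this problem (they are not stated in the abstract above): the previous quantum-walk algorithm of Ambainis uses $O(n^{k/(k+1)})$ queries, while the learning-graph approach achieves $O(n^{1 - 2^{k-2}/(2^k - 1)})$ queries, which matches the optimal $O(n^{2/3})$ element-distinctness bound at $k = 2$ and has a strictly smaller exponent than $k/(k+1)$ for every $k \ge 3$.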

95 citations


Proceedings ArticleDOI
08 Jan 2012
TL;DR: It is shown that there are n-bit correlated equilibria which can be generated by only one EPR pair followed by local operations (without communication), but which need at least log_2(n) classical shared random bits plus communication.
Abstract: We propose a simple yet rich model to extend strategic games to the quantum setting, in which we define quantum Nash and correlated equilibria and study the relations between classical and quantum equilibria. Unlike all previous work, which focused on qualitative questions about specific games of very small size, we quantitatively address the following fundamental question for general games of growing size: how much "advantage" can playing quantum strategies provide, if any? Two measures of the advantage are studied.

1. Since a game is mainly about each player trying to maximize her individual payoff, a natural measure is the increase in payoff obtained by playing quantum strategies. We consider natural mappings between classical and quantum states, and study how well those mappings preserve equilibrium properties. Among other results, we exhibit a correlated equilibrium p whose quantum superposition counterpart [EQUATION] is far from being a quantum correlated equilibrium; in fact, a player can increase her payoff from almost 0 to almost 1 in a [0, 1]-normalized game. We achieve this by a tensor product construction on carefully designed base cases. The result can also be interpreted in the spirit of Meyer's comparison [47]: in a state where no classical player can gain, one player using a quantum computer has a huge advantage over continuing to play classically.

2. Another measure is the hardness of generating correlated equilibria, for which we propose to study correlation complexity, a new complexity measure for correlation generation. We show that there are n-bit correlated equilibria which can be generated by only one EPR pair followed by local operations (without communication), but which need at least log_2(n) classical shared random bits plus communication. The randomized lower bound can be improved to n, the best possible, assuming (even a much weaker version of) a recent conjecture in linear algebra. We believe that correlation complexity, as a complexity-theoretic counterpart of the celebrated Bell inequality, has independent interest in both physics and computational complexity theory and deserves further exploration.
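As a toy illustration of the second point, that one EPR pair already yields nontrivial correlation with no communication at all, the sketch below simulates measuring both halves of $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$ in the computational basis; the two parties obtain perfectly correlated uniform bits. This is only the textbook phenomenon the construction builds on, not the particular equilibrium distribution from the paper.

import numpy as np

def sample_epr_measurements(num_samples, rng=None):
    """Measure both halves of |Phi+> = (|00> + |11>)/sqrt(2) in the
    computational basis. Outcomes 00 and 11 each occur with probability
    1/2, so the two separated parties get perfectly correlated uniform
    bits without exchanging any message."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.array([0.5, 0.0, 0.0, 0.5])   # Born probabilities for 00, 01, 10, 11
    outcomes = rng.choice(4, size=num_samples, p=probs)
    return outcomes // 2, outcomes % 2        # Alice's bits, Bob's bits

if __name__ == "__main__":
    a, b = sample_epr_measurements(10_000)
    print("agreement rate:", np.mean(a == b))  # ~1.0: perfect correlation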

76 citations


Posted Content
TL;DR: 'Complex' is a special attribute we can give to many kinds of systems; although it is often used as a synonym of "difficult," it has a specific epistemological meaning, one that the emerging science of complexity is coming to share.

Abstract: 'Complex' is a special attribute we can give to many kinds of systems. Although it is often used as a synonym of 'difficult,' it has a specific epistemological meaning, one that the emerging science of complexity is coming to share. 'Difficult' describes an object which, given adequate computational power, can be deterministically or stochastically predicted. By contrast, 'complex' describes an object which cannot be predicted, either because of logical impossibility or because its prediction would require computational power far beyond any physical feasibility, now and forever. Because complexity refers to some observing system, it is always subjective, and it is therefore defined as observed irreducible complexity. Human systems are affected by several sources of complexity, belonging to three classes listed in order of decreasing restrictiveness. Systems belonging to the first class are not predictable at all, those belonging to the second class are predictable only through an infinite computational capacity, and those belonging to the third class are predictable only through a trans-computational capacity. The first class has two sources of complexity: logical complexity, deriving directly from self-reference and Gödel's incompleteness theorems, and relational complexity, resulting in a sort of indeterminacy principle occurring in social systems. The second class has three sources of complexity: gnosiological complexity, which consists of the variety of possible perceptions; semiotic complexity, which represents the infinite possible interpretations of signs and facts; and chaotic complexity, which characterizes phenomena of nonlinear dynamic systems. The third class coincides with computational complexity, which basically coincides with the mathematical concept of intractability. Artificial, natural, biological, and human systems are characterized by the influence of different sources of complexity, and human systems appear to be the most complex.

70 citations


BookDOI
31 Jul 2012
TL;DR: This book consists of four survey papers concerning these recent studies on resource-bounded Kolmogorov complexity and computational complexity; it is the only collection of survey papers on this subject and provides fundamental information for researchers in the field.
Abstract: There are many ways to measure the complexity of a given object, but there are two measures of particular importance in the theory of computing: One is Kolmogorov complexity, which measures the amount of information necessary to describe an object. Another is computational complexity, which measures the computational resources necessary to recognize (or produce) an object. The relation between these two complexity measures has been studied since the 1960s. More recently, the more generalized notion of resource bounded Kolmogorov complexity and its relation to computational complexity have received much attention. Now many interesting and deep observations on this topic have been established. This book consists of four survey papers concerning these recent studies on resource bounded Kolmogorov complexity and computational complexity. It also contains one paper surveying several types of Kolmogorov complexity measures. The papers are based on invited talks given at the AAAI Spring Symposium on Minimal-Length Encoding in 1990. The book is the only collection of survey papers on this subject and provides fundamental information for researchers in the field.

53 citations


Proceedings ArticleDOI
19 May 2012
TL;DR: In this article, the authors proposed a quantum algorithm for the triangle problem with query complexity $O(n^{35/27})$, which is better than the $O(n^{13/10})$ complexity of the best previously known algorithm by Magniez et al.
Abstract: Besides the Hidden Subgroup Problem, the second large class of quantum speed-ups is for functions with constant-sized 1-certificates. This includes the OR function, solvable by the Grover algorithm, element distinctness, the triangle problem, and others. The usual way to solve them is by quantum walk on the Johnson graph. We propose a solution for the same problems using span programs. The span program is a computational model equivalent to the quantum query algorithm in its strength, and yet very different in form. We prove the power of our approach by designing a quantum algorithm for the triangle problem with query complexity $O(n^{35/27})$, which is better than the $O(n^{13/10})$ complexity of the best previously known algorithm by Magniez et al.
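Making the claimed improvement explicit (simple arithmetic on the exponents above): $35/27 \approx 1.2963 < 1.3 = 13/10$, so the span-program algorithm carries a strictly smaller exponent than the previous quantum-walk bound for triangle finding.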

49 citations


Posted Content
TL;DR: In this paper, the authors argue that the standard scientific paradigm of "predict and verify" cannot be applied to testing quantum mechanics in this limit of high complexity, and they describe how QM can be tested in this regime by extending the usual scientific paradigm to include interactive experiments.
Abstract: Quantum computation teaches us that quantum mechanics exhibits exponential complexity. We argue that the standard scientific paradigm of "predict and verify" cannot be applied to testing quantum mechanics in this limit of high complexity. We describe how QM can be tested in this regime by extending the usual scientific paradigm to include interactive experiments.

33 citations


Journal ArticleDOI
TL;DR: Results of case analysis show that the quantum-learning-based associative neural network model proposed in this paper improves on previous models in avoiding additional qubits or extraordinary initial operators, in pattern storage, and in recall speed.
Abstract: Based on an analysis of the properties of quantum linear superposition, and to overcome the complexity of the existing quantum associative memory proposed by Ventura, a new storage method for multiple patterns is proposed in this paper by constructing the quantum array with binary decision diagrams. In addition, the adoption of a nonlinear search algorithm increases the pattern-recall speed of this multi-pattern model to $O(\log_2 2^{\,n-t}) = O(n-t)$ time complexity, where $n$ is the number of qubits and $t$ is the number of qubits carrying the stored quantum information. Results of case analysis show that the associative neural network model proposed in this paper, based on quantum learning, improves on previous models in avoiding additional qubits or extraordinary initial operators, in pattern storage, and in recall speed.

33 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a method for estimating the complexity of an image based on Bennett's concept of logical depth, which is used to classify images by their information content.
Abstract: We present a method for estimating the complexity of an image based on Bennett's concept of logical depth. Bennett identified logical depth as the appropriate measure of organized complexity, and hence as being better suited to the evaluation of the complexity of objects in the physical world. Its use results in a different, and in some sense a finer characterization than is obtained through the application of the concept of Kolmogorov complexity alone. We use this measure to classify images by their information content. The method provides a means for classifying and evaluating the complexity of objects by way of their visual representations. To the authors' knowledge, the method and application inspired by the concept of logical depth presented herein are being proposed and implemented for the first time.
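The abstract does not spell out the estimation procedure; as a rough, minimal sketch of the general idea behind compression-based logical-depth estimates (the running time needed to regenerate an object from a near-shortest description, with compressed size standing in for Kolmogorov complexity), one can time the decompression of a maximally compressed byte string. The use of zlib and this particular proxy are illustrative assumptions, not the authors' exact method.

import os
import time
import zlib

def logical_depth_proxy(data: bytes, trials: int = 20) -> float:
    """Crude proxy for Bennett's logical depth: average time to regenerate
    `data` from its zlib-compressed form (len(compressed) would be the
    corresponding crude proxy for Kolmogorov complexity)."""
    compressed = zlib.compress(data, 9)
    start = time.perf_counter()
    for _ in range(trials):
        zlib.decompress(compressed)
    return (time.perf_counter() - start) / trials

if __name__ == "__main__":
    regular = bytes([0, 255] * 50_000)      # highly regular "image-like" data
    random_ish = os.urandom(100_000)        # incompressible data
    print("regular  :", logical_depth_proxy(regular))
    print("random   :", logical_depth_proxy(random_ish))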

33 citations


Journal ArticleDOI
TL;DR: A novel quantum genetic algorithm is introduced that has a quantum crossover procedure performing crossovers among all chromosomes in parallel for each generation; a quadratic speedup is achieved over its classical counterpart in the dominant factor of the run time needed to handle each generation.
Abstract: In the context of evolutionary quantum computing in the literal sense, a quantum crossover operation has not been introduced so far. Here, we introduce a novel quantum genetic algorithm which has a quantum crossover procedure performing crossovers among all chromosomes in parallel for each generation. A complexity analysis shows that a quadratic speedup is achieved over its classical counterpart in the dominant factor of the run time needed to handle each generation.
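For context, a minimal classical sketch of the operation being parallelized: one-point crossover applied pair by pair across a population of bit-string chromosomes. The quantum procedure in the paper performs these crossovers in superposition; the claimed quadratic speedup concerns the dominant per-generation factor that the explicit loop below spends on the population. The encoding and pairing scheme here are illustrative assumptions, not the paper's construction.

import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Classical one-point crossover of two equal-length bit-string chromosomes."""
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

def crossover_generation(population, rng=random):
    """Apply crossover to every adjacent pair; the classical cost of this
    loop is the per-generation factor a quantum crossover aims to reduce."""
    offspring = []
    for i in range(0, len(population) - 1, 2):
        offspring.extend(one_point_crossover(population[i], population[i + 1], rng))
    return offspring

if __name__ == "__main__":
    pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
    print(crossover_generation(pop))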

33 citations


Proceedings ArticleDOI
17 Jan 2012
TL;DR: The results show that the product of two n x n Boolean matrices can be computed on a quantum computer in time $O(n^{3/2} + n\ell^{3/4})$, where $\ell$ is the number of non-zero entries in the product, improving over the output-sensitive quantum algorithm by Buhrman and Spalek that runs in [EQUATION] time.
Abstract: We present new quantum algorithms for Boolean Matrix Multiplication in both the time complexity and the query complexity settings. As far as time complexity is concerned, our results show that the product of two n x n Boolean matrices can be computed on a quantum computer in time $O(n^{3/2} + n\ell^{3/4})$, where $\ell$ is the number of non-zero entries in the product, improving over the output-sensitive quantum algorithm by Buhrman and Spalek that runs in [EQUATION] time. This is done by constructing a quantum version of a recent algorithm by Lingas, using quantum techniques such as quantum counting to exploit the sparsity of the output matrix. As far as query complexity is concerned, our results improve over the quantum algorithm by Vassilevska Williams and Williams based on a reduction to the triangle finding problem. One of the main contributions leading to this improvement is the construction of a triangle finding quantum algorithm tailored especially for the tripartite graphs appearing in the reduction.
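Reading off the two terms of the stated time bound (simple arithmetic, in the notation above with $\ell$ non-zero output entries): $n^{3/2} \ge n\,\ell^{3/4}$ exactly when $\ell \le n^{2/3}$, so the $n^{3/2}$ term dominates for sparse products, while for denser outputs the output-sensitive term $n\,\ell^{3/4}$ takes over, reaching $n^{5/2}$ at $\ell = n^2$.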

Proceedings ArticleDOI
20 Oct 2012
TL;DR: The following compression lemma is proved: given a protocol for a function f with information complexity I, one can construct a zero-communication protocol that has non-abort probability at least $2^{-O(I)}$ and that computes f correctly with high probability conditioned on not aborting.
Abstract: We show that almost all known lower bound methods for communication complexity are also lower bounds on information complexity. In particular, we define a relaxed version of the partition bound of Jain and Klauck and prove that it lower bounds the information complexity of any function. Our relaxed partition bound subsumes all norm-based methods (e.g., the $\gamma_2$ method) and rectangle-based methods (e.g., the rectangle/corruption bound, the smooth rectangle bound, and the discrepancy bound), except the partition bound. Our result uses a new connection between rectangles and zero-communication protocols, where the players can either output a value or abort. We prove the following compression lemma: given a protocol for a function f with information complexity I, one can construct a zero-communication protocol that has non-abort probability at least $2^{-O(I)}$ and that computes f correctly with high probability conditioned on not aborting. Then, we show how such a zero-communication protocol relates to the relaxed partition bound. We use our main theorem to resolve three of the open questions raised by Braverman. First, we show that the information complexity of the Vector in Subspace Problem is $\Omega(n^{1/3})$, which, in turn, implies that there exists an exponential separation between quantum communication complexity and classical information complexity. Moreover, we provide an $\Omega(n)$ lower bound on the information complexity of the Gap Hamming Distance Problem.

Journal ArticleDOI
TL;DR: This paper studies the quantum query complexity of finding all k false coins among the N given coins using a balance scale, and proves bounds that hold for any k and N.

Journal ArticleDOI
TL;DR: This note studies the number of quantum queries required to identify an unknown multilinear polynomial of degree d in n variables over a finite field $\mathbb{F}_q$, and gives an exact quantum algorithm that uses $O(n^{d-1})$ queries for constant d, which is optimal.

Proceedings ArticleDOI
26 Sep 2012
TL;DR: This paper proposes a unified natural framework for the study of computability and complexity of partition functions and graph polynomials and shows how classical results can be cast in this framework.
Abstract: Partition functions and graph polynomials have found many applications in combinatorics, physics, biology and even the mathematics of finance. Studying their complexity poses some problems. To capture the complexity of their combinatorial nature, the Turing model of computation and Valiant's notion of counting complexity classes seem most natural. To capture the algebraic and numeric nature of partition functions as real or complex valued functions, the Blum-Shub-Smale (BSS) model of computation seems more natural. As a result many papers use a naive hybrid approach in discussing their complexity or restrict their considerations to sub-fields of $\mathbb{C}$ which can be coded in a way that allows dealing with Turing computability. In this paper we propose a unified natural framework for the study of computability and complexity of partition functions and graph polynomials and show how classical results can be cast in this framework.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a new complexity class called PQMA_log(2): informally, the class of languages for which membership has a logarithmic-size quantum proof with perfect completeness and soundness polynomially close to 1, in a setting where the verifier is provided a proof with two unentangled parts.
Abstract: In this article, we introduce a new complexity class called PQMA_log(2). Informally, this is the class of languages for which membership has a logarithmic-size quantum proof with perfect completeness and soundness polynomially close to 1, in a setting where the verifier is provided a proof with two unentangled parts. We then show that PQMA_log(2) = NP. For this to be possible, it is important, when defining the class, not to give too much power to the verifier. This result, when compared with the fact that QMA_log = BQP, gives us new insight into the power of quantum information and the impact of entanglement.


Journal ArticleDOI
TL;DR: A new formulation of the complexity profile is presented, which expands its possible application to high-dimensional real-world and mathematically defined systems and defines a class of related complexity profile functions for a given system, demonstrating the generality of the formalism.
Abstract: Quantifying the complexity of systems consisting of many interacting parts has been an important challenge in the field of complex systems in both abstract and applied contexts. One approach, the complexity profile, is a measure of the information to describe a system as a function of the scale at which it is observed. We present a new formulation of the complexity profile, which expands its possible application to high-dimensional real-world and mathematically defined systems. The new method is constructed from the pairwise dependencies between components of the system. The pairwise approach may serve as both a formulation in its own right and a computationally feasible approximation to the original complexity profile. We compare it to the original complexity profile by giving cases where they are equivalent, proving properties common to both methods, and demonstrating where they differ. Both formulations satisfy linear superposition for unrelated systems and conservation of total degrees of freedom (sum rule). The new pairwise formulation is also a monotonically non-increasing function of scale. Furthermore, we show that the new formulation defines a class of related complexity profile functions for a given system, demonstrating the generality of the formalism.
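As a toy illustration of what "pairwise dependencies between components" can mean in practice (this is only an example of the kind of pairwise quantity such a construction starts from, not the paper's complexity-profile formula), the sketch below estimates the mutual information between every pair of binary components from joint samples.

import numpy as np
from itertools import combinations

def pairwise_mutual_information(samples):
    """Empirical mutual information (bits) for every pair of binary columns
    of `samples`, an array of shape (n_samples, n_components)."""
    _, m = samples.shape
    mi = np.zeros((m, m))
    for i, j in combinations(range(m), 2):
        joint = np.zeros((2, 2))
        for a in (0, 1):
            for b in (0, 1):
                joint[a, b] = np.mean((samples[:, i] == a) & (samples[:, j] == b))
        pi, pj = joint.sum(axis=1), joint.sum(axis=0)
        val = sum(joint[a, b] * np.log2(joint[a, b] / (pi[a] * pj[b]))
                  for a in (0, 1) for b in (0, 1) if joint[a, b] > 0)
        mi[i, j] = mi[j, i] = val
    return mi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=(5000, 1))
    data = np.hstack([x, x, rng.integers(0, 2, size=(5000, 1))])  # columns 0 and 1 identical
    print(pairwise_mutual_information(data).round(2))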

Posted Content
TL;DR: A new model of communication complexity, the garden-hose model, is defined, which enables us to prove upper bounds on the number of EPR pairs needed to attack position-based quantum cryptography schemes.
Abstract: We study position-based cryptography in the quantum setting. We examine a class of protocols that only require the communication of a single qubit and 2n bits of classical information. To this end, we define a new model of communication complexity, the garden-hose model, which enables us to prove upper bounds on the number of EPR pairs needed to attack such schemes. This model furthermore opens up a way to link the security of position-based quantum cryptography to traditional complexity theory.

01 Jan 2012
TL;DR: In this paper, it was shown that the correct lower bound for approximate degree and bounded-error quantum query complexity is Ω(log n / log log n) for Boolean functions depending on n variables.
Abstract: It has long been known that any Boolean function that depends on n input variables has both degree and exact quantum query complexity of Ω(log n), and that this bound is achieved for some functions. In this paper, we study the case of approximate degree and bounded-error quantum query complexity. We show that for these measures, the correct lower bound is Ω(log n/ log log n), and we exhibit quantum algorithms for two functions where this bound is achieved.
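The abstract does not name a function attaining the $\Omega(\log n)$ exact bound; the standard example is the address (multiplexer) function: with $k$ selector bits $x$ and $2^k$ target bits $y$, it depends on all $n = k + 2^k$ variables, yet $\mathrm{ADDR}(x,y) = \sum_{a \in \{0,1\}^k} \big( \prod_{i=1}^{k} x_i^{a_i}(1-x_i)^{1-a_i} \big)\, y_a$ is a polynomial of degree $k + 1 = \Theta(\log n)$.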

Journal ArticleDOI
TL;DR: In the paper, several complexity issues inspired by computational biology are presented.
Abstract: The progress of research in computational biology, visible over the last decades, has brought, among other things, new insight into complexity issues. The latter, previously studied mainly within computer science and operations research, have gained from the confrontation with problems from this new area. In the paper, several complexity issues inspired by computational biology are presented.

Proceedings ArticleDOI
01 Jun 2012
TL;DR: In this paper, some complexity aspects of an incremental algorithm for the creation of generalized one-sided concept lattices are provided, and it is shown that the complexity of the presented algorithm asymptotically becomes a linear function of the number of objects in the formal context.
Abstract: In this paper we provide some complexity aspects of an incremental algorithm for the creation of generalized one-sided concept lattices. The novelty of this algorithm lies in its ability to work with different types of attributes and to produce a one-sided concept lattice from a generalized one-sided formal context. As shown in the paper, the complexity of the algorithm is in general exponential. However, in practice it is reasonable to consider special cases where the number of attributes is fixed. Then the complexity of the presented algorithm asymptotically becomes a linear function of the number of objects in the formal context.

Book ChapterDOI
01 Jan 2012
TL;DR: This paper focuses its attention on satisfiability problems because they play a key role in the definition of both parameterized complexity and structural complexity classes, and because they model numerous important problems in computer science.
Abstract: Since its inception in the 1990s, parameterized complexity has established itself as one of the major research areas in theoretical computer science. Parameterized and kernelization algorithms have proved to be very useful for solving important problems in various domains of science and technology. Moreover, parameterized complexity has shown deep connections to traditional areas of theoretical computer science, such as structural complexity theory and approximation algorithms. In this paper, we discuss some of the recent results pertaining to the relation between parameterized complexity and subexponential-time computability. We focus our attention on satisfiability problems because they play a key role in the definition of both parameterized complexity and structural complexity classes, and because they model numerous important problems in computer science.

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the Fisher-Shannon statistical measure of complexity for a continuous manifold of quantum observables and showed that evaluating this measure only in configuration space or only in momentum space does not provide an adequate characterization of the complexity of some quantum systems.
Abstract: The Fisher–Shannon statistical measure of complexity is analyzed for a continuous manifold of quantum observables. It is shown that evaluating this measure only in configuration space or only in momentum space does not provide an adequate characterization of the complexity of some quantum systems. In order to obtain a more complete description of complexity, two new measures, based respectively on the minimization and the integration of the usual Fisher–Shannon measure over the whole parameter space, are proposed and compared. Finally, these measures are applied to the concrete case of a free particle in a box.
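For reference, the standard one-dimensional form of the measure, which the abstract does not restate: for a probability density $\rho$, the Fisher–Shannon complexity is the product $C_{FS}[\rho] = I[\rho]\, J[\rho]$, with Fisher information $I[\rho] = \int \rho'(x)^2/\rho(x)\, dx$ and entropy power $J[\rho] = \frac{1}{2\pi e}\, e^{2S[\rho]}$, where $S[\rho] = -\int \rho(x) \ln \rho(x)\, dx$; the paper's point is that evaluating this product in configuration space alone, or in momentum space alone, can miss part of a system's complexity.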

Journal ArticleDOI
TL;DR: An approach to complexity measurement is proposed here using the quantum-information formalism; it retains the generality of the classically based complexity measures while expressing the complexity of quantum systems in a framework other than the algorithmic one.
Abstract: In the past decades, efforts at quantifying the complexity of systems with a general tool have usually relied on using Shannon's classical information framework to address the disorder of the system through the Boltzmann–Gibbs–Shannon entropy, or one of its extensions. However, in recent years, there have been some attempts to tackle the quantification of algorithmic complexities in quantum systems based on the Kolmogorov algorithmic complexity, obtaining results that disagree with the classical approach. Therefore, an approach to the complexity measure is proposed here, using the quantum-information formalism, taking advantage of the generality of the classically based complexities, and capable of expressing the complexity of these systems in a framework other than the algorithmic one. To do so, the Shiner–Davison–Landsberg (SDL) complexity framework is considered jointly with the linear entropy of the density operators representing the analyzed systems, along with the tangle as the entanglement measure. The proposed measure is then applied to a family of maximally entangled mixed states.
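For reference, the standard ingredients the abstract refers to (definitions supplied here, not quoted from the paper): for a $d$-dimensional density operator $\rho$, the normalized linear entropy $S_L(\rho) = \frac{d}{d-1}\left(1 - \mathrm{Tr}\,\rho^2\right) \in [0,1]$ plays the role of the disorder $\Delta$; the simplest SDL complexity is $C_{SDL} = \Delta(1-\Delta)$, which vanishes for pure ($\Delta = 0$) and maximally mixed ($\Delta = 1$) states and peaks in between; and for two qubits the tangle $\tau$ (the squared concurrence) serves as the entanglement measure.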

Proceedings ArticleDOI
07 Jul 2012
TL;DR: This tutorial on black-box complexity aims to give an in-depth coverage of two topics in randomized search heuristics that received much attention in the last few years: stronger upper bounds and the connection to guessing games, and alternative black-box models.
Abstract: Together with Frank Neumann and Ingo Wegener, he founded the theory track at GECCO and served as its co-chair 2007-2009. He is a member of the editorial boards of Evolutionary Computation and Information Processing Letters. His research area includes theoretical aspects of randomized search heuristics, in particular run-time analysis and complexity theory.

This is a tutorial on black-box complexity, currently one of the hottest topics in the theory of randomized search heuristics. I shall try my best to:
- tell you on an elementary level what black-box complexity is and how it shapes our understanding of randomized search heuristics;
- give an in-depth coverage of two topics that received much attention in the last few years: stronger upper bounds and the connection to guessing games, and alternative black-box models;
- sketch several open problems.
Don't hesitate to ask questions whenever they come up!

Agenda:
- Part 1: Black-box complexity: a complexity theory for randomized search heuristics (RSH). Introduction/definition; lower bounds for all RSH (example: needle functions); thorn in the flesh: are there better RSH out there? (example: OneMax); different black-box models – what is the right difficulty measure?
- Part 2: Tools and techniques (in the language of guessing games). From black-box to guessing games; a general lower bound; how to play Mastermind; a new game; summary, open problems.

Why a complexity theory for RSH? Understand problem difficulty! How? Black-box complexity! What can we do with that? General lower bounds, a thorn in the flesh, different notions of black-box complexity.

Why a complexity theory for RSH? Understand problem difficulty! Randomized search heuristics (RSH) like evolutionary algorithms, genetic algorithms, ant colony optimization, simulated annealing, … are very successful for a variety of problems, yet there is little general advice on which problems are suitable for such general methods. The solution is a complexity theory for RSH, taking a similarly successful route as classical algorithmics: algorithmics designs good algorithms and analyzes their performance, while complexity theory shows that certain things are just not possible. The interplay between the two areas proved to be very fruitful for research on classic algorithms.

Algorithms vs. complexity theory for RSH – an example. Bottom line: Spanning tree is easy for RSH, the Needle problem …
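As a concrete toy version of the guessing-game view of black-box complexity mentioned in the agenda (our illustration, not material from the tutorial): for OneMax the algorithm only sees, for each query x, how many positions agree with a hidden string z, and the game is to identify z with as few queries as possible. The sketch below plays the naive strategy of querying uniformly random strings and keeping only the candidates consistent with all answers so far; enumerating all 2^n candidates keeps it feasible only for small n.

import random
from itertools import product

def onemax_guessing_game(n=10, seed=1):
    """Toy OneMax guessing game: query random strings, observe only the
    number of agreeing positions with the hidden string, and keep the
    candidates consistent with every answer. Returns the number of
    queries used until a single candidate remains."""
    rng = random.Random(seed)
    hidden = tuple(rng.randint(0, 1) for _ in range(n))
    candidates = list(product((0, 1), repeat=n))   # all 2^n strings: small n only
    queries = 0
    while len(candidates) > 1:
        q = tuple(rng.randint(0, 1) for _ in range(n))
        answer = sum(a == b for a, b in zip(q, hidden))
        candidates = [c for c in candidates
                      if sum(a == b for a, b in zip(q, c)) == answer]
        queries += 1
    assert candidates[0] == hidden
    return queries

if __name__ == "__main__":
    print([onemax_guessing_game(n=10, seed=s) for s in range(5)])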

Proceedings ArticleDOI
TL;DR: This article will review the current state of quantum algorithms, focusing on algorithms for problems with an algebraic flavor that achieve an apparent superpolynomial speedup over classical computation.
Abstract: Quantum computers can execute algorithms that sometimes dramatically outperform classical computation. Undoubtedly the best-known example of this is Shor's discovery of an efficient quantum algorithm for factoring integers, whereas the same problem appears to be intractable on classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article will review the current state of quantum algorithms, focusing on algorithms for problems with an algebraic flavor that achieve an apparent superpolynomial speedup over classical computation.

Proceedings ArticleDOI
26 Jun 2012
TL;DR: It is a remarkable fact that two prominent problems of algebraic complexity theory, the permanent versus determinant problem and the tensor rank problem (matrix multiplication), can be restated as explicit orbit closure problems, and asymptotic versions of the latter questions are of relevance in quantum information theory.
Abstract: It is a remarkable fact that two prominent problems of algebraic complexity theory, the permanent versus determinant problem and the tensor rank problem (matrix multiplication), can be restated as explicit orbit closure problems. This offers the potential to prove lower complexity bounds by relying on methods from algebraic geometry and representation theory. While this basic idea for the tensor rank problem goes back to work by Volker Strassen from the mid eighties, the geometric complexity program has gained visibility and momentum in the past years. Some modest lower bounds for border rank have recently been proven by the construction of explicit obstructions. For further progress, a better understanding of irreducible representations of symmetric groups (tensor products and plethysms) is required. Interestingly, asymptotic versions of the latter questions are of relevance in quantum information theory.

Journal ArticleDOI
TL;DR: In this article, the complexity of analytic functions of two variables is studied in terms of the order of complexity suggested in [1], and the Jacobian conjecture is proved for polynomial mappings of complexity one.
Abstract: The complexity of analytic functions of two variables is studied in terms of the order of complexity suggested in [1]. This paper continues [1]. An estimate for the complexity of a polynomial using its degree is given. Examples of homogeneous and harmonic functions are treated. An estimate for the complexity of a power series in terms of the geometry of the support of the series is given. Differential equations defining classes of complexity are considered. For polynomial mappings of complexity one, the Jacobian conjecture is proved. In this connection, the complexity of mappings of the plane is discussed.

Book ChapterDOI
21 Feb 2012
TL;DR: Some of the well-known results for plain and prefix-free complexities are extended to the general case of Blum universal static complexity, and it is proved that transducer complexity is a dual (Blum static) complexity measure.
Abstract: Dual complexity measures have been developed by Burgin, under the influence of the axiomatic system proposed by Blum in [3]. The concept of dual complexity measure is a generalization of Kolmogorov/Chaitin complexity, also known as algorithmic or static complexity. In this paper we continue this effort by extending some of the well-known results for plain and prefix-free complexities to the general case of Blum universal static complexity. We also extend some results obtained by Calude in [9] to a larger class of computable measures, proving that transducer complexity is a dual (Blum static) complexity measure.