
Showing papers on "Average-case complexity published in 2009"


Journal ArticleDOI
TL;DR: Algebraic relativization, or algebrization, as discussed by the authors is a new barrier to progress in complexity theory: when relativizing a complexity class inclusion, the simulating machine should be given access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring.
Abstract: Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (e.g., that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this article, we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known nonrelativizing results based on arithmetization---both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MAEXP ⊄ P/poly---do indeed algebrize. Second, we show that almost all of the major open problems---including P versus NP, P versus RP, and NEXP versus P/poly---will require non-algebrizing techniques. In some cases, algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this article and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA-protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP.
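For context, the "low-degree extension" referred to above is, in its canonical form, the multilinear extension of the oracle; a standard definition, added here for illustration rather than quoted from the paper:

```latex
\tilde{A}(x_1,\ldots,x_n) \;=\; \sum_{z \in \{0,1\}^n} A(z)\,\prod_{i=1}^{n}\bigl(x_i z_i + (1 - x_i)(1 - z_i)\bigr),
\qquad \tilde{A}\colon \mathbb{F}^n \to \mathbb{F}.
```

This extension agrees with A on {0,1}^n and has degree at most 1 in each variable; algebrizing simulations get query access to the extension rather than to A alone.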

220 citations


Book
Satyanarayana V. Lokam
24 Jul 2009
TL;DR: This work surveys several techniques for proving lower bounds in Boolean, algebraic, and communication complexity based on certain linear algebraic approaches to study robustness measures of matrix rank that capture the complexity in a given model.
Abstract: We survey several techniques for proving lower bounds in Boolean, algebraic, and communication complexity based on certain linear algebraic approaches. The common theme among these approaches is to study robustness measures of matrix rank that capture the complexity in a given model. Suitably strong lower bounds on such robustness functions of explicit matrices lead to important consequences in the corresponding circuit or communication models. Many of the linear algebraic problems arising from these approaches are independently interesting mathematical challenges.

126 citations


Book
22 Sep 2009
TL;DR: Lower Bounds in Communication Complexity focuses on showing lower bounds on the communication complexity of explicit functions, and treats different variants of communication complexity, including randomized, quantum, and multiparty models.
Abstract: In the 30 years since its inception, communication complexity has become a vital area of theoretical computer science. The applicability of communication complexity to other areas, including circuit and formula complexity, VLSI design, proof complexity, and streaming algorithms, has meant that it has attracted a lot of interest. Lower Bounds in Communication Complexity focuses on showing lower bounds on the communication complexity of explicit functions. It treats different variants of communication complexity, including randomized, quantum, and multiparty models. Many tools have been developed for this purpose from a diverse set of fields including linear algebra, Fourier analysis, and information theory. As is often the case in complexity theory, demonstrating a lower bound is usually the more difficult task. Lower Bounds in Communication Complexity describes a three-step approach for the development and application of these techniques. This approach can be applied in much the same way for different models, be they randomized, quantum, or multiparty. Lower Bounds in Communication Complexity is an ideal primer for anyone with an interest in this current and popular topic.

118 citations


Journal ArticleDOI
TL;DR: It is proved that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy, which establishes a strong tie between seemingly very different notions from two distinct areas.
Abstract: This paper has two main focal points. We first consider an important class of machine learning algorithms: large margin classifiers, such as Support Vector Machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas. In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity. Communication is a key ingredient in many types of learning. This explains the relations between the field of learning theory and that of communication complexity [6, 10, 16, 26]. The results of this paper constitute another link in this rich web of relations. These new results have already been applied toward the solution of several open problems in communication complexity [18, 20, 29].

75 citations


Proceedings ArticleDOI
15 Jul 2009
TL;DR: In this paper, it was shown that every bounded function g: {0,1}^n → [0,1] admits an efficiently computable simulator function h: {0,1}^n → [0,1] such that every fixed polynomial-size circuit has approximately the same correlation with g as with h; if g describes (up to scaling) a high min-entropy distribution D, then h can be used to efficiently sample a distribution D' of the same min-entropy that is indistinguishable from D.
Abstract: We show that every bounded function g: {0,1}^n → [0,1] admits an efficiently computable "simulator" function h: {0,1}^n → [0,1] such that every fixed polynomial size circuit has approximately the same correlation with g as with h. If g describes (up to scaling) a high min-entropy distribution D, then h can be used to efficiently sample a distribution D' of the same min-entropy that is indistinguishable from D by circuits of fixed polynomial size. We state and prove our result in a more abstract setting, in which we allow arbitrary finite domains instead of {0,1}^n, and arbitrary families of distinguishers, instead of fixed polynomial size circuits. Our result implies (a) the Weak Szemeredi Regularity Lemma of Frieze and Kannan; (b) a constructive version of the Dense Model Theorem of Green, Tao and Ziegler with better quantitative parameters (polynomial rather than exponential in the distinguishing probability); and (c) the Impagliazzo Hardcore Set Lemma. It appears to be the general result underlying the known connections between "regularity" results in graph theory, "decomposition" results in additive combinatorics, and the Hardcore Lemma in complexity theory. We present two proofs of our result, one in the spirit of Nisan's proof of the Hardcore Lemma via duality of linear programming, and one similar to Impagliazzo's "boosting" proof. A third proof by iterative partitioning, which gives the complexity of the sampler to be exponential in the distinguishing probability, is also implicit in the Green-Tao-Ziegler proofs of the Dense Model Theorem.

49 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed that is based on equations linearized from the range measurement equations and implements a weighted least-squares criterion in a computationally efficient way; it can very closely approach the LS solution in estimation performance at significantly lower computational complexity.
Abstract: For range-based positioning the least square (LS) criterion and its produced solution exhibit superb estimation performance, but generally at a very high computational complexity. In this letter we consider how to approach such an LS solution in estimation performance at low computational complexity. We propose a novel algorithm that is based on the equations linearized from the range measurement equations and implements a weighted least-squares criterion in a computationally efficient way. The proposed algorithm involves a quadratic equation linking the linearization-caused extra variable and the position to be estimated, thus resulting in a closed-form solution. We analyze and simulate its estimation performance, and show that the proposed algorithm can very closely approach the LS solution in estimation performance at significantly lower computational complexity.
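The linearization described above is the standard trick for range-based positioning: squaring each range equation and introducing ||x||² as an extra unknown makes the system linear in the unknowns. A minimal sketch of that generic formulation (not the letter's exact algorithm; it omits the efficient handling of the quadratic link that is the letter's contribution):

```python
import numpy as np

def wls_position(anchors, ranges, weights=None):
    """Generic linearized weighted-least-squares positioning sketch.
    anchors: (m, dim) known positions; ranges: (m,) measured distances r_i."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    m, dim = anchors.shape
    # Squaring r_i = ||x - a_i|| gives  2 a_i^T x - R = ||a_i||^2 - r_i^2,
    # where R = ||x||^2 is the extra variable introduced by linearization.
    A = np.hstack([2.0 * anchors, -np.ones((m, 1))])
    b = np.sum(anchors ** 2, axis=1) - ranges ** 2
    W = np.diag(weights) if weights is not None else np.eye(m)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    x_hat, R_hat = theta[:dim], theta[dim]
    # The quadratic link R = ||x||^2 couples the extra variable to the position
    # and can be used to refine x_hat; here we simply return the WLS estimate.
    return x_hat
```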

40 citations


Book ChapterDOI
24 Aug 2009
TL;DR: In this paper, a computational complexity theory within the framework of Membrane Computing is introduced, and many attractive characterizations of the P ≠ NP conjecture within the framework of a bio-inspired and non-conventional computing model are deduced.
Abstract: In this paper, a computational complexity theory within the framework of Membrane Computing is introduced. Polynomial complexity classes associated with different models of cell-like and tissue-like membrane systems are defined and the most relevant results obtained so far are presented. Many attractive characterizations of the P ≠ NP conjecture within the framework of a bio-inspired and non-conventional computing model are deduced.

35 citations


Journal ArticleDOI
TL;DR: This work defines space complexity classes in the framework of membrane computing, giving some initial results about their mutual relations and their connection with time complexity classes, and identifying some potentially interesting problems which require further research.
Abstract: We define space complexity classes in the framework of membrane computing, giving some initial results about their mutual relations and their connection with time complexity classes, and identifying some potentially interesting problems which require further research.

33 citations


01 Jan 2009
TL;DR: Detail improvements disclosed include designs of runners, abutments, valving, and the rotary compressor mechanism.
Abstract: An internal combustion power plant system provides a rotary engine and a rotary fuel/air mixture compressor for the rotary engine on a common driveshaft, coaxially mounting each end and supported between them by a gearbox which synchronizes operation of various ignition and valve and abutment components of the system; compressed fuel/air mixture is supplied to and ignited in a valve-isolated manifold chamber in the rotary engine in successive charges following which each ignited charge is valved radially into one of plural expanding chambers defined by the rotary engine rotor and abutment mechanism, where it urges rotation of the rotor and then exhausts radially; in preferred embodiment of the exhaust actuates a parallel fuel-feed which booster pumps fuel/air mixture into the manifold chamber; detail improvements disclosed include designs of runners, abutments, valving and rotary compressor mechanism.

31 citations


Journal Article
TL;DR: Allender et al. as mentioned in this paper studied the properties of other measures that arise naturally in this framework, such as formula size and branching-program size, and showed that distinguishing complexity is closely connected to both FewEXP and to EXP.
Abstract: We continue an investigation into resource-bounded Kolmogorov complexity (Allender et al., 2006 [4]), which highlights the close connections between circuit complexity and Levin's time-bounded Kolmogorov complexity measure Kt (and other measures with a similar flavor), and also exploits derandomization techniques to provide new insights regarding Kolmogorov complexity. The Kolmogorov measures that have been introduced have many advantages over other approaches to defining resource-bounded Kolmogorov complexity (such as much greater independence from the underlying choice of universal machine that is used to define the measure) (Allender et al., 2006 [4]). Here, we study the properties of other measures that arise naturally in this framework. The motivation for introducing yet more notions of resource-bounded Kolmogorov complexity is two-fold: to demonstrate that other complexity measures such as branching-program size and formula size can also be discussed in terms of Kolmogorov complexity, and to demonstrate that notions such as nondeterministic Kolmogorov complexity and distinguishing complexity (Buhrman et al., 2002 [15]) also fit well into this framework. The main theorems that we provide using this new approach to resource-bounded Kolmogorov complexity are: a complete set (R_KNt) for NEXP/poly defined in terms of strings of high Kolmogorov complexity; a lower bound, showing that R_KNt is not in NP ∩ coNP; new conditions equivalent to the conditions "NEXP ⊆ non-uniform NC^1" and "NEXP ⊆ L/poly"; theorems showing that "distinguishing complexity" is closely connected to both FewEXP and to EXP; and hardness results for the problems of approximating formula size and branching program size.

29 citations


Journal ArticleDOI
TL;DR: Experimental results show that this new computational complexity control algorithm can effectively control the encoding computational complexity while maintaining a good rate-distortion performance at a range of target complexity levels.
Abstract: A computational complexity control algorithm is proposed for an H.264 encoder running on a processor/power constrained platform. This new computational complexity control algorithm is based on a macroblock mode prediction algorithm that employs a Bayesian framework for accurate early skip decision. Complexity control is achieved by relaxing the Bayesian maximum-likelihood (ML) criterion in order to match the mode decision threshold to a target complexity level. A feedback algorithm is used to maintain the performance of the algorithm with respect to achieving an average target complexity level, reducing frame by frame complexity variance and optimizing rate-distortion performance. Experimental results show that this algorithm can effectively control the encoding computational complexity while maintaining a good rate-distortion performance at a range of target complexity levels.
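As a rough illustration of the feedback idea (the controller below, its gain, and the variable names are assumptions made for this sketch, not the paper's design), a proportional update of the early-skip threshold toward a target complexity could look like:

```python
def update_skip_threshold(threshold, measured, target, gain=0.05, lo=0.0, hi=1.0):
    """One feedback step: raise the early-skip threshold when the measured
    per-frame complexity exceeds the target (more skips -> less computation),
    and lower it otherwise. Purely illustrative."""
    error = (measured - target) / max(target, 1e-9)
    return min(hi, max(lo, threshold + gain * error))
```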

Proceedings ArticleDOI
31 May 2009
TL;DR: It is proved here that alternate methods for choosing constraints can achieve either linear or O(N log² N) complexity, and these worst-case bounds on processing are demonstrated to be achieved without reducing the parsing accuracy, in some cases improving the accuracy.
Abstract: In this paper, we extend methods from Roark and Hollingshead (2008) for reducing the worst-case complexity of a context-free parsing pipeline via hard constraints derived from finite-state tagging pre-processing. Methods from our previous paper achieved quadratic worst-case complexity. We prove here that alternate methods for choosing constraints can achieve either linear or O(N log² N) complexity. These worst-case bounds on processing are demonstrated to be achieved without reducing the parsing accuracy, in fact in some cases improving the accuracy. The new methods achieve observed performance comparable to the previously published quadratic complexity method. Finally, we demonstrate improved performance by combining complexity bounding methods with additional high precision constraints.

Proceedings ArticleDOI
David P. Woodruff
23 Mar 2009
TL;DR: For a wide range of values of d and n, a 1-pass algorithm is designed that bypasses the Ω(1/ε²) lower bound that holds in the adversarial and random-order models, thereby showing that this model admits more space-efficient algorithms.
Abstract: We continue the study of approximating the number of distinct elements in a data stream of length n to within a (1 ± ε) factor. It is known that if the stream may consist of arbitrary data arriving in an arbitrary order, then any 1-pass algorithm requires Ω(1/ε²) bits of space to perform this task. To try to bypass this lower bound, the problem was recently studied in a model in which the stream may consist of arbitrary data, but it arrives to the algorithm in a random order. However, even in this model an Ω(1/ε²) lower bound was established. This is because the adversary can still choose the data arbitrarily. This leaves open the possibility that the problem is only hard under a pathological choice of data, which would be of little practical relevance. We study the average-case complexity of this problem under certain distributions. Namely, we study the case when each successive stream item is drawn independently and uniformly at random from an unknown subset of d items for an unknown value of d. This captures the notion of random uncorrelated data. For a wide range of values of d and n, we design a 1-pass algorithm that bypasses the Ω(1/ε²) lower bound that holds in the adversarial and random-order models, thereby showing that this model admits more space-efficient algorithms. Moreover, the update time of our algorithm is optimal. Despite these positive results, for a certain range of values of d and n we show that estimating the number of distinct elements requires Ω(1/ε²) bits of space even in this model. Our lower bound subsumes previous bounds, showing that even for natural choices of data the problem is hard.
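For background, a standard (1 ± ε) distinct-elements estimator in the adversarial-order setting is the k-minimum-values (KMV) sketch, which stores k = Θ(1/ε²) hash values, exactly the space bound the random-data model above is designed to beat. A sketch for context (not the paper's algorithm):

```python
import hashlib

def kmv_distinct(stream, k=256):
    """K-minimum-values estimate of the number of distinct elements."""
    mins = []          # up to k smallest distinct hash values, kept sorted
    stored = set()     # hash values currently in `mins`, to skip duplicates
    for item in stream:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) / 2.0 ** 160
        if h in stored:
            continue
        if len(mins) < k:
            stored.add(h); mins.append(h); mins.sort()
        elif h < mins[-1]:
            stored.discard(mins[-1]); stored.add(h)
            mins[-1] = h; mins.sort()
    if len(mins) < k:
        return len(mins)          # fewer than k distinct items seen: exact count
    return (k - 1) / mins[-1]     # standard KMV estimator of the distinct count
```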

Journal Article
TL;DR: In this article, the authors discuss the geometry of orbit closures and the asymptotic behavior of Kronecker coefficients in the context of the Geometric Complexity Theory program to prove a variant of Valiant's algebraic analog of the P not equal to NP conjecture.
Abstract: We discuss the geometry of orbit closures and the asymptotic behavior of Kronecker coefficients in the context of the Geometric Complexity Theory program to prove a variant of Valiant's algebraic analog of the P not equal to NP conjecture. We also describe the precise separation of complexity classes that their program proposes to demonstrate.

Proceedings ArticleDOI
31 May 2009
TL;DR: This work proves that, under appropriate cryptographic assumptions, the deterministic communication complexity of f is hard to approximate to within some constant, and presents a family of (two-argument) functions for which determining the deterministic communication complexity implies proving circuit lower bounds for some related functions.
Abstract: We consider the following question: given a two-argument boolean function f, represented as an N x N binary matrix, how hard is it to determine the (deterministic) communication complexity of f? We address two aspects of this question. On the computational side, we prove that, under appropriate cryptographic assumptions (such as the intractability of factoring), the deterministic communication complexity of f is hard to approximate to within some constant. Under stronger (yet arguably reasonable) assumptions, we obtain even stronger hardness results that match the best known approximation. On the analytic side, we present a family of (two-argument) functions for which determining the deterministic communication complexity (or even obtaining non-trivial lower bounds on it) implies proving circuit lower bounds for some related functions. Such connections between circuit complexity and communication complexity were known before (Karchmer & Wigderson, 1988) only in the more involved context of relations (search problems) but not in the context of functions (decision problems). This result, in particular, may explain the difficulty of analyzing the communication complexity of certain functions such as the "clique vs. independent-set" family of functions, introduced by Yannakakis (1988).

Journal ArticleDOI
TL;DR: An energy-optimal distributed algorithm is given that constructs an optimal MST with energy complexity O(log n) on average and O(log n log log n) with high probability, an improvement over the previous best known bound on the average energy complexity of Ω(log² n).
Abstract: Traditionally, the performance of distributed algorithms has been measured in terms of time and message complexity. Message complexity concerns the number of messages transmitted over all the edges during the course of the algorithm. However, in energy-constrained ad hoc wireless networks (e.g., sensor networks), energy is a critical factor in measuring the efficiency of a distributed algorithm. Transmitting a message between two nodes has an associated cost (energy) and moreover this cost can depend on the two nodes (e.g., the distance between them among other things). Thus in addition to the time and message complexity, it is important to consider energy complexity that accounts for the total energy associated with the messages exchanged among the nodes in a distributed algorithm. This paper addresses the minimum spanning tree (MST) problem, a fundamental problem in distributed computing and communication networks. We study energy-efficient distributed algorithms for the Euclidean MST problem assuming random distribution of nodes. We show a non-trivial lower bound of Ω(log n) on the energy complexity of any distributed MST algorithm. We then give an energy-optimal distributed algorithm that constructs an optimal MST with energy complexity O(log n) on average and O(log n log log n) with high probability. This is an improvement over the previous best known bound on the average energy complexity of Ω(log² n). Our energy-optimal algorithm exploits a novel property of the giant component of sparse random geometric graphs. All of the above results assume that nodes do not know their geometric coordinates. If the nodes know their own coordinates, then we give an algorithm with O(1) energy complexity (which is the best possible) that gives an O(1) approximation to the MST.

Proceedings ArticleDOI
15 Jul 2009
TL;DR: In this paper, it was shown that the communication complexity of the function f(x, y) = T(x ∘ y) is Ω(n/4^d), where (x ∘ y) is defined so that the resulting tree also has alternating levels of AND and OR gates.
Abstract: We study the 2-party randomized communication complexity of read-once AC0 formulae. For balanced AND-OR trees T with n inputs and depth d, we show that the communication complexity of the function f(x, y) = T(x ∘ y) is Ω(n/4^d), where (x ∘ y) is defined so that the resulting tree also has alternating levels of AND and OR gates. For each bit of x ∘ y, the operation ∘ is either AND or OR depending on the gate in T to which it is an input. Using this, we show that for general AND-OR trees T with n inputs and depth d, the communication complexity of f(x ∘ y) is n/2^{O(d log d)}. These results generalize classical results on the communication complexity of set-disjointness [1], [2] (where T is an OR-gate) and recent results on the communication complexity of the TRIBES functions [3] (where T is a depth-2 read-once formula). Our techniques build on and extend the information complexity methodology [4], [5], [3] for proving lower bounds on randomized communication complexity. Our analysis for trees of depth d proceeds in two steps: (1) reduction to measuring the information complexity of binary depth-d trees, and (2) proving lower bounds on the information complexity of binary trees. In order to execute this program, we carefully construct input distributions under which both these steps can be carried out simultaneously. We believe the tools we develop will prove useful in further studies of information complexity in particular, and communication complexity in general.

Journal IssueDOI
TL;DR: In this paper, the quantum query complexity of finding a certificate for a d-regular, k-level balanced NAND formula was studied; up to logarithmic factors, it was shown that the query complexity is Θ(d^{(k+1)/2}) for 0-certificates, and Θ(d^{k/2}) for 1-certificates.
Abstract: We study the quantum query complexity of finding a certificate for a d-regular, k-level balanced NAND formula. Up to logarithmic factors, we show that the query complexity is Θ(d^{(k+1)/2}) for 0-certificates, and Θ(d^{k/2}) for 1-certificates. In particular, this shows that the zero-error quantum query complexity of evaluating such formulas is O(d^{(k+1)/2}) (again neglecting a logarithmic factor). Our lower bound relies on the fact that the quantum adversary method obeys a direct sum theorem.


01 Jan 2009
TL;DR: This research, without loss of generality, is interested in developing solution techniques to solve general (convex, concave, and indefinite) quadratic programming problems.
Abstract: min f(x) = (1/2) xᵀQx + cᵀx  s.t.  x ∈ D  (1), where D is a polyhedron in Rⁿ and c ∈ Rⁿ. Without any loss of generality, we can assume that Q is a real symmetric (n × n)-matrix. If this is not the case, then the matrix Q can be converted to symmetric form by replacing Q by (Q + Qᵀ)/2, which does not change the value of the objective function f(x). Note that if Q is positive semidefinite, then Problem (1) is considered to be a convex minimization problem. When Q is negative semidefinite, Problem (1) is considered to be a concave minimization problem. When Q has at least one positive and one negative eigenvalue (i.e., Q is indefinite), Problem (1) is considered to be an indefinite quadratic programming problem. We know that in the case of a convex minimization problem, every Kuhn-Tucker point is a local minimum, which is also a global minimum. In this case, there are a number of classical optimization methods that can obtain the globally optimal solutions of quadratic convex programming problems. These methods can be found in many places in the literature. In the case of concave minimization over polytopes, it is well known that if the problem has an optimal solution, then an optimal solution is attained at a vertex of D. On the other hand, the global minimum is not necessarily attained at a vertex of D for indefinite quadratic programming problems. In this case, from second-order optimality conditions, the global minimum is attained at the boundary of the feasible domain. In this research, without loss of generality, we are interested in developing solution techniques to solve general (convex, concave, and indefinite) quadratic programming problems.
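A small illustrative sketch of the classification just described, symmetrizing Q and inspecting the signs of its eigenvalues:

```python
import numpy as np

def classify_qp(Q, tol=1e-10):
    """Classify min (1/2) x^T Q x + c^T x over a polyhedron by the inertia of Q."""
    Q = np.asarray(Q, dtype=float)
    Qs = (Q + Q.T) / 2.0              # symmetrizing leaves the objective unchanged
    eig = np.linalg.eigvalsh(Qs)
    if np.all(eig >= -tol):
        return "convex (Q positive semidefinite)"
    if np.all(eig <= tol):
        return "concave (Q negative semidefinite)"
    return "indefinite (Q has eigenvalues of both signs)"
```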

Journal ArticleDOI
TL;DR: This paper studies the linear complexity and the k-error linear complexity of 2^n-periodic binary sequences in a more general setting using a combination of algebraic, combinatorial, and algorithmic methods, and obtains the counting function for the number of 2^n-periodic binary sequences with fixed k-error linear complexity for k = 2 and 3.
Abstract: The linear complexity of sequences is an important measure of the cryptographic strength of key streams used in stream ciphers. The instability of linear complexity caused by changing a few symbols of sequences can be measured using k-error linear complexity. In their SETA 2006 paper, Fu et al. (SETA 2006, pp. 88–103) studied the linear complexity and the 1-error linear complexity of 2^n-periodic binary sequences to characterize such sequences with fixed 1-error linear complexity. In this paper we study the linear complexity and the k-error linear complexity of 2^n-periodic binary sequences in a more general setting using a combination of algebraic, combinatorial, and algorithmic methods. This approach allows us to characterize 2^n-periodic binary sequences with fixed 2- or 3-error linear complexity. Using this characterization we obtain the counting function for the number of 2^n-periodic binary sequences with fixed k-error linear complexity for k = 2 and 3.
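For reference, the k-error linear complexity used in this and several of the following entries is the standard notion (definition added for context, not quoted from the paper):

```latex
L_k(s) \;=\; \min_{\substack{e \in \mathbb{F}_2^{N} \\ w_H(e) \le k}} L(s + e),
```

where L(·) denotes ordinary linear complexity, s is one period of the N-periodic sequence (here N = 2^n), e is an error pattern changing at most k symbols per period, and w_H is the Hamming weight.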

Book ChapterDOI
07 Jun 2009
TL;DR: A general framework for algorithmic complexity in type theory is introduced, combining some existing and novel techniques: algorithms are given a shallow embedding as monadically expressed functional programs; a variety of operation-counting monads are introduced to capture worst- and average-case complexity of deterministic and nondeterministic programs.
Abstract: As a case-study in machine-checked reasoning about the complexity of algorithms in type theory, we describe a proof of the average-case complexity of Quicksort in Coq. The proof attempts to follow a textbook development, at the heart of which lies a technical lemma about the behaviour of the algorithm for which the original proof only gives an intuitive justification. We introduce a general framework for algorithmic complexity in type theory, combining some existing and novel techniques: algorithms are given a shallow embedding as monadically expressed functional programs; we introduce a variety of operation-counting monads to capture worst- and average-case complexity of deterministic and nondeterministic programs, including the generalization to count in an arbitrary monoid; and we give a small theory of expectation for such non-deterministic computations, featuring both general map-fusion like results, and specific counting arguments for computing bounds. Our formalization of the average-case complexity of Quicksort includes a fully formal treatment of the 'tricky' textbook lemma, exploiting the generality of our monadic framework to support a key step in the proof, where the expected comparison count is translated into the expected length of a recorded list of all comparisons.
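The textbook development being formalized centers on the expected comparison count of Quicksort; for reference, the standard recurrence and its closed form (well-known results stated here for context, not quoted from the paper) are:

```latex
C(n) \;=\; (n - 1) \;+\; \frac{1}{n}\sum_{k=0}^{n-1}\bigl(C(k) + C(n-1-k)\bigr), \quad C(0) = C(1) = 0,
\qquad\text{with solution}\qquad
C(n) \;=\; 2(n+1)H_n - 4n \;\approx\; 1.39\, n \log_2 n .
```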

Journal Article
TL;DR: A broad framework is presented for asking which computational intractability assumptions are inherent to cryptography: cryptographic tasks are modeled as multi-party computation functionalities, reductions between them are universally composable secure protocols, and every reduction the authors are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions or of semi-honest oblivious transfer, leading to the conjecture that only a small finite number of distinct computational assumptions are inherent in the framework.
Abstract: Which computational intractability assumptions are inherent to cryptography? We present a broad framework to pose and investigate this question. We first aim to understand the “cryptographic complexity” of various tasks, independent of any computational assumptions. In our framework the cryptographic tasks are modeled as multi-party computation functionalities. We consider a universally composable secure protocol for one task given access to another task as the most natural complexity reduction between the two tasks. Some of these cryptographic complexity reductions are unconditional, others are unconditionally impossible, but the vast majority appear to depend on computational assumptions; it is this relationship with computational assumptions that we study. In our detailed investigation of large classes of 2-party functionalities, we find that every reduction we are able to classify turns out to be unconditionally true or false, or else equivalent to the existence of one-way functions (OWF) or of semi-honest (equivalently, standalone-secure) oblivious transfer protocols (sh-OT). This leads us to conjecture that there are only a small finite number of distinct computational assumptions that are inherent among the infinite number of different cryptographic reductions in our framework. If indeed only a few computational intractability assumptions manifest in this framework, we propose that they are of an extraordinarily fundamental nature, since the framework contains a large variety of cryptographic tasks, and was formulated without regard to any of the prevalent computational intractability assumptions.

Journal ArticleDOI
TL;DR: Niederreiter's result disproved Ding's conjecture that there exists a trade-off between the linear complexity and the k-error linear complexity of a periodic sequence; by considering the orders of the divisors of x^N − 1 over F_q, this paper obtains results that hold for much larger k.
Abstract: Niederreiter showed that there is a class of periodic sequences which possess large linear complexity and large k-error linear complexity simultaneously. This result disproved the conjecture of Ding that there exists a trade-off between the linear complexity and the k-error linear complexity of a periodic sequence. By considering the orders of the divisors of x^N − 1 over F_q, we obtain three main results which hold for much larger k than those of Niederreiter: a) sequences with maximal linear complexity and almost maximal k-error linear complexity with general periods; b) sequences with maximal linear complexity and maximal k-error linear complexity with special periods; c) sequences with maximal linear complexity and almost maximal k-error linear complexity in the asymptotic case with composite periods. In addition, we also construct some periodic sequences with low correlation and large k-error linear complexity.

Journal ArticleDOI
TL;DR: A new faithful linear complexity test is proposed, which uses deviations in all parts of the linear complexity profile and hence can detect even the above nonrandom sequences.
Abstract: Linear complexity can be used to detect predictable nonrandom sequences, and hence it is included in the NIST randomness test suite. But, as shown in this paper, the NIST test suite cannot detect nonrandom sequences that are generated, for instance, by concatenating two different M-sequences with low linear complexity. This defect comes from the fact that the NIST linear complexity test uses deviation from the ideal value only in the last part of the whole linear complexity profile. In this paper, a new faithful linear complexity test is proposed, which uses deviations in all parts of the linear complexity profile and hence can detect even the above nonrandom sequences. An efficient formula is derived to compute the exact area distribution needed for the proposed test. Furthermore, a simple procedure is given to compute the proposed test statistic from linear complexity profile, which requires only O(M) time complexity for a sequence of length M.
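The linear complexity profile referred to above is the sequence of linear complexities of successive prefixes of the sequence, computable with the Berlekamp-Massey algorithm over GF(2). A minimal sketch (illustrative; the paper's area-based test statistic is not reproduced here):

```python
def lc_profile(bits):
    """Berlekamp-Massey over GF(2): returns the linear complexity profile
    L_1, ..., L_M of the 0/1 list `bits`."""
    n = len(bits)
    c = [1] + [0] * n          # current connection polynomial C(x)
    b = [1] + [0] * n          # previous connection polynomial B(x)
    L, m = 0, -1               # current complexity, index of last length change
    profile = []
    for i in range(n):
        d = bits[i]            # discrepancy between bit i and the LFSR prediction
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
        profile.append(L)
    return profile
```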

Journal ArticleDOI
TL;DR: The order structure allows us to prove some topological and quasi-metric properties of the new dual complexity spaces and it is shown that these complexity spaces are, under certain conditions, Hausdorff and satisfy a kind of completeness.

Journal Article
TL;DR: The notion of traditional compression, which can be viewed as compressing protocols that involve only one way communication, is generalized by designing new compression schemes that can compress the communication in interactive processes that do not reveal too much information about their inputs.
Abstract: We prove a direct sum theorem for randomized communication complexity. Ignoring logarithmic factors, our results show that: • Computing n copies of a function requires √n times the communication. • For average case complexity, given any distribution µ on inputs, computing n copies of the function on n independent inputs sampled according to µ requires √n times the communication for computing one copy. • If µ is a product distribution, computing n copies on n independent inputs sampled according to µ requires n times the communication. We also study the complexity of computing the parity of n evaluations of f, and obtain results analogous to those above. Our results are obtained by designing new compression schemes that can compress the communication in interactive processes that do not reveal too much information about their inputs. This generalizes the notion of traditional compression, which can be viewed as compressing protocols that involve only one way communication.

Journal ArticleDOI
Ed Blakey
TL;DR: An analogue factorization system is presented; its polynomial time and space complexities are argued to be testament not to its power but to the inadequacy of traditional, Turing-machine-based complexity theory, and precision complexity is proposed as a more relevant measure.
Abstract: Factorization is notoriously difficult. Though the problem is not known to be NP-hard, neither an efficient algorithmic solution nor a technologically practicable quantum-computer solution has been found. This apparent complexity, which renders infeasible the factorization of sufficiently large values, makes secure the RSA cryptographic system. Given the lack of a practicable factorization system from algorithmic or quantum-computing models, we ask whether an efficient solution exists elsewhere; this motivates the analogue system presented here. The system's complexity is prohibitive of its factorizing arbitrary natural numbers, though the problem is mitigated when factorizing n = pq for primes p and q of similar size, and, hence, when factorizing RSA keys. Ultimately, though, we argue that the system's polynomial time and space complexities are testament not to its power, but to the inadequacy of traditional, Turing-machine-based complexity theory; we propose precision complexity (defined in our previous paper [4]) as a more relevant measure, and advocate, more generally, non-standard complexity analyses for non-standard computers.

Book ChapterDOI
09 Jul 2009
TL;DR: A novel descriptor of graph complexity is introduced which can be computed in real time and has the same qualitative behavior as polytopal (Birkhoff) complexity, which has been successfully tested in the context of Bioinformatics.
Abstract: In this paper, we introduce a novel descriptor of graph complexity which can be computed in real time and has the same qualitative behavior as polytopal (Birkhoff) complexity, which has been successfully tested in the context of Bioinformatics. We also show how the phase-change point may be characterized in terms of the Laplacian spectrum, by analyzing the derivatives of the complexity function. In addition, the new complexity notion (flow complexity) is applied to cluster a database of Reeb graphs coming from analyzing 3D objects.

Journal ArticleDOI
TL;DR: A simplified QR-decomposition-M algorithm for coded multiple-input multiple-output systems based on the idea of selective branch extension results in significant complexity reductions without observable performance losses, especially for large modulation orders where complexity reduction is most important.
Abstract: In this correspondence, we propose a simplified QR-decomposition-M algorithm for coded multiple-input multiple-output systems based on the idea of selective branch extension. The proposed algorithm results in significant complexity reductions without observable performance losses, especially for large modulation orders where complexity reduction is most important.
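For context, the baseline QRD-M detector that the proposed selective-branch-extension method simplifies can be sketched as follows (a generic sketch under the usual y = Hx + n model with nr ≥ nt; it does not include the paper's selective extension):

```python
import numpy as np

def qrd_m_detect(H, y, constellation, M=4):
    """Baseline QRD-M MIMO detection sketch.
    H: (nr, nt) channel, y: (nr,) received vector, constellation: candidate symbols."""
    nt = H.shape[1]
    Q, R = np.linalg.qr(H)            # reduced QR, R is (nt, nt) upper triangular
    z = Q.conj().T @ y
    survivors = [(0.0, [])]           # (accumulated metric, symbols of detected layers)
    for layer in range(nt - 1, -1, -1):
        candidates = []
        for metric, syms in survivors:
            for s in constellation:
                trial = [s] + syms    # symbols for layers layer .. nt-1
                est = sum(R[layer, layer + k] * trial[k] for k in range(len(trial)))
                candidates.append((metric + abs(z[layer] - est) ** 2, trial))
        candidates.sort(key=lambda c: c[0])
        survivors = candidates[:M]    # keep only the M best branches
    return np.array(survivors[0][1])  # detected symbol vector x_0 .. x_{nt-1}
```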