
Showing papers on "Average-case complexity published in 1997"


Journal Article
TL;DR: In this article, the authors consider the computational complexity of problems dealing with matrix rank over a commutative ring R. Depending on the setting, the complexity of these problems can range from polynomial-time solvable to random polynomial-time solvable to NP-complete to PSPACE-solvable to unsolvable.
Abstract: We consider the computational complexity of some problems dealing with matrix rank. Let E, S be subsets of a commutative ring R, and let x1, x2, ..., xt be variables. Given a matrix M = M(x1, x2, ..., xt) with entries chosen from E ∪ {x1, x2, ..., xt}, we want to determine maxrank_S(M) = max { rank M(a1, ..., at) : (a1, ..., at) ∈ S^t } and minrank_S(M) = min { rank M(a1, ..., at) : (a1, ..., at) ∈ S^t }. There are also variants of these problems that specify more about the structure of M, or instead of asking for the minimum or maximum rank, they ask if there is some substitution of the variables that makes the matrix invertible or noninvertible. Depending on E, S, and which variant is studied, the complexity of these problems can range from polynomial-time solvable to random polynomial-time solvable to NP-complete to PSPACE-solvable to unsolvable. An approximation version of the minrank problem is shown to be MAXSNP-hard.

96 citations
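
The definitions above are easy to probe computationally. Below is a minimal brute-force sketch (my own illustration, not one of the paper's algorithms; the function name and the encoding of variable entries as callables are assumptions) that computes minrank_S(M) and maxrank_S(M) over the reals by enumerating all of S^t; its exponential cost in t is consistent with the hardness results above.

```python
import itertools
import numpy as np

def min_max_rank(M_template, t, S):
    """Enumerate every substitution (a1, ..., at) in S^t, evaluate the
    matrix, and record the extreme ranks over the reals."""
    ranks = []
    for a in itertools.product(S, repeat=t):
        M = np.array([[entry(a) if callable(entry) else entry
                       for entry in row] for row in M_template], dtype=float)
        ranks.append(np.linalg.matrix_rank(M))
    return min(ranks), max(ranks)

# M = [[x1, 1], [1, x2]] with S = {0, 1}: x1 = x2 = 1 gives rank 1,
# x1 = x2 = 0 gives rank 2, so minrank = 1 and maxrank = 2.
M = [[lambda a: a[0], 1], [1, lambda a: a[1]]]
print(min_max_rank(M, t=2, S=[0, 1]))  # -> (1, 2)
```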


Proceedings ArticleDOI
19 Oct 1997
TL;DR: A connection between the worst-case complexity and the average-case complexity of some well-known lattice problems is improved: the exponent of this connection is reduced from 8 to 3.5+ε.
Abstract: We improve a connection of the worst-case complexity and the average-case complexity of some well-known lattice problems. This fascinating connection was first discovered by Ajtai (1995). We improve the exponent of this connection from 8 to 3.5+ε.

82 citations


Journal ArticleDOI
TL;DR: A full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing Kolmogorov complexity is revealed.
Abstract: The computational power of recurrent neural networks is shown to depend ultimately on the complexity of the real constants (weights) of the network. The complexity, or information contents, of the weights is measured by a variant of resource-bounded Kolmogorov (1965) complexity, taking into account the time required for constructing the numbers. In particular, we reveal a full and proper hierarchy of nonuniform complexity classes associated with networks having weights of increasing Kolmogorov complexity.

80 citations


Journal ArticleDOI
TL;DR: The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.
Abstract: We establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. Our parameterized fixpoint logics capture the complexity classes P, NP, PSPACE, and EXPTIME, but equality is achieved only over ordered structures.There is, however, an inherent mismatch between complexity and logic—while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we use a theory of relational complexity, which bridges the gap between standard complexity and fixpoint logic. On one hand, we show that questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.

65 citations
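
As a toy illustration of the inflationary iteration the paper parameterizes over (my own example, not taken from the paper): an inflationary operator only ever adds tuples, X ↦ X ∪ φ(X), so iterating it on a finite structure must reach a fixpoint; transitive closure is the classic instance.

```python
def inflationary_fixpoint(step, initial=frozenset()):
    """Iterate X -> X | step(X) until nothing new is added; termination
    is guaranteed on finite structures because the sets only grow."""
    X = set(initial)
    while True:
        nxt = X | step(X)
        if nxt == X:
            return X
        X = nxt

# Transitive closure of an edge relation as an inflationary fixpoint
# of a first-order definable step operator.
E = {(1, 2), (2, 3), (3, 4)}
tc = inflationary_fixpoint(
    lambda X: {(a, d) for (a, b) in X for (c, d) in X if b == c},
    initial=E)
print(sorted(tc))  # all pairs (u, v) with a directed path from u to v
```

Over ordered finite structures, deterministic iteration of inflationary first-order operators yields exactly P; this is the kind of correspondence the paper's parameterized fixpoint logics extend to NP, PSPACE, and EXPTIME.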


Journal ArticleDOI
TL;DR: This tutorial paper overviews research being done in the field of structural complexity and recursion theory over the real numbers and other domains following the approach by Blum, Shub, and Smale.
Abstract: In this tutorial paper we overview research being done in the field of structural complexity and recursion theory over the real numbers and other domains following the approach by Blum, Shub, and Smale [12].

60 citations


Book ChapterDOI
27 Feb 1997
TL;DR: A fresh look at CD complexity, where CD^t(x) is the length of the smallest program that distinguishes x from all other strings in time t(|x|), and at CND complexity, a new nondeterministic variant of CD complexity.
Abstract: We take a fresh look at CD complexity, where CD^t(x) is the length of the smallest program that distinguishes x from all other strings in time t(|x|). We also look at CND complexity, a new nondeterministic variant of CD complexity.

58 citations


Journal ArticleDOI

TL;DR: It is proved that certain standard complete problems for static complexity classes, such as REACH_a for P, remain complete via these new reductions, and that other such problems, including REACH for NL and REACH_d for L, are no longer complete via bounded-expansion reductions.

50 citations


Proceedings Article
01 Aug 1997
TL;DR: The computational complexity of testing and finding small plans in probabilistic planning domains with succinct representations is examined; many problems of interest are complete for a variety of complexity classes: NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE.
Abstract: We examine the computational complexity of testing and finding small plans in probabilistic planning domains with succinct representations. We find that many problems of interest are complete for a variety of complexity classes: NP, co-NP, PP, NP^PP, co-NP^PP, and PSPACE. Of these, the probabilistic classes PP and NP^PP are likely to be of special interest in the field of uncertainty in artificial intelligence and are deserving of additional study. These results suggest a fruitful direction of future algorithmic development.

29 citations


Journal ArticleDOI
TL;DR: An approximate probability distribution for the maximum order complexity of a random binary sequence is given that enables the development of statistical tests based onmaximum order complexity for the testing of a binary sequence generator.
Abstract: In this paper we give an approximate probability distribution for the maximum order complexity of a random binary sequence. This enables the development of statistical tests based on maximum order complexity for the testing of a binary sequence generator. These tests are analogous to those based on linear complexity.

28 citations
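
For concreteness, the maximum order complexity of a finite sequence is the length of the shortest (not necessarily linear) feedback shift register that can generate it; equivalently, the smallest k such that every length-k subword is always followed by the same next symbol. Below is a brute-force sketch of the statistic being tested (my own implementation under that standard definition; the paper derives its distribution analytically).

```python
def max_order_complexity(s):
    """Smallest k such that the length-k subwords of s determine the next
    symbol, i.e. some (possibly nonlinear) k-stage FSR generates s.
    Brute force, fine for short test sequences."""
    n = len(s)
    for k in range(n):
        seen = {}
        if all(seen.setdefault(tuple(s[i:i + k]), s[i + k]) == s[i + k]
               for i in range(n - k)):
            return k
    return n

print(max_order_complexity([0, 1, 0, 1, 0, 1]))  # -> 1 (alternating)
print(max_order_complexity([0, 0, 0, 1]))        # -> 3
```

A test of the paper's kind would compare such empirical values against the approximate distribution; for a random length-n sequence the expected maximum order complexity grows like 2 log2 n.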


Proceedings ArticleDOI
21 Apr 1997
TL;DR: The complexity of the proposed algorithm is lower than that of the sign-error LMS algorithm, while its performance is superior to that algorithm; in particular, it is close to that of the regular LMS algorithm.
Abstract: This paper describes a new variant of the least-mean-squares (LMS) algorithm, with low computational complexity, for updating an adaptive filter. The reduction in complexity is obtained by using values of the input data and the output error, quantized to the nearest power of two, to compute the gradient. This eliminates the need for multipliers or shifters in the algorithm's update section. The quantization itself is efficiently realizable in hardware. The filtering section is unchanged. Thus, this algorithm is similar to the sign based variants of the LMS algorithm. However, the complexity of the proposed algorithm is lower than that of the sign-error LMS algorithm, while its performance is superior to this algorithm. In particular, it is close to that of the regular LMS algorithm. The new algorithm also requires much lower area for ASIC implementation.

24 citations
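
A minimal NumPy sketch of the scheme the abstract describes (my own reconstruction; the names and the test setup are assumptions, not the authors' code). The filtering section stays full precision; only the gradient uses input and error values rounded to the nearest power of two, so a hardware update section needs shifts and adds instead of multipliers.

```python
import numpy as np

def pow2_quantize(v):
    """Round each nonzero entry to the nearest power of two, keeping sign."""
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    nz = v != 0
    out[nz] = np.sign(v[nz]) * 2.0 ** np.round(np.log2(np.abs(v[nz])))
    return out

def quantized_lms(x, d, n_taps=8, mu=2.0 ** -6):
    """LMS with a power-of-two-quantized gradient; filtering is unchanged."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor, newest sample first
        y[n] = w @ u                         # full-precision filtering section
        e = d[n] - y[n]                      # output error
        qe = pow2_quantize(np.array([e]))[0]
        w = w + mu * qe * pow2_quantize(u)   # shift-and-add update section
    return w, y

# Identify an unknown FIR channel from noisy observations.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.25, 0.125, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = quantized_lms(x, d)                   # w should approach h
```

Since mu is itself a power of two here, every factor in the update is a signed power of two, which is the source of the hardware savings claimed in the paper.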



Proceedings ArticleDOI
04 May 1997
TL;DR: Lower bounds for the communication complexity of computing functions in general asynchronous networks are derived from their two-party communication complexity, via a weakened version of Tiwari's conjecture: D_k(f) ≥ γ·k·D(f), with γ > 0.146 (improvable to 0.275).
Abstract: Tiwari (1987) considered the following scenario: k+1 processors P0, ..., Pk, connected by k links to form a linear array, are to compute a function f(x, y), x ∈ X, y ∈ Y, on a finite domain X × Y, where x is only known to P0 and y is only known to Pk; the intermediate processors P1, ..., Pk−1 do not have any information. The processors compute f(x, y) by exchanging binary messages across the links, according to some protocol Φ. Let D_k(f) denote the minimal complexity of such a protocol Φ, i.e., the total number of bits sent across all links for the worst-case input, and let D(f) = D_1(f) denote the (standard) two-party communication complexity of f. Tiwari proved that D_k(f) ≥ k·(D(f) − O(1)) for almost all functions f and conjectured this inequality to be true for all f. His conjecture was falsified by Kushilevitz, Linial, and Ostrovsky (1996): they exhibited a function f for which D_k(f) is essentially bounded above by (3/4)·k·D(f). The best general lower bound previously known is D_k(f) ≥ k·(√D(f) − log k − 3). We prove a weakened version of Tiwari's conjecture: D_k(f) ≥ γ·k·D(f), for arbitrary f and k, where γ > 0.146 is a constant. (The lower bound on γ can even be improved to 0.275.) Corresponding results for the nondeterministic and randomized versions of the two-party model and the array are also obtained. Applying the general framework provided by Tiwari, we may derive lower bounds for the communication complexity of computing functions in general asynchronous networks on the basis of their two-party communication complexity. Finally, the main result entails that strong lower bounds on the time complexity of a function f : {0,1}* → {0,1} on deterministic one-tape Turing machines can be obtained directly by considering the deterministic two-party communication complexity of its restrictions to {0,1}^{2n}, for n ≥ 1.

Book ChapterDOI
01 Jan 1997
TL;DR: An examination of relativized complexity-theoretic statements which hold for a measure-one set of oracles, in the measure defined by putting each string into the oracle with probability 1/2 independently of all other strings.
Abstract: Starting with Bennett and Gill's seminal paper [13] a whole new research line in complexity theory was opened: the examination of relativized complexity theoretic statements which hold for a measure one set of oracles in the measure defined by putting each string into the oracle with probability 1/2 independently of all other strings (a formal definition is given below).

Book ChapterDOI
27 Feb 1997
TL;DR: The resource-bounded measures of complexity classes are shown to be robust with respect to certain changes in the underlying probability measure.
Abstract: The resource-bounded measures of complexity classes are shown to be robust with respect to certain changes in the underlying probability measure. Specifically, for any real number δ > 0, any uniformly polynomial-time computable sequence β = (β0, β1, β2, ...) of real numbers (biases) with βi ∈ [δ, 1−δ], and any complexity class C (such as P, NP, BPP, P/poly, PH, PSPACE, etc.) that is closed under positive, polynomial-time, truth-table reductions with queries of at most linear length, it is shown that the following two conditions are equivalent.

Journal ArticleDOI
TL;DR: It is shown that any two complexity classes satisfying some general conditions are distinct relative to a generic oracle iff the corresponding type-2 classes are distinct.
Abstract: We show that any two complexity classes satisfying some general conditions are distinct relative to a generic oracle iff the corresponding type-2 classes are distinct.

Journal Article
TL;DR: In 1984, Leonid Levin initiated a theory of average-case complexity; this article provides an exposition of the basic definitions suggested by Levin and discusses some of the considerations underlying these definitions.
Abstract: In 1984, Leonid Levin initiated a theory of average-case complexity. We provide an exposition of the basic definitions suggested by Levin, and discuss some of the considerations underlying these definitions.

Journal ArticleDOI
TL;DR: The analysis of the case employs the new concepts of implementation and extension complexity, which indicate the amount of code (software costs) required for the implementation and for later extensions of the object-oriented (O-O) system.
Abstract: The object oriented (O-O) approach is claimed to have a number of advantages. Some support to these claims appeared during an O-O redesign of a legacy CAD system. A surprisingly simple and efficient solution algorithm was discovered for a change propagation problem. The analysis of the case employs the new concepts of implementation and extension complexity, which indicate the amount of code (software costs) required for the implementation and for later extensions. These two complexities are functions of the problem complexity expressed by the number N of object types employed to model the problem domain. Moving from the old system to the new O-O system reduced the implementation complexity from O(N^2) to O(N).


Book ChapterDOI
25 Aug 1997
TL;DR: The communication complexity of two-party protocols introduced by Abelson and Yao is one of the most intensively studied complexity measures for computing problems; this paper focuses on the relation between communication complexity and three complexity measures of sequential computation.
Abstract: The communication complexity of two-party protocols introduced by Abelson and Yao is one of the most intensively studied complexity measures for computing problems. This is a consequence of the relation of communication complexity to many fundamental (mainly parallel) complexity measures. This paper focuses on the relation between communication complexity and the following three complexity measures of sequential computation: the size of finite automata, the time- and space-complexity measures of Turing machines and the time- and space-complexity for data structure problems.

Book ChapterDOI
11 Jul 1997
TL;DR: The average-case complexity of shortest paths problems in the vertex-potential model is studied, showing that on a graph with n vertices and with respect to this model, the single-source shortest-paths problem can be solved in O(n^2) expected time.
Abstract: We study the average-case complexity of shortest paths problems in the vertex-potential model. The vertex-potential model is a family of probability distributions on complete directed graphs with arbitrary real edge lengths but without negative cycles. We show that on a graph with n vertices and with respect to this model, the single-source shortest-paths problem can be solved in O(n^2) expected time, and the all-pairs shortest-paths problem can be solved in O(n^2 log n) expected time.
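
A small sketch of the setting (the exact form of the distribution is my assumption based on the model's name: each edge length is an i.i.d. uniform part plus a potential difference, which rules out negative cycles because potentials telescope around any cycle). Once the potentials are known, Johnson-style reweighting yields nonnegative reduced costs, so Dijkstra applies; the paper's O(n^2) expected-time algorithm is more refined than this illustration.

```python
import heapq, random

def vp_graph(n, p, rng=random.Random(0)):
    """Complete digraph with length(u, v) = r(u, v) + p[u] - p[v],
    r i.i.d. uniform on [0, 1]; cycle lengths are sums of r's, so >= 0."""
    return {(u, v): rng.random() + p[u] - p[v]
            for u in range(n) for v in range(n) if u != v}

def sssp(n, length, p, s):
    """Dijkstra on the reduced costs length(u,v) - p[u] + p[v] >= 0,
    then shift back to true (possibly negative) distances."""
    dist = [float('inf')] * n
    dist[s], heap, done = 0.0, [(0.0, s)], [False] * n
    while heap:
        du, u = heapq.heappop(heap)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):
            if v != u:
                nd = du + length[(u, v)] - p[u] + p[v]
                if nd < dist[v]:
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return [dist[v] + p[s] - p[v] for v in range(n)]

rng = random.Random(1)
p = [rng.uniform(-5, 5) for _ in range(50)]
g = vp_graph(50, p)
print(sssp(50, g, p, s=0))  # true single-source distances from vertex 0
```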

Book ChapterDOI
22 Nov 1997
TL;DR: The aim of this talk is the computation of an ε-approximation of a linear or nonlinear operator defined on functions of d variables with minimal cost, in particular for large d and/or large 1/ε.
Abstract: Computational complexity studies the intrinsic difficulty of solving mathematically posed problems. Discrete computational complexity studies discrete problems and often uses the Turing machine model of computation. Continuous computational complexity studies continuous problems and tends to use the real number model. Continuous computational complexity may be split into two branches. The first deals with problems for which the information is complete. Informally, information may be complete for problems which are specified by a finite number of inputs. Examples include matrix multiplication, solving linear systems or systems of polynomial equations. We mention two specific results. The first is for matrix multiplication of two real n × n matrices. The trivial lower bound on the complexity is of order n^2, whereas the best known upper bound is of order n^2.376 as proven by D. Coppersmith and S. Winograd. The actual complexity of matrix multiplication is still unknown. The second result is for the problem of deciding whether a system of n real polynomials of degree 4 has a real root. This problem is NP-complete over the reals as proven by L. Blum, M. Shub and S. Smale. The other branch of continuous computational complexity is IBC, information-based complexity. Typically, IBC studies infinite-dimensional problems for which the input is an element of an infinite-dimensional space. Examples of such inputs include multivariate functions on the reals. Information is often given as function values at finitely many points. Therefore information is partial and the original problem can be solved only approximately. The goal of IBC is to compute such an approximation as inexpensively as possible. The error and the cost of approximation can be defined in different settings including the worst case, average case, probabilistic, randomized and mixed settings. In the second part of the talk we concentrate on multivariate problems. By a multivariate problem we mean an approximation of a linear or nonlinear operator defined on functions of d variables. We wish to compute an ε-approximation with minimal cost. We are particularly interested in large d and/or in large 1/ε. Typical examples of such problems are multivariate integration and approximation as well as multivariate integral equations and global optimization. Many multivariate problems are intractable in the worst case deterministic setting, i.e., their complexity grows exponentially with the number d of variables. This is sometimes called the curse of dimension. This holds for multivariate integration for the Korobov class of functions as proven in our recent paper with Ian Sloan. The exponential dependence on the dimension d is a complexity result and one cannot get around it by designing clever algorithms. To break the curse of dimension of the worst case deterministic setting we have to settle for a weaker assurance. One way is to settle for a randomized setting or average case setting. In the randomized setting, it is well known that the classical Monte Carlo algorithm breaks the curse of dimension for multivariate integration. However, there are problems which suffer the curse of dimension also in the randomized setting. An example is provided by multivariate approximation. In the average case setting, the curse of dimension is broken for multivariate integration independently of the choice of the probability measure on the class of functions.
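
The remark that Monte Carlo breaks the curse of dimension for integration is easy to check empirically: the RMS error of plain Monte Carlo is σ(f)/√n whatever the dimension d. A minimal sketch (the integrand is my own example, chosen so the exact integral over [0,1]^d is 1 for every d):

```python
import math, random

def mc_integrate(f, d, n, rng=random.Random(1)):
    """Plain Monte Carlo over [0,1]^d: cost n function values, RMS error
    sigma(f)/sqrt(n), with no explicit dependence on d."""
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

# prod((1 + x_i) / 1.5) integrates to exactly 1 over [0,1]^d for any d.
f = lambda x: math.prod((1.0 + t) / 1.5 for t in x)
print(mc_integrate(f, d=20, n=100_000))  # close to 1.0 despite d = 20
```

The contrast is with worst-case deterministic grid- or quadrature-type methods, whose cost must grow exponentially in d for classes such as the Korobov functions mentioned above.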

Journal ArticleDOI
TL;DR: In this letter, the expected number of changes (“jumps”) in the linear complexity profile of a truly random binary sequence is determined and the variance is given.
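
To reproduce the statistic empirically: the linear complexity profile L_1, ..., L_n is computable with the Berlekamp-Massey algorithm, and the jumps are its strict increases. A sketch (the standard algorithm, my own implementation; the letter's contribution is the analytic distribution, but for a truly random sequence the jump count comes out near n/4):

```python
import random

def linear_complexity_profile(bits):
    """Berlekamp-Massey over GF(2); returns [L_1, ..., L_n], the linear
    complexity of each prefix of `bits`."""
    n = len(bits)
    c, b = [1] + [0] * n, [1] + [0] * n    # current / previous connection polys
    L, m, profile = 0, -1, []
    for i in range(n):
        d = bits[i]                         # discrepancy of prefix i + 1
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n - shift + 1):  # c <- c + x^shift * b  (mod 2)
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
        profile.append(L)
    return profile

rng = random.Random(7)
bits = [rng.randint(0, 1) for _ in range(1000)]
prof = linear_complexity_profile(bits)
jumps = sum(1 for prev, cur in zip([0] + prof, prof) if cur > prev)
print(jumps)  # near 1000 / 4 = 250 for a random sequence
```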


Journal ArticleDOI
TL;DR: It is shown that the argument for polynomial expected complexity does not hold, leaving open whether the expected complexity of the asymmetric traveling salesman problem (ATSP) under branch-and-bound (BnB) subtour elimination is polynomial or exponential in the number of cities.

Book ChapterDOI
06 Oct 1997
TL;DR: For all complexity measures in Kolmogorov complexity, the effect discovered by P. Martin-Löf holds.
Abstract: For all complexity measures in Kolmogorov complexity the effect discovered by P. Martin-Löf holds. For every infinite binary sequence there is a wide gap between the supremum and the infimum of the complexity of initial fragments of the sequence. It is assumed that this inevitable gap is characteristic of Kolmogorov complexity, and that it is caused by the highly abstract nature of the unrestricted Kolmogorov complexity.

Journal ArticleDOI
TL;DR: It is shown that any parallel algorithm in the fixed-degree algebraic decision tree model that answers membership queries in W ⊆ R^n using p processors requires Ω(log|W| / (n log(p/n))) rounds, where |W| is the number of connected components of W.

Proceedings ArticleDOI
C. Karg1
24 Jun 1997
TL;DR: It is shown that LR(k) testing of context-free grammars with respect to random instances is many-one complete for Distributional NP.
Abstract: In this note, we show that LR(k) testing of context-free grammars with respect to random instances is many-one complete for Distributional NP. The same result holds for testing whether a context-free grammar is LL(k), strong LL(k), SLR(k), LC(k) or strong LC(k), respectively.

Book ChapterDOI
01 Jan 1997
TL;DR: The complexities of insertion operations on formal languages are investigated relative to complexity classes; the results relativize and give new characterizations of the ways to relativize nondeterministic space.
Abstract: We investigate complexities of insertion operations on formal languages relative to complexity classes. In this way, we introduce operations closely related to LOG(CFL) and NP. Our results relativize and give new characterizations of the ways to relativize nondeterministic space.

Journal ArticleDOI
TL;DR: It is established that the sensitivity generation requires a minimum of N additional states, where N is the order of the filter, and this result is used to show a minimum complexity of 3N+1 multiplications for an order-N filter.
Abstract: The problem of implementing adaptive IIR filters of minimum complexity is considered. The complexity used here is the number of multiplications in the implementation of the structures generating both the adaptive filter output and the sensitivities to be used in any gradient-based algorithm. This complexity is independent of the specific adaptive algorithm used. It is established that the sensitivity generation requires a minimum of N additional states, where N is the order of the filter. This result is used to show a minimum complexity of 3N+1 multiplications for an order-N filter. Principles to use in the construction of such lowest complexity implementations are provided, and examples of minimum complexity direct-form, cascade-form, and parallel-form adaptive IIR filters are given.

Book ChapterDOI
01 Jan 1997
TL;DR: A new lower bound on the computational complexity of infinite word generation is found: real-time, a binary working alphabet, and o(n/(log n)^2) space are insufficient to generate a concrete infinite word over a two-letter alphabet.
Abstract: Most of the previous work on the complexity of infinite words has measured the complexity as a descriptional one, i.e., an infinite word w had a "small" complexity if it was generated by a morphism or another simple machinery, and w has been considered to be "complex" if one needs more complex devices (gsm's) to generate it. In [5] the study of the computational complexity of infinite word generation and of its relation to the descriptional characterizations mentioned above was started. The complexity classes GSPACE(f) = {infinite words generated in space f(n)} are defined there, and some fundamental mechanisms for infinite word generation are related to them. It is also proved there that there is no hierarchy between GSPACE(O(1)) and GSPACE(log_2 n). Here, GSPACE(f) ⊂ GSPACE(g) is proved for g(n) ≥ f(n) ≥ log_2 n with f(n) = o(g(n)). The main result of this paper is a new lower bound on the computational complexity of infinite word generation: real-time, a binary working alphabet, and o(n/(log n)^2) space are insufficient to generate a concrete infinite word over a two-letter alphabet.