
Showing papers on "Average-case complexity" published in 1994


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new algorithm, AC-6, which keeps the optimal worst-case time complexity of AC-4 while avoiding its space-complexity drawback.

357 citations


Journal ArticleDOI
TL;DR: A new k-out-of-n model is constructed, which has n components, each with its own positive integer weight, such that the system is good (failed) if the total weight of good (failed) components is at least k.
Abstract: This paper constructs a new k-out-of-n model, viz., a weighted-k-out-of-n system, which has n components, each with its own positive integer weight (total system weight = w), such that the system is good (failed) if the total weight of good (failed) components is at least k. The reliability of the weighted-k-out-of-n:G system is the complement of the unreliability of a weighted-(w-k+1)-out-of-n:F system. Without loss of generality, the authors discuss the weighted-k-out-of-n:G system only. The k-out-of-n:G system is a special case of the weighted-k-out-of-n:G system wherein the weight of each component is 1. An efficient algorithm is given to evaluate the reliability of the weighted-k-out-of-n:G system. The time complexity of this algorithm is O(n·k).
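The abstract states the O(n·k) bound but not the recurrence behind it. A natural way to obtain such a bound is a dynamic program over the required weight; the sketch below is only an illustration under that assumption, not necessarily the algorithm of the paper, and the function and variable names are made up.

```python
def weighted_k_out_of_n_reliability(weights, probs, k):
    """Reliability of a weighted-k-out-of-n:G system (illustrative sketch).

    weights[i] -- positive integer weight of component i
    probs[i]   -- probability that component i is good
    k          -- the system is good if the total weight of good components is >= k
    Runs in O(n*k) time, matching the bound stated in the abstract.
    """
    f = [1.0] + [0.0] * k          # f[j] = Pr[good weight seen so far >= j]; only j = 0 holds initially
    for w, p in zip(weights, probs):
        g = [0.0] * (k + 1)
        for j in range(k + 1):
            # component good (prob p): remaining requirement drops by w (never below 0);
            # component failed (prob 1-p): requirement unchanged
            g[j] = p * f[max(j - w, 0)] + (1.0 - p) * f[j]
        f = g
    return f[k]

# Ordinary 2-out-of-3 system (all weights 1) with component reliability 0.9:
# R = 3 * 0.9^2 * 0.1 + 0.9^3 = 0.972
print(weighted_k_out_of_n_reliability([1, 1, 1], [0.9, 0.9, 0.9], 2))
```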

156 citations


Journal ArticleDOI
TL;DR: An O(log n) time wait-free approximate agreement algorithm is presented; the complexity of this algorithm is within a small constant of the lower bound.
Abstract: The time complexity of wait-free algorithms in “normal” executions, where no failures occur and processes operate at approximately the same speed, is considered. A lower bound of log n on the time complexity of any wait-free algorithm that achieves approximate agreement among n processes is proved. In contrast, there exists a non-wait-free algorithm that solves this problem in constant time. This implies an Ω(log n) time separation between the wait-free and non-wait-free computation models. On the positive side, we present an O(log n) time wait-free approximate agreement algorithm; the complexity of this algorithm is within a small constant of the lower bound.

114 citations


Proceedings Article
01 Jan 1994
TL;DR: In this article, the authors develop a theory of dynamic complexity and study the class dynamic first-order logic (Dyn-FO), the set of properties that can be maintained and queried in first-order logic, i.e., relational calculus, on a relational database.
Abstract: Traditionally, computational complexity has considered only static problems. Classical complexity classes such as NC, P, and NP are defined in terms of the complexity of checking, upon presentation of an entire input, whether the input satisfies a certain property. For many applications of computers it is more appropriate to model the process as a dynamic one. There is a fairly large object being worked on over a period of time. The object is repeatedly modified by users and computations are performed. We develop a theory of dynamic complexity. We study the new complexity class, dynamic first-order logic (Dyn-FO). This is the set of properties that can be maintained and queried in first-order logic, i.e., relational calculus, on a relational database. We show that many interesting properties are in Dyn-FO, including multiplication, graph connectivity, bipartiteness, and the computation of minimum spanning trees. Note that none of these problems is in static FO, and this fact has been used to justify increasing the power of query languages beyond first-order. It is thus striking that these problems are indeed dynamic first-order and, thus, were computable in first-order database languages all along. We also define “bounded-expansion reductions” which honor dynamic complexity classes. We prove that certain standard complete problems for static complexity classes, such as REACH_a for P, remain complete via these new reductions. On the other hand, we prove that other such problems, including REACH for NL and REACH_d for L, are no longer complete via bounded-expansion reductions. Furthermore, we show that a version of REACH_a, called REACH_a+, is not in Dyn-FO unless all of P is contained in parallel linear time.
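Dyn-FO itself is a logical class, but the dynamic setting it formalizes is easy to illustrate: maintain a small auxiliary structure so that each update and each query is cheap, instead of recomputing the property from scratch. The sketch below uses union-find to maintain graph connectivity under edge insertions; it is only an informal illustration of the dynamic viewpoint, not a Dyn-FO construction, and the class and method names are mine.

```python
class DynamicConnectivity:
    """Maintain connectivity of an undirected graph under edge insertions.

    Each insertion or query touches only a near-constant amount of auxiliary
    data (a union-find forest), instead of re-running a reachability search
    over the whole graph after every change."""

    def __init__(self, n):
        self.parent = list(range(n))      # one tree per connected component

    def _find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def insert_edge(self, u, v):
        ru, rv = self._find(u), self._find(v)
        if ru != rv:
            self.parent[ru] = rv          # merge the two components

    def connected(self, u, v):
        return self._find(u) == self._find(v)

g = DynamicConnectivity(5)
g.insert_edge(0, 1)
g.insert_edge(3, 4)
print(g.connected(0, 1), g.connected(1, 3))   # True False
g.insert_edge(1, 3)
print(g.connected(0, 4))                      # True
```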

107 citations


Journal ArticleDOI
TL;DR: It is proved that if t(n) ≥ n is a time-constructible function and A is a recursive set not in DTIME(t), then there exist a constant c and infinitely many x such that ic^t'(x : A) ≥ K^t'(x) - c.
Abstract: We introduce a measure for the computational complexity of individual instances of a decision problem and study some of its properties. The instance complexity of a string x with respect to a set A and time bound t, ic^t(x : A), is defined as the size of the smallest special-case program for A that runs in time t, decides x correctly, and makes no mistakes on other strings ("don't know" answers are permitted). We prove that a set A is in P if and only if there exist a polynomial t and a constant c such that ic^t(x : A) ≤ c for all x; on the other hand, if A is NP-hard and P ≠ NP, then for all polynomials t and constants c, ic^t(x : A) > c log |x| for infinitely many x. Observing that K^t(x), the t-bounded Kolmogorov complexity of x, is roughly an upper bound on ic^t(x : A), we proceed to investigate the existence of individually hard problem instances, i.e., strings whose instance complexity is close to their Kolmogorov complexity. We prove that if t(n) ≥ n is a time-constructible function and A is a recursive set not in DTIME(t), then there exist a constant c and infinitely many x such that ic^t'(x : A) ≥ K^t'(x) - c, for some time bound t'(n) dependent on the complexity of recognizing A. Under the stronger assumptions that the set A is NP-hard and DEXT ≠ NEXT, we prove that for any polynomial t there exist a polynomial t' and a constant c such that for infinitely many x, ic^t(x : A) ≥ K^t'(x) - c. If A is DEXT-hard, then the same result holds unconditionally. We also prove that there is a set A ∈ DEXT such that for some constant c and all x, ic^exp(x : A) ≤ K^exp(x) - 2 log K^exp'(x) - c, where exp(n) = 2^n and exp'(n) = cn·2^n + c.
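For readability, the two characterizations quoted at the beginning of the abstract can be restated in clean notation; this is only a typeset restatement of the sentences above, with ic^t(x : A) denoting instance complexity and K^t(x) the t-bounded Kolmogorov complexity.

```latex
\begin{align*}
A \in \mathrm{P}
  &\iff \exists\,\text{polynomial } t,\ \exists\, c \ \forall x:\;
        \mathrm{ic}^{t}(x : A) \le c,\\
A \ \text{NP-hard and } \mathrm{P}\neq\mathrm{NP}
  &\implies \forall\,\text{polynomial } t,\ \forall\, c:\;
        \mathrm{ic}^{t}(x : A) > c\log|x| \ \text{for infinitely many } x.
\end{align*}
```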

75 citations


Journal ArticleDOI
TL;DR: A deterministic protocol is provided that has linear space complexity, linear time complexity for a read operation, and constant time complexity for a write, together with a probabilistic variant that has an overwhelmingly small, controllable probability of error.
Abstract: We address the problem of reading several variables (components) X_1, ..., X_c, all in one atomic operation, by only one process, called the reader, while each of these variables is being written by a set of writers. All operations (i.e., both reads and writes) are assumed to be totally asynchronous and wait-free. For this problem, only algorithms that require at best quadratic time and space complexity can be derived from the existing literature. (The time complexity of a construction is the number of suboperations of a high-level operation and its space complexity is the number of atomic shared variables it needs.) In this paper, we provide a deterministic protocol that has linear (in the number of processes) space complexity, linear time complexity for a read operation, and constant time complexity for a write. Our solution does not make use of time-stamps. Rather, it is the memory location where a write writes that differentiates it from the other writes. Also, by introducing randomness in the location where the reader gets the value that it returns, we get a conceptually very simple probabilistic algorithm. This algorithm has an overwhelmingly small, controllable probability of error. Its space complexity, and also the time complexity of a read operation, are sublinear. The time complexity of a write is constant. On the other hand, under the Archimedean time assumption, we get a protocol whose time and space complexity do not depend on the number of writers, but are linear in the number of components only. (The time complexity of a write operation is still constant.)

61 citations


Proceedings ArticleDOI
28 Jun 1994
TL;DR: In this paper, the authors describe three orthogonal complexity measures: parallel time, amount of hardware, and degree of non-uniformity, which together parametrize most complexity classes, and show that the descriptive complexity framework neatly captures these measures using the parameters: quantifier depth, number of variable bits, and type of numeric predicates respectively.
Abstract: We describe three orthogonal complexity measures: parallel time, amount of hardware, and degree of non-uniformity, which together parametrize most complexity classes. We show that the descriptive complexity framework neatly captures these measures using the parameters: quantifier depth, number of variable bits, and type of numeric predicates respectively. A fairly simple picture arises in which the basic questions in complexity theory, solved and unsolved, can be understood as questions about tradeoffs among these three dimensions.

32 citations


Proceedings ArticleDOI
28 Jun 1994
TL;DR: A general setting is given in which the complexity of solving two independent problems is the product of the associated individual complexities, and several concrete results of this type are derived for decision trees and communication complexity.
Abstract: Gives a general setting in which the complexity (or quality) of solving two independent problems is the product of the associated individual complexities. The authors then derive from this setting several concrete results of this type for decision trees and communication complexity.

29 citations


Journal ArticleDOI
TL;DR: The average case complexity of multivariate integration and L_2 function approximation for the class F = C([0, 1]^d) of continuous functions of d variables was studied in this article.

24 citations


Book ChapterDOI
25 Aug 1994
TL;DR: An efficient algorithm is given, the SAT1.2 algorithm, for the SAT problem, which can find a solution for a satisfiable CNF formula efficiently but gives an answer in O(m^O(1) 2^m) time to an unsatisfiable CNF formula.
Abstract: In this paper, we give an efficient algorithm, the SAT1.2 algorithm, for the SAT problem. For randomly generated formulas with n clauses, m variables, and l literals per clause, the average run time of the SAT1.2 algorithm is O(m^O(1) n^2) for l ≥ 3 and n/m ≤ α·2^l/l, where α
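The SAT1.2 algorithm itself is not spelled out in the excerpt above. As a point of reference for the behaviour it describes (finding satisfying assignments quickly on many random formulas while possibly taking exponential time to certify unsatisfiability), here is a minimal backtracking solver in the DPLL style; it is an assumption-laden illustration, not the SAT1.2 algorithm, and all names are made up.

```python
def dpll(clauses, assignment=None):
    """Tiny DPLL-style backtracking SAT solver.

    Literals are non-zero integers (x or -x); a clause is a list of literals.
    Returns a set of true literals, or None if the formula is unsatisfiable."""
    if assignment is None:
        assignment = set()
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                       # empty clause: dead end
        simplified.append(reduced)
    if not simplified:
        return assignment                     # every clause satisfied
    # Unit propagation.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment | {clause[0]})
    # Branch on the first literal of the first clause.
    lit = simplified[0][0]
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))      # e.g. {1, 3, -2}
```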

13 citations


Proceedings ArticleDOI
E. Hemaspaandra
04 Jul 1994
TL;DR: It is shown that under reasonable assumptions the complexity can increase only if the complexity of all the uni-modal fragments is below PSPACE.
Abstract: We prove general theorems about the relationship between the complexity of multi-modal logics and the complexity of their uni-modal fragments. Halpern and Moses (1985) show that the complexity of a multi-modal logic without any interaction between the modalities may be higher than the complexity of the individual fragments. We show that under reasonable assumptions the complexity can increase only if the complexity of all the uni-modal fragments is below PSPACE. In addition, we completely characterize what happens if the complexity of all fragments is below PSPACE.

Proceedings ArticleDOI
23 May 1994
TL;DR: Inspired by recent successful attempts to develop a meaningful average case analysis for TM computations, a new complexity measure for the internal delay of a circuit, called time, is defined, along with a complexity notion for input distributions obtained by considering the complexity of circuits with uniformly distributed random input bits that generate such distributions.
Abstract: In contrast to machine models like Turing machines or random access machines, circuits are a rigid computational model. The internal information flow of a computation is fixed in advance, independent of the actual input. Therefore, in complexity theory only worst case complexity measures have been used to analyse this model. Concerning the circuit size this seems to be the best one can do. The delay between feeding the input bits into a circuit and being able to read off the output bits at the output gates is usually measured by the depth of the circuit. One might try to take advantage of favorable cases in which the output values are obtained much earlier. This will be the case when critical paths, i.e. paths between input and output gates of maximal length, have no influence on the final output. Inspired by recent successful attempts to develop a meaningful average case analysis for TM computations [Levi86, Gure91, BCGL92, ReSc93a], we follow the same goal for the circuit model. For this purpose, a new complexity measure for the internal delay is defined, called time. This may be given implicitly or explicitly, where in the latter case a gate has to signal that it is "ready", i.e. has computed the desired result. Using a simple coding, both models are shown to be equivalent. By doubling the number of gates, any circuit with implicit timing can easily be transformed to an equivalent circuit with explicit time signals. Based on the notion of time, two average case measures for the circuit delay are defined. The analysis is not restricted to uniform distributions over the input space; instead, a large class of distributions will be considered. For this purpose, a complexity notion is needed for distributions. We define a measure based on the circuit model by considering the complexity of circuits with uniformly distributed random input bits that generate such distributions. Finally, this new approach is applied to concrete examples. We derive matching lower and upper bounds for the average circuit delay for basic functions like OR, ADDITION, THRESHOLD and PARITY. It will be shown that for PARITY the average delay remains logarithmic. In many cases, however, an exponential speedup compared to the worst case can be achieved. For example, the average delay for n-bit ADDITION is of order log log n. The circuit designs that achieve these bounds turn out to be very different from the standard ones for optimal worst case results.
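The circuits of the paper that achieve average delay O(log log n) for ADDITION are not reproduced here, but the underlying phenomenon, that on random inputs carries rarely have to travel far, is easy to check empirically. The following sketch is my own illustration, not taken from the paper: it measures the longest carry-propagate run when adding two random n-bit numbers, which concentrates near log2 n, far below the worst case of n.

```python
import random

def longest_propagate_run(n):
    """Longest run of carry-propagate positions (a_i XOR b_i = 1) for two
    uniformly random n-bit summands; up to one position, this bounds how far
    any single carry has to ripple in a ripple-carry adder."""
    a = [random.randint(0, 1) for _ in range(n)]
    b = [random.randint(0, 1) for _ in range(n)]
    best = cur = 0
    for x, y in zip(a, b):
        cur = cur + 1 if x ^ y else 0
        best = max(best, cur)
    return best

if __name__ == "__main__":
    n, trials = 1024, 2000
    avg = sum(longest_propagate_run(n) for _ in range(trials)) / trials
    # Typically close to log2(n) = 10, far below the worst case of n = 1024.
    print(f"average longest carry chain for n={n}: {avg:.1f}")
```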

Proceedings ArticleDOI
28 Jun 1994
TL;DR: This work surveys recent research concerning the qualitative complexity of Angluin's (1993) model of learning with queries, and characterizations of the power of different learning protocols by complexity classes of oracle machines are reviewed.
Abstract: We survey recent research concerning the qualitative complexity of Angluin's (1993) model of learning with queries. In this model, there is a learner that tries to identify a target concept by means of queries to a teacher. Thus, the process can be naturally formulated as an oracle computation. Among the results we review there are: characterizations of the power of different learning protocols by complexity classes of oracle machines; relations between the complexity of learning and the complexity of computing advice functions for nonuniform classes; and combinatorial characterizations of the concept classes that are learnable in specific protocols.

Book ChapterDOI
11 Jul 1994
TL;DR: The average case complexity of evaluating all prefixes of an input vector over a given semigroup is analysed; circuits over the semigroup are used as the computational model, a complexity measure for their average delay, called time, is introduced, and the average case complexity of a computational problem is defined for arbitrary input distributions.
Abstract: We analyse the average case complexity of evaluating all prefixes of an input vector over a given semigroup. As computational model, circuits over the semigroup are used, and a complexity measure for the average delay of such circuits, called time, is introduced. Based on this notion, we then define the average case complexity of a computational problem for arbitrary input distributions.
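For concreteness, the problem analysed is the classic prefix (scan) problem over an associative operation. The sketch below is the standard logarithmic-depth recursive-doubling prefix circuit, simulated sequentially level by level; the average-delay-optimized circuits of the paper are not reproduced here.

```python
from operator import add

def prefix_scan(xs, op):
    """All prefixes x_1, x_1*x_2, ..., x_1*...*x_n over an associative
    operation op, computed by recursive doubling: after round r, position i
    holds the product of the last min(2**r, i+1) inputs ending at i."""
    out = list(xs)
    step = 1
    while step < len(out):
        nxt = list(out)
        # In hardware, all operations of one round sit on the same circuit level.
        for i in range(step, len(out)):
            nxt[i] = op(out[i - step], out[i])
        out = nxt
        step *= 2
    return out

print(prefix_scan([3, 1, 4, 1, 5], add))        # [3, 4, 8, 9, 14]
print(prefix_scan("abcd", lambda a, b: a + b))  # ['a', 'ab', 'abc', 'abcd']
```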

Journal ArticleDOI
TL;DR: The theory of the r.e. m-degrees has the same computational complexity as true arithmetic.
Abstract: The theory of the r.e. m-degrees has the same computational complexity as true arithmetic. In fact, it is possible to define, without parameters, a standard model of arithmetic in this degree structure.

Proceedings ArticleDOI
23 May 1994
TL;DR: This work relates statistical knowledge complexity with perfect knowledge complexity; specifically, it shows that, for the honest verifier, these hierarchies coincide, up to a logarithmic additive term.
Abstract: We study the computational complexity of languages which have interactive proofs of logarithmic knowledge complexity. We show that all such languages can be recognized in BPP^NP. Prior to this work, for languages with greater-than-zero knowledge complexity (and specifically, even for knowledge complexity 1), only trivial computational complexity bounds (i.e., only recognizability in PSPACE = IP) were known. In the course of our proof, we relate statistical knowledge complexity with perfect knowledge complexity; specifically, we show that, for the honest verifier, these hierarchies coincide up to a logarithmic additive term (i.e., SKC(k(·)) ⊆ PKC(k(·) + log(·))).

Journal ArticleDOI
TL;DR: In this article, the authors studied the computational complexity of a matrix inversion formula with the intention of showing its improvement over the naive method of computing the inverse separately, and they did not claim that the matrix-inversion formula is their discovery.
Abstract: The purpose of our two-page communication was to study the computational complexity of a matrix inversion formula with the intention of showing its improvement over the naive method of computing the inverse separately. We did not intend to claim that the matrix inversion formula is our discovery. However, it is true that this point was not made clear in our short paper.

Journal ArticleDOI
TL;DR: A related algorithm is presented that obtains the linear complexity of the sequence requiring, on average for sequences of period 2^n, n ≥ 0, no more than 2 parity check sums.
Abstract: The linear complexity of a periodic binary sequence is the length of the shortest linear feedback shift register that can be used to generate that sequence. When the sequence has least period 2^n, n ≥ 0, there is a fast algorithm due to Games and Chan that evaluates this linear complexity. In this paper a related algorithm is presented that obtains the linear complexity of the sequence requiring, on average for sequences of period 2^n, n ≥ 0, no more than 2 parity check sums.
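The parity-check variant of the abstract is not spelled out, but the Games-Chan algorithm it builds on is short enough to sketch: repeatedly fold one period of the sequence in half, recursing on the left half when both halves agree and on the XOR of the halves (adding half the current length to the complexity) otherwise. This is the baseline Games-Chan procedure, not the paper's new algorithm.

```python
def games_chan_linear_complexity(s):
    """Linear complexity of a binary sequence of period 2**n (Games-Chan).

    s -- one full period of the sequence as a list of 0/1 values.
    """
    assert len(s) & (len(s) - 1) == 0, "period must be a power of two"
    c = 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        if left == right:
            s = left                              # complexity lies in the half-period
        else:
            c += half                             # at least `half` more register stages needed
            s = [l ^ r for l, r in zip(left, right)]
    return c + s[0]

print(games_chan_linear_complexity([1, 1, 1, 1]))   # 1 (the all-ones sequence)
print(games_chan_linear_complexity([1, 0, 0, 0]))   # 4
```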

Book ChapterDOI
10 Jun 1994
TL;DR: This paper searches for connections between the descriptional and computational complexities of infinite words, where descriptional complexity is measured by the complexity of the mechanism used to generate the word and computational complexity by the resources used by Turing machines to generate it.
Abstract: This paper searches for connections between descriptional and computational complexities of infinite words. In the former, the complexity is measured by the complexity of the mechanism used to generate infinite words, typical examples being iterated morphisms, iterated dgsm's and double D0L TAG systems. In the latter, the complexity is measured by the resources used by Turing machines to generate infinite words.

Proceedings ArticleDOI
01 Apr 1994
TL;DR: A parallel algorithm is presented for performing Boolean set operations on generalized polygons that have holes in them; it tries to minimize the intersection point computations by intersecting only a subset of the loops of the polygons, based on their topological relationships.
Abstract: We present a parallel algorithm for performing Boolean set operations on generalized polygons that have holes in them. The intersection algorithm has a processor complexity of O(m^2 n^2) and a time complexity of O(max(2 log m, log^2 n)), where m is the maximum number of vertices in any loop of a polygon, and n is the maximum number of loops per polygon. The union and difference algorithms have a processor complexity of O(m^2 n^2) and time complexities of O(log m) and O(max(2 log m, log n)) respectively. The algorithm is based on the EREW PRAM model. The algorithm tries to minimize the intersection point computations by intersecting only a subset of loops of the polygons based on their topological relationships.

Journal ArticleDOI
TL;DR: The lower bound on the worst-case time complexity of the problem is shown to be Ω(n log n), and the time complexity of the presented algorithm is therefore shown to match this lower bound.

Journal ArticleDOI
Ming Chu
TL;DR: This work presents an information-based complexity problem for which the computational complexity can be any given increasing function of the information complexity, and the information complexity can be any non-decreasing function of 1/ε, where ε is the error parameter.

Proceedings Article
01 Jan 1994
TL;DR: In this paper, the authors prove general theorems about the relationship between the complexity of multi-modal logics and the complexity of their uni-modal fragments.
Abstract: In this paper, we prove general theorems about the relationship between the complexity of multi-modal logics and the complexity of their uni-modal fragments. Halpern and Moses [HM85] show that the complexity of a multi-modal logic without any interaction between the modalities may be higher than the complexity of the individual fragments. In this paper, we show that under reasonable assumptions the complexity can increase only if the complexity of all the uni-modal fragments is below PSPACE. In addition, we completely characterize what happens if the complexity of all fragments is below PSPACE.

Proceedings ArticleDOI
19 Apr 1994
TL;DR: A parallel algorithm for RNS to binary conversion, which is computationally efficient and requires moderate amount of storage, is presented, based on a graphical interpretation of the residue numbers, and imposes no restriction on the size or choice of moduli set.
Abstract: We present a parallel algorithm for RNS to binary conversion, which is computationally efficient and requires a moderate amount of storage. The algorithm is based on a graphical interpretation of the residue numbers, and imposes no restriction on the size or choice of moduli set. The parallel time complexity of the algorithm is Θ(⌈log k⌉), where k is the size of the moduli set.
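The graphical, table-based parallel algorithm of the paper is not included in the excerpt. For readers unfamiliar with the problem, residue-to-binary conversion is conventionally done with the Chinese Remainder Theorem; the short sketch below shows that standard method only, not the paper's algorithm.

```python
from math import prod

def rns_to_binary(residues, moduli):
    """Recover the integer represented by a residue number system (RNS) tuple
    via the textbook Chinese Remainder Theorem reconstruction.
    Requires pairwise coprime moduli; uses pow(a, -1, m) (Python 3.8+)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): inverse of Mi modulo m
    return x % M

moduli = [3, 5, 7]
print(rns_to_binary([2, 3, 2], moduli))   # 23, since 23 mod (3, 5, 7) = (2, 3, 2)
```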

Book ChapterDOI
06 Sep 1994
TL;DR: Two probabilistic models are studied that were developed in order to predict the computational complexity of the branch and bound algorithm as well as its suitability for a parallelization based on the simultaneous exploration of all subproblems having the same lower bound.
Abstract: We study two probabilistic models developed in order to predict the computational complexity of the branch and bound algorithm as well as its suitability for a parallelization based on the simultaneous exploration of all subproblems having the same lower bound. We show that both models, starting from different assumptions, yield asymptotically the same results but differ for small problems. Both models agree in predicting a quick increase of the number of subproblems as a function of their lower bounds, offering a convenient approach for parallelization of the branch and bound algorithm.

Journal ArticleDOI
TL;DR: An analysis of the computational complexity of the linear quadratic control models commonly used in Economics and the MacRae approximation, which has a nested structure.

Journal ArticleDOI
TL;DR: A structure called a hierarchical tree is proposed to reduce the complexity of Dempster's rule of combination in evidence theory; the resulting algorithm is bounded by O(2^(2n-2)) in the worst case versus O(2^(2n)) for the brute-force algorithm.
Abstract: In this article we propose a structure called a hierarchical tree to reduce the complexity of Dempster's rule of combination in evidence theory. Our algorithm is bounded by O(2^(2n-2)) in the worst case versus O(2^(2n)) for the brute-force algorithm. We can hope for a better average complexity. Furthermore, we propose algorithms based on hierarchical trees to reduce the complexity of the computation of the Bel, Pl and Q functions.
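To make the O(2^(2n)) brute-force baseline concrete, the sketch below combines two belief functions by enumerating all pairs of focal sets, with subsets of an n-element frame encoded as bitmasks. The hierarchical-tree speedup proposed in the paper is not reproduced, and the function and variable names are illustrative.

```python
def dempster_combine(m1, m2, n):
    """Brute-force Dempster's rule of combination over a frame of n elements.

    m1 and m2 map a focal set (bitmask over the n frame elements) to its mass.
    The double loop over all pairs of focal sets is the O(2^(2n)) baseline
    that a hierarchical-tree algorithm would try to beat."""
    assert all(0 < a < 2 ** n for a in list(m1) + list(m2)), \
        "focal sets must be non-empty subsets of the frame"
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter == 0:
                conflict += mb * mc            # mass falling on the empty set
            else:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: the evidence cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Frame {x, y}: bit 0 is x, bit 1 is y, 0b11 is the whole frame.
m1 = {0b01: 0.6, 0b11: 0.4}    # evidence pointing to x
m2 = {0b10: 0.3, 0b11: 0.7}    # evidence pointing to y
print(dempster_combine(m1, m2, 2))
```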

Book ChapterDOI
01 Mar 1994
TL;DR: The history of the developments from Gödel's 1956 letter asking for the computational complexity of finding proofs of theorems, through computational complexity, the exploration of complete problems for NP and PSPACE, and the results of structural complexity, to the recent insights about interactive proofs is reviewed.
Abstract: In this paper, we view $P \stackrel{?}{=} NP$ as the problem which symbolizes the attempt to understand what is and what is not feasibly computable. The paper briefly reviews the history of the developments from Gödel's 1956 letter asking for the computational complexity of finding proofs of theorems, through computational complexity, the exploration of complete problems for NP and PSPACE, and the results of structural complexity, to the recent insights about interactive proofs.

Book ChapterDOI
11 Jul 1994
TL;DR: Two different types of complexity lower bounds for one-way bounded-error probabilistic space complexity are proved.
Abstract: We prove two different types of complexity lower bounds for the one-way bounded-error probabilistic space complexity. The lower bounds are proved for arbitrary languages in the common way, in terms of the deterministic communication dimension of languages, and in terms of the notion of "probabilistic communication characteristic" of a language that we define. These lower bounds are incomparable.

Book ChapterDOI
22 Aug 1994
TL;DR: A function B(x) is introduced which assigns a real number to a string x and is intended to be a measure of the randomness of x; comparisons are made between B(x) and K(x), the Kolmogorov complexity of x.
Abstract: A function, B(x), is introduced which assigns a real number to a string, x, and which is intended to be a measure of the randomness of x. Comparisons are made between B(x) and K(x), the Kolmogorov complexity of x. An O(n^3) algorithm for computing B(x) is given, along with brief descriptions of experimental results showing the efficacy of this function in practical situations.