
Showing papers on "Average-case complexity" published in 1988


Journal ArticleDOI
TL;DR: Jacobi's triple product and some number-theoretic applications are discussed, along with algebraic approximations to pi, algorithms for the elementary functions, and arithmetic-geometric mean iterations.
Abstract: Complete Elliptic Integrals and the Arithmetic-Geometric Mean Iteration. Theta Functions and the Arithmetic-Geometric Mean Iteration. Jacobi's Triple-Product and Some Number Theoretic Applications. Higher Order Transformations. Modular Equations and Algebraic Approximations to pi. The Complexity of Algebraic Functions. Algorithms for the Elementary Functions. General Means and Iterations. Some Additional Applications. Other Approaches to the Elementary Functions. Pi. Bibliography. Symbol List. Index.

269 citations
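
As a rough, hedged illustration of the book's theme (the code is ours, not taken from the text), the Gauss-Legendre (Brent-Salamin) scheme below uses the arithmetic-geometric mean to approximate pi, roughly doubling the number of correct digits with every pass; the book's complexity results concern running such iterations at very high precision.

```python
# A hedged sketch (function and variable names are ours) of the Gauss-Legendre
# (Brent-Salamin) scheme, one of the AGM-based iterations of the kind the book
# analyzes. Each pass roughly doubles the number of correct digits of pi.
from math import sqrt

def agm_pi(iterations=4):
    a, b = 1.0, 1.0 / sqrt(2.0)   # arithmetic and geometric sequences
    t, p = 0.25, 1.0              # running correction term and power of two
    for _ in range(iterations):
        a_next = (a + b) / 2.0    # arithmetic mean
        b = sqrt(a * b)           # geometric mean
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2.0 * p
    return (a + b) ** 2 / (4.0 * t)

print(agm_pi())   # ~3.141592653589793; saturates double precision in about 3 passes
```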


Book ChapterDOI
01 Apr 1988

85 citations


Proceedings ArticleDOI
14 Jun 1988
TL;DR: The separation of small complexity classes is considered, and downward closure results are derived which show that some intuitively arrived-at, previously published results are misleading.
Abstract: The separation of small complexity classes is considered. Some downward closure results are derived which show that some intuitively arrived-at results that were published previously are misleading. This is done by giving uniform versions of simulations in the decision-tree model of concrete complexity. The results also show that sublinear-time computation has enough power to encode interesting questions about polynomial-time complexity.

75 citations


Journal ArticleDOI
Gabriel M. Kuper, Moshe Y. Vardi
06 Oct 1988
TL;DR: This work defines a hierarchy of queries based on the depth of nesting of power set operations and shows that this hierarchy corresponds to a natural hierarchy of Turing machines that run in multiply exponential time.
Abstract: We investigate the complexity of query processing in the logical data model (LDM). We use two measures: data complexity, which is complexity with respect to the size of the data, and expression complexity, which is complexity with respect to the size of the expressions denoting the queries. Our investigation shows that while the operations of product and union are essentially first-order operations, the power set operation is inherently a higher-order operation and is exponentially expensive. We define a hierarchy of queries based on the depth of nesting of power set operations and show that this hierarchy corresponds to a natural hierarchy of Turing machines that run in multiply exponential time.

52 citations
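
A small, hedged illustration of the abstract's point that the power set operation is exponentially expensive (the snippet is ours): since |P(S)| = 2^|S|, nesting power sets k times over a finite set produces a k-fold exponential tower, which is why the query hierarchy lines up with multiply exponential Turing machine time.

```python
# A tiny illustration (ours, not the paper's) of why power set is the expensive
# operation: |P(S)| = 2^|S|, so nesting the operation k times yields a k-fold
# exponential tower, mirroring the multiply exponential hierarchy above.
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

level1 = powerset({1, 2, 3})     # 2**3 = 8 subsets
level2 = powerset(level1)        # 2**8 = 256 subsets of subsets
print(len(level1), len(level2))  # 8 256; one more level would already have 2**256 elements
```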


Journal ArticleDOI
TL;DR: Two generalized discrete versions of the Arimoto-Blahut algorithm for continuous channels require only the computation of a sequence of finite sums, which significantly reduces numerical computational complexity.
Abstract: A version of the Arimoto-Blahut algorithm for continuous channels involves evaluating integrals over an entire input space and thus is not tractable. Two generalized discrete versions of the Arimoto-Blahut algorithm are presented for this purpose. Instead of calculating integrals, both algorithms require only the computation of a sequence of finite sums. This significantly reduces numerical computational complexity.

41 citations
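
For context, here is a minimal sketch of the classical discrete Blahut-Arimoto capacity iteration (the standard algorithm the paper generalizes, not the paper's own versions); every step is a finite sum over the input and output alphabets, which is the tractability point made in the abstract.

```python
# Minimal sketch of the classical *discrete* Blahut-Arimoto capacity iteration
# (not the paper's generalized versions): every quantity below is a finite sum
# over the input/output alphabets.
import numpy as np

def blahut_arimoto(P_y_given_x, iters=200):
    """P_y_given_x: |X| x |Y| row-stochastic channel matrix; returns (capacity in nats, p_x)."""
    n_x = P_y_given_x.shape[0]
    p_x = np.full(n_x, 1.0 / n_x)                 # start from the uniform input distribution
    for _ in range(iters):
        q_y = p_x @ P_y_given_x                   # current output distribution
        d = np.sum(P_y_given_x * np.log(P_y_given_x / q_y + 1e-300), axis=1)
        p_x = p_x * np.exp(d)                     # reweight inputs by exp(divergence)
        p_x /= p_x.sum()
    q_y = p_x @ P_y_given_x                       # mutual information of the final p_x
    d = np.sum(P_y_given_x * np.log(P_y_given_x / q_y + 1e-300), axis=1)
    return float(np.sum(p_x * d)), p_x

# Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1) = 0.531 bit
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
C, p = blahut_arimoto(bsc)
print(C / np.log(2), p)   # ~0.531 bit, with the optimal input close to uniform
```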



Proceedings ArticleDOI
14 Jun 1988
TL;DR: The authors investigate some of these questions and show that they are equivalent to each other and are closely related to other important open questions in complexity theory.
Abstract: A recent paper by S. Tang and R. Book (1988) initiated a study of the classes of sets which are equivalent to tally sets or sparse sets, under varying notions of reducibility. A number of interesting results are proved, and many additional questions are posed and left open. The authors investigate some of these questions and show that they are equivalent to each other and are closely related to other important open questions in complexity theory.

27 citations


Proceedings ArticleDOI
14 Jun 1988
TL;DR: Recent results on polynomial complexity cores (their complexity, density, and structure) and their counterparts for proper hard cores are surveyed and interpreted, together with an almost axiomatic approach to generalized complexity cores.
Abstract: Recent results on polynomial complexity cores, their complexity, density, and structure, and their counterparts for proper hard cores are surveyed and interpreted. An approach to generalized complexity cores that is almost axiomatic in nature is included in the discussion. The purpose is to provide an integrated presentation of this material.

26 citations



Journal ArticleDOI
TL;DR: The cubic algorithm is a nongradient method for the solution of multi-extremal, nonconvex Lipschitzian optimization problems; its precision and complexity are studied, and improved computational schemes are proposed.
Abstract: The cubic algorithm (Ref. 1) is a nongradient method for the solution of multi-extremal, nonconvex Lipschitzian optimization problems. The precision and complexity of this algorithm are studied, and improved computational schemes are proposed.

21 citations
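
The cubic algorithm itself is specified in Ref. 1 and is not reproduced here. As a loosely related, hedged sketch of nongradient Lipschitzian global minimization, the Piyavskii-Shubert method below likewise relies only on sampled function values and a Lipschitz constant L, maintaining a saw-tooth lower bound and always sampling where that bound is lowest.

```python
# Not the cubic algorithm of the paper (Ref. 1); a hedged sketch of the related
# Piyavskii-Shubert method, another nongradient scheme for multi-extremal
# Lipschitzian minimization driven only by a Lipschitz constant L.
import numpy as np

def piyavskii_shubert(f, a, b, L, iters=60):
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(iters):
        order = np.argsort(xs)
        best_x, best_lb = None, np.inf
        for i, j in zip(order[:-1], order[1:]):        # neighbouring sample points
            x_new = 0.5 * (xs[i] + xs[j]) + (ys[i] - ys[j]) / (2.0 * L)
            lb = 0.5 * (ys[i] + ys[j]) - 0.5 * L * (xs[j] - xs[i])
            if lb < best_lb:                           # lowest point of the saw-tooth bound
                best_lb, best_x = lb, x_new
        xs.append(best_x)
        ys.append(f(best_x))
    k = int(np.argmin(ys))
    return xs[k], ys[k]

# A multi-extremal test function on [0, 10]; |f'| <= 4, so L = 4 is valid.
x_star, f_star = piyavskii_shubert(lambda x: np.sin(x) + np.sin(3.0 * x), 0.0, 10.0, L=4.0)
print(x_star, f_star)   # approaches the global minimum f ~ -1.54 near x ~ 3.76 (also near 5.67)
```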


Proceedings ArticleDOI
14 Jun 1988
TL;DR: The nonuniformity of communication protocols is used to show that the Boolean communication hierarchy does not collapse, and some proper inclusions are shown.
Abstract: The complexity of communication between two processors is studied in terms of complexity classes. Previously published results showing some analogies between Turing machine classes and the corresponding communication complexity classes are enlarged, and some proper inclusions are shown. The nonuniformity of communication protocols is used to show that the Boolean communication hierarchy does not collapse. For completeness, an overview of communication complexity classes is added, with proofs of some properties already observed by other authors.


Journal ArticleDOI
01 Apr 1988
TL;DR: The hierarchy of S-communication complexity is established, a relation between determinism and nondeterminism similar to that for communication complexity is proved, and new Ω(n²) lower bounds for language recognition on AT² of VLSI circuits are obtained.
Abstract: In this paper a formal definition of S-communication complexity based on the idea of Aho, Ullman and Yannakakis [On notions of information transfer in VLSI circuits, Proc. 14th Ann. ACM STOC (1983) 133–139] is given, and its properties are compared with the original communication complexity. The basic advantages of S-communication complexity presented here are the following two: (1) S-communication complexity provides the strongest lower bound Ω(n²) on AT² of VLSI circuits in most cases in which the communication complexity grants only constant lower bounds on AT²; (2) proving lower bounds for S-communication complexity is technically not so hard as obtaining lower bounds for communication complexity. Further, the hierarchy of S-communication complexity is established, and a relation between determinism and nondeterminism similar to that for communication complexity is proved. Using S-communication complexity, new Ω(n²) lower bounds for language recognition on AT² of VLSI circuits are obtained. The hardness of algorithmically determining the S-communication complexity of a given Boolean formula, and other properties of S-communication complexity, are studied.

Journal ArticleDOI
TL;DR: A set of three algorithms is presented for solving single-row routing problems with a fixed street capacity using the least number of layers; the main difference among the algorithms is the strategy used to search for an optimal solution, which greatly affects performance.
Abstract: A set of three algorithms is presented for solving single-row routing problems with a fixed street capacity using the least number of layers. The main difference among these algorithms is in the strategy used to search for an optimal solution, which greatly affects the performance. At the extreme points of the strategy are algorithms Q and S. The worst-case time complexity is linear for algorithm Q and exponential for algorithm S. The best-case time complexity of all the algorithms is linear. The main disadvantage of algorithm Q is that the constant associated with its time complexity bounds is large. On the other hand, the constant associated with the best-case time complexity bound for algorithm S is small. An experimental evaluation of the performance of the algorithms is presented.

Book ChapterDOI
11 Jul 1988
TL;DR: This paper employs the theory of generating functions to perform a precise average-case complexity analysis of the Rete algorithm, and extends the model to take into account different frequency coefficients for symbols in a way that can closely model real-life applications.
Abstract: The Rete multi-pattern match algorithm [Forg 82] is an efficient method for comparing a large collection of patterns to a large collection of objects. It finds all the combinations of objects which match a conjunction of patterns. This algorithm is widely used to improve the run-time performance of rule-based expert systems. In this paper we employ the theory of generating functions in order to perform a precise average-case complexity analysis of the Rete algorithm. Our results are first established under a “simple” random term model, and later extended to take into account different frequency coefficients for symbols in a way that can closely model real-life applications.
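
A hedged, naive sketch of the matching problem Rete solves (the incremental Rete network itself is not reproduced, and all names below are illustrative only): enumerate every combination of objects that satisfies a conjunction of patterns with consistent variable bindings. Rete's contribution, and the target of the paper's average-case analysis, is avoiding this brute-force recomputation as working memory changes.

```python
# Naive matcher for the problem Rete solves: find every combination of objects
# matching a conjunction of patterns, with consistent variable bindings.
from itertools import product

def match_one(pattern, obj):
    """Pattern: dict attr -> constant or '?variable'. Returns bindings or None."""
    binding = {}
    for attr, want in pattern.items():
        if attr not in obj:
            return None
        if isinstance(want, str) and want.startswith("?"):
            binding[want] = obj[attr]              # bind the variable to this value
        elif obj[attr] != want:
            return None
    return binding

def match_conjunction(patterns, objects):
    for combo in product(objects, repeat=len(patterns)):
        env = {}
        for pat, obj in zip(patterns, combo):
            b = match_one(pat, obj)
            if b is None or any(env.get(k, v) != v for k, v in b.items()):
                break                              # mismatch or inconsistent binding
            env.update(b)
        else:
            yield combo, env                       # every pattern matched consistently

objects = [{"type": "block", "color": "red", "id": 1},
           {"type": "block", "color": "blue", "id": 2},
           {"type": "table", "color": "red", "id": 3}]
rule = [{"type": "block", "color": "?c"}, {"type": "table", "color": "?c"}]
for combo, env in match_conjunction(rule, objects):
    print([o["id"] for o in combo], env)           # -> [1, 3] {'?c': 'red'}
```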

Journal Article
TL;DR: Very general P-completeness theorems are presented which yield a new series of P-complete problems arising from graph optimization problems, including the lexicographically first maximal independent set problem (Cook 1983).

Journal ArticleDOI
TL;DR: A rather general approach for constructing upper and lower bounds for the average computational complexity of divide-and-conquer algorithms is described.
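
The paper's general framework is not reproduced here; as one classical, concrete instance of an average-case divide-and-conquer analysis, the snippet below evaluates quicksort's expected comparison count C(n) = (n - 1) + (2/n) * sum_{k<n} C(k) and compares it with the familiar Theta(n log n) growth.

```python
# One classical average-case divide-and-conquer recurrence (not the paper's
# general framework): quicksort's expected number of comparisons.
from math import log

def quicksort_avg_comparisons(n_max):
    C = [0.0] * (n_max + 1)
    running_sum = 0.0                      # sum of C(0) .. C(n-1)
    for n in range(1, n_max + 1):
        C[n] = (n - 1) + 2.0 * running_sum / n
        running_sum += C[n]
    return C

C = quicksort_avg_comparisons(10_000)
for n in (100, 1_000, 10_000):
    # ratio to 2 n ln n tends to 1; the remaining gap is the lower-order O(n) term
    print(n, round(C[n]), round(C[n] / (2 * n * log(n)), 3))
```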

Journal ArticleDOI
Nicholas Pippenger
TL;DR: The object of this note is to give a correct proof of Proposition 5.2.2 of [11], which concerns a function that assigns to every positive integer d a non-negative real number C(d) and satisfies three stated axioms.

Book
01 Jan 1988
TL;DR: The equivalence of dgsm replications on Q-rational languages is shown to be decidable, and the learnability of DNF formulae is discussed.
Abstract: Communication complexity of PRAMs.- Average case complexity analysis of the RETE multi-pattern match algorithm.- Problems easy for tree-decomposable graphs (extended abstract).- Serializability in distributed systems with handshaking.- Algorithms for planar geometric models.- Nonuniform learnability.- Zeta functions of recognizable languages.- Dynamic programming on graphs with bounded treewidth.- Efficient simulations of simple models of parallel computation by time-bounded ATM's and space-bounded TM's.- Optimal slope selection.- Approximation of a trace, asynchronous automata and the ordering of events in a distributed system.- New techniques for proving the decidability of equivalence problems.- Transitive orientations, Möbius functions, and complete semi-Thue systems for free partially commutative monoids.- The complexity of matrix transposition on one-tape off-line Turing machines with output tape.- Geometric structures in computational geometry.- Arrangements of curves in the plane - topology, combinatorics, and algorithms.- Reset sequences for finite automata with application to design of parts orienters.- Random allocations and probabilistic languages.- Systolic architectures, systems and computations.- New developments in structural complexity theory.- Operational semantics of OBJ-3.- Do we really need to balance Patricia tries?.- Contractions in comparing concurrency semantics.- A complexity theory of efficient parallel algorithms.- On the learnability of DNF formulae.- Efficient algorithms on context-free graph languages.- Efficient analysis of graph properties on context-free graph languages.- A polynomial-time algorithm for subgraph isomorphism of two-connected series-parallel graphs.- Constructive Hopf's theorem: Or how to untangle closed planar curves.- Maximal dense intervals of grammar forms.- Computations, residuals, and the power of indeterminacy.- Nested annealing: A provable improvement to simulated annealing.- Nonlinear pattern matching in trees.- Invertibility of linear finite automata over a ring.- Moving discs between polygons.- Optimal circuits and transitive automorphism groups.- A Kleene-Presburgerian approach to linear production systems.- On minimum flow and transitive reduction.- Linear-time recognition of the factors of a finite language in a text (summary).- Regular languages defined with generalized quantifiers.- A dynamic data structure for planar graph embedding.- Separating polynomial-time Turing and truth-table reductions by tally sets.- Assertional verification of a timer based protocol.- Type inference with partial types.- Some behavioural aspects of net theory.- The equivalence of dgsm replications on Q-rational languages is decidable.- Pfaffian orientations, 0/1 permanents, and even cycles in directed graphs.- On restricting the access to an NP-oracle.- On ≤p1-tt-sparseness and nondeterministic complexity classes.- Semantics for logic programs without occur check.- Outer narrowing for equational theories based on constructors.

Book ChapterDOI
TL;DR: For several different algebraic structures S, this work studies the computational complexity of problems concerning systems of equations on S.
Abstract: For several different algebraic structures S, we study the computational complexity of such problems as determining, for a system of equations on S,


Proceedings ArticleDOI
16 Mar 1988
TL;DR: A parallel algorithm for sorting is developed which has a time complexity of O(log n), requires n²/log n processors, and can be readily mapped onto an SIMD mesh-connected array of processors.
Abstract: A parallel algorithm for sorting is developed which has a time complexity of O(log n) and requires n²/log n processors. The algorithm can be readily mapped onto an SIMD mesh-connected array of processors, which has all the features of efficient VLSI implementation. The corresponding hardware algorithm maintains the O(log n) execution time and has a low, O(n), interprocessor communication time.
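
The paper's mesh algorithm is not reproduced here. As an assumption on our part about the underlying idea, enumeration (rank) sort performs Theta(n²) independent comparisons, which is the kind of work that n²/log n processors can share in O(log n) parallel time; the sequential sketch below only shows the ranking step.

```python
# Sequential sketch of enumeration (rank) sort: every element's final position
# is just the count of elements that must precede it, so all n^2 comparisons
# are independent and easy to distribute over many processors.
def rank_sort(a):
    n = len(a)
    out = [None] * n
    for i in range(n):
        # rank of a[i]: elements that are smaller, with ties broken by index
        rank = sum(1 for j in range(n)
                   if a[j] < a[i] or (a[j] == a[i] and j < i))
        out[rank] = a[i]
    return out

print(rank_sort([5, 3, 8, 3, 1]))   # [1, 3, 3, 5, 8]
```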

Proceedings ArticleDOI
07 Dec 1988
TL;DR: The author has shown that a recently introduced method for asynchronous simulation with rollback contains the Bellman-Ford algorithm as a special case, and he has deduced that the rollback method also has exponential communication complexity.
Abstract: Summary form only given. The author has studied an asynchronous version of the Bellman-Ford algorithm for computing the shortest distances from all nodes in a network to a fixed destination. It is known that this algorithm has (in the worst case) exponential (in the size of the underlying graph) communication complexity. The author has obtained results indicating that its expected (in a probabilistic sense) communication complexity is actually polynomial, under some reasonable probabilistic assumptions. He has shown that a recently introduced method for asynchronous simulation with rollback contains the Bellman-Ford algorithm as a special case, and he has deduced that the rollback method also has exponential communication complexity. The author has also investigated whether (under certain probabilistic assumptions and/or modifications of the simulation algorithm) the communication complexity becomes polynomial. >