
Showing papers on "Time complexity published in 1986"


Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
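
A minimal Python sketch of the kind of data structure described (a reduced, ordered decision graph with a shared node table and an apply-style operation over a fixed variable order) may help fix ideas. It is illustrative only: the class and function names are invented here and are not the paper's API.

```python
class BDD:
    """A tiny reduced ordered BDD: terminals 0 and 1, internal nodes (var, lo, hi)."""
    def __init__(self, num_vars):
        self.n = num_vars
        self.nodes = {}                      # (var, lo, hi) -> node id (sharing)
        self.node_info = {0: None, 1: None}  # the two terminal nodes
        self.next_id = 2

    def mk(self, var, lo, hi):
        if lo == hi:                         # redundant test: skip the node
            return lo
        key = (var, lo, hi)
        if key not in self.nodes:            # share isomorphic subgraphs
            self.nodes[key] = self.next_id
            self.node_info[self.next_id] = key
            self.next_id += 1
        return self.nodes[key]

    def var(self, i):
        return self.mk(i, 0, 1)

    def _top(self, u):
        return self.node_info[u][0] if u not in (0, 1) else self.n

    def _branch(self, u, i, hi_branch):
        if u in (0, 1) or self.node_info[u][0] != i:
            return u
        return self.node_info[u][2 if hi_branch else 1]

    def apply(self, op, u, v, cache=None):
        """Combine two graphs with a Boolean operator; work is memoised per node pair."""
        cache = {} if cache is None else cache
        if (u, v) in cache:
            return cache[(u, v)]
        if u in (0, 1) and v in (0, 1):
            r = op(u, v)
        else:
            i = min(self._top(u), self._top(v))   # Shannon-expand on the top variable
            lo = self.apply(op, self._branch(u, i, False), self._branch(v, i, False), cache)
            hi = self.apply(op, self._branch(u, i, True), self._branch(v, i, True), cache)
            r = self.mk(i, lo, hi)
        cache[(u, v)] = r
        return r

# usage: f = (x0 AND x1) OR x2; print the root id and the number of internal nodes
bdd = BDD(3)
x0, x1, x2 = bdd.var(0), bdd.var(1), bdd.var(2)
f = bdd.apply(lambda a, b: a | b, bdd.apply(lambda a, b: a & b, x0, x1), x2)
print(f, len(bdd.nodes))
```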

9,021 citations


Journal ArticleDOI
14 Apr 1986-Nature
TL;DR: A novel method of directly calculating the force on N bodies that grows only as N log N is described, using a tree-structured hierarchical subdivision of space into cubic cells, each of which is recursively divided into eight subcells whenever more than one particle is found to occupy the same cell.
Abstract: Until recently the gravitational N-body problem has been modelled numerically either by direct integration, in which the computation needed increases as N2, or by an iterative potential method in which the number of operations grows as N log N. Here we describe a novel method of directly calculating the force on N bodies that grows only as N log N. The technique uses a tree-structured hierarchical subdivision of space into cubic cells, each of which is recursively divided into eight subcells whenever more than one particle is found to occupy the same cell. This tree is constructed anew at every time step, avoiding ambiguity and tangling. Advantages over potential-solving codes are: accurate local interactions; freedom from geometrical assumptions and restrictions; and applicability to a wide class of systems, including (proto-)planetary, stellar, galactic and cosmological ones. Advantages over previous hierarchical tree-codes include simplicity and the possibility of rigorous analysis of error. Although we concentrate here on stellar dynamical applications, our techniques of efficiently handling a large number of long-range interactions and concentrating computational effort where most needed have potential applications in other areas of astrophysics as well.
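
As a rough, simplified illustration of the hierarchical idea, the sketch below builds a 2D quadtree (the paper uses cubic cells in 3D), keeps per-cell mass aggregates, and approximates distant cells by their centre of mass using an opening-angle criterion. All names and parameter values are invented for this example, and coincident particles are not handled.

```python
import math
import random

class Cell:
    """One square cell of a 2D quadtree (a stand-in for the paper's 3D cubic cells)."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half
        self.mass = 0.0
        self.comx = 0.0                 # mass-weighted position sums
        self.comy = 0.0
        self.body = None                # single particle stored in a leaf
        self.kids = None                # four sub-cells after subdivision

    def _child_for(self, x, y):
        if self.kids is None:
            self.kids = [None] * 4
        i = (1 if x > self.cx else 0) + (2 if y > self.cy else 0)
        if self.kids[i] is None:
            h = self.half / 2
            ox = self.cx + (h if x > self.cx else -h)
            oy = self.cy + (h if y > self.cy else -h)
            self.kids[i] = Cell(ox, oy, h)
        return self.kids[i]

    def insert(self, x, y, m):
        if self.kids is None and self.body is None:
            self.body = (x, y, m)                       # empty leaf: keep the particle
        else:
            if self.body is not None:                   # occupied leaf: subdivide
                bx, by, bm = self.body
                self.body = None
                self._child_for(bx, by).insert(bx, by, bm)
            self._child_for(x, y).insert(x, y, m)
        self.mass += m
        self.comx += m * x
        self.comy += m * y

def accel(cell, x, y, theta=0.5, eps=1e-3):
    """Approximate acceleration at (x, y) from all mass in `cell` (G = 1, softened by eps)."""
    if cell is None or cell.mass == 0.0:
        return 0.0, 0.0
    px, py = cell.comx / cell.mass, cell.comy / cell.mass
    dx, dy = px - x, py - y
    r = math.hypot(dx, dy) + eps
    # treat a sufficiently distant cell (or a single particle) as one point mass
    if cell.kids is None or (2 * cell.half) / r < theta:
        s = cell.mass / r**3
        return s * dx, s * dy
    ax = ay = 0.0
    for kid in cell.kids:
        gx, gy = accel(kid, x, y, theta, eps)
        ax += gx
        ay += gy
    return ax, ay

# usage: 1000 unit-mass particles in the unit square
random.seed(1)
bodies = [(random.random(), random.random(), 1.0) for _ in range(1000)]
root = Cell(0.5, 0.5, 0.5)
for bx, by, bm in bodies:
    root.insert(bx, by, bm)
print(accel(root, 0.25, 0.25))
```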

3,750 citations


Journal ArticleDOI
TL;DR: Answering a question of Vera Sós, it is shown how Lovász' lattice reduction can be used to find a point of a given lattice, nearest within a factor of c^d (c = const.) to a given point in R^d.
Abstract: Answering a question of Vera Sós, we show how Lovász' lattice reduction can be used to find a point of a given lattice, nearest within a factor of c^d (c = const.) to a given point in R^d. We prove that each of two straightforward fast heuristic procedures achieves this goal when applied to a lattice given by a Lovász-reduced basis. The verification of one of them requires proving a geometric feature of Lovász-reduced bases: a c_1 lower bound on the angle between any member of the basis and the hyperplane generated by the other members, where c_1 = √2/3. As an application, we obtain a solution to the nonhomogeneous simultaneous diophantine approximation problem, optimal within a factor of C^d. In another application, we improve the Grötschel–Lovász–Schrijver version of H. W. Lenstra's integer linear programming algorithm. The algorithms, when applied to rational input vectors, run in polynomial time.
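
A tiny sketch of the simplest "round-off" heuristic on a reduced basis, assuming numpy; it illustrates the idea of rounding the coordinates of the target in the given basis, and is not the paper's exact procedure or its constants.

```python
import numpy as np

def round_off(B, t):
    """Return a lattice point near target t: write t in the basis B (columns are
    basis vectors, preferably already Lovász/LLL-reduced) and round the coefficients."""
    coeffs = np.linalg.solve(B, t)      # t = B @ coeffs
    return B @ np.round(coeffs)

B = np.array([[1.0, 0.4],
              [0.0, 1.0]])              # a toy, fairly orthogonal basis
t = np.array([2.3, 3.7])
print(round_off(B, t))                  # a nearby lattice point, here [2.6, 4.0]
```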

1,030 citations


Journal ArticleDOI
TL;DR: New algorithms for arc and path consistency are presented and it is shown that the arc consistency algorithm is optimal in time complexity and of the same-order space complexity as the earlier algorithms.
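
The sketch below is not the paper's optimal algorithm (often referred to as AC-4) but the simpler classic AC-3 scheme; it is included only to show what arc consistency computes: removing from each variable's domain the values that have no support under some binary constraint. The variable names and toy constraints are invented.

```python
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set of values}; constraints: {(x, y): predicate(val_x, val_y)}."""
    arcs = deque(constraints)
    while arcs:
        x, y = arcs.popleft()
        pred = constraints[(x, y)]
        # keep only values of x that have some supporting value in y's domain
        supported = {vx for vx in domains[x] if any(pred(vx, vy) for vy in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            # x's domain shrank, so every arc pointing at x must be rechecked
            arcs.extend((z, x) for (z, w) in constraints if w == x)
    return domains

doms = {"a": {1, 2, 3}, "b": {1, 2, 3}, "c": {1, 2, 3}}
cons = {("a", "b"): lambda x, y: x < y, ("b", "a"): lambda x, y: y < x,
        ("b", "c"): lambda x, y: x < y, ("c", "b"): lambda x, y: y < x}
print(ac3(doms, cons))   # arc-consistent domains: a -> {1}, b -> {2}, c -> {3}
```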

734 citations


Journal ArticleDOI
Neil Immerman1
TL;DR: The polynomial time computable queries are characterized as those expressible in relational calculus plus a least fixed point operator and a total ordering on the universe, and it is shown that one application of fixed point suffices, so the fixed point query hierarchy of Chandra and Harel collapses at the first level.
Abstract: We characterize the polynomial time computable queries as those expressible in relational calculus plus a least fixed point operator and a total ordering on the universe. We also show that even without the ordering one application of fixed point suffices to express any query expressible with several alternations of fixed point and negation. This proves that the fixed point query hierarchy suggested by Chandra and Harel collapses at the first fixed point level. It is also a general result showing that in finite model theory one application of fixed point suffices.
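
A concrete illustration of a least fixed point query is transitive closure, which is not expressible in plain relational calculus but is obtained by iterating a monotone operator to its fixed point over the finite structure; a minimal Python sketch:

```python
def transitive_closure(edges):
    tc = set(edges)                      # start from the edge relation E(x, y)
    while True:                          # iterate the monotone operator to a fixed point
        new = tc | {(a, c) for (a, b) in tc for (b2, c) in edges if b2 == b}
        if new == tc:
            return tc
        tc = new

E = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(E)))     # all pairs connected by a directed path
```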

721 citations


Journal ArticleDOI
TL;DR: Using the notion of randomized polynomial time reducibility, a negative answer is given to the question of whether the intractability of NP-complete problems is caused by the wide variation in the number of solutions of their instances: instances promised to have unique solutions remain essentially as hard.

647 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the annealing algorithm converges with probability arbitrarily close to 1, but also that there are cases where convergence takes exponentially long, so that it is then no better than a deterministic method.
Abstract: The annealing algorithm is a stochastic optimization method which has attracted attention because of its success with certain difficult problems, including NP-hard combinatorial problems such as the travelling salesman, Steiner trees and others. There is an appealing physical analogy for its operation, but a more formal model seems desirable. In this paper we present such a model and prove that the algorithm converges with probability arbitrarily close to 1. We also show that there are cases where convergence takes exponentially long—that is, it is no better than a deterministic method. We study how the convergence rate is affected by the form of the problem. Finally we describe a version of the algorithm that terminates in polynomial time and allows a good deal of ‘practical’ confidence in the solution.
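
A generic sketch of the annealing procedure being analysed, with a geometric cooling schedule and invented parameter values; it shows the stochastic accept/reject rule, not the paper's formal model or its polynomial-time variant.

```python
import math
import random

def anneal(cost, neighbour, x0, T0=1.0, alpha=0.995, steps=20000):
    x, c = x0, cost(x0)
    best, best_c = x, c
    T = T0
    for _ in range(steps):
        y = neighbour(x)
        cy = cost(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if cy <= c or random.random() < math.exp(-(cy - c) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= alpha                        # geometric cooling schedule
    return best, best_c

# toy usage: minimise a bumpy one-dimensional function
f = lambda v: (v - 3.0) ** 2 + math.sin(8 * v)
step = lambda v: v + random.uniform(-0.5, 0.5)
print(anneal(f, step, x0=0.0))
```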

609 citations


Book
01 Jan 1986
TL;DR: A computation that is guaranteed to take at most cn^3 time for input of size n will be thought of as an ‘easy’ computation, and one that needs at most n^10 time is also easy.
Abstract: An algorithm is a method for solving a class of problems on a computer. The complexity of an algorithm is the cost, measured in running time, or storage, or whatever units are relevant, of using the algorithm to solve one of those problems. This book is about algorithms and complexity, and so it is about methods for solving problems on computers and the costs (usually the running time) of using those methods. Computing takes time. Some problems take a very long time, others can be done quickly. Some problems seem to take a long time, and then someone discovers a faster way to do them (a ‘faster algorithm’). The study of the amount of computational effort that is needed in order to perform certain kinds of computations is the study of computational complexity. Naturally, we would expect that a computing problem for which millions of bits of input data are required would probably take longer than another problem that needs only a few items of input. So the time complexity of a calculation is measured by expressing the running time of the calculation as a function of some measure of the amount of data that is needed to describe the problem to the computer. For instance, think about this statement: ‘I just bought a matrix inversion program, and it can invert an n × n matrix in just 1.2n^3 minutes.’ We see here a typical description of the complexity of a certain algorithm. The running time of the program is being given as a function of the size of the input matrix. A faster program for the same job might run in 0.8n^3 minutes for an n × n matrix. If someone were to make a really important discovery (see section 2.4), then maybe we could actually lower the exponent, instead of merely shaving the multiplicative constant. Thus, a program that would invert an n × n matrix in only 7n^2.8 minutes would represent a striking improvement of the state of the art. For the purposes of this book, a computation that is guaranteed to take at most cn^3 time for input of size n will be thought of as an ‘easy’ computation. One that needs at most n^10 time is also easy. If a certain calculation on an n × n matrix were to require 2^n minutes, then that would be a ‘hard’ problem. Naturally some of the computations that we are calling ‘easy’ may take a very long time to run, but still, from our present point of view the important distinction to maintain will be the polynomial time guarantee or lack of it. The general rule is that if the running time is at most a polynomial function of the amount of input data, then the calculation is an easy one, otherwise it’s hard. Many problems in computer science are known to be easy. To convince someone that a problem is easy, it is enough to describe a fast method for solving that problem. To convince someone that a problem is hard is hard, because you will have to prove to them that it is impossible to find a fast way of doing the calculation. It will not be enough to point to a particular algorithm and to lament its slowness. After all, that algorithm may be slow, but maybe there’s a faster way. Matrix inversion is easy. The familiar Gaussian elimination method can invert an n × n matrix in time at most cn^3. To give an example of a hard computational problem we have to go far afield. One interesting one is called the ‘tiling problem.’ Suppose we are given infinitely many identical floor tiles, each shaped like a regular hexagon. Then we can tile the whole plane with them, i.e., we can cover the plane with no empty spaces left over.
This can also be done if the tiles are identical rectangles, but not if they are regular pentagons. In Fig. 0.1 we show a tiling of the plane by identical rectangles, and in Fig. 0.2 is a tiling by regular hexagons. That raises a number of theoretical and computational questions. One computational question is this. Suppose we are given a certain polygon, not necessarily regular and not necessarily convex, and suppose we have infinitely many identical tiles in that shape. Can we or can we not succeed in tiling the whole plane? That elegant question has been proved to be computationally unsolvable. In other words, not only do we not know of any fast way to solve that problem on a computer, it has been proved that there isn’t any.
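
The running example of an ‘easy’ computation, inverting an n × n matrix by Gaussian elimination in on the order of n^3 steps, can be made concrete with a short Gauss-Jordan sketch (pure Python, partial pivoting, purely illustrative):

```python
def invert(A):
    """Invert an n x n matrix by Gauss-Jordan elimination: roughly c*n^3 operations."""
    n = len(A)
    # augment with the identity matrix
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))   # partial pivoting
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]                 # scale the pivot row
        for r in range(n):
            if r != col and M[r][col] != 0.0:            # eliminate the column elsewhere
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(invert([[4.0, 7.0], [2.0, 6.0]]))   # [[0.6, -0.7], [-0.2, 0.4]]
```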

563 citations


Journal ArticleDOI
02 Jun 1986
TL;DR: It is shown that TRAVELING SALESPERSON and KNAPSACK are complete for OptP, and that CLIQUE and COLORING are complete for a subclass of OptP.
Abstract: We consider NP-complete optimization problems at the level of computing their optimal value, and define a class of functions called OptP to capture this level of structure. We show that TRAVELING SALESPERSON and KNAPSACK are complete for OptP, and that CLIQUE and COLORING are complete for a subclass of OptP. These results show a deeper level of structure in these problems than was previously known. We also show that OptP is closely related to FPSAT, the class of functions computable in polynomial time with an oracle for NP. This allows us to quantify exactly “how much” NP-completeness is in these problems. In particular, in this measure, we show that TRAVELING SALESPERSON is strictly harder than CLIQUE and that CLIQUE is strictly harder than BIN PACKING. A further result is that an OptP-completeness result implies NP-, D^p-, and Δ₂^p-completeness results, thus tying these four classes closely together.

443 citations


Journal ArticleDOI
TL;DR: This work presents a new planar convex hull algorithm with worst case time complexity O(n log H), where n is the size of the input set and H is the size of the output set, i.e. the number of vertices found to be on the hull.
Abstract: We present a new planar convex hull algorithm with worst case time complexity O(n log H), where n is the size of the input set and H is the size of the output set, i.e. the number of vertices found to be on the hull. We also show that this algorithm is asymptotically worst case optimal on a rather realistic model of computation even if the complexity of the problem is measured in terms of input as well as output size. The algorithm relies on a variation of the divide-and-conquer paradigm which we call the "marriage-before-conquest" principle and which appears to be interesting in its own right.
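
For contrast with the O(n log H) bound, the sketch below is the much simpler output-sensitive baseline, gift wrapping, which runs in O(nH) time; it is not the marriage-before-conquest algorithm, only an illustration of what "output-sensitive" means.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def gift_wrap(points):
    """Gift wrapping (Jarvis march): O(nH) time, H = number of hull vertices."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    hull = []
    start = pts[0]                      # lowest-leftmost point is always on the hull
    p = start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            c = cross(p, q, r)
            # keep the most clockwise candidate; break collinear ties by distance
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            return hull

print(gift_wrap([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
```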

416 citations


Journal ArticleDOI
E V Ruiz1
TL;DR: A new algorithm is proposed which finds the Nearest Neighbour of a given sample in approximately constant average time complexity, independent of the data set size, thus being of general use in many present applications of Pattern Recognition.
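
The sketch below illustrates the triangle-inequality pruning idea behind such approximately constant-time searches: all prototype-to-prototype distances are precomputed, and at query time they yield lower bounds that eliminate most candidates after a few true distance computations. It is an illustration of the principle, not the paper's algorithm; class and parameter names are invented.

```python
import math

class PrunedNN:
    def __init__(self, prototypes, dist=math.dist):
        self.P = list(prototypes)
        self.dist = dist
        # preprocessing: all prototype-to-prototype distances (quadratic space)
        self.D = [[dist(p, q) for q in self.P] for p in self.P]

    def query(self, x):
        alive = set(range(len(self.P)))
        lower = [0.0] * len(self.P)          # lower bounds on d(x, p_i)
        best_i, best_d = None, float("inf")
        while alive:
            # next candidate: the live prototype with the smallest lower bound
            i = min(alive, key=lambda j: lower[j])
            alive.discard(i)
            d = self.dist(x, self.P[i])       # one "expensive" distance computation
            if d < best_d:
                best_i, best_d = i, d
            # tighten bounds and prune: |d(x, p_i) - d(p_i, p_j)| <= d(x, p_j)
            for j in list(alive):
                lower[j] = max(lower[j], abs(d - self.D[i][j]))
                if lower[j] >= best_d:
                    alive.discard(j)
        return self.P[best_i], best_d

nn = PrunedNN([(0, 0), (5, 1), (2, 7), (9, 9), (4, 4)])
print(nn.query((3, 3)))                       # ((4, 4), 1.414...)
```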

Journal ArticleDOI
TL;DR: It turns out that some of the languages investigated for the succinct representation of the instances of combinatorial problems are not comparable unless P=NP, and some problems left open in [2] are solved.
Abstract: Several languages for the succinct representation of the instances of combinatorial problems are investigated. These languages have been introduced in [20, 2] and [5], where it has been shown that describing the instances by these languages causes a blow-up of the complexities of some problems. In the present paper the descriptional power of these languages is compared by estimating the complexities of some combinatorial problems in terms of completeness in suitable classes of the “counting polynomial-time hierarchy” which is introduced here. It turns out that some of the languages are not comparable, unless P=NP. Some problems left open in [2] are solved.

Proceedings Article
25 Aug 1986
TL;DR: A formal model and a precise statement of the optimization problem that delineate the assumptions and limitations of the previous approaches are presented, and a quadratic-time algorithm that determines the optimum join order for acyclic queries is proposed.
Abstract: State-of-the-art optimization approaches for relational database systems, e.g., those used in systems such as OBE, SQL/DS, and commercial INGRES, when used for queries in non-traditional database applications, suffer from two problems. First, the time complexity of their optimization algorithms, being combinatoric, is exponential in the number of relations to be joined in the query. Their cost is therefore prohibitive in situations such as deductive databases and logic oriented languages for knowledge bases, where hundreds of joins may be required. The second problem with the traditional approaches is that, albeit effective in their specific domain, it is not clear whether they can be generalized to different scenarios (e.g. parallel evaluation) since they lack a formal model to define the assumptions and critical factors on which their validity depends. This paper proposes a solution to these problems by presenting (i) a formal model and a precise statement of the optimization problem that delineates the assumptions and limitations of the previous approaches, and (ii) a quadratic-time algorithm that determines the optimum join order for acyclic queries. The approach proposed is robust; in particular, it is shown that it remains heuristically effective for cyclic queries as well.

Proceedings ArticleDOI
01 Nov 1986
TL;DR: The central result is that any FPSAT function decomposes into an OptP function followed by polynomial-time computation, and it quantifies "how much" NP-completeness is in a problem, i.e., the number of NP queries it takes to compute the function.
Abstract: We study computational complexity theory and define a class of optimization problems called OptP (Optimization Polynomial Time), and we show that TRAVELLING SALESPERSON, KNAPSACK and 0-1 INTEGER LINEAR PROGRAMMING are complete for OptP. OptP is a natural generalization of NP (Nondeterministic Polynomial Time), but while NP only considers problems at the level of their yes/no question, the value of an OptP function is the optimal value of the problem. This approach enables us to show a deeper level of structure in these problems than is possible in NP. OptP is a subset of FPSAT, the class of functions computable in polynomial time with an oracle for NP. Our central result is that any FPSAT function decomposes into an OptP function followed by polynomial-time computation. The significance of this result is that it quantifies "how much" NP-completeness is in a problem, i.e., the number of NP queries it takes to compute the function. It also allows us to unify the classes NP, DP and DELTA-2 in a natural way. For example, we prove that an OptP-completeness result implies, as corollaries, NP-, DP- and DELTA-2- completeness results. We also prove separation results on subclasses of FPSAT by restricting the number of calls to the NP oracle. For example, TRAVELLING SALESPERSON is complete for O(n) queries, CLIQUE is complete for O(log n) queries and BIN PACKING can be solved in O(log log n) queries. We prove these classes distinct under the assumption that P does not equal NP. Finally, we consider generalizations of OptP to higher levels in the Polynomial-Time Hierarchy. We define the DOUBLE KNAPSACK problem and prove that it is complete for DELTA-3, the first example of a natural complete problem for this class, and the highest level in the Polynomial Hierarchy with a known natural complete problem.
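
The "number of NP queries" measure can be illustrated directly: the optimum value of CLIQUE is computable with O(log n) calls to a decision oracle by binary search over the answer. In the sketch below the oracle is simulated by brute force purely for illustration; in the intended setting each call would be a single NP query.

```python
from itertools import combinations

def clique_at_least(graph, k):
    """Simulated NP oracle: does the graph contain a clique of size >= k?"""
    nodes = list(graph)
    return any(all(v in graph[u] for u, v in combinations(c, 2))
               for c in combinations(nodes, k)) if k > 0 else True

def max_clique_size(graph):
    lo, hi, queries = 0, len(graph), 0
    while lo < hi:                        # binary search over the optimum value
        mid = (lo + hi + 1) // 2
        queries += 1
        if clique_at_least(graph, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo, queries

G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(max_clique_size(G))                 # (3, 3): optimum 3 found with 3 oracle calls
```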

Proceedings ArticleDOI
01 Nov 1986
TL;DR: A new probabilistic primality test is presented, different from the tests of Miller, Solovay-Strassen, and Rabin in that its assertions of primality are certain, rather than being correct with high probability or dependent on an unproven assumption.
Abstract: This paper presents a new probabilistic primality test. Upon termination the test outputs "composite" or "prime", along with a short proof of correctness, which can be verified in deterministic polynomial time. The test is different from the tests of Miller [M], Solovay-Strassen [SS], and Rabin [R] in that its assertions of primality are certain, rather than being correct with high probability or dependent on an unproven assumption. The test terminates in expected polynomial time on all but at most an exponentially vanishing fraction of the inputs of length k, for every k. This result implies: • There exists an infinite set of primes which can be recognized in expected polynomial time. • Large certified primes can be generated in expected polynomial time. Under a very plausible condition on the distribution of primes in "small" intervals, the proposed algorithm can be shown to run in expected polynomial time on every input. This condition is implied by Cramer's conjecture. The methods employed are from the theory of elliptic curves over finite fields.
1. INTRODUCTION
1.1 Testing Primality: Brief Review
Distinguishing prime numbers from composites has intrigued mathematicians as early as about 274 B.C., when the sieve algorithm of Eratosthenes was allegedly recorded. Much progress has been made on this problem since the 17th century by Fermat, Euler, Legendre and Gauss. With the arrival of fast computational devices, new algorithmic ideas based on the work of Fermat and Gauss were proposed and implemented (see [D], [BLS]). These algorithms mostly relied on factoring and thus were impractical for even moderate size inputs. The interest in primality in complexity theory was invoked by the exciting primality tests of Miller [M], Solovay and Strassen [SS], and Rabin [R]. Miller's algorithm [M] is a deterministic polynomial time procedure which, when answering "composite", gives a proof of correctness, and when answering "prime" does not. The assertions of primality made by the algorithm are always correct if the Extended Riemann Hypothesis (ERH) is true. However, if the ERH is false, the numbers declared prime may still be composite. Thus, the ERH is not used to bound the running time of the algorithm, but to vouch for the correctness of the answer. The probabilistic primality tests of Solovay-Strassen [SS] and Rabin [R] essentially perform a probabilistic search for a proof of compositeness. The failure of this search provides circumstantial evidence that the number is not composite. These algorithms always terminate in polynomial time on every input. Upon termination they declare the input either composite or probably prime. When a number is declared "composite", a short (verifiable in deterministic polynomial time) proof (certificate) of compositeness is provided. When a number is declared "probably prime", then it is a prime with very high probability, but no certainty is provided.
The fastest deterministic algorithm known is due to Adleman, Pomerance and Rumely [APR] (followed by Cohen-Lenstra [CL]) and runs in time O(k^(c log log k)) on inputs of length k. The answers of this algorithm are always correct. Unfortunately, it is not only slow but, like its predecessors, does not provide us with a short certificate (i.e. polynomial time verifiable proof) of its assertions of primality. As discussed above, finding a short certificate of compositeness can be done quickly probabilistically. But how about short proofs of primality? Although it is not as obvious as in the case of compositeness, Pratt [P] has shown that short proofs of primality do exist (i.e. the set PRIMES is in NP). Unfortunately, finding a Pratt certificate for a given prime involves being able to factor quickly, which is hard. Partial progress toward finding short proofs of primality quickly was made by Furer [F]. He shows a Las Vegas (always correct, probably fast) algorithm distinguishing between n a product of two primes and n a prime (provided n ~ 1 mod 24). To summarize, the following questions remain open: • Is there an infinite set of primes which can be recognized in expected polynomial time? • Can random large certified primes be generated in expected polynomial time? • Is there a probabilistic primality test which is always correct and probably fast on every prime input, i.e. a Las Vegas primality test?
1.2 Our Results
In this paper, we propose a probabilistic algorithm which upon termination outputs either "prime" or "composite", along with a short proof (certificate) of correctness. The proof of correctness can be verified by a deterministic polynomial time algorithm. We prove the following. Theorem 1: Given any prime p of length k, our algorithm outputs a certificate of correctness of size O(k^2), which can be verified correct in O(k^4) deterministic time. Theorem 3: For every size k ≥ 0, our algorithm terminates in expected polynomial time on at least a 1 − O(2^(−k^(1/log log k))) fraction of the prime inputs of length k. Note that the fraction of primes for which we could not prove that the algorithm terminates in expected polynomial time is smaller than any polynomial in k fraction. Let π(x) denote the number of primes smaller than x. Theorem 2: Our algorithm terminates in expected polynomial time on every input if the following conjecture is true:
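
For contrast with the certified test proposed in the paper, here is a sketch of the Rabin-Miller style probabilistic compositeness test reviewed above: it can certify compositeness via a witness, but reports "probably prime" with no proof of primality.

```python
import random

def probably_prime(n, rounds=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # 'a' is a witness: a short certificate of compositeness
    return True               # probably prime; no certificate of primality is produced

print(probably_prime(2**89 - 1), probably_prime(2**89 + 1))   # True False
```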

Journal ArticleDOI
TL;DR: This paper considers the problem of approximating a piecewise linear curve by another whose vertices are a subset of the vertices of the former, and shows that an optimum solution of this problem can be found in polynomial time.
Abstract: In cartography, computer graphics, pattern recognition, etc., we often encounter the problem of approximating a given finer piecewise linear curve by another coarser piecewise linear curve consisting of fewer line segments. In connection with this problem, a number of papers have been published, but it seems that the problem itself has not been well modelled from the standpoint of specific applications, nor has a nice algorithm, nice from the computational-geometric viewpoint, been proposed. In the present paper, we first consider (i) the problem of approximating a piecewise linear curve by another whose vertices are a subset of the vertices of the former, and show that an optimum solution of this problem can be found in polynomial time. We also mention recent results on related problems by several researchers including the authors themselves. We then pose (ii) a problem of covering a sequence of n points by a minimum number of rectangles with a given width, and present an O(n log n)-time algorithm by making use of some fundamental established techniques in computational geometry. Furthermore, an O(mn(log n)^2)-time algorithm is presented for finding the minimum width w such that a sequence of n points can be covered by at most m rectangles with width w. Finally, (iii) several related problems are discussed.
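
Problem (i) can be illustrated with a straightforward (roughly cubic-time) formulation: shortcuts between original vertices that stay within a tolerance eps form a DAG, and a breadth-first shortest path gives a sub-polyline with the fewest segments. The paper's algorithms are more refined; the sketch and its tolerance criterion are only illustrative.

```python
from collections import deque
import math

def point_segment_dist(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(points, eps):
    n = len(points)
    ok = lambda i, j: all(point_segment_dist(points[k], points[i], points[j]) <= eps
                          for k in range(i + 1, j))
    # BFS from vertex 0 to vertex n-1 over allowed shortcuts: fewest edges = fewest segments
    prev = {0: None}
    q = deque([0])
    while q:
        i = q.popleft()
        if i == n - 1:
            break
        for j in range(n - 1, i, -1):
            if j not in prev and ok(i, j):
                prev[j] = i
                q.append(j)
    path, v = [], n - 1
    while v is not None:
        path.append(points[v])
        v = prev[v]
    return path[::-1]

pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(simplify(pts, eps=0.2))    # keeps only the vertices needed within tolerance
```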

Journal ArticleDOI
TL;DR: It is suggested that any analog computer can be simulated efficiently (in polynomial time) by a digital computer, and from the assumption that P ≠ NP conclusions are drawn about the operation of physical devices used for computation.

Journal ArticleDOI
TL;DR: The logarithmic lower bound on communication complexity is applied to obtain an Ω(n log n) bound on the time of 1-tape unbounded error probabilistic Turing machines, believed to be the first nontrivial lower bound obtained for such machines.

Journal ArticleDOI
TL;DR: An algorithm is presented which computes, in O(f^2 + n log n) time, shortest paths in the Euclidean plane that do not cross given obstacles.

Journal ArticleDOI
TL;DR: The algorithm runs in polynomial time and it is shown that the algorithm finds a solution to a random instance of 3-Satisfiability with probability bounded from below by a constant greater than zero for a range of parameter values.
Abstract: An algorithm for the 3-Satisfiability problem is presented and a probabilistic analysis is performed. The analysis is based on an instance distribution which is parameterized to simulate a variety of sample characteristics. The algorithm assigns values to variables appearing in a given instance of 3-Satisfiability, one at a time, using the unit clause heuristic and a maximum occurring literal selection heuristic; at each step a variable is chosen randomly from a subset of variables which is usually large. The algorithm runs in polynomial time and it is shown that the algorithm finds a solution to a random instance of 3-Satisfiability with probability bounded from below by a constant greater than zero for a range of parameter values. The heuristics studied here can be used to select variables in a Backtrack algorithm for 3-Satisfiability. Experiments have shown that for about the same range of parameters as above the Backtrack algorithm using the heuristics finds a solution in polynomial average time.
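
A compact sketch of the two heuristics analysed: satisfy a unit clause when one exists, otherwise set a maximum-occurring literal; there is no backtracking, so the procedure can fail on some instances, which is exactly what the probabilistic analysis quantifies. Names are illustrative.

```python
import random
from collections import Counter

def uc_mol_assign(clauses):
    """clauses: list of lists of nonzero ints; literal -v is the negation of variable v.
    Returns an assignment dict on success, or None if some clause is falsified."""
    clauses = [list(c) for c in clauses]
    assign = {}
    while clauses:
        units = [c[0] for c in clauses if len(c) == 1]
        if units:
            lit = random.choice(units)            # unit-clause rule
        else:
            counts = Counter(l for c in clauses for l in c)
            lit = max(counts, key=counts.get)     # maximum-occurring-literal rule
        assign[abs(lit)] = lit > 0
        new = []
        for c in clauses:
            if lit in c:
                continue                          # clause satisfied, drop it
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                       # clause falsified: heuristic failed
            new.append(reduced)
        clauses = new
    return assign

cnf = [[1, -2, 3], [-1, 2, 3], [-3, 2, 1], [-1, -2, -3]]
print(uc_mol_assign(cnf))
```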

Journal ArticleDOI
TL;DR: A dynamic programming approach is proposed to solve the complete set partitioning problem, which has time complexity O(3^m), where n = 2^m − 1 is the size of the problem space.
Abstract: The complete set partitioning (CSP) problem is a special case of the set partitioning problem where the coefficient matrix has 2^m − 1 columns, each column being a binary representation of a unique integer between 1 and 2^m − 1, m ≥ 1. It has wide applications in the area of corporate tax structuring in operations research. In this paper we propose a dynamic programming approach to solve the CSP problem, which has time complexity O(3^m), where n = 2^m − 1 is the size of the problem space.
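
The O(3^m) bound comes from the fact that the dynamic programme examines, for every subset S of the m-element ground set, all of its subsets T; summing over all S gives 3^m, since each element is either in T, in S\T, or outside S. A minimal bitmask sketch with toy costs:

```python
def best_partition(cost, m):
    """cost: dict mapping the bitmask of every nonempty subset to its cost.
    Returns the minimum total cost of a partition of the full set, and its blocks."""
    full = (1 << m) - 1
    f = [0.0] * (full + 1)
    choice = [0] * (full + 1)
    for S in range(1, full + 1):
        best, best_T = float("inf"), 0
        T = S
        while T:                          # iterate over all nonempty subsets T of S
            v = cost[T] + f[S ^ T]        # f(S) = min over T of cost(T) + f(S \ T)
            if v < best:
                best, best_T = v, T
            T = (T - 1) & S
        f[S], choice[S] = best, best_T
    blocks, S = [], full                  # recover the optimal blocks of the full set
    while S:
        blocks.append(choice[S])
        S ^= choice[S]
    return f[full], blocks

# toy instance on m = 3 elements; subsets are bitmasks over {a=1, b=2, c=4}
cost = {0b001: 3, 0b010: 4, 0b100: 2, 0b011: 5, 0b101: 6, 0b110: 5, 0b111: 10}
print(best_partition(cost, 3))            # (7.0, [4, 3]): {c} alone plus {a, b} together
```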

Journal ArticleDOI
TL;DR: A general method for searching efficiently in parallel undirected graphs, called ear-decomposition search (EDS), based on depth-first search (DFS), is presented.

Journal ArticleDOI
TL;DR: This paper describes an O(n)-time algorithm for recognizing and sorting Jordan sequences that uses level-linked search trees and a reduction of the recognition and sorting problem to a list-splitting problem.
Abstract: For a Jordan curve C in the plane nowhere tangent to the x axis, let x1, x2,…, xn be the abscissas of the intersection points of C with the x axis, listed in the order the points occur on C. We call x1, x2,…, xn a Jordan sequence. In this paper we describe an O(n)-time algorithm for recognizing and sorting Jordan sequences. The problem of sorting such sequences arises in computational geometry and computational geography. Our algorithm is based on a reduction of the recognition and sorting problem to a list-splitting problem. To solve the list-splitting problem we use level-linked search trees.

Journal ArticleDOI
TL;DR: This paper shows that the orbit problem for general n is decidable, and indeed decidable in polynomial time, and applies the algorithm for the orbit problem in several contexts.
Abstract: The accessibility problem for linear sequential machines [12] is the problem of deciding whether there is an input x such that on x the machine starting in a given state q1 goes to a given state q2. Harrison shows that this problem is reducible to the following simply stated linear algebra problem, which we call the "orbit problem": Given (n, A, x, y), where n is a natural number and A, x, and y are n×n, n×1, and n×1 matrices of rationals, respectively, decide whether there is a natural number i such that A^i x = y. He conjectured that the orbit problem is decidable. No progress was made on the conjecture for ten years until Shank [22] showed that if n is fixed at 2, then the problem is decidable. This paper shows that the orbit problem for general n is decidable and indeed decidable in polynomial time. The orbit problem arises in several contexts; two of these, linear recurrences and the discrete logarithm problem for polynomials, are discussed, and we apply our algorithm for the orbit problem in these contexts.

Proceedings ArticleDOI
27 Oct 1986
TL;DR: This paper concerns the design of parts orienters - the dual to the motion planning problem and two particular paradigms are considered and their abstractions to the computational domain lead to interesting problems in graph pebbling and function composition on finite sets.
Abstract: This paper concerns the design of parts orienters - the dual to the motion planning problem. Two particular paradigms are considered and their abstractions to the computational domain lead to interesting problems in graph pebbling and function composition on finite sets. Polynomial time algorithms are developed for the abstracted problems.

Journal ArticleDOI
TL;DR: New dynamic programming algorithms are presented which reduce the required computation and the first polynomial time algorithm is given for predicting general secondary structure.
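
Not the paper's algorithm, but a classic simplified relative that shows why such predictions can be done by dynamic programming in polynomial time: the Nussinov-style recurrence that maximises the number of complementary base pairs in O(n^3) time.

```python
def max_pairs(seq, min_loop=3):
    """Maximum number of nested complementary base pairs (loops shorter than min_loop forbidden)."""
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                          # base i left unpaired
            if (seq[i], seq[j]) in pair:
                best = max(best, dp[i + 1][j - 1] + 1)   # base i pairs with base j
            for k in range(i + 1, j):                    # split into two substructures
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_pairs("GGGAAAUCC"))   # 3 pairs: a small hairpin
```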

Journal ArticleDOI
Frank K. Hwang1
TL;DR: In this article, an implementation of Melzak's construction of Steiner trees is presented which runs in linear time and uses linear space.

Book ChapterDOI
Eric Allender1
11 Jun 1986
TL;DR: The complexity of sparse sets in P is shown to be central to certain questions about circuit complexity classes and about one-way functions.
Abstract: P-printable sets, defined in [HY-84], arise naturally in the study of P-uniform circuit complexity, generalized Kolmogorov complexity, and data compression, as well as in many other areas. We present new characterizations of the P-printable sets and present necessary and sufficient conditions for the existence of sparse sets in P which are not P-printable. The complexity of sparse sets in P is shown to be central to certain questions about circuit complexity classes and about one-way functions. Among the main results are:

Proceedings Article
Jin-Yi Cai1
02 Jun 1986
TL;DR: It is shown that a random oracle set A separates PSPACE from the entire polynomial-time hierarchy with probability one as a consequence of how much error a fixed depth Boolean circuit must make in computing the parity function.
Abstract: We consider how much error a fixed depth Boolean circuit must make in computing the parity function. We show that with an exponential bound of the form exp(n^λ) on the size of the circuits, they make a 50% error on all possible inputs, asymptotically and uniformly. As a consequence, we show that a random oracle set A separates PSPACE from the entire polynomial-time hierarchy with probability one.

Journal ArticleDOI
TL;DR: An OPM/L data compression scheme suggested by Ziv and Lempel, LZ77, is applied to text compression and a slightly modified version suggested by Storer and Szymanski, L ZSS, is found to achieve compression ratios as good as most existing schemes for a wide range of texts.
Abstract: An OPM/L data compression scheme suggested by Ziv and Lempel, LZ77, is applied to text compression. A slightly modified version suggested by Storer and Szymanski, LZSS, is found to achieve compression ratios as good as most existing schemes for a wide range of texts. LZSS decoding is very fast, and comparatively little memory is required for encoding and decoding. Although the time complexity of LZ77 and LZSS encoding is O(M) for a text of M characters, straightforward implementations are very slow. The time consuming step of these algorithms is a search for the longest string match. Here a binary search tree is used to find the longest string match, and experiments show that this results in a dramatic increase in encoding speed. The binary tree algorithm can be used to speed up other OPM/L schemes, and other applications where a longest string match is required. Although the LZSS scheme imposes a limit on the length of a match, the binary tree algorithm will work without any limit.
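
A sketch of LZSS encoding with a brute-force longest-match search over a sliding window; the paper's point is precisely that this search, the time-consuming step, can be done with a binary search tree for a dramatic speed-up. The window and match-length limits below are illustrative.

```python
def lzss_encode(text, window=4096, min_match=3, max_match=18):
    i, out = 0, []
    while i < len(text):
        start = max(0, i - window)
        best_len, best_off = 0, 0
        for j in range(start, i):                     # brute-force longest-match search
            length = 0
            while (length < max_match and i + length < len(text)
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append((best_off, best_len))          # (offset, length) reference
            i += best_len
        else:
            out.append(text[i])                       # literal character
            i += 1
    return out

def lzss_decode(tokens):
    s = ""
    for t in tokens:
        if isinstance(t, tuple):
            off, length = t
            for _ in range(length):                   # copy one char at a time (handles overlap)
                s += s[-off]
        else:
            s += t
    return s

msg = "abracadabra abracadabra"
enc = lzss_encode(msg)
print(enc)
print(lzss_decode(enc) == msg)                        # True
```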