
Showing papers on "Time complexity published in 1991"


Journal ArticleDOI
TL;DR: It is shown that the STP (simple temporal problem), which subsumes the major part of Vilain and Kautz's point algebra, can be solved in polynomial time; the applicability of path consistency algorithms as preprocessing of temporal problems is also studied, demonstrating their termination and bounding their complexities.
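
The polynomial-time claim for the STP rests on a shortest-paths formulation: encode each bound x_j - x_i <= b as a weighted edge of a distance graph and run all-pairs shortest paths; the network is consistent iff there is no negative cycle. A minimal sketch (function and variable names are ours, not the paper's):

```python
from itertools import product

def solve_stp(n, constraints):
    """Solve a Simple Temporal Problem by all-pairs shortest paths.

    `constraints` maps (i, j) to an upper bound b on x_j - x_i.
    Returns the minimal distance matrix, or None if the network is
    inconsistent (the distance graph has a negative cycle)."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), b in constraints.items():
        d[i][j] = min(d[i][j], b)
    for k, i, j in product(range(n), repeat=3):   # Floyd-Warshall, O(n^3)
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(n)):        # negative cycle found
        return None
    return d

# x1 - x0 in [10, 20] and x2 - x1 in [30, 40], encoded as upper bounds.
net = {(0, 1): 20, (1, 0): -10, (1, 2): 40, (2, 1): -30}
print(solve_stp(3, net))
```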

1,989 citations


Journal ArticleDOI
TL;DR: The algorithms presented here are based on the recently developed theory of wavelets and are applicable to all Calderon-Zygmund and pseudo-differential operators, and indicate that many previously intractable problems become manageable with the techniques presented here.
Abstract: A class of algorithms is introduced for the rapid numerical application of a class of linear operators to arbitrary vectors. Previously published schemes of this type utilize detailed analytical information about the operators being applied and are specific to extremely narrow classes of matrices. In contrast, the methods presented here are based on the recently developed theory of wavelets and are applicable to all Calderon-Zygmund and pseudo-differential operators. The algorithms of this paper require order O(N) or O(N log N) operations to apply an N × N matrix to a vector (depending on the particular operator and the version of the algorithm being used), and our numerical experiments indicate that many previously intractable problems become manageable with the techniques presented here.
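
The compression idea is easy to demonstrate: transform the matrix into an orthonormal wavelet basis, discard coefficients below a threshold, and apply the sparse remainder. The sketch below uses the Haar wavelet for simplicity (the paper uses smoother wavelets) and dense numpy products, so it illustrates the compression rather than the O(N log N) running time:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix, n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # detail rows
    return np.vstack([top, bot]) / np.sqrt(2)

n = 256
H = haar_matrix(n)

# A Calderon-Zygmund-like kernel: smooth away from the diagonal.
i, j = np.indices((n, n))
A = 1.0 / (1.0 + np.abs(i - j))

Aw = H @ A @ H.T                 # matrix expressed in the wavelet basis
mask = np.abs(Aw) > 1e-4         # drop negligible coefficients
print("kept entries:", int(mask.sum()), "of", n * n)

x = np.random.randn(n)
y_fast = H.T @ ((Aw * mask) @ (H @ x))   # apply the compressed operator
print("relative error:", np.linalg.norm(y_fast - A @ x) / np.linalg.norm(A @ x))
```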

1,841 citations


Journal ArticleDOI
TL;DR: The proof of correctness of the algorithm relies on recent theory of rapidly mixing Markov chains and isoperimetric inequalities to show that a certain random walk can be used to sample nearly uniformly from within a convex body K in n-dimensional Euclidean space.
Abstract: A randomized polynomial-time algorithm for approximating the volume of a convex body K in n-dimensional Euclidean space is presented. The proof of correctness of the algorithm relies on recent theory of rapidly mixing Markov chains and isoperimetric inequalities to show that a certain random walk can be used to sample nearly uniformly from within K.
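
The sampling subroutine is the heart of the method and is simple to sketch: a "ball walk" proposes a random nearby point and moves there if a membership oracle accepts it. The full volume algorithm, not reproduced here, estimates vol(K) as a telescoping product of ratios obtained from such samples. A minimal sketch with illustrative parameters:

```python
import numpy as np

def ball_walk(inside, x0, steps, delta):
    """Ball walk: from x, propose a uniform point in a radius-delta ball
    and move there if the membership oracle `inside` accepts it.
    After enough steps the iterate is nearly uniform in the body."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        step = np.random.randn(len(x))
        # Uniform point in a ball: uniform direction, radius ~ delta * U^(1/n).
        step *= delta * np.random.rand() ** (1 / len(x)) / np.linalg.norm(step)
        if inside(x + step):
            x = x + step
    return x

# Example: the unit ball intersected with the positive orthant.
inside = lambda p: np.linalg.norm(p) <= 1 and np.all(p >= 0)
print(ball_walk(inside, x0=np.full(3, 0.2), steps=20000, delta=0.2))
```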

702 citations


Journal Article
TL;DR: A deterministic algorithm for triangulating a simple polygon in linear time is given, using the polygon-cutting theorem and the planar separator theorem, whose role is essential in the discovery of new diagonals.
Abstract: We give a deterministic algorithm for triangulating a simple polygon in linear time. The basic strategy is to build a coarse approximation of a triangulation in a bottom-up phase and then use the information computed along the way to refine the triangulation in a top-down phase. The main tools used are the polygon-cutting theorem, which provides us with a balancing scheme, and the planar separator theorem, whose role is essential in the discovery of new diagonals. Only elementary data structures are required by the algorithm; in particular, no dynamic search trees are needed.
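
Chazelle's linear-time algorithm is far too intricate to sketch here; for contrast, the classical O(n^2) ear-clipping method below triangulates a simple counterclockwise polygon and shows what the baseline looks like. This is emphatically not the paper's algorithm:

```python
def triangulate_ear_clipping(poly):
    """O(n^2) ear clipping for a simple polygon given in CCW order.
    (Only a textbook baseline; the linear-time algorithm is far subtler.)"""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def inside(p, a, b, c):  # strict containment in CCW triangle abc
        return cross(a, b, p) > 0 and cross(b, c, p) > 0 and cross(c, a, p) > 0

    verts, tris = list(poly), []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i-1], verts[i], verts[(i+1) % n]
            if cross(a, b, c) <= 0:          # reflex vertex, not an ear
                continue
            if any(inside(p, a, b, c) for p in verts if p not in (a, b, c)):
                continue                      # another vertex blocks the ear
            tris.append((a, b, c))
            del verts[i]                      # clip the ear
            break
    tris.append(tuple(verts))
    return tris

print(triangulate_ear_clipping([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]))
```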

632 citations


Journal ArticleDOI
01 Jan 1991-Networks
TL;DR: These problems of inferring the evolutionary history of n objects, either from present characters of the objects or from several partial estimates of their evolutionary history, can be solved by graph theoretic methods in linear time, which is time optimal, and which is a significant improvement over existing methods.
Abstract: In this paper, we examine two related problems of inferring the evolutionary history of n objects, either from present characters of the objects or from several partial estimates of their evolutionary history. The first problem is called the Phylogeny problem, and the second is the Tree Compatibility problem. Both of these problems are central in algorithmic approaches to the study of evolution and in other problems of historical reconstruction. In this paper, we show that both of these problems can be solved by graph theoretic methods in linear time, which is time optimal, and which is a significant improvement over existing methods.

414 citations


Journal ArticleDOI
TL;DR: This paper proposes a new problem called the dynamic Steiner tree problem, and it is shown that the worst-case performance for any algorithm is at least $\frac{1}{2}\lg n$ times the cost of an optimum solution with complete rearrangement.
Abstract: This paper proposes a new problem called the dynamic Steiner tree problem. Interest in the dynamic Steiner tree problem is motivated by multipoint routing in communication networks, where the set of nodes in the connection changes over time. This problem, which has its basis in the Steiner tree problem on graphs, can be divided into two cases: one in which rearrangement of existing routes is not allowed, and a second in which rearrangement is allowed. For the nonrearrangeable version, it is shown that the worst-case performance for any algorithm is at least $\frac{1}{2}\lg n$ times the cost of an optimum solution with complete rearrangement. Here n is the maximum number of nodes to be connected. In addition, a simple, polynomial time algorithm is presented that has worst-case performance within two times this bound. In the rearrangeable case, a polynomial time algorithm is presented with worst-case performance bounded by a constant times optimum.
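
A minimal sketch of the greedy heuristic that is natural in the nonrearrangeable setting: attach each arriving terminal to the current tree by a shortest path. The paper's algorithms and constants are not reproduced here, and for brevity the sketch records only attachment costs, not the tree edges:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in an undirected weighted graph."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def online_steiner_cost(graph, terminals):
    """Nonrearrangeable heuristic: attach each arriving terminal to the
    current tree by a cheapest path (path nodes omitted for brevity)."""
    tree_nodes, cost = {terminals[0]}, 0
    for t in terminals[1:]:
        dist = dijkstra(graph, t)
        cost += min(dist[v] for v in tree_nodes)
        tree_nodes.add(t)
    return cost

g = {1: [(2, 1), (3, 4)], 2: [(1, 1), (3, 1)], 3: [(1, 4), (2, 1)]}
print(online_steiner_cost(g, [1, 3, 2]))
```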

392 citations


Proceedings ArticleDOI
01 Sep 1991
TL;DR: In this paper, the computational complexity of approximating ω(G), the size of the largest clique in a graph G, within a given factor is considered, and it is shown that if certain approximation procedures exist, then EXPTIME=NEXPTIME and NP=P.
Abstract: The computational complexity of approximating ω(G), the size of the largest clique in a graph G, within a given factor is considered. It is shown that if certain approximation procedures exist, then EXPTIME=NEXPTIME and NP=P.

382 citations


Proceedings ArticleDOI
03 Jan 1991
TL;DR: The authors demonstrate that any function f whose L1-norm is polynomial can be approximated by a polynomially sparse function, and prove that Boolean decision trees with linear operations are a subset of this class of functions.
Abstract: This work gives a polynomial time algorithm for learning decision trees with respect to the uniform distribution (the algorithm uses membership queries). The decision tree model that is considered is an extension of the traditional Boolean decision tree model that allows linear operations in each node (i.e., summation of a subset of the input variables over GF(2)). This paper shows how to learn in polynomial time any function that can be approximated (in norm L2) by a polynomially sparse function (i.e., a function with only polynomially many nonzero Fourier coefficients). The authors demonstrate that any function f whose L1-norm (i.e., the sum of absolute values of the Fourier coefficients) is polynomial can be approximated by a polynomially sparse function, and prove that Boolean decision trees with linear operations are a subset of this class of functions. Moreover, it is shown that the functions with polynomial L1-norm can be learned deterministically. The algorithm can also exactly identify a decision tree of depth d in time polynomial in 2^d and n. This result implies that trees of logarithmic depth can be identified in polynomial time.
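
The Fourier machinery is easy to illustrate: every coefficient f̂(S) = E[f(x)·χ_S(x)] can be estimated from random samples, and a sparse approximation keeps only the large ones. The brute-force enumeration below is only a stand-in; the paper's algorithm locates the large coefficients without enumerating all subsets:

```python
import itertools, random

def chi(S, x):
    """Parity character: (-1)^(sum of x_i over i in S)."""
    return 1 - 2 * (sum(x[i] for i in S) % 2)

def estimate_fourier(f, n, S, samples=20000):
    """Estimate f_hat(S) = E[f(x) * chi_S(x)] by uniform sampling."""
    total = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / samples

# f(x) = (-1)^(x0 XOR x1): a single parity, so f_hat({0, 1}) = 1.
f, n = (lambda x: chi((0, 1), x)), 4
for S in itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(3)):
    c = estimate_fourier(f, n, S)
    if abs(c) > 0.1:                 # keep only the large coefficients
        print(S, round(c, 2))
```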

343 citations


Journal ArticleDOI
TL;DR: This article converts some of the applications of the Lovasz Local Lemma into polynomial time sequential algorithms (at the cost of a weaker constant factor in the “exponent”).
Abstract: The Lovasz Local Lemma is a remarkable sieve method to prove the existence of certain structures without supplying any efficient way of finding these structures. In this article we convert some of the applications of the Local Lemma into polynomial time sequential algorithms (at the cost of a weaker constant factor in the “exponent”). Our main example is the following: assume that in an n-uniform hypergraph every hyperedge intersects at most 2^{n/48} other hyperedges; then there is a polynomial time algorithm that finds a two-coloring of the points such that no hyperedge is monochromatic.
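
For a feel of what an algorithmic Local Lemma looks like, here is a resampling search in the style of the much later Moser-Tardos algorithm, not Beck's method: recolor the vertices of a violated hyperedge until no hyperedge is monochromatic. It terminates quickly when each edge intersects few others, as the Lemma requires:

```python
import random

def two_color(n_points, hyperedges, rng=random.Random(0)):
    """Resampling search for a 2-coloring with no monochromatic hyperedge
    (in the spirit of the later Moser-Tardos algorithmic Local Lemma)."""
    color = [rng.randint(0, 1) for _ in range(n_points)]

    def mono(e):
        return len({color[v] for v in e}) == 1

    bad = [e for e in hyperedges if mono(e)]
    while bad:
        for v in bad[0]:                       # resample a violated edge
            color[v] = rng.randint(0, 1)
        bad = [e for e in hyperedges if mono(e)]
    return color

# A small 3-uniform example with low edge overlap.
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 0)]
print(two_color(8, edges))
```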

312 citations


Journal ArticleDOI
TL;DR: This work presents an efficient O(n^3) worst case time complexity algorithm for achieving such a goal and introduces an indexing approach based on transformation invariant representations and is especially geared toward efficient recognition of partial structures in rigid objects belonging to large data bases.
Abstract: Macromolecules carrying biological information often consist of independent modules containing recurring structural motifs. Detection of a specific structural motif within a protein (or DNA) aids in elucidating the role played by the protein (DNA element) and the mechanism of its operation. The number of crystallographically known structures at high resolution is increasing very rapidly. Yet, comparison of three-dimensional structures is a laborious time-consuming procedure that typically requires a manual phase. To date, there is no fast automated procedure for structural comparisons. We present an efficient O(n^3) worst case time complexity algorithm for achieving such a goal (where n is the number of atoms in the examined structure). The method is truly three-dimensional, sequence-order-independent, and thus insensitive to gaps, insertions, or deletions. This algorithm is based on the geometric hashing paradigm, which was originally developed for object recognition problems in computer vision. It introduces an indexing approach based on transformation invariant representations and is especially geared toward efficient recognition of partial structures in rigid objects belonging to large data bases. This algorithm is suitable for quick scanning of structural data bases and will detect a recurring structural motif that is a priori unknown. The algorithm uses protein (or DNA) structures, atomic labels, and their three-dimensional coordinates. Additional information pertaining to the structure speeds the comparisons. The algorithm is straightforwardly parallelizable, and several versions of it for computer vision applications have been implemented on the massively parallel connection machine. A prototype version of the algorithm has been implemented and applied to the detection of substructures in proteins.
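
A drastically simplified 2-D version of geometric hashing conveys the idea (the paper works in three dimensions with atomic labels): preprocessing hashes the coordinates of all points expressed in every ordered-pair basis, and recognition votes for the (model, basis) entries that agree with a scene basis:

```python
from collections import defaultdict
import numpy as np

def basis_coords(points, i, j):
    """Coordinates of all points in the frame defined by the ordered pair
    (i, j): invariant under translation, rotation, and uniform scaling."""
    p, q = points[i], points[j]
    d = q - p
    rot = np.array([[d[0], d[1]], [-d[1], d[0]]]) / (d @ d)
    return [tuple(np.round(rot @ (r - p), 2)) for r in points]

def build_table(models):
    """Preprocessing: hash invariant coordinates under every basis pair."""
    table = defaultdict(list)
    for name, pts in models.items():
        pts = np.asarray(pts, float)
        for i in range(len(pts)):
            for j in range(len(pts)):
                if i != j:
                    for c in basis_coords(pts, i, j):
                        table[c].append((name, i, j))
    return table

def recognize(table, scene):
    """Recognition: pick one scene basis and vote for consistent entries."""
    scene = np.asarray(scene, float)
    votes = defaultdict(int)
    for c in basis_coords(scene, 0, 1):
        for entry in table.get(c, ()):
            votes[entry] += 1
    return max(votes.items(), key=lambda kv: kv[1]) if votes else None

models = {"motif": [(0, 0), (1, 0), (1, 1), (0, 2)]}
table = build_table(models)
scene = [(5, 5), (5, 6), (4, 6), (3, 5)]   # same motif, moved and rotated
print(recognize(table, scene))
```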

311 citations


Journal ArticleDOI
TL;DR: This work provides matching upper and lower bounds on the data-complexity of testing containment, membership and uniqueness for sets of possible worlds and shows that the certain fact problem is coNP-complete, even for a fixed first order query applied to a Codd-table.

Proceedings ArticleDOI
01 Jun 1991
TL;DR: The following packing problem arises in connection with the lettering of maps: given n distinct points p1, p2, ..., pn in the plane, determine the supremum σ_opt of all reals σ such that there are n pairwise disjoint, axis-parallel, closed squares Q1, Q2, ..., Qn of side-length σ, where each pi is a corner of Qi.
Abstract: The following packing problem arises in connection with the lettering of maps: Given n distinct points p1, p2, ..., pn in the plane, determine the supremum σ_opt of all reals σ such that there are n pairwise disjoint, axis-parallel, closed squares Q1, Q2, ..., Qn of side-length σ, where each pi is a corner of Qi. Note that, by an affine transformation, the problem is equivalent to finding largest homothetic copies of a fixed rectangle or parallelogram instead of equally-sized squares. In the cartographic application, the points are items (groundwater drillholes etc.) and the squares are places for labels associated with these items (sulphate concentration etc.). An algorithm is presented that in O(n log n) time produces a solution guaranteed to be at least half as large as the supremum. This is optimal in the sense that the corresponding decision problem is NP-complete and, provided that P ≠ NP, no polynomial approximation algorithm with a guaranteed factor exceeding 1/2 exists; there is also a lower bound of Ω(n log n) for the running time.

Proceedings ArticleDOI
03 Jan 1991
TL;DR: It is shown that, assuming the intractability of quadratic residues modulo a composite, inverting RSA encryption, or factoring Blum integers, there is no polynomial time prediction algorithm with membership queries for Boolean formulas, constant depth threshold circuits, and several related representation classes.
Abstract: We investigate cryptographic limitations on the power of membership queries to help with concept learning. We extend the notion of prediction-preserving reductions to prediction with membership queries. We exhibit a number of reductions and show several prediction problems to be complete for different complexity classes. We show that assuming the intractability of (1) quadratic residues modulo a composite, (2) inverting RSA encryption, or (3) factoring Blum integers, there is no polynomial time prediction algorithm with membership queries for Boolean formulas, constant depth threshold circuits, 3μ-Boolean formulas, finite unions or intersections of DFAs, 2-way DFAs, NFAs, or CFGs. Also, we show that if there exist one-way functions that cannot be inverted by polynomial-sized circuits, then CNF or DNF formulas and convex polytopes intersected with the Boolean hypercube are either polynomial time predictable without membership queries, or they are not polynomial time predictable even with membership queries; so, in effect, membership queries will not help with predicting CNF or DNF formulas.

Proceedings ArticleDOI
03 Jan 1991
TL;DR: A Monte Carlo algorithm is presented which constructs an efficient nearly uniform random generator for finite groups G in a very general setting and presumes a priori knowledge of an upper bound n on log |G|.
Abstract: Heuristic algorithms manipulating finite groups often work under the assumption that certain operations lead to “random” elements of the group. While polynomial time methods to construct uniform random elements of permutation groups have been known for over two decades, no such methods have previously been known for more general cases such as matrix groups over finite fields. We present a Monte Carlo algorithm which constructs an efficient nearly uniform random generator for finite groups G in a very general setting. The algorithm presumes a priori knowledge of an upper bound n on log |G|. The random generator is constructed and works in time polynomial in this upper bound n. The process admits a high degree of parallelization: after a preprocessing of length O(n log n) with O(n) processors, the construction of each random element costs O(log n) time with O(n) processors. We use the computational model of “black box groups”: group elements are encoded as (0, 1)-strings of uniform length, and an oracle performs group operations at unit cost. The group G is given by a list of generators. The random generator will produce each group element with probability (1/|G|)(1 ± ε), where ε can be prescribed to be an arbitrary exponentially small function of n.

Journal ArticleDOI
TL;DR: An attempt is made to provide a unifying theoretical framework for this growing body of algorithms that have been proposed to solve the set union problem and its variants.
Abstract: This paper surveys algorithmic techniques and data structures that have been proposed to solve the set union problem and its variants. The discovery of these data structures required a new set of algorithmic tools that have proved useful in other areas. Special attention is devoted to recent extensions of the original set union problem, and an attempt is made to provide a unifying theoretical framework for this growing body of algorithms.
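
The core structure the survey covers is easy to state in code: a standard union-find with union by rank and path compression, under which any sequence of m operations on n elements runs in O(m·α(n)) time, α being the inverse Ackermann function:

```python
class DisjointSets:
    """Union-find with union by rank and path compression (via halving)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra                   # attach the shorter tree
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

ds = DisjointSets(5)
ds.union(0, 1); ds.union(3, 4)
print(ds.find(1) == ds.find(0), ds.find(2) == ds.find(3))
```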

Journal ArticleDOI
E.B. Baum1
TL;DR: The author's algorithm is proved to PAC learn in polynomial time the class of target functions defined by layered, depth two, threshold nets having n inputs connected to k hidden threshold units connected to one or more output units, provided k ≤ 4.
Abstract: An algorithm which trains networks using examples and queries is proposed. In a query, the algorithm supplies a y and is told t(y) by an oracle. Queries appear to be available in practice for most problems of interest, e.g. by appeal to a human expert. The author's algorithm is proved to PAC learn in polynomial time the class of target functions defined by layered, depth two, threshold nets having n inputs connected to k hidden threshold units connected to one or more output units, provided k ≤ 4.

Proceedings Article
14 Jul 1991
TL;DR: The search time complexity of two cooperative reinforcement learning algorithms, along with unbiased Q-learning, is analyzed for problem solving tasks on a restricted class of state spaces; the results shed light on the complexity of search in reinforcement learning in general and the utility of cooperative mechanisms for reducing search.
Abstract: Reinforcement learning algorithms, when used to solve multi-stage decision problems, perform a kind of online (incremental) search to find an optimal decision policy. The time complexity of this search strongly depends upon the size and structure of the state space and upon a priori knowledge encoded in the learner's initial parameter values. When a priori knowledge is not available, search is unbiased and can be excessive. Cooperative mechanisms help reduce search by providing the learner with shorter latency feedback and auxiliary sources of experience. These mechanisms are based on the observation that in nature, intelligent agents exist in a cooperative social environment that helps structure and guide learning. Within this context, learning involves information transfer as much as it does discovery by trial-and-error. Two cooperative mechanisms are described: Learning with an External Critic (or LEC) and Learning By Watching (or LBW). The search time complexity of these algorithms, along with unbiased Q-learning, is analyzed for problem solving tasks on a restricted class of state spaces. The results indicate that while unbiased search can be expected to require time moderately exponential in the size of the state space, the LEC and LBW algorithms require at most time linear in the size of the state space and, under appropriate conditions, are independent of the state space size altogether, requiring time proportional to the length of the optimal solution path. While these analytic results apply only to a restricted class of tasks, they shed light on the complexity of search in reinforcement learning in general and the utility of cooperative mechanisms for reducing search.
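
The unbiased baseline analyzed in the paper is plain tabular Q-learning; the LEC and LBW mechanisms are not reproduced here. A minimal sketch on a chain-shaped state space, with illustrative parameter values of our choosing:

```python
import random

def pick_action(Q, s, eps):
    """Epsilon-greedy with random tie-breaking (all-zero Q is the unbiased case)."""
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randint(0, 1)
    return 0 if Q[s][0] > Q[s][1] else 1

def q_learning(n_states, goal, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D chain: action 0 moves left, 1 moves right,
    reward 1 on reaching the goal. With zero-initialized values the agent
    must first find the goal by undirected trial and error."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(10_000):              # step cap keeps episodes finite
            a = pick_action(Q, s, eps)
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == goal:
                break
    return Q

Q = q_learning(n_states=8, goal=7)
print([round(max(q), 2) for q in Q])         # values rise toward the goal
```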

Proceedings ArticleDOI
01 Sep 1991
TL;DR: The notion of distributed program checking as a means of making a distributed algorithm self-stabilizing is explored, and a compiler that converts a deterministic synchronous protocol π for static networks into a self-stabilizing version of π for dynamic networks is described.
Abstract: The notion of distributed program checking as a means of making a distributed algorithm self-stabilizing is explored. A compiler that converts a deterministic synchronous protocol π for static networks into a self-stabilizing version of π for dynamic networks is described. If T_π is the time complexity of π and D is a bound on the diameter of the final network, the compiled version of π stabilizes in time O(D + T_π) and has the same space complexity as π. The general method achieves efficient results for many specific noninteractive tasks. For instance, solutions for the shortest paths and spanning tree problems take O(D) time to stabilize, an improvement over the previous best time of O(D^2).
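
Self-stabilization is easiest to see on a toy noninteractive task. The classic distance-computation rule below (not the paper's compiler) converges to correct BFS distances from an arbitrary, even adversarially corrupted, starting state:

```python
def stabilize_bfs(adj, root, dist):
    """Self-stabilizing distance computation: starting from an ARBITRARY
    dist table, the local rule dist(root)=0, dist(v)=1+min over neighbors
    converges to true BFS distances after a bounded number of rounds."""
    rounds = 0
    while True:
        new = {v: 0 if v == root else 1 + min(dist[u] for u in adj[v])
               for v in adj}
        rounds += 1
        if new == dist:                  # legal global state reached
            return dist, rounds
        dist = new

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
corrupted = {0: 7, 1: 0, 2: 5, 3: 1}     # arbitrary initial state
print(stabilize_bfs(adj, root=0, dist=corrupted))
```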

Journal ArticleDOI
TL;DR: In this article, the problem of generating a sequence of motions for removing components in a three-dimensional assembly, one at a time, is considered, the robot motion being strictly translational.
Abstract: Generating a sequence of motions for removing components in a three-dimensional assembly, one at a time, is considered—the robot motion being strictly translational. We map the boundary representation of a given assembly to a tree structure called Disassembly Tree (DT). Traversing the DT in pre- and post-order yields a minimal sequence of operations for disassembly and assembly, respectively. In this paper, an assembly is classified by the logical complexity of its DT (an ordered graph whose nodes are components of the given assembly) and by the geometric complexity of the nodes in DT (in terms of the number of motions needed to remove a single component). Next, whether a component can be removed in one motion is described as a predicate. This predicate is then used in an algorithm for constructing the DT. For a class of assemblies that exhibit total ordering, the algorithm decides whether each component can be removed in a single motion, by constructing a DT in O(N log N) time, on the average, where N is the total number of mating faces in the assembly.

Journal ArticleDOI
TL;DR: The complexity of decision problems that can be solved by a polynomial-time Turing machine making a bounded number of queries to an NP oracle is studied; it is shown that the Boolean hierarchy and the bounded query hierarchies either stand or collapse together.

Proceedings Article
24 Aug 1991
TL;DR: In this article, the authors present two concept languages, called PL1 and PL2, which are extensions of TC and prove that the subsumption problem in these languages can be solved in polynomial time.
Abstract: We present two concept languages, called PL1 and PL2, which are extensions of TC. We prove that the subsumption problem in these languages can be solved in polynomial time. Both languages include a construct for expressing inverse roles, which has not been considered up to now in tractable languages. In addition, PL1 includes number restrictions and negation of primitive concepts, while PL2 includes role conjunction and role chaining. By exploiting recent complexity results, we show that none of the constructs usually considered in concept languages can be added to PL1 and PL2 without losing tractability. Therefore, on the assumption that languages are characterized by the set of constructs they provide, the two languages presented in this paper provide a solution to the problem of singling out an optimal trade-off between expressive power and computational complexity.

Journal ArticleDOI
TL;DR: The polynomial-time counting hierarchy, a hierarchy of complexity classes related to the notion of counting, is studied, settling many open questions dealing with oracle characterizations, closure under Boolean operations, and relations with other complexity classes.
Abstract: The polynomial-time counting hierarchy, a hierarchy of complexity classes related to the notion of counting, is studied. Some of its structural properties are investigated, settling many open questions dealing with oracle characterizations, closure under Boolean operations, and relations with other complexity classes. A new combinatorial technique is developed to obtain relativized separations for some of the studied classes, which imply absolute separations for some logarithmic time bounded complexity classes.

Journal ArticleDOI
TL;DR: A distributed algorithm to compute shortest paths in a network with changing topology is given; it does not suffer from the routing table looping behavior associated with the Ford-Bellman distributed shortest path algorithm, although it uses truly distributed processing.
Abstract: The authors give a distributed algorithm to compute shortest paths in a network with changing topology. The authors analyze its behavior, and the proof of correctness is discussed. It does not suffer from the routing table looping behavior associated with the Ford-Bellman distributed shortest path algorithm, although it uses truly distributed processing. Its time and message complexities are evaluated, and comparisons with other methods are given.

Journal ArticleDOI
TL;DR: The authors show how large efficiencies can be achieved in model-based 3-D vision by combining the notions of discrete relaxation and bipartite matching, capable of pruning large segments of search space.
Abstract: The authors show how large efficiencies can be achieved in model-based 3-D vision by combining the notions of discrete relaxation and bipartite matching. The computational approach presented is capable of pruning large segments of search space, an indispensable step when the number of objects in the model library is large and when recognition of complex objects with a large number of surfaces is called for. Bipartite matching is used for quick wholesale rejection of inapplicable models and for the determination of compatibility of a scene surface with a potential model surface taking into account relational considerations. The time complexity function associated with those aspects of the procedure that are implemented via bipartite matching is provided. The algorithms do not take more than a couple of iterations, even for objects with more than 30 surfaces.

Proceedings ArticleDOI
03 Jan 1991
TL;DR: In this paper, it is shown that the greedy algorithm achieves a constant factor approximation for the superstring problem, producing a superstring of length at most 4n, the first nontrivial constant-factor bound for any polynomial-time algorithm.
Abstract: We consider the following problem: given a collection of strings s1,…, sm, find the shortest string s such that each si appears as a substring (a consecutive block) of s. Although this problem is known to be NP-hard, a simple greedy procedure appears to do quite well and is routinely used in DNA sequencing and data compression practice, namely: repeatedly merge the pair of (distinct) strings with maximum overlap until only one string remains. Let n denote the length of the optimal superstring. A common conjecture states that the above greedy procedure produces a superstring of length O(n) (in fact, 2n), yet the only previous nontrivial bound known for any polynomial-time algorithm is a recent O(n log n) result. We show that the greedy algorithm does in fact achieve a constant factor approximation, proving an upper bound of 4n. Furthermore, we present a simple modified version of the greedy algorithm that we show produces a superstring of length at most 3n. We also show the superstring problem to be MAXSNP-hard, which implies that a polynomial-time approximation scheme for this problem is unlikely.
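
The greedy procedure from the abstract fits in a few lines (the substring-containment filtering and tie-breaking details are our own choices):

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the pair with maximum overlap until one remains.
    This paper proves the result is within a factor of 4 of optimal."""
    # Drop strings contained in others so every merge is between maximal strings.
    s = [x for x in strings if not any(x != y and x in y for y in strings)]
    while len(s) > 1:
        k, a, b = max(((overlap(a, b), a, b) for a in s for b in s if a != b),
                      key=lambda t: t[0])
        s.remove(a); s.remove(b)
        s.append(a + b[k:])                   # merge along the overlap
    return s[0]

print(greedy_superstring(["cde", "abcd", "deab"]))   # prints "abcdeab"
```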

Proceedings ArticleDOI
08 Apr 1991
TL;DR: A new, simple, extremely fast, locally adaptive data compression algorithm of the LZ77 class is presented, which almost halves the size of text files, uses 16 K of memory, and requires about 13 machine instructions to compress and about 4 instructions to decompress each byte.
Abstract: A new, simple, extremely fast, locally adaptive data compression algorithm of the LZ77 class is presented. The algorithm, called LZRW1, almost halves the size of text files, uses 16 K of memory, and requires about 13 machine instructions to compress and about 4 instructions to decompress each byte. This results in speeds of about 77 K and 250 K bytes per second on a one-MIPS machine. The algorithm runs in linear time and has a good worst-case running time. It adapts quickly and has a negligible initialization overhead, making it fast and efficient for small as well as large blocks of data.
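
For illustration, a minimal LZ77-style compressor in the spirit of LZRW1: a table keyed by 3-byte strings proposes one candidate match in the recent window, and the output is a mix of literal and (offset, length) copy items. This is not Williams' actual code; window sizes, hashing, and encoding details all differ:

```python
def compress(data, window=4095, max_match=18):
    """Minimal LZ77-style compressor: one-candidate match via a 3-byte table."""
    table, out, i = {}, [], 0
    while i < len(data):
        key, match = data[i:i + 3], None
        j = table.get(key)
        if j is not None and i - j <= window:
            k = 3                              # extend the match greedily
            while k < max_match and i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            match = (i - j, k)
        if len(key) == 3:
            table[key] = i                     # remember the latest position
        if match:
            out.append(match)                  # (offset, length) copy item
            i += match[1]
        else:
            out.append(data[i:i + 1])          # literal item
            i += 1
    return out

def decompress(items):
    buf = bytearray()
    for it in items:
        if isinstance(it, tuple):
            off, length = it
            for _ in range(length):            # handles overlapping copies
                buf.append(buf[-off])
        else:
            buf += it
    return bytes(buf)

text = b"the quick brown fox jumps over the lazy dog; the quick brown fox"
packed = compress(text)
assert decompress(packed) == text
print(len(packed), "items for", len(text), "bytes")
```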

Journal ArticleDOI
TL;DR: An alternative algorithm for the single pair quickest path problem with the same time complexity and a smaller space requirement is developed, together with an algorithm to enumerate the first m quickest paths for sending a given amount of data in O(rmne + rmn^2 log n) time.
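
The standard reduction behind quickest-path algorithms is short enough to sketch: transmitting σ units over a path P takes d(P) + σ/c(P), where d is the total delay and c(P) the bottleneck capacity, so one can try each distinct capacity threshold, run Dijkstra on the surviving edges, and keep the best total. The paper's enumeration of the first m quickest paths is more involved and is not reproduced:

```python
import heapq

def quickest_path_time(edges, s, t, sigma):
    """Single-pair quickest path by capacity thresholding plus Dijkstra."""
    best = float("inf")
    for c in sorted({cap for _, _, _, cap in edges}):
        sub = {}
        for u, v, d, cap in edges:             # keep edges of capacity >= c
            if cap >= c:
                sub.setdefault(u, []).append((v, d))
                sub.setdefault(v, []).append((u, d))
        dist, pq = {s: 0.0}, [(0.0, s)]
        while pq:                              # plain Dijkstra on the subgraph
            du, u = heapq.heappop(pq)
            if du > dist.get(u, float("inf")):
                continue
            for v, d in sub.get(u, ()):
                if du + d < dist.get(v, float("inf")):
                    dist[v] = du + d
                    heapq.heappush(pq, (du + d, v))
        if t in dist:
            best = min(best, dist[t] + sigma / c)
    return best

# Edges are (u, v, delay, capacity).
net = [(0, 1, 1, 10), (1, 3, 1, 10), (0, 2, 5, 100), (2, 3, 5, 100)]
print(quickest_path_time(net, 0, 3, sigma=1000))   # large sigma favors capacity
```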

Journal ArticleDOI
TL;DR: In this article, the problem of scheduling tasks, each of which is logically decomposed into a mandatory subtask and an optional subtask, is considered, and two preemptive algorithms for scheduling, on a uniprocessor system, n dependent tasks with rational ready times, deadlines, and processing times are described.
Abstract: Here the problem of scheduling tasks, each of which is logically decomposed into a mandatory subtask and an optional subtask, is considered. The mandatory subtask must be executed to completion in order to produce an acceptable result. The optional subtask begins after the mandatory subtask is completed and refines the result in order to reduce the error in the result. The optional subtask can be left incomplete. The error in the result of a task is equal to the processing time of the unfinished portion of the optional subtask. Two preemptive algorithms for scheduling, on a uniprocessor system, n dependent tasks with rational ready times, deadlines, and processing times are described. An algorithm is optimal in the following sense: whenever feasible schedules that meet the ready time and deadline constraints of all tasks exist, it finds one that has the minimum total error of all tasks. One of the algorithms is optimal when the tasks have identical weights, and its time complexity is $O(n\log n)$. The other algorithm is optimal when the tasks have arbitrary weights.

Journal ArticleDOI
TL;DR: Two new algorithms for maximizing a separable concave function on a polymatroid are presented, and it is shown that the Decomposition Algorithm runs in polynomial time (in the discrete version) for network and generalized symmetric polymatroids.

Journal ArticleDOI
01 Feb 1991
TL;DR: An inconsistent polynomial-time algorithm is presented which identifies every pattern language in the limit; inference of arbitrary pattern languages is also investigated within the framework of learning from good examples.
Abstract: A pattern is a finite string of constants and variables (cf. [1]). The language of a pattern is the set of all strings which can be obtained by substituting non-null strings of constants for the variables of the pattern. In the present paper, we consider the problem of learning pattern languages from examples. As a main result we present an inconsistent polynomial-time algorithm which identifies every pattern language in the limit. Furthermore, we investigate inference of arbitrary pattern languages within the framework of learning from good examples. Finally, we show that every pattern language can be identified in polynomial time from polynomially many disjointness queries only.
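
The definition of a pattern language is easy to illustrate with a brute-force membership test. Note the contrast: this matcher is exponential in general (membership for arbitrary patterns is hard), whereas the paper's learning algorithms run in polynomial time; the encoding of variables as "x..." symbols is our own:

```python
def matches(pattern, s, env=None):
    """Does string s belong to the language of `pattern`?
    Symbols starting with 'x' are variables; each must be bound to a
    non-empty string, and repeated variables must get the same string."""
    env = env or {}
    if not pattern:
        return s == ""
    head, rest = pattern[0], pattern[1:]
    if not head.startswith("x"):                 # constant symbol
        return s.startswith(head) and matches(rest, s[len(head):], env)
    if head in env:                              # variable already bound
        w = env[head]
        return s.startswith(w) and matches(rest, s[len(w):], env)
    for k in range(1, len(s) + 1):               # try every non-empty binding
        if matches(rest, s[k:], {**env, head: s[:k]}):
            return True
    return False

# Pattern x1 a x1: both occurrences of x1 must match the same string.
print(matches(["x1", "a", "x1"], "bbabb"))   # True  (x1 = "bb")
print(matches(["x1", "a", "x1"], "bab"))     # True  (x1 = "b")
print(matches(["x1", "a", "x1"], "baab"))    # False (no consistent binding)
```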