
Showing papers on "Time complexity published in 1994"


Journal ArticleDOI
TL;DR: For these types of restrictions, it is shown when planning is tractable (polynomial) and when it is intractable (NP-hard); in general, it is PSPACE-complete to determine whether a given planning instance has any solutions.

943 citations


Journal ArticleDOI
TL;DR: A new construction of a pseudorandom bit generator that stretches a short string of truly random bits into a long string that looks random to any algorithm from a complexity class C using an arbitrary function that is hard for C is presented.

921 citations


Proceedings Article
01 Aug 1994
TL;DR: A new filtering algorithm is presented that achieves the generalized arc-consistency condition for these non-binary constraints and has been successfully used in the system RESYN, to solve the subgraph isomorphism problem.
Abstract: Many real-life Constraint Satisfaction Problems (CSPs) involve some constraints similar to the alldifferent constraints. These constraints are called constraints of difference. They are defined on a subset of variables by a set of tuples for which the values occurring in the same tuple are all different. In this paper, a new filtering algorithm for these constraints is presented. It achieves the generalized arc-consistency condition for these non-binary constraints. It is based on matching theory and its complexity is low. In fact, for a constraint defined on a subset of p variables having domains of cardinality at most d, its space complexity is O(pd) and its time complexity is O(p²d²). This filtering algorithm has been successfully used in the system RESYN (Vismara et al. 1992), to solve the subgraph isomorphism problem.
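The matching-based idea behind this filtering can be illustrated with a small sketch. The fragment below only checks whether an alldifferent constraint is satisfiable at all, by searching for a variable-to-value matching with Kuhn's augmenting-path method; the paper's actual algorithm goes further and prunes every value that appears in no maximum matching. The function name and the dict-of-sets domain representation are illustrative choices, not from the paper.

```python
def alldifferent_feasible(domains):
    """Check whether an alldifferent constraint is satisfiable by finding
    a maximum matching between variables and values (Kuhn's algorithm).
    Satisfiable iff every variable can be matched."""
    match = {}  # value -> variable currently using it

    def try_assign(var, dom, seen):
        for val in dom:
            if val in seen:
                continue
            seen.add(val)
            # value is free, or its current variable can be re-matched elsewhere
            if val not in match or try_assign(match[val], domains[match[val]], seen):
                match[val] = var
                return True
        return False

    return all(try_assign(v, domains[v], set()) for v in domains)
```

On `{'x': {1}, 'y': {1, 2}, 'z': {2, 3}}` a matching exists, while `{'x': {1}, 'y': {1}, 'z': {1, 2}}` violates Hall's condition and is rejected.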

823 citations


Journal ArticleDOI
TL;DR: It is shown that the problem becomes NP-hard as soon as $k=3$, but can be solved in polynomial time for planar graphs for any fixed $k$; the planar problem is NP-hard, however, if $k$ is not fixed.
Abstract: In the multiterminal cut problem one is given an edge-weighted graph and a subset of the vertices called terminals, and is asked for a minimum weight set of edges that separates each terminal from all the others. When the number $k$ of terminals is two, this is simply the mincut, max-flow problem, and can be solved in polynomial time. It is shown that the problem becomes NP-hard as soon as $k=3$, but can be solved in polynomial time for planar graphs for any fixed $k$. The planar problem is NP-hard, however, if $k$ is not fixed. A simple approximation algorithm for arbitrary graphs that is guaranteed to come within a factor of $2-2/k$ of the optimal cut weight is also described.
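The 2 − 2/k approximation mentioned at the end of the abstract is the isolation heuristic: for each terminal, compute a minimum cut separating it from all the other terminals merged into one super-sink, then discard the heaviest of the k isolating cuts. A rough sketch using a small Edmonds-Karp max-flow; the node names, edge-list format, and 'SINK' label are illustrative (and 'SINK' is assumed not to clash with a vertex name):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph."""
    flow = 0
    residual = {u: dict(vs) for u, vs in cap.items()}
    while True:
        parent = {s: None}          # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t             # recover path, find bottleneck, augment
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= b
            residual.setdefault(v, {}).setdefault(u, 0)
            residual[v][u] += b
        flow += b

def isolation_heuristic(edges, terminals):
    """Upper bound on the multiterminal cut weight within factor 2 - 2/k:
    sum the k isolating cut values and drop the heaviest one."""
    cuts = []
    for t in terminals:
        cap = defaultdict(lambda: defaultdict(int))
        others = set(terminals) - {t}
        for u, v, w in edges:       # merge the other terminals into 'SINK'
            u2 = 'SINK' if u in others else u
            v2 = 'SINK' if v in others else v
            if u2 != v2:
                cap[u2][v2] += w
                cap[v2][u2] += w
        cuts.append(max_flow({u: dict(vs) for u, vs in cap.items()}, t, 'SINK'))
    return sum(cuts) - max(cuts)
```

On a unit-weight triangle whose three vertices are all terminals, the heuristic returns 4 while the optimum is 3, matching the 2 − 2/3 = 4/3 guarantee exactly.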

726 citations


Proceedings ArticleDOI
14 Dec 1994
TL;DR: Serial and parallel algorithms are presented for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated with a trajectory optimization problem of the following type.

Abstract: Presents serial and parallel algorithms for solving a system of equations that arises from the discretization of the Hamilton-Jacobi equation associated with a trajectory optimization problem of the following type. A vehicle starts at a prespecified point x_0 and follows a unit speed trajectory x(t) inside a region in R^m, until an unspecified time T at which the region is exited. A trajectory minimizing a cost function of the form ∫_0^T r(x(t)) dt + q(x(T)) is sought. The discretized Hamilton-Jacobi equation corresponding to this problem is usually solved using iterative methods. Nevertheless, assuming that the function r is positive, one is able to exploit the problem structure and develop one-pass algorithms for the discretized problem. The first algorithm resembles Dijkstra's shortest path algorithm and runs in time O(n log n), where n is the number of grid points. The second algorithm uses a somewhat different discretization and borrows some ideas from Dial's shortest path algorithm; it runs in time O(n), which is the best possible, under some fairly mild assumptions. Finally, the author shows that the latter algorithm can be efficiently parallelized: for two-dimensional problems and with p processors, its running time becomes O(n/p), provided that p = O(√n / log n).
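The Dijkstra-like first algorithm can be caricatured on a plain grid. The sketch below is not the paper's discretization: it simply charges r of the cell being entered for each grid move and propagates values outward from the exit cells with a heap, relying on r > 0 so that each cell's value is final when popped, which is the one-pass property.

```python
import heapq

def one_pass_value(r, exits):
    """Label-setting ('one-pass') solver for a crude discretization:
    minimize the sum of r over cells entered on a grid path to an exit
    cell, plus that exit's terminal cost q. Because r > 0, each cell is
    finalized when popped, exactly as in Dijkstra's algorithm."""
    rows, cols = len(r), len(r[0])
    value = [[float('inf')] * cols for _ in range(rows)]
    heap = []
    for (i, j), q in exits.items():   # exit cells seed the heap with cost q
        value[i][j] = q
        heapq.heappush(heap, (q, i, j))
    while heap:
        v, i, j = heapq.heappop(heap)
        if v > value[i][j]:
            continue                   # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                nv = v + r[ni][nj]     # pay the running cost of the new cell
                if nv < value[ni][nj]:
                    value[ni][nj] = nv
                    heapq.heappush(heap, (nv, ni, nj))
    return value
```

With a heap this is O(n log n) over n grid cells; the paper's second, Dial-style bucket variant removes the log factor.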

589 citations


Journal ArticleDOI
TL;DR: Five algorithms that identify a subset of features sufficient to construct a hypothesis consistent with the training examples are presented, and it is shown that any learning algorithm implementing the MIN-FEATURES bias requires Θ((ln(1/δ) + [2^p + p ln n])/ε) training examples to guarantee PAC-learning a concept having p relevant features out of n available features.

537 citations


Journal ArticleDOI
TL;DR: It is proved that for any dimension d there exists a polynomial time algorithm for counting integral points in polyhedra in the d-dimensional Euclidean space.
Abstract: We prove that for any dimension d there exists a polynomial time algorithm for counting integral points in polyhedra in the d-dimensional Euclidean space. Previously such algorithms were known for dimensions d = 1, 2, 3, and 4 only.
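For contrast with the polynomial-time (for fixed d) algorithm of the paper, the obvious brute-force counter enumerates a bounding box and is exponential in the dimension. A minimal sketch, with the polyhedron given as Ax ≤ b and an explicit integer bounding box; both conventions are illustrative, not from the paper:

```python
from itertools import product

def count_integral_points(A, b, box):
    """Count integer points x with A x <= b inside the given bounding box
    by brute-force enumeration -- exponential in the dimension d, unlike
    the polynomial-time algorithm (for fixed d) in the paper."""
    count = 0
    for x in product(*(range(lo, hi + 1) for lo, hi in box)):
        # check every inequality row . x <= b_k
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_k
               for row, b_k in zip(A, b)):
            count += 1
    return count
```

For the square 0 ≤ x, y ≤ 2 this counts 9 points; for the triangle x, y ≥ 0, x + y ≤ 2 it counts 6.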

419 citations


Proceedings ArticleDOI
10 Jun 1994
TL;DR: The optimum solution to the k-clustering problem is characterized by the ordinary Euclidean Voronoi diagram and the weighted Voronoi diagram with both multiplicative and additive weights.
Abstract: In this paper we consider the k-clustering problem for a set S of n points p_i = (x_i) in the d-dimensional space with variance-based errors as clustering criteria, motivated by the color quantization problem of computing a color lookup table for frame buffer display. As the inter-cluster criterion to minimize, the sum of intra-cluster errors over every cluster is used, and as the intra-cluster criterion of a cluster S_j, |S_j|^(α−1) Σ_{p_i ∈ S_j} ||x_i − x̄(S_j)||² is considered, where ||·|| is the L2 norm and x̄(S_j) is the centroid of points in S_j, i.e., (1/|S_j|) Σ_{p_i ∈ S_j} x_i. The cases α = 1, 2 correspond to the sum of squared errors and the all-pairs sum of squared errors, respectively. The k-clustering problems under the criteria with α = 1, 2 are treated in a unified manner by characterizing the optimum solution to the k-clustering problem by the ordinary Euclidean Voronoi diagram and the weighted Voronoi diagram with both multiplicative and additive weights. With this framework, the problem is related to the generalized primary shutter function for the Voronoi diagrams. The primary shutter function is shown to be O(n^O(kd)), which implies that, for fixed k, this clustering problem can be solved in polynomial time. For the problem with the most typical intra-cluster criterion of the sum of squared errors, we also present an efficient randomized algorithm which, roughly speaking, finds an ε-approximate 2-clustering in O(n(1/ε)^d) time, which is quite practical and may be applied to real large-scale problems such as the color quantization problem.

365 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a new algorithm, AC-6, which keeps the optimal worst-case time complexity of AC-4 while avoiding its space-complexity drawback.

357 citations


Journal ArticleDOI
TL;DR: A general strategy is developed for solving the random generation problem with two closely related types of methods: for structures of size n, the boustrophedonic algorithms exhibit a worst-case behaviour of the form O(n log n); the sequential algorithms have worst case O(n²), while offering good potential for optimizations in the average case.

338 citations


Journal ArticleDOI
TL;DR: Lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved and this class encompasses realistic hashing-based schemes that use linear space.
Abstract: The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes $O(1)$ worst-case time for lookups and $O(1)$ amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity $\Omega(\log n)$ for a sequence of $n$ insertions and lookups; if the worst-case lookup time is restricted to $k$, then the lower bound becomes $\Omega(k\cdot n^{1/k})$.
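The dynamic scheme builds on static FKS two-level perfect hashing, which can be sketched as follows: hash the n keys into n first-level buckets, and give a bucket holding b keys a second-level table of size b², retrying random multipliers until that bucket is collision-free. The sketch below is a simplification: the full static construction also retries the first-level function until total second-level space is O(n), and uses a genuinely universal hash family rather than the ad hoc (a·k mod p) mod m used here.

```python
import random

def build_fks(keys, p=1_000_003):
    """Static two-level (FKS-style) perfect hash for distinct integer
    keys < p. Level one hashes into n buckets; a bucket with b keys gets
    a table of size b*b whose multiplier is re-drawn until the bucket is
    collision-free, giving O(1) worst-case lookups."""
    n = max(len(keys), 1)
    a = random.randrange(1, p)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[(a * k) % p % n].append(k)
    tables = []
    for bucket in buckets:
        size = max(len(bucket) ** 2, 1)
        while True:                      # retry until no second-level collision
            a2 = random.randrange(1, p)
            table = [None] * size
            ok = True
            for k in bucket:
                slot = (a2 * k) % p % size
                if table[slot] is not None:
                    ok = False
                    break
                table[slot] = k
            if ok:
                break
        tables.append((a2, table))

    def lookup(k):
        a2, table = tables[(a * k) % p % n]
        return table[(a2 * k) % p % len(table)] == k
    return lookup
```

Because a bucket of b keys gets b² slots, a random multiplier is collision-free with constant probability, so the retry loops terminate quickly in expectation.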

Journal ArticleDOI
D. Lee, Mihalis Yannakakis
TL;DR: In this paper, the complexity of finite-state machine testing has been studied and it has been shown that it is PSPACE-complete to determine whether a finite state machine has a preset distinguishing sequence.
Abstract: We study the complexity of two fundamental problems in the testing of finite-state machines. 1) Distinguishing sequences (state identification). We show that it is PSPACE-complete to determine whether a finite-state machine has a preset distinguishing sequence. There are machines that have distinguishing sequences, but only of exponential length. We give a polynomial time algorithm that determines whether a finite-state machine has an adaptive distinguishing sequence. (The previous classical algorithms take exponential time.) Furthermore, if there is an adaptive distinguishing sequence, then we give an efficient algorithm that constructs such a sequence of length at most n(n−1)/2 (which is the best possible), where n is the number of states. 2) Unique input output sequences (state verification). It is PSPACE-complete to determine whether a state of a machine has a unique input output sequence. There are machines whose states have unique input output sequences but only of exponential length.

Journal ArticleDOI
TL;DR: A dynamic programming algorithm for computing a best global alignment of two sequences that is robust in identifying any of several global relationships between two sequences and a multiple alignment algorithm based on the pairwise algorithm.
Abstract: We present a dynamic programming algorithm for computing a best global alignment of two sequences. The proposed algorithm is robust in identifying any of several global relationships between two sequences. The algorithm delivers a best alignment of two sequences in linear space and quadratic time. We also describe a multiple alignment algorithm based on the pairwise algorithm. Both algorithms have been implemented as portable C programs. Experimental results indicate that for a commonly used set of gap penalties, the new programs produce more satisfactory alignments on sequences of various lengths than some existing pairwise and multiple programs based on the dynamic programming algorithm of Needleman and Wunsch.
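The linear-space scoring step is the easy half of such an algorithm: the quadratic-time DP only ever needs the previous row, so the optimal score fits in O(min(|a|, |b|)) space; recovering the alignment itself in linear space additionally needs Hirschberg-style divide and conquer. A sketch with an illustrative scoring scheme (match +1, mismatch −1, gap −2), not the paper's parameters:

```python
def alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global (Needleman-Wunsch) alignment score in O(len(b)) space:
    the DP table is filled row by row, keeping only the previous row."""
    prev = [j * gap for j in range(len(b) + 1)]   # aligning "" against b[:j]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]                           # aligning a[:i] against ""
        for j, cb in enumerate(b, 1):
            cur.append(max(
                prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                prev[j] + gap,                                    # gap in b
                cur[j - 1] + gap,                                 # gap in a
            ))
        prev = cur
    return prev[-1]
```

For example, aligning "GATT" with "GAT" scores three matches and one gap, 3 − 2 = 1, under this scheme.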

Journal ArticleDOI
TL;DR: A polynomial algorithm for fixed k that runs in O(n^(k²/2 − 3k/2 + 4) · T(n, m)) steps, where T(n, m) is the running time required to find the minimum s,t-cut on a graph with n vertices and m edges.

Abstract: The k-cut problem is to find a partition of an edge weighted graph into k nonempty components, such that the total edge weight between components is minimum. This problem is NP-complete for an arbitrary k and its version involving fixing a vertex in each component is NP-hard even for k = 3. We present a polynomial algorithm for fixed k, that runs in O(n^(k²/2 − 3k/2 + 4) · T(n, m)) steps, where T(n, m) is the running time required to find the minimum s,t-cut on a graph with n vertices and m edges.

Proceedings ArticleDOI
01 Jun 1994
TL;DR: A linear-time algorithm for finding SESE regions and for building the PST of arbitrary control flow graphs (including irreducible ones) is given and it is shown how to use the algorithm to find control regions in linear time.
Abstract: In this paper, we describe the program structure tree (PST), a hierarchical representation of program structure based on single entry single exit (SESE) regions of the control flow graph. We give a linear-time algorithm for finding SESE regions and for building the PST of arbitrary control flow graphs (including irreducible ones). Next, we establish a connection between SESE regions and control dependence equivalence classes, and show how to use the algorithm to find control regions in linear time. Finally, we discuss some applications of the PST. Many control flow algorithms, such as construction of Static Single Assignment form, can be speeded up by applying the algorithms in a divide-and-conquer style to each SESE region on its own. The PST is also used to speed up data flow analysis by exploiting “sparsity”. Experimental results from the Perfect Club and SPEC89 benchmarks confirm that the PST approach finds and exploits program structure.

Proceedings ArticleDOI
23 Jan 1994
TL;DR: The first linear-time algorithm for modular decomposition is given, together with a new bound of O(n + m log n) on transitive orientation and on the problem of recognizing permutation graphs and two-dimensional partial orders.

Abstract: A module of an undirected graph is a set X of nodes such that for each node x not in X, either every member of X is adjacent to x, or no member of X is adjacent to x. There is a canonical linear-space representation for the modules of a graph, called the modular decomposition. The modular decomposition facilitates solution of a number of combinatorial problems on certain classes of graphs, and algorithms for computing it have a lengthy history. Closely related to modular decomposition is the transitive orientation problem, which is the problem of assigning a direction to each edge of a graph so that the resulting digraph is transitive, if such an assignment is possible. We give the first linear-time algorithm for modular decomposition, and a new bound of O(n + m log n) on transitive orientation and the problem of recognizing permutation graphs and two-dimensional partial orders.

Proceedings ArticleDOI
20 Nov 1994
TL;DR: It is shown that every computation possesses a short certificate vouching for its correctness, and that, under a cryptographic assumption, any program for an NP-complete problem is checkable in polynomial time.

Abstract: This paper puts forward a computationally-based notion of proof and explores its implications for computation at large. In particular, given a random oracle or a suitable cryptographic assumption, we show that every computation possesses a short certificate vouching for its correctness, and that, under a cryptographic assumption, any program for an NP-complete problem is checkable in polynomial time. In addition, our work provides the beginnings of a theory of computational complexity that is based on "individual inputs" rather than languages.

Journal ArticleDOI
TL;DR: This work considers how to formulate a parallel analytical molecular surface algorithm that has expected linear complexity with respect to the total number of atoms in a molecule, and aims to compute and display these surfaces at interactive rates, by taking advantage of advances in computational geometry.
Abstract: We consider how we set out to formulate a parallel analytical molecular surface algorithm that has expected linear complexity with respect to the total number of atoms in a molecule. To achieve this goal, we avoided computing the complete 3D regular triangulation over the entire set of atoms, a process that takes time O(n log n), where n is the number of atoms in the molecule. We aim to compute and display these surfaces at interactive rates by taking advantage of advances in computational geometry, making further algorithmic improvements, and parallelizing the computations.

Proceedings ArticleDOI
23 Jan 1994
TL;DR: A polynomial time algorithm is given for the evacuation problem with a fixed number of sources and sinks, and a dynamic flow is sought that lexicographically maximizes the amounts of flow leaving the sources in a specified order.
Abstract: Evacuation problems can be modeled as flow problems on dynamic networks. A dynamic network is defined by a graph with capacities and integral transit times on its edges. The maximum dynamic flow problem is to send a maximum amount of flow from a source to a sink within a given time bound T; conversely, the quickest flow problem is to send a given flow amount v from the source to the sink in the shortest possible time. These dynamic flow problems have been studied previously and can be solved via simple minimum cost flow computations. More complicated dynamic flow problems have numerous applications and have been studied extensively. There are no polynomial time algorithms known for many of these problems, including the quickest flow problem with just two sources, each with a flow amount that must reach a single sink. The general multiple source quickest flow problem is commonly used as a model for building evacuation; we also call it the evacuation problem. In this paper we consider three problems related to the evacuation problem. We give a polynomial time algorithm for the evacuation problem with a fixed number of sources and sinks. We give a polynomial time algorithm for the lexicographic maximum dynamic flow problem with any number of sources: in this problem we seek a dynamic flow that lexicographically maximizes the amounts of flow leaving the sources in a specified order. Our algorithm for the evacuation problem follows as an application. We also consider the earliest arrival flow problem. Given a source, sink, and time bound T, an earliest arrival flow maximizes the amount of flow reaching the sink at every time step up to and including T. The existence of such a flow is well known, but there are no polynomial time algorithms known even to approximate it. We give a polynomial time algorithm that, for any fixed c > 0, approximates an earliest arrival flow within a factor of 1 + c.
Research was done while the authors were visiting the Department of Computer Science at Princeton University. †Department of Computer Science, Cornell University, Ithaca, NY 14853. Research supported by a National Science Foundation Graduate Research Fellowship. ‡School of Operations Research & Industrial Engineering, Cornell University, Ithaca, NY 14853. Research supported in part by a Packard Fellowship, an NSF PYI award, and by the National Science Foundation, the Air Force Office of Scientific Research, and the Office of Naval Research, through NSF grant DMS-8920550. Éva Tardos.

Journal ArticleDOI
TL;DR: A systematic comparison of several complexity classes of functions that are computed nondeterministically in polynomial time or with an oracle in NP shows that there exists a disjoint pair of NP-complete sets such that every separator is NP-hard.

Book ChapterDOI
26 Sep 1994
TL;DR: A polynomial time algorithm is presented for testing if two morphisms are equal on every word of a context-free language, and for testing whether or not the first n elements of two sequences of words defined by recurrence formulae are the same.
Abstract: We present a polynomial time algorithm for testing if two morphisms are equal on every word of a context-free language. The inputs to the algorithm are a context-free grammar with constant size productions and two morphisms. The best previously known algorithm had exponential time complexity. Our algorithm can also be used to test in polynomial time whether or not the first n elements of two sequences of words defined by recurrence formulae are the same. In particular, if the well known 2n conjecture for D0L sequences holds, the algorithm can test in polynomial time the equivalence of two D0L sequences.

Journal ArticleDOI
TL;DR: Evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement is provided, and evidence that the problem is PSPACE-hard if B is given a velocity modulus bound on its movements.
Abstract: This paper investigates the computational complexity of planning the motion of a body B in 2-D or 3-D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated.We provide evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements and is NP-hard even if B has no velocity modulus bound, where, in both cases, B has 6 degrees of freedom. To prove these results, we use a unique method of simulation of a Turing machine that uses time to encode configurations (whereas previous lower bound proofs in robotic motion planning used the system position to encode configurations and so required unbounded number of degrees of freedom).We also investigate a natural class of dynamic problems that we call asteroid avoidance problems: B, the object we wish to move, is a convex polyhedron that is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial time algorithms for the 2-D asteroid avoidance problem, where B is a moving polygon and we assume a constant number of obstacles, as well as single exponential time or polynomial space algorithms for the 3-D asteroid avoidance problem, where B is a convex polyhedron and there are arbitrarily many obstacles. 
Our techniques for solving these asteroid avoidance problems use "normal path" arguments, which are an interesting generalization of techniques previously used to solve static shortest path problems. We also give some additional positive results for various other dynamic movers problems, and in particular give polynomial time algorithms for the case in which B has no velocity bounds and the movements of obstacles are algebraic in space-time.

Journal ArticleDOI
TL;DR: This work proposes an algorithm based on local optimality criteria in the event of a potential crossing conflict, whose solution can be obtained very quickly in polynomial time, and furnishes a complexity analysis showing the NP-completeness of the problem.

Journal ArticleDOI
TL;DR: In this paper, the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule for the job shop scheduling problem were presented.
Abstract: In the job shop scheduling problem, there are $m$ machines and $n$ jobs. A job consists of a sequence of operations, each of which must be processed on a specified machine, and the aim is to complete all jobs as quickly as possible. This problem is strongly ${\cal NP}$-hard even for very restrictive special cases. The authors give the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule. These algorithms also extend to the more general case where a job is given not by a linear ordering of the machines on which it must be processed but by an arbitrary partial order. Comparable bounds can also be obtained when there are $m'$ types of machines, a specified number of machines of each type, and each operation must be processed on one of the machines of a specified type, as well as for the problem of scheduling unrelated parallel machines subject to chain precedence constraints.

Journal ArticleDOI
TL;DR: A new k-out-of-n model is constructed, which has n components, each with its own positive integer weight, such that the system is good if the total weight of good (failed) components is at least k.
Abstract: This paper constructs a new k-out-of-n model, viz., a weighted-k-out-of-n system, which has n components, each with its own positive integer weight (total system weight = w), such that the system is good (failed) if the total weight of good (failed) components is at least k. The reliability of the weighted-k-out-of-n:G system is the complement of the unreliability of a weighted-(w-k+1)-out-of-n:F system. Without loss of generality, the authors discuss the weighted-k-out-of-n:G system only. The k-out-of-n:G system is a special case of the weighted-k-out-of-n:G system wherein the weight of each component is 1. An efficient algorithm is given to evaluate the reliability of the weighted-k-out-of-n:G system. The time complexity of this algorithm is O(n·k).
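An O(n·k) evaluation can be realized as a one-dimensional DP over the weight threshold. In the sketch below, dp[j] is the probability that the components processed so far have total good weight at least j; the recurrence and the clamping at index 0 are a straightforward reading of the model, not the paper's exact pseudocode.

```python
def weighted_k_out_of_n(weights, probs, k):
    """Reliability of a weighted-k-out-of-n:G system in O(n*k) time.
    dp[j] = probability that components seen so far have total good
    weight >= j; a component of weight w that works lowers the
    remaining requirement by w (clamped at 0)."""
    dp = [1.0] + [0.0] * k            # before any component: weight >= 0 only
    for w, p in zip(weights, probs):
        new = [1.0] * (k + 1)         # weight >= 0 always holds
        for j in range(1, k + 1):
            new[j] = p * dp[max(j - w, 0)] + (1 - p) * dp[j]
        dp = new
    return dp[k]
```

With unit weights this reduces to the ordinary k-out-of-n:G reliability, e.g. three components of probability 0.5 and k = 2 give 0.5.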

Journal ArticleDOI
TL;DR: The first algorithm solves the problem of computing witnesses for the Boolean product of two matrices, and the second algorithm is a nearly linear time deterministic procedure for constructing a perfect hash function for a given n-subset of {1,…,m}.

Abstract: Small sample spaces with almost independent random variables are applied to design efficient sequential deterministic algorithms for two problems. The first algorithm, motivated by the attempt to design efficient algorithms for the All Pairs Shortest Path problem using fast matrix multiplication, solves the problem of computing witnesses for the Boolean product of two matrices. That is, if A and B are two n by n matrices and C = AB is their Boolean product, the algorithm finds for every entry C_ij = 1 a witness: an index k such that A_ik = B_kj = 1. Its running time exceeds that of computing the product of two n by n matrices with small integer entries by a polylogarithmic factor. The second algorithm is a nearly linear time deterministic procedure for constructing a perfect hash function for a given n-subset of {1,…,m}.
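The witness problem itself is easy to state in code; the content of the paper is solving it deterministically in close to matrix-multiplication time rather than by the direct O(n³) scan sketched here:

```python
def boolean_product_witnesses(A, B):
    """For the Boolean product C = AB, return W with W[i][j] = some
    index k such that A[i][k] = B[k][j] = 1, or -1 when C[i][j] = 0.
    Direct O(n^3) scan, for illustration only."""
    n = len(A)
    W = [[-1] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] and B[k][j]:
                    W[i][j] = k        # first witness found
                    break
    return W
```

A witness matrix immediately yields intermediate vertices for reconstructing shortest paths in the unweighted All Pairs Shortest Path setting, which is the motivation stated in the abstract.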

Proceedings ArticleDOI
06 Nov 1994
TL;DR: The experimental results demonstrate that FBB outperforms the K&L heuristics and the spectral method in terms of the number of crossing nets, and the efficient implementation makes it possible to partition large circuit instances with reasonable runtime.

Abstract: We consider the problem of bipartitioning a circuit into two balanced components that minimizes the number of crossing nets. Previously, the Kernighan and Lin type (K&L) heuristics, the simulated annealing approach, and the spectral method were given to solve the problem. However, network flow techniques were overlooked as a viable approach to min-cut balanced bipartition due to their high complexity. In this paper we propose a balanced bipartition heuristic based on repeated max-flow min-cut techniques, and give an efficient implementation that has the same asymptotic time complexity as that of one max-flow computation. We implemented our heuristic algorithm in a package called FBB. The experimental results demonstrate that FBB outperforms the K&L heuristics and the spectral method in terms of the number of crossing nets, and the efficient implementation makes it possible to partition large circuit instances with reasonable runtime. For example, the average elapsed time for bipartitioning a circuit S35932 of almost 20K gates is less than 20 minutes.

Journal ArticleDOI
TL;DR: In this paper, a new approximation heuristic for finding a rectilinear Steiner tree of a set of nodes is presented, which starts with a minimum spanning tree of the nodes and repeatedly connects a node to the nearest point on the rectangular layout of an edge.
Abstract: A new approximation heuristic for finding a rectilinear Steiner tree of a set of nodes is presented. It starts with a rectilinear minimum spanning tree of the nodes and repeatedly connects a node to the nearest point on the rectangular layout of an edge, removing the longest edge of the loop thus formed. A simple implementation of the heuristic using conventional data structures is compared with previously existing algorithms. The performance (i.e., quality of the route produced) of our algorithm is as good as the best reported algorithm, while the running time is an order of magnitude better than that of this best algorithm. It is also shown that the asymptotic time complexity for the algorithm can be improved to O(n log n), where n is the number of points in the set.

Journal ArticleDOI
TL;DR: A wide class of problems, the divide & conquer class (D&Q), is shown to be easily and efficiently solvable on the HHC topology, and parallel algorithms are provided to describe how a D&Q problem can be solved efficiently on an HHC structure.
Abstract: Interconnection networks play a crucial role in the performance of parallel systems. This paper introduces a new interconnection topology that is called the hierarchical hypercube (HHC). This topology is suitable for massively parallel systems with thousands of processors. An appealing property of this network is the low number of connections per processor, which enhances the VLSI design and fabrication of the system. Other alluring features include symmetry and logarithmic diameter, which imply easy and fast algorithms for communication. Moreover, the HHC is scalable; that is, it can embed HHC's of lower dimensions. The paper presents two algorithms for data communication in the HHC. The first algorithm is for one-to-one transfer, and the second is for one-to-all broadcasting. Both algorithms take O(log₂ k), where k is the total number of processors in the system. A wide class of problems, the divide & conquer class (D&Q), is shown to be easily and efficiently solvable on the HHC topology. Parallel algorithms are provided to describe how a D&Q problem can be solved efficiently on an HHC structure. The solution of a D&Q problem instance having up to k inputs requires a time complexity of O(log₂ k).

Journal ArticleDOI
TL;DR: A set of constraints is identified which gives rise to a class of tractable problems, and polynomial time algorithms are given for solving such problems; it is proved that the class of problems generated by any set of constraints not contained in this restricted set is NP-complete.