
Showing papers on "Time complexity published in 1990"


Journal ArticleDOI
TL;DR: The problem of minimizing the total tardiness for a set of independent jobs on one machine is considered and is shown to be NP-hard in the ordinary sense.
Abstract: The problem of minimizing the total tardiness for a set of independent jobs on one machine is considered. Lawler has given a pseudo-polynomial-time algorithm to solve this problem. In spite of extensive research efforts for more than a decade, the question of whether it can be solved in polynomial time or is NP-hard in the ordinary sense remained open. In this paper the problem is shown to be NP-hard in the ordinary sense.

721 citations
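
For reference, the objective can be written out as follows (standard scheduling notation, not a quotation from the paper): a sequence σ of the n jobs determines completion times C_j(σ), each job j has a due date d_j, and the quantity to minimize is the total tardiness.

```latex
% Total tardiness under a sequence \sigma: each job j has
% completion time C_j(\sigma) and due date d_j.
\min_{\sigma} \; \sum_{j=1}^{n} T_j(\sigma),
\qquad T_j(\sigma) = \max\bigl(0,\; C_j(\sigma) - d_j\bigr)
```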


Journal ArticleDOI
18 Jun 1990
TL;DR: Algorithms are presented that solve the emptiness problem without explicitly constructing the strongly connected components of the graph representing the product automaton; by allowing the algorithms to err with some probability, they can be implemented with O(n) bits of randomly accessed memory.
Abstract: This article addresses the problem of designing memory-efficient algorithms for the verification of temporal properties of finite-state programs. Both the programs and their desired temporal properties are modeled as automata on infinite words (Büchi automata). Verification is then reduced to checking the emptiness of the automaton resulting from the product of the program and the property. This problem is usually solved by computing the strongly connected components of the graph representing the product automaton. Here, we present algorithms that solve the emptiness problem without explicitly constructing the strongly connected components of the product graph. By allowing the algorithms to err with some probability, we can implement them with a randomly accessed memory of size O(n) bits, where n is the number of states of the graph, instead of O(n log n) bits that the presently known algorithms require.

577 citations
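
A standard way to check Büchi emptiness without building strongly connected components is a nested depth-first search. The Python sketch below illustrates that idea; `successors` and `is_accepting` are caller-supplied placeholders, and this is a sketch of the technique rather than the paper's exact randomized, bit-state implementation.

```python
def accepting_cycle_exists(initial, successors, is_accepting):
    """Nested DFS: does the product automaton have a reachable cycle
    through an accepting state (i.e., is the language nonempty)?"""
    visited_outer, visited_inner = set(), set()

    def inner_dfs(start):
        # Look for a cycle back to `start` (the accepting seed).
        # visited_inner is shared across all inner searches, which
        # is what keeps the whole procedure linear time.
        stack = [start]
        while stack:
            s = stack.pop()
            for t in successors(s):
                if t == start:
                    return True          # closed an accepting cycle
                if t not in visited_inner:
                    visited_inner.add(t)
                    stack.append(t)
        return False

    def outer_dfs(s):
        visited_outer.add(s)
        for t in successors(s):
            if t not in visited_outer and outer_dfs(t):
                return True
        # Postorder: seed the inner search from accepting states.
        return is_accepting(s) and inner_dfs(s)

    return outer_dfs(initial)
```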


Journal ArticleDOI
01 Aug 1990-Networks
TL;DR: This article characterizes all independence assertions that logically follow from the topology of a network and develops a linear time algorithm that identifies these assertions; the algorithm is also shown to work for a broad class of nonprobabilistic independencies.
Abstract: An important feature of Bayesian networks is that they facilitate explicit encoding of information about independencies in the domain, information that is indispensable for efficient inferencing. This article characterizes all independence assertions that logically follow from the topology of a network and develops a linear time algorithm that identifies these assertions. The algorithm's correctness is based on the soundness of a graphical criterion, called d-separation, and its optimality stems from the completeness of d-separation. An enhanced version of d-separation, called D-separation, is defined, extending the algorithm to networks that encode functional dependencies. Finally, the algorithm is shown to work for a broad class of nonprobabilistic independencies.

553 citations
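
The linear-time identification of d-separated nodes can be phrased as reachability over (node, direction-of-entry) pairs. The Python sketch below shows that reachability formulation under the usual DAG conventions; it is an illustration of the criterion, not necessarily the article's exact procedure.

```python
from collections import deque

def d_separated(parents, x, y, z):
    """True iff x and y are d-separated given the evidence set z.
    `parents` maps every node of the DAG to its list of parents
    (roots map to an empty list)."""
    children = {n: [] for n in parents}
    for n, ps in parents.items():
        for p in ps:
            children[p].append(n)

    # z together with its ancestors (needed for the collider rule).
    anc, stack = set(), list(z)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents[n])

    # BFS over (node, entered-from-child?) states.
    seen = set()
    queue = deque([(x, True)])   # start as if entered from a child
    while queue:
        n, up = queue.popleft()
        if (n, up) in seen:
            continue
        seen.add((n, up))
        if n == y:
            return False         # active trail reaches y
        if up and n not in z:
            # Chain/fork: continue to parents and to children.
            for p in parents[n]:
                queue.append((p, True))
            for c in children[n]:
                queue.append((c, False))
        elif not up:
            if n not in z:
                for c in children[n]:
                    queue.append((c, False))
            if n in anc:         # collider with evidence below it
                for p in parents[n]:
                    queue.append((p, True))
    return True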


Journal ArticleDOI
TL;DR: A probabilistic algorithm for counting the number of unique values in the presence of duplicates is presented; it has O(q) time complexity and produces an estimate with an arbitrary accuracy prespecified by the user, using only a small amount of space.
Abstract: We present a probabilistic algorithm for counting the number of unique values in the presence of duplicates. This algorithm has O(q) time complexity, where q is the number of values including duplicates, and produces an estimation with an arbitrary accuracy prespecified by the user using only a small amount of space. Traditionally, accurate counts of unique values were obtained by sorting, which has O(q log q) time complexity. Our technique, called linear counting, is based on hashing. We present a comprehensive theoretical and experimental analysis of linear counting. The analysis reveals an interesting result: A load factor (number of unique values/hash table size) much larger than 1.0 (e.g., 12) can be used for accurate estimation (e.g., 1% of error). We present this technique with two important applications to database problems: namely, (1) obtaining the column cardinality (the number of unique values in a column of a relation) and (2) obtaining the join selectivity (the number of unique values in the join column resulting from an unconditional join divided by the number of unique join column values in the relation to be joined). These two parameters are important statistics that are used in relational query optimization and physical database design.

448 citations
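
The core of linear counting is a bitmap, a hash function, and a maximum-likelihood correction for hash collisions. A minimal Python sketch follows; the choice of blake2b as the hash and the caller-supplied bitmap size are illustrative assumptions.

```python
import math
import hashlib

def linear_count(values, bitmap_size):
    """Linear-counting estimate of the number of distinct values.
    The load factor (distinct values / bitmap_size) may be well
    above 1.0 and still give accurate estimates."""
    bitmap = bytearray(bitmap_size)
    for v in values:
        digest = hashlib.blake2b(str(v).encode(), digest_size=8).digest()
        bitmap[int.from_bytes(digest, "big") % bitmap_size] = 1
    zero_fraction = bitmap.count(0) / bitmap_size
    if zero_fraction == 0:
        raise ValueError("bitmap saturated; increase bitmap_size")
    # n_hat = -m * ln(V_n), where V_n is the fraction of empty slots.
    return -bitmap_size * math.log(zero_fraction)

# Example: estimate the column cardinality of a data stream.
# linear_count(rows, 1 << 16)
```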


Proceedings ArticleDOI
Jon Louis Bentley
01 May 1990
TL;DR: Other new techniques can be applied to general K-d trees, leading to a data structure that is significantly faster and less vulnerable to pathological inputs than ordinary K-d trees.
Abstract: A K-d tree represents a set of N points in K-dimensional space. Operations on a semidynamic tree may delete and undelete points, but may not insert new points. This paper shows that several operations that require O(log N) expected time in general K-d trees may be performed in constant expected time in semidynamic trees. These operations include deletion, undeletion, nearest neighbor searching, and fixed-radius near neighbor searching (the running times of the first two are proved, while the last two are supported by experiments and heuristic arguments). Other new techniques can also be applied to general K-d trees: simple sampling reduces the time to build a tree from O(KN log N) to O(KN + N log N), and more advanced sampling builds a robust tree in the same time. The methods are straightforward to implement, and lead to a data structure that is significantly faster and less vulnerable to pathological inputs than ordinary K-d trees.

388 citations
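
For readers unfamiliar with the underlying structure, a plain (non-semidynamic) K-d tree can be built by median splitting, cycling through the K coordinates by depth. The sketch below shows that baseline construction only, not the paper's sampling-based build or semidynamic operations.

```python
def build_kdtree(points, depth=0):
    """Plain K-d tree by median splitting. `points` is a list of
    equal-length coordinate tuples; returns a nested tuple
    (point, left_subtree, right_subtree), or None for an empty set."""
    if not points:
        return None
    axis = depth % len(points[0])        # cycle through the K axes
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2               # median along this axis
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

# Example: tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7)])
```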


Journal ArticleDOI
TL;DR: This work develops a new approach to solving minimum-cost circulation problems that combines methods for solving the maximum flow problem with successive approximation techniques based on cost scaling, and shows that a minimum-cost circulation can be computed by solving a sequence of O(n log(nC)) blocking flow problems.
Abstract: We develop a new approach to solving minimum-cost circulation problems. Our approach combines methods for solving the maximum flow problem with successive approximation techniques based on cost scaling. We measure the accuracy of a solution by the amount by which the complementary slackness conditions are violated. We propose a simple minimum-cost circulation algorithm, one version of which runs in O(n^3 log(nC)) time on an n-vertex network with integer arc costs of absolute value at most C. By incorporating sophisticated data structures into the algorithm, we obtain a time bound of O(nm log(n^2/m) log(nC)) on a network with m arcs. A slightly different use of our approach shows that a minimum-cost circulation can be computed by solving a sequence of O(n log(nC)) blocking flow problems. A corollary of this result is an O(n^2 log n log(nC))-time, m-processor parallel minimum-cost circulation algorithm. Our approach also yields strongly polynomial minimum-cost circulation algorithms. Our results provide evidence that the minimum-cost circulation problem is not much harder than the maximum flow problem. We believe that a suitable implementation of our method will perform extremely well in practice.

331 citations
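
The accuracy measure mentioned in the abstract is commonly formalized as ε-optimality with respect to node potentials (our paraphrase of the standard formulation, not a quotation from the paper):

```latex
% A circulation is \epsilon-optimal w.r.t. node potentials p if
% every arc (v,w) of the residual network has reduced cost at
% least -\epsilon:
c_p(v,w) \;=\; c(v,w) + p(v) - p(w) \;\ge\; -\epsilon
\quad\text{for all residual arcs } (v,w).
% \epsilon = 0 is exact complementary slackness; with integer
% costs, \epsilon < 1/n already implies optimality, which is why
% O(\log(nC)) halvings of \epsilon suffice.
```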


Journal ArticleDOI
TL;DR: The Boolean hierarchy is generalized in such a way that it is possible to characterize $P^{\text{NP}}$ and $P^{\text{NP}}[O(\log n)]$ in terms of the generalization, and the class $P^{\text{NP}}[O(\log n)]$ can be characterized in very different ways.
Abstract: Polynomial time machines having restricted access to an NP oracle are investigated. Restricted access means that the number of queries to the oracle is restricted or the way in which the queries are made is restricted (e.g., queries made during truth-table reductions). Very different kinds of such restrictions result in the same or comparable complexity classes. In particular, the class $P^{\text{NP}}[O(\log n)]$ can be characterized in very different ways. Furthermore, the Boolean hierarchy is generalized in such a way that it is possible to characterize $P^{\text{NP}}$ and $P^{\text{NP}}[O(\log n)]$ in terms of the generalization.

328 citations


Book
06 Apr 1990
TL;DR: This work formalizes a notion of learning that characterizes the training of feed-forward networks and introduces a perspective on shallow networks, called the Support Cone Interaction graph, which is helpful in distinguishing tractable from intractable subcases.
Abstract: We formalize a notion of learning that characterizes the training of feed-forward networks. In the field of learning theory, it stands as a new model specialized for the type of learning problems that arise in connectionist networks. The formulation is similar to Valiant's (Val84) in that we ask what can be feasibly learned from examples and stored in a particular data structure. One can view the data structure resulting from Valiant-type learning as a 'sentence' in a language described by grammatical syntax rules. Neither the words nor their interrelationships are known a priori. Our learned data structure is more particular than Valiant's in that it must be a particular 'sentence'. The position and relationships of each 'word' are fully specified in advance, and the learning system need only discover what the missing words are. This corresponds to the problem of finding retrieval functions for each node in a given network. We prove this problem NP-complete and thus demonstrate that learning in networks has no efficient general solution. Corollaries to the main theorem demonstrate the NP-completeness of several sub-cases. While the intractability of the problem precludes its solution in all these cases, we sketch some alternative definitions of the problem in a search for tractable sub-cases. One broad class of subcases is formed by placing constraints on the network architecture; we study one type in particular. The focus of these constraints is on families of 'shallow' architectures which are defined to have bounded depth and unbounded width. We introduce a perspective on shallow networks, called the Support Cone Interaction (SCI) graph, which is helpful in distinguishing tractable from intractable subcases: When the SCI graph has tree-width O(log n), learning can be accomplished in polynomial time; when its tree-width is $n^{\Omega(1)}$ we find the problem NP-complete even if the SCI graph is a simple 2-dimensional planar grid.

317 citations


Proceedings ArticleDOI
01 Apr 1990
TL;DR: Pseudorandom generators are constructed which convert O(S log R) truly random bits to R bits that appear random to any algorithm that runs in SPACE(S); in particular, any randomized polynomial-time algorithm that runs in space S can be simulated using only O(S log n) random bits.
Abstract: Pseudorandom generators are constructed which convert O(S log R) truly random bits to R bits that appear random to any algorithm that runs in SPACE(S). In particular, any randomized polynomial time algorithm that runs in space S can be simulated using only O(S log n) random bits. An application of these generators is an explicit construction of universal traversal sequences (for arbitrary graphs) of length n^{O(log n)}.

311 citations
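
The generator has a simple recursive shape: each of k pairwise-independent hash functions doubles the output length. The toy Python sketch below shows that recursion only; the affine maps modulo a Mersenne prime standing in for the hash family, the block width, and k = 10 are all illustrative assumptions.

```python
import random

def space_prg(seed_block, hash_params, prime):
    """Recursive expansion G_k(x) = G_{k-1}(x) || G_{k-1}(h_k(x)),
    where h_i(x) = (a_i * x + b_i) mod prime plays the role of a
    pairwise-independent hash function. Returns 2^k blocks."""
    if not hash_params:
        return [seed_block]
    *rest, (a, b) = hash_params
    left = space_prg(seed_block, rest, prime)
    right = space_prg((a * seed_block + b) % prime, rest, prime)
    return left + right

# Expand one ~61-bit block into 2^10 blocks with 10 hash functions.
prime = (1 << 61) - 1
params = [(random.randrange(1, prime), random.randrange(prime))
          for _ in range(10)]
stream = space_prg(random.randrange(prime), params, prime)
```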


Journal ArticleDOI
David Eppstein
TL;DR: A new algorithm based on breadth-first search is presented that runs in faster asymptotic time than Natarajan’s algorithms, and in addition finds the shortest possible reset sequence if such a sequence exists.
Abstract: Natarajan reduced the problem of designing a certain type of mechanical parts orienter to that of finding reset sequences for monotonic deterministic finite automata. He gave algorithms that in polynomial time either find such sequences or prove that no such sequence exists. In this paper a new algorithm based on breadth-first search is presented that runs in faster asymptotic time than Natarajan's algorithms, and in addition finds the shortest possible reset sequence if such a sequence exists. Tight bounds on the length of the minimum reset sequence are given. The time and space bounds of another algorithm given by Natarajan are further improved. That algorithm finds reset sequences for arbitrary deterministic finite automata when all states are initially possible.

291 citations
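
For general automata, a simple (not Eppstein's specialized) way to find a reset sequence is greedy pair merging: repeatedly BFS on the pair automaton for a word that collapses two of the still-possible states. A Python sketch of that generic approach:

```python
from collections import deque

def reset_sequence(states, alphabet, delta):
    """Greedy search for a reset (synchronizing) sequence of the DFA
    with transition function delta(state, letter). Returns a list of
    letters driving every state to one state, or None."""

    def apply_word(s, w):
        for a in w:
            s = delta(s, a)
        return s

    def merge_word(p, q):
        # BFS over state pairs for a shortest word sending p and q
        # to a common state.
        prev = {(p, q): None}
        queue = deque([(p, q)])
        while queue:
            pair = queue.popleft()
            if pair[0] == pair[1]:
                word = []
                while prev[pair] is not None:
                    pair, letter = prev[pair]
                    word.append(letter)
                return word[::-1]
            for a in alphabet:
                nxt = (delta(pair[0], a), delta(pair[1], a))
                if nxt not in prev:
                    prev[nxt] = (pair, a)
                    queue.append(nxt)
        return None                      # p and q cannot be merged

    current, word = set(states), []
    while len(current) > 1:
        p, q = list(current)[:2]
        w = merge_word(p, q)
        if w is None:
            return None                  # automaton is not synchronizing
        word.extend(w)
        current = {apply_word(s, w) for s in current}
    return word
```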


Proceedings ArticleDOI
22 Oct 1990
TL;DR: The authors solve the two major open problems associated with noninteractive zero-knowledge proofs: how to enable polynomially many provers to prove in writing polynomially many theorems on the basis of a single random string, and how to construct such proofs under general (rather than number-theoretic) assumptions.
Abstract: The authors solve the two major open problems associated with noninteractive zero-knowledge proofs: how to enable polynomially many provers to prove in writing polynomially many theorems on the basis of a single random string, and how to construct such proofs under general (rather than number-theoretic) assumptions. The constructions can be used in cryptographic applications in which the prover is restricted to polynomial time, and they are much simpler than earlier (and less capable) proposals.

Journal ArticleDOI
TL;DR: An algorithm for this problem with time complexity O(n^2 3^n) is presented, which represents an improvement over the previous best algorithm.
Abstract: The ordered binary decision diagram is a canonical representation for Boolean functions, presented by R.E. Bryant (1985) as a compact representation for a broad class of interesting functions derived from circuits. However, the size of the diagram is very sensitive to the choice of ordering on the variables; hence, for some applications, such as differential cascode voltage switch (DCVS) trees, it becomes extremely important to find the ordering leading to the most compact representation. An algorithm for this problem with time complexity O(n^2 3^n) is presented. This represents an improvement over the previous best algorithm.

Journal ArticleDOI
TL;DR: It is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) which covers d-space, each of whose simplices intersects O(n/r) hyperplanes; the derandomization improves on various probabilistic bounds in geometric complexity.
Abstract: The combination of divide-and-conquer and random sampling has proven very effective in the design of fast geometric algorithms. A flurry of efficient probabilistic algorithms have been recently discovered, based on this happy marriage. We show that all those algorithms can be derandomized with only polynomial overhead. In the process we establish results of independent interest concerning the covering of hypergraphs and we improve on various probabilistic bounds in geometric complexity. For example, given n hyperplanes in d-space and any integer r large enough, we show how to compute, in polynomial time, a simplicial packing of size O(r^d) which covers d-space, each of whose simplices intersects O(n/r) hyperplanes.

Journal ArticleDOI
TL;DR: The idea of a priori optimization is introduced as a strategy competitive with reoptimization, under which the combinatorial optimization problem is solved optimally for every instance.
Abstract: Consider a complete graph G = (V, E) in which each node is present with probability p. We are interested in solving combinatorial optimization problems on subsets of nodes which are present with a certain probability. We introduce the idea of a priori optimization as a strategy competitive with the strategy of reoptimization, under which the combinatorial optimization problem is solved optimally for every instance. We consider four problems: the traveling salesman problem (TSP), the minimum spanning tree, vehicle routing, and traveling salesman facility location. We discuss the applicability of a priori optimization strategies in several areas and show that if the nodes are randomly distributed in the plane the a priori and reoptimization strategies are very close in terms of performance. We characterize the complexity of a priori optimization and address the question of approximating the optimal a priori solutions with polynomial time heuristics with provable worst-case guarantees. Finally, we use the TSP as an example to find practical solutions based on ideas of local optimality.

Book ChapterDOI
16 Jul 1990
TL;DR: In this paper, an efficient algorithm for the Relational Coarsest Partition with Stuttering (RCPS) problem is presented; it has time complexity O(n·(n+m)) and space complexity O(n+m), where for Kripke structures m ≤ n^2.
Abstract: This paper presents an efficient algorithm for the Relational Coarsest Partition with Stuttering problem (RCPS). The RCPS problem is closely related to the problem of deciding stuttering equivalence on finite state Kripke structures (see Browne, Clarke & Grumberg [3]), and to the problem of deciding branching bisimulation equivalence on finite state labelled transition systems (see Van Glabbeek & Weijland [12]). If n is the number of states and m the number of transitions, then our algorithm has time complexity O(n·(n+m)) and space complexity O(n+m). The algorithm induces algorithms for branching bisimulation and stuttering equivalence which have the same complexity. Since for Kripke structures m ≤ n^2, this confirms a conjecture of Browne, Clarke & Grumberg [3], that their O(n^5)-time algorithm for stuttering equivalence is not optimal.

Journal ArticleDOI
TL;DR: This paper describes how the Paige-Tarjan algorithm of complexity O(m log n) has been adapted to minimize labeled transition systems modulo bisimulation equivalence and is used in Aldebaran, a tool for the verification of concurrent systems.
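
As context, the partition these algorithms refine can also be computed by a naive fixpoint that splits blocks by transition signatures. The Python sketch below computes strong bisimulation classes this way; it is a simple O(n·m)-flavored fixpoint, not the O(m log n) Paige-Tarjan scheme the paper adapts, but it yields the same partition.

```python
def bisimulation_partition(states, transitions):
    """Naive partition refinement for strong bisimulation on a
    labelled transition system. `transitions` is an iterable of
    (source, action, target) triples."""
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}

        def signature(s):
            # The set of (action, target block) pairs leaving s.
            return frozenset((a, block_of[t])
                             for (src, a, t) in transitions if src == s)

        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            if len(groups) > 1:
                changed = True           # the block was split
            new_partition.extend(groups.values())
        partition = new_partition
    return partition
```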

Proceedings ArticleDOI
01 Jan 1990
TL;DR: In this paper, a new data structure, called a suffix array, is introduced for on-line string searches; it can be constructed in O(N) expected time regardless of alphabet size, and in practice suffix arrays use three to five times less space than suffix trees.
Abstract: A new and conceptually simple data structure, called a suffix array, for on-line string searches is introduced in this paper. Constructing and querying suffix arrays is reduced to a sort and search paradigm that employs novel algorithms. The main advantage of suffix arrays over suffix trees is that, in practice, they use three to five times less space. From a complexity standpoint, suffix arrays permit on-line string searches of the type, "Is W a substring of A?" to be answered in time O(P + log N), where P is the length of W and N is the length of A, which is competitive with (and in some cases slightly better than) suffix trees. The only drawback is that in those instances where the underlying alphabet is finite and small, suffix trees can be constructed in O(N) time in the worst case, versus O(N log N) time for suffix arrays. However, we give an augmented algorithm that, regardless of the alphabet size, constructs suffix arrays in O(N) expected time, albeit with lesser space efficiency. We believe that suffix arrays will prove to be better in practice than suffix trees for many applications.
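
A minimal illustration of the sort-and-search paradigm: build the suffix array by sorting the suffixes outright (much simpler and slower than the paper's construction) and answer substring queries by binary search in O(P log N) character comparisons.

```python
def build_suffix_array(text):
    """Suffix array by direct sorting of suffixes -- O(N^2 log N)
    worst case, shown only to fix ideas; the paper constructs the
    array in O(N log N) time (O(N) expected, augmented version)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def is_substring(text, sa, w):
    """Answer "Is W a substring of A?" by lower-bound binary search
    over the suffix array."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(w)] < w:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:sa[lo] + len(w)] == w

# Example:
text = "mississippi"
sa = build_suffix_array(text)
assert is_substring(text, sa, "ssip") and not is_substring(text, sa, "spi")
```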

Journal ArticleDOI
Dana Angluin
TL;DR: There is no polynomial time algorithm using only equivalence queries that exactly identifies deterministic or nondeterministic finite state acceptors, context free grammars, or disjunctive or conjunctive normal form boolean formulas.
Abstract: We consider the problem of exact identification of classes of concepts using only equivalence queries. We define a combinatorial property, approximate fingerprints, of classes of concepts and show that no class with this property can be exactly identified in polynomial time using only equivalence queries. As applications of this general theorem, we show that there is no polynomial time algorithm using only equivalence queries that exactly identifies deterministic or nondeterministic finite state acceptors, context free grammars, or disjunctive or conjunctive normal form boolean formulas.

Journal ArticleDOI
TL;DR: Probabilistic algorithms are proposed to overcome the difficulty of designing a ring of n processors such that they will be able to choose a leader by sending messages along the ring, if the processors are indistinguishable.
Abstract: Given a ring of n processors, it is required to design the processors such that they will be able to choose a leader (a uniquely designated processor) by sending messages along the ring. If the processors are indistinguishable, then there exists no deterministic algorithm to solve the problem. To overcome this difficulty, probabilistic algorithms are proposed. The algorithms may run forever, but they terminate within finite time on the average. For the synchronous case several algorithms are presented: the simplest requires, on the average, the transmission of no more than 2.442n bits and O(n) time. More sophisticated algorithms trade time for communication complexity. If the processors work asynchronously, then on the average O(n log n) bits are transmitted. In the above cases the size of the ring is assumed to be known to all the processors. If the size is not known, then finding it may be done only with high probability: any algorithm may yield incorrect results (with nonzero probability) for some values of n. Another difficulty is that, if we insist on correctness, the processors may not explicitly terminate. Rather, the entire ring reaches an inactive state, in which no processor initiates communication.
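
A round-level Monte-Carlo simulation conveys the synchronous idea: every active candidate draws a random ID each round, only the maximum-holders survive, and the election ends when the maximum is unique. This is an illustrative simulation of the mechanism only (message passing and bit counts are abstracted away), not the paper's algorithm verbatim.

```python
import random

def elect_leader(n, id_range=None):
    """Simulate anonymous-ring leader election with known ring size n.
    Each round, active candidates draw random IDs; a circulating
    token would let each one learn the maximum and how many drew it.
    Terminates with probability 1. Returns (leader_position, rounds)."""
    if id_range is None:
        id_range = n
    active = list(range(n))
    rounds = 0
    while True:
        rounds += 1
        draws = {p: random.randrange(id_range) for p in active}
        best = max(draws.values())
        winners = [p for p in active if draws[p] == best]
        if len(winners) == 1:
            return winners[0], rounds
        active = winners      # tie: only maximum-holders stay active
```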

Proceedings Article
01 Sep 1990
TL;DR: This paper describes and measures the performance of the Starburst join enumerator, which can parametrically adjust for each query the space of join sequences that are evaluated by the optimizer, to allow or disallow composite tables as the inner operand of a join.
Abstract: Since relational database management systems typically support only dyadic join operators as primitive operations, a query optimizer must choose the "best" sequence of two-way joins to achieve the N-way join of tables requested by a query. The computational complexity of this optimization process is dominated by the number of such possible sequences that must be evaluated by the optimizer. This paper describes and measures the performance of the Starburst join enumerator, which can parametrically adjust for each query the space of join sequences that are evaluated by the optimizer to allow or disallow (1) composite tables (i.e., tables that are themselves the result of a join) as the inner operand of a join and (2) joins between two tables having no join predicate linking them (i.e., Cartesian products). To limit the size of their optimizer's search space, most earlier systems excluded both of these types of plans, which can execute significantly faster for some queries. By experimentally varying the parameters of the Starburst join enumerator, we have validated analytic formulas for the number of join sequences under a variety of conditions, and have proven their dependence upon the "shape" of the query. Specifically, "linear" queries, in which tables are connected by binary predicates in a straight line, can be optimized in polynomial time. Hence the dynamic programming techniques of System R and R* can still be used to optimize linear queries of as many as 100 tables in a reasonable amount of time! A query optimizer in a relational DBMS translates non-procedural queries into a procedural plan for execution, typically by generating many alternative plans, estimating the execution cost of each, and choosing the plan having the lowest estimated cost. Increasing this set of feasible plans that it evaluates improves the chances (but does not guarantee!) that it will find a better plan, while increasing the (compile-time) cost for it to optimize the query. A major challenge in the design of a query optimizer is to ensure that the set of feasible plans contains efficient plans without making the set too big to be generated practically.
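
The search space being parameterized here is the classic dynamic program over table subsets. The Python sketch below enumerates bushy plans (composite inners allowed) while excluding Cartesian products; `cost` and `connected` are caller-supplied stand-ins for the optimizer's cost model and predicate graph, not Starburst's actual interfaces.

```python
from itertools import combinations

def best_join_order(tables, cost, connected):
    """System R-style dynamic programming over table subsets.
    cost(left_plan, right_plan) prices a two-way join; connected(a, b)
    says whether any join predicate links the two subsets (used to
    disallow Cartesian products)."""
    best = {frozenset([t]): (0, t) for t in tables}
    for size in range(2, len(tables) + 1):
        for subset in map(frozenset, combinations(tables, size)):
            for k in range(1, size):
                for left in map(frozenset, combinations(subset, k)):
                    right = subset - left
                    if left not in best or right not in best:
                        continue
                    if not connected(left, right):
                        continue          # skip Cartesian products
                    c = (best[left][0] + best[right][0]
                         + cost(best[left][1], best[right][1]))
                    if subset not in best or c < best[subset][0]:
                        best[subset] = (c, (best[left][1],
                                            best[right][1]))
    return best.get(frozenset(tables))    # (cost, nested plan) or None
```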

Journal ArticleDOI
TL;DR: It is concluded that a simple and fast heuristic algorithm, such as HNF, may be sufficient to achieve adequate performance in terms of program execution time and processors' idle time.

Journal ArticleDOI
P. M. Vaidya
TL;DR: The worst-case running time of the algorithm is better than that of Karmarkar's algorithm by a factor of $\sqrt{m+n}$.
Abstract: We present an algorithm for linear programming which requires O(((m+n)n^2 + (m+n)^{1.5}n)L) arithmetic operations, where m is the number of constraints and n is the number of variables. Each operation is performed to a precision of O(L) bits. L is bounded by the number of bits in the input. The worst-case running time of the algorithm is better than that of Karmarkar's algorithm by a factor of $\sqrt{m+n}$.

Journal ArticleDOI
TL;DR: An algorithm for linear and convex quadratic programming problems is described that uses power series approximation of the weighted barrier path passing through the current iterate in order to find the next iterate; for r = 1 it can be interpreted as an affine scaling algorithm in the primal-dual setup.
Abstract: We describe an algorithm for linear and convex quadratic programming problems that uses power series approximation of the weighted barrier path that passes through the current iterate in order to find the next iterate. If r ≥ 1 is the order of approximation used, we show that our algorithm has time complexity O(n^{(1+1/r)/2} L^{1+1/r}) iterations and O(n^3 + n^2 r) arithmetic operations per iteration, where n is the dimension of the problem and L is the size of the input data. When r = 1, we show that the algorithm can be interpreted as an affine scaling algorithm in the primal-dual setup.

Proceedings ArticleDOI
22 Oct 1990
TL;DR: It is shown that the class of languages having two-prover interactive proof systems is computable in nondeterministic exponential time (NEXP), which represents a further step demonstrating the unexpectedly immense power of randomization and interaction in efficient provability.
Abstract: The exact power of two-prover interactive proof systems (MIP) introduced by M. Ben-Or et al. (Proc. 20th Symp. on Theory of Computing, 1988, p.113-31) is determined. In this system, two all-powerful noncommunicating provers convince a randomizing polynomial-time verifier in polynomial time that the input x belongs to the language L. It was previously suspected (and proved in a relativized sense) that coNP-complete languages do not admit such proof systems. In sharp contrast, it is shown that the class of languages having two-prover interactive proof systems is computable in nondeterministic exponential time (NEXP). This represents a further step demonstrating the unexpectedly immense power of randomization and interaction in efficient provability.

Journal ArticleDOI
TL;DR: An O(n) algorithm for a singly constrained convex quadratic program using binary search to solve the Kuhn-Tucker system is given.
Abstract: This paper gives an O(n) algorithm for a singly constrained convex quadratic program using binary search to solve the Kuhn-Tucker system. Computational results indicate that a randomized version of this algorithm runs in expected linear time and is suitable for practical applications. For the nonconvex case an ε-approximate algorithm is proposed which is based on convex and piecewise linear approximations of the objective function.
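
To make the Kuhn-Tucker binary search concrete, here is a Python sketch for one simple member of the problem class: min Σ(x_i − a_i)^2 subject to Σx_i = b and box constraints. The KKT conditions give x_i(λ) = clip(a_i + λ) with Σx_i(λ) nondecreasing in λ, so plain bisection on the multiplier works (the paper's O(n) algorithm locates the multiplier more cleverly than this).

```python
def singly_constrained_qp(a, lo, hi, b, iters=100):
    """Bisection on the Kuhn-Tucker multiplier lam for
    min sum_i (x_i - a_i)^2  s.t.  sum_i x_i = b, lo_i <= x_i <= hi_i.
    Returns an (approximately) optimal point after `iters` steps."""
    assert sum(lo) <= b <= sum(hi), "infeasible"

    def x_of(lam):
        # KKT solution for a fixed multiplier: clip a_i + lam to the box.
        return [min(max(ai + lam, l), h)
                for ai, l, h in zip(a, lo, hi)]

    # Bracket lam so that all variables sit at lower / upper bounds.
    lam_lo = min(l - ai for ai, l in zip(a, lo))
    lam_hi = max(h - ai for ai, h in zip(a, hi))
    for _ in range(iters):
        lam = (lam_lo + lam_hi) / 2
        if sum(x_of(lam)) < b:
            lam_lo = lam      # sum too small: raise the multiplier
        else:
            lam_hi = lam
    return x_of((lam_lo + lam_hi) / 2)
```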

Journal ArticleDOI
TL;DR: Two algorithms for the k-satisfiability problem are presented, and a probabilistic analysis shows that the first algorithm finds a solution with probability approaching one for a wide range of parameter values.

Book ChapterDOI
01 Jan 1990
TL;DR: In this paper, an algorithm called quickscore is presented for computing the posterior probability of each disease, given a set of observed findings, in a probabilistic model for the diagnosis of multiple diseases.
Abstract: We examine a probabilistic model for the diagnosis of multiple diseases. In the model, diseases and findings are represented as binary variables. Also, diseases are marginally independent, findings are conditionally independent given disease instances, and diseases interact to produce findings via a noisy OR-gate. An algorithm for computing the posterior probability of each disease, given a set of observed findings, called quickscore, is presented. The time complexity of the algorithm is O(n m^- 2^{m^+}), where n is the number of diseases, m^+ is the number of positive findings and m^- is the number of negative findings. Although the time complexity of quickscore is exponential in the number of positive findings, the algorithm is useful in practice because the number of observed positive findings is usually far less than the number of diseases under consideration. Performance results for quickscore applied to a probabilistic version of Quick Medical Reference (QMR) are provided.
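
The inclusion-exclusion over positive findings that gives the 2^{m^+} factor can be written down directly. The Python sketch below computes evidence probabilities and disease posteriors for a leak-free noisy-OR model; the absence of a leak term and the prior/q parameterization are simplifying assumptions of this sketch.

```python
from itertools import combinations

def evidence_prob(prior, q, pos, neg, clamp=None):
    """Inclusion-exclusion over the positive findings for a noisy-OR
    model without a leak term. prior[i] = P(d_i = 1) and
    q[i][j] = P(finding j is activated by disease i alone).
    With clamp = i, computes P(evidence, d_i = 1) instead."""
    total = 0.0
    for r in range(len(pos) + 1):
        for subset in combinations(pos, r):
            blocked = list(subset) + list(neg)   # findings forced off
            term = 1.0
            for i, p in enumerate(prior):
                stay_off = 1.0
                for j in blocked:
                    stay_off *= 1.0 - q[i][j]
                if i == clamp:
                    term *= p * stay_off         # disease forced on
                else:
                    term *= p * stay_off + (1.0 - p)
            total += (-1.0) ** r * term
    return total

def posterior(prior, q, pos, neg, disease):
    """P(d_disease = 1 | positive findings pos, negative findings neg)."""
    return (evidence_prob(prior, q, pos, neg, clamp=disease)
            / evidence_prob(prior, q, pos, neg))
```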

Journal ArticleDOI
TL;DR: A model of polynomial-time concept prediction, a relaxation of the distribution-independent model of concept learning due to Valiant, is investigated; prediction-preserving reductions are defined and shown to be effective tools for comparing the relative difficulty of various prediction problems.

Journal ArticleDOI
TL;DR: Though the algorithm itself is simple, the global evolution of the underlying partition is non-trivial, which makes the analysis of the algorithm theoretically interesting in its own right.

Proceedings ArticleDOI
22 Oct 1990
TL;DR: A linear-time deterministic algorithm for triangulating a simple polygon is developed that does not need dynamic search trees, finger trees, or fancy point location structures.
Abstract: A linear-time deterministic algorithm for triangulating a simple polygon is developed. The algorithm is elementary in that it does not require the use of any complicated data structures; in particular, it does not need dynamic search trees, finger trees, or fancy point location structures.
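
Chazelle's linear-time algorithm is famously intricate; for contrast, the classical O(n^2) ear-clipping method below triangulates a simple polygon in a few lines. This is a different, much slower technique, shown only to fix ideas, and it assumes counterclockwise vertices in general position.

```python
def triangulate(poly):
    """Ear clipping: O(n^2) triangulation of a simple polygon given
    as a list of (x, y) vertices in counterclockwise order. Returns
    index triples into `poly`."""
    def cross(o, a, b):
        return ((a[0] - o[0]) * (b[1] - o[1])
                - (a[1] - o[1]) * (b[0] - o[0]))

    def in_triangle(p, a, b, c):
        # Point-in-CCW-triangle test via three half-plane checks.
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    verts = list(range(len(poly)))
    triangles = []
    while len(verts) > 3:
        for k in range(len(verts)):
            i, j, l = (verts[k - 1], verts[k],
                       verts[(k + 1) % len(verts)])
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:
                continue                 # reflex vertex: not an ear
            if any(in_triangle(poly[m], a, b, c)
                   for m in verts if m not in (i, j, l)):
                continue                 # another vertex inside the ear
            triangles.append((i, j, l))
            verts.pop(k)                 # clip the ear
            break
    triangles.append(tuple(verts))
    return triangles
```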