
Showing papers on "Time complexity published in 1988"


Book
01 Jan 1988
TL;DR: In this book, for which the authors received the Fulkerson Prize awarded by the Mathematical Programming Society and the American Mathematical Society, geometric techniques are developed for proving the polynomial time solvability of problems in convexity theory, geometry, and combinatorial optimization.
Abstract: This book develops geometric techniques for proving the polynomial time solvability of problems in convexity theory, geometry, and - in particular - combinatorial optimization. It offers a unifying approach based on two fundamental geometric algorithms: - the ellipsoid method for finding a point in a convex set and - the basis reduction method for point lattices. The ellipsoid method was used by Khachiyan to show the polynomial time solvability of linear programming. The basis reduction method yields a polynomial time procedure for certain diophantine approximation problems. A combination of these techniques makes it possible to show the polynomial time solvability of many questions concerning polyhedra - for instance, of linear programming problems having possibly exponentially many inequalities. Utilizing results from polyhedral combinatorics, it provides short proofs of the polynomial time solvability of many combinatorial optimization problems. For a number of these problems, the geometric algorithms discussed in this book are the only techniques known to derive polynomial time solvability. This book is a continuation and extension of previous research of the authors for which they received the Fulkerson Prize, awarded by the Mathematical Programming Society and the American Mathematical Society.
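The ellipsoid method mentioned above admits a compact one-step summary. Given a current ellipsoid E_k = {x : (x - x_k)^T B_k^{-1} (x - x_k) <= 1} and an inequality a^T x <= b violated at the center x_k, the central-cut update shrinks the ellipsoid around the feasible half. A standard textbook statement of the step (our notation, not quoted from the book):

```latex
% Central-cut ellipsoid update in R^n, applied when a^T x_k > b:
\begin{align*}
  x_{k+1} &= x_k - \frac{1}{n+1}\,\frac{B_k a}{\sqrt{a^{\mathsf{T}} B_k a}},\\
  B_{k+1} &= \frac{n^2}{n^2-1}\left( B_k
            - \frac{2}{n+1}\,\frac{(B_k a)(B_k a)^{\mathsf{T}}}{a^{\mathsf{T}} B_k a} \right).
\end{align*}
% Each step multiplies the ellipsoid's volume by at most e^{-1/(2(n+1))};
% this geometric volume decrease is what drives the polynomial bounds.
```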

3,676 citations


Proceedings ArticleDOI
Kenneth L. Clarkson
06 Jan 1988
TL;DR: Asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets, are given.
Abstract: Random sampling is used for several new geometric algorithms. The algorithms are “Las Vegas,” and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires O(A + n log n) expected time, where A is the size of the answer, the number of intersecting pairs reported. The algorithm requires O(n) space in the worst case. Another algorithm computes the convex hull of a point set in E^3 in O(n log A) expected time, where n is the number of points and A is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in O(n log log n) expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets.

1,163 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that generates all maximal independent sets of a graph in lexicographic order, with only polynomial delay between the output of two successive independent sets; generating them in reverse lexicographic order, by contrast, is shown to be impossible with polynomial delay unless P=NP.

862 citations


Journal ArticleDOI
TL;DR: Two polynomial-time algorithms are given for scheduling conversations in a spread spectrum radio network; the second produces a routing vector and a compatible link schedule that jointly meet a prespecified end-to-end demand with a schedule of the smallest possible length.
Abstract: Two polynomial-time algorithms are given for scheduling conversations in a spread spectrum radio network. The constraint on conversations is that each station can converse with only one other station at a time. The first algorithm is strongly polynomial and finds a schedule of minimum length that allows each pair of neighboring stations to converse directly for a prescribed length of time. The second algorithm is designed for the situation in which messages must be relayed multiple hops. The algorithm produces, in polynomial time, a routing vector and compatible link schedule that jointly meet a prespecified end-to-end demand, so that the schedule has the smallest possible length.
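Since the one-conversation-at-a-time constraint makes the set of links active in any slot a matching, a schedule is a decomposition of the link demands into matchings. The sketch below is a naive greedy illustration of that viewpoint (function names and the greedy rule are ours); the paper's algorithms need matching/flow machinery to guarantee minimum length.

```python
def greedy_link_schedule(demands):
    """demands: dict {(u, v): slots_needed}. Each round activates a
    maximal set of links sharing no station (a matching) and decrements
    their remaining demand. Greedy, so not necessarily minimum length."""
    demands = dict(demands)
    schedule = []
    while any(d > 0 for d in demands.values()):
        busy, slot = set(), []
        for (u, v), d in sorted(demands.items(), key=lambda kv: -kv[1]):
            if d > 0 and u not in busy and v not in busy:
                slot.append((u, v))
                busy |= {u, v}
                demands[(u, v)] -= 1
        schedule.append(slot)
    return schedule

for t, slot in enumerate(greedy_link_schedule({(0, 1): 2, (1, 2): 1, (2, 3): 2})):
    print(t, slot)
```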

602 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: This work gives a polynomial time approximation scheme that estimates the optimal number of clusters under the second measure of cluster size within factors arbitrarily close to 1 for a fixed cluster size.
Abstract: In a clustering problem, the aim is to partition a given set of n points in d-dimensional space into k groups, called clusters, so that points within each cluster are near each other. Two objective functions frequently used to measure the performance of a clustering algorithm are, for any L_p metric, (a) the maximum distance between pairs of points in the same cluster, and (b) the maximum distance between points in each cluster and a chosen cluster center; we refer to either measure as the cluster size. We show that one cannot approximate the optimal cluster size for a fixed number of clusters within a factor close to 2 in polynomial time, for two or more dimensions, unless P=NP. We also present an algorithm that achieves this factor of 2 in time O(n log k), and show that this running time is optimal in the algebraic decision tree model. For a fixed cluster size, on the other hand, we give a polynomial time approximation scheme that estimates the optimal number of clusters under the second measure of cluster size within factors arbitrarily close to 1. Our approach is extended to provide approximation algorithms for the restricted centers, suppliers, and weighted suppliers problems that run in optimal O(n log k) time and achieve optimal or nearly optimal approximation bounds.
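The factor-2 upper bound for measure (b) is classically achieved by the farthest-point greedy heuristic: each new center is the point farthest from the centers chosen so far. The sketch below is a plain O(nk) illustration of that guarantee (function names are ours), not the paper's optimal O(n log k) algorithm.

```python
import math

def greedy_k_centers(points, k):
    """Farthest-point (Gonzalez-style) heuristic: a 2-approximation
    for the k-center measure (max distance from a point to its
    nearest center). O(n*k) time."""
    centers = [points[0]]                      # arbitrary first center
    d = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=d.__getitem__)
        centers.append(points[i])              # farthest point becomes a center
        d = [min(d[j], math.dist(points[j], points[i])) for j in range(len(points))]
    return centers, max(d)                     # centers and achieved cluster size

pts = [(0, 0), (1, 0), (9, 9), (10, 9), (0, 10)]
print(greedy_k_centers(pts, 2))
```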

485 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: This algorithm improves the best previous strongly polynomial algorithm due to Galil and Tardos by a factor of m/n, and is even more efficient if the number of arcs with finite upper bounds, say m', is much less than m.
Abstract: We present a new strongly polynomial algorithm for the minimum cost flow problem, based on a refinement of the Edmonds-Karp scaling technique. Our algorithm solves the uncapacitated minimum cost flow problem as a sequence of O(n log n) shortest path problems on networks with n nodes and m arcs and runs in O(n log n(m + n log n)) steps. Using a standard transformation, this approach yields an O(m log n (m + n log n)) algorithm for the capacitated minimum cost flow problem. This algorithm improves the best previous strongly polynomial algorithm due to Galil and Tardos by a factor of m/n. Our algorithm is even more efficient if the number of arcs with finite upper bounds, say m', is much less than m. In this case, the number of shortest path problems solved is O((m + n) log n).
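The "sequence of shortest path problems" structure is easiest to see in the plain successive-shortest-path method, sketched below with Dijkstra plus node potentials. This is a hedged illustration assuming nonnegative arc costs; it omits the Edmonds-Karp scaling refinement that gives the paper its strongly polynomial bound.

```python
import heapq

def min_cost_flow(n, edges, s, t, maxflow):
    """edges: list of (u, v, capacity, cost), costs nonnegative.
    Successive shortest augmenting paths, Dijkstra with potentials."""
    graph = [[] for _ in range(n)]          # entries: [to, cap, cost, rev_index]
    def add(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add(u, v, cap, cost)

    flow = cost = 0
    pot = [0] * n                           # potentials keep reduced costs >= 0
    while flow < maxflow:
        dist = [float('inf')] * n
        prev = [None] * n                   # (node, edge index) on shortest path
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for i, (v, cap, c, _) in enumerate(graph[u]):
                nd = d + c + pot[u] - pot[v]
                if cap > 0 and nd < dist[v]:
                    dist[v] = nd
                    prev[v] = (u, i)
                    heapq.heappush(pq, (nd, v))
        if dist[t] == float('inf'):
            break                           # no more augmenting paths
        for v in range(n):
            if dist[v] < float('inf'):
                pot[v] += dist[v]
        aug, v = maxflow - flow, t          # bottleneck along the s-t path
        while v != s:
            u, i = prev[v]
            aug = min(aug, graph[u][i][1])
            v = u
        v = t                               # push aug units along the path
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= aug
            graph[v][graph[u][i][3]][1] += aug
            cost += aug * graph[u][i][2]
            v = u
        flow += aug
    return flow, cost

# tiny demo: two parallel routes from node 0 to node 3
edges = [(0, 1, 2, 1), (1, 3, 2, 1), (0, 2, 2, 2), (2, 3, 2, 2)]
print(min_cost_flow(4, edges, 0, 3, 3))     # (3, 8)
```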

352 citations


Proceedings ArticleDOI
24 Oct 1988
TL;DR: In this article, a randomized algorithm with O(1) worst-case time for lookup and O(1) amortized expected time for insertion and deletion was given for the dictionary problem.
Abstract: A randomized algorithm is given for the dictionary problem with O(1) worst-case time for lookup and O(1) amortized expected time for insertion and deletion. An Ω(log n) lower bound is proved for the amortized worst-case time complexity of any deterministic algorithm in a class of algorithms encompassing realistic hashing-based schemes. If the worst-case lookup time is restricted to k, then the lower bound for insertion becomes Ω(kn^{1/k}).
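The flavor of the O(1) worst-case lookup can be seen in the static two-level (FKS-style) construction such dynamic schemes build on: hash into n buckets, then give each bucket of size b a secondary table of size b², retrying a random universal hash until it is collision-free. A hedged Python sketch of the static case only (the paper's contribution is making this dynamic; names are ours):

```python
import random

P = (1 << 31) - 1                       # a prime larger than any key we hash

def make_hash(m):
    """Random universal hash h(x) = ((a*x + c) mod P) mod m."""
    a, c = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + c) % P) % m

def build_fks(keys):
    n = max(1, len(keys))
    while True:                         # retry top level until space is O(n)
        h = make_hash(n)
        buckets = [[] for _ in range(n)]
        for k in keys:
            buckets[h(k)].append(k)
        if sum(len(b) ** 2 for b in buckets) <= 4 * n:
            break
    tables = []
    for b in buckets:
        m = max(1, len(b) ** 2)
        while True:                     # retry until secondary hash is injective
            g = make_hash(m)
            slots = [None] * m
            ok = True
            for k in b:
                i = g(k)
                if slots[i] is not None:
                    ok = False
                    break
                slots[i] = k
            if ok:
                tables.append((g, slots))
                break
    def lookup(k):                      # two hash evaluations: O(1) worst case
        g, slots = tables[h(k)]
        return slots[g(k)] == k
    return lookup

lookup = build_fks([3, 17, 42, 1000003])
print(lookup(42), lookup(5))            # True False
```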

344 citations


Journal ArticleDOI
TL;DR: It is shown that for fields, linear statements can be transferred from characteristic zero to prime characteristic p, provided p is double exponential in the length of the statement, and lower bounds for these problems are established showing that the upper bounds are essentially tight.

301 citations


Proceedings ArticleDOI
01 Dec 1988
TL;DR: It is shown that training is NP-complete for many simple two-layer networks whose nodes compute linear threshold functions of their inputs; these networks thus differ fundamentally from the perceptron in a worst-case computational sense.
Abstract: We show for many simple two-layer networks whose nodes compute linear threshold functions of their inputs that training is NP-complete. For any training algorithm for one of these networks there will be some sets of training data on which it performs poorly, either by running for more than an amount of time polynomial in the input length, or by producing sub-optimal weights. Thus, these networks differ fundamentally from the perceptron in a worst-case computational sense.

252 citations


Journal ArticleDOI
TL;DR: New nonconstructive methods are employed to prove membership in P for a number of problems whose complexities are not otherwise known, and the utility of these methods is illustrated.
Abstract: Recent advances in graph theory and graph algorithms dramatically alter the traditional view of concrete complexity theory, in which a decision problem is generally shown to be in P by producing an efficient algorithm to solve an optimization version of the problem. Nonconstructive tools are now available for classifying problems as decidable in polynomial time by guaranteeing only the existence of polynomial-time decision algorithms. In this paper these new methods are employed to prove membership in P for a number of problems whose complexities are not otherwise known. Powerful consequences of these techniques are pointed out and their utility is illustrated. A type of partially ordered set that supports this general approach is defined and explored.

246 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background).
Abstract: The permanent of an n × n matrix A with 0-1 entries $a_{ij}$ is defined by $\mathrm{per}(A) = \sum_{\sigma} \prod_{i=0}^{n-1} a_{i\sigma(i)}$, where the sum is over all permutations σ of [n] = {0, …, n - 1}. Evaluating per(A) is equivalent to counting perfect matchings (1-factors) in the bipartite graph G = (V1, V2, E), where V1 = V2 = [n] and (i,j) ∈ E iff $a_{ij} = 1$. The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background). Despite considerable effort, and in contrast with the syntactically very similar determinant, no efficient procedure for computing this function is known.

Convincing evidence for the inherent intractability of the permanent was provided in the late 1970s by Valiant [19], who demonstrated that it is complete for the class #P of enumeration problems and thus as hard as counting any NP structures. Interest has therefore recently turned to finding computationally feasible approximation algorithms (see, e.g., [11], [17]). The notion of approximation we shall use in this paper is as follows: let f be a function from input strings to natural numbers. A fully-polynomial randomised approximation scheme (fpras) for f is a probabilistic algorithm which, when presented with a string x and a real number ε > 0, runs in time polynomial in |x| and 1/ε and outputs a number which with high probability estimates f(x) to within a factor of (1 + ε).

A promising approach to finding a fpras for the permanent was recently proposed by Broder [7], and involves reducing the problem of counting perfect matchings in a graph to that of generating them randomly from an almost uniform distribution. The latter problem is then amenable to the following dynamic stochastic technique: construct a Markov chain whose states correspond to perfect and 'near-perfect' matchings, and which converges to a stationary distribution which is uniform over the states. Transitions in the chain correspond to simple local perturbations of the structures. Then, provided convergence is fast enough, we can generate matchings by simulating the chain for a small number of steps and outputting the structure corresponding to the final state.

When applying this technique, one is faced with the task of proving that a given Markov chain is rapidly mixing, i.e., that after a short period of evolution the distribution of the final state is essentially independent of the initial state. 'Short' here means bounded by a polynomial in the input size; since the state space itself may be exponentially large, the chain must typically be close to stationarity after visiting only a small fraction of the space.

Recent work on the rate of convergence of Markov chains has focussed on stochastic concepts such as coupling [1] and stopping times [3]. While these methods are intuitively appealing and yield tight bounds for simple chains, the analysis involved becomes extremely complicated for more interesting processes which lack a high degree of symmetry. Using a complex coupling argument, Broder [7] claims that the perfect matchings chain above is rapidly mixing provided the bipartite graph is dense, i.e., has minimum vertex degree at least n/2. This immediately yields a fpras for the dense permanent. However, the coupling proof is hard to penetrate; more seriously, as has been observed by Mihail [13], it contains a fundamental error which is not easily correctable.

In this paper, we propose an alternative technique for analysing the rate of convergence of Markov chains based on a structural property of the underlying weighted graph. Under fairly general conditions, a finite ergodic Markov chain is rapidly mixing iff the conductance of its underlying graph is not too small. This characterisation is related to recent work by Alon [4] and Alon and Milman [5] on eigenvalues and expander graphs. While similar characterisations of rapid mixing have been noted before (see, e.g., [2]), independent estimates of the conductance have proved elusive for non-trivial chains. Using a novel method of analysis, we are able to derive a lower bound on the conductance of Broder's perfect matchings chain under the same density assumption, thus verifying that it is indeed rapidly mixing. The existence of a fpras for the dense permanent is therefore established.

Reductions from approximate counting to almost uniform generation similar to that mentioned above for perfect matchings also hold for the large class of combinatorial structures which are self-reducible [10]. Consequently, the Markov chain approach is potentially a powerful general tool for obtaining approximation algorithms for hard combinatorial enumeration problems. Moreover, our proof technique for rapid mixing also seems to generalise to other interesting chains. We substantiate this claim by considering an example from the field of statistical physics, namely the monomer-dimer problem (see, e.g., [8]). Here a physical system is modelled by a set of combinatorial structures, or configurations, each of which has an associated weight. Most interesting properties of the model can be computed from the partition function, which is just the sum of the weights of the configurations. By means of a reduction to the associated generation problem, in which configurations are selected with probabilities proportional to their weights, we are able to show the existence of a fpras for the monomer-dimer partition function under quite general conditions. Significantly, in such applications the generation problem is often of interest in its own right.

Our final result concerns notions of approximate counting and their robustness. We show that, for all self-reducible NP structures, randomised approximate counting to within a factor of $(1 + n^b)$, where n is the input size, is possible in polynomial time either for all b ∈ R or for no b ∈ R. We are therefore justified in calling such a counting problem approximable iff there exists a polynomial time randomised procedure which with high probability estimates the number of structures within ratio $(1 + n^b)$ for some arbitrary b ∈ R. The connection with the earlier part of the paper is our use of a Markov chain simulation to reduce almost uniform generation to approximate counting within any factor of the above form: once again, the proof that the chain is rapidly mixing follows from the conductance characterisation.
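The conductance criterion at the heart of the paper can be stated compactly. For a finite ergodic reversible chain with transition matrix P and stationary distribution π (the setting in which the bound is usually stated; notation ours, not quoted from the paper):

```latex
% Conductance of the chain's underlying weighted graph:
\Phi \;=\; \min_{S \,:\, 0 < \pi(S) \le 1/2}
           \frac{\sum_{x \in S,\; y \notin S} \pi(x)\,P(x,y)}{\pi(S)}
% The second-largest eigenvalue \lambda_1 satisfies
%   1 - 2\Phi \;\le\; \lambda_1 \;\le\; 1 - \Phi^2/2,
% so the chain is rapidly mixing precisely when \Phi is at least
% inverse-polynomial in the problem size.
```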

Journal ArticleDOI
TL;DR: This work formalizes a notion of loading information into connectionist networks that characterizes the training of feed-forward neural networks, and introduces a perspective on shallow networks, the Support Cone Interaction (SCI) graph, which is helpful in distinguishing tractable from intractable subcases.

Journal ArticleDOI
TL;DR: A new technique for proving lower bounds in the synchronous model is presented, based on a string-producing mechanism from formal language theory, first introduced by Thue to study square-free words.
Abstract: The computational capabilities of a system of n indistinguishable (anonymous) processors arranged on a ring in the synchronous and asynchronous models of distributed computation are analyzed. A precise characterization of the functions that can be computed in this setting is given. It is shown that any of these functions can be computed in O(n^2) messages in the asynchronous model. This is also proved to be a lower bound for such elementary functions as AND, SUM, and Orientation. In the synchronous model any computable function can be computed in O(n log n) messages. A ring can be oriented and start synchronized within the same bounds. The main contribution of this paper is a new technique for proving lower bounds in the synchronous model. With this technique tight lower bounds of Ω(n log n) (for particular n) are proved for XOR, SUM, Orientation, and Start Synchronization. The technique is based on a string-producing mechanism from formal language theory, first introduced by Thue to study square-free words. Two methods for generalizing the synchronous lower bounds to arbitrary ring sizes are presented.

Journal ArticleDOI
TL;DR: A bottleneck optimization problem on a graph with edge costs is the problem of finding a subgraph of a certain kind that minimizes the maximum edge cost in the subgraph; fast algorithms for two such bottleneck optimization problems are proposed.

BookDOI
01 Apr 1988
TL;DR: This work proposes a temporal logic for causality and choice in distributed systems based on bisimulation semantics for concurrency and describes a logic for distributed transition systems.

Journal ArticleDOI
TL;DR: The proof is immediate by combining the Alon-Boppana version of another argument of Razborov with results of Grötschel-Lovász-Schrijver on the Lovász capacity ϑ of a graph.
Abstract: A. A. Razborov has shown that there exists a polynomial time computable monotone Boolean function whose monotone circuit complexity is at least n^{c log n}. We observe that this lower bound can be improved to exp(c·n^{1/6 - o(1)}). The proof is immediate by combining the Alon-Boppana version of another argument of Razborov with results of Grötschel-Lovász-Schrijver on the Lovász capacity ϑ of a graph.

Journal ArticleDOI
TL;DR: Two algorithms based on the ‘candidate list paradigm’ first used by Waterman (1984) are presented, one of which computes significantly more parsimonious candidate lists than Waterman's method.

Journal ArticleDOI
TL;DR: It is shown that W(P) can be computed in O(n log n + I) time and O(n) space, where I is the number of antipodal pairs of edges of the convex hull of P, and n is the number of vertices.
Abstract: For a set of points P in three-dimensional space, the width of P, W(P), is defined as the minimum distance between parallel planes of support of P. It is shown that W(P) can be computed in O(n log n + I) time and O(n) space, where I is the number of antipodal pairs of edges of the convex hull of P, and n is the number of vertices; in the worst case, I = O(n^2). For a convex polyhedron the time complexity becomes O(n + I). If P is a set of points in the plane, the complexity can be reduced to O(n log n). For simple polygons, linear time suffices.
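In the planar case the width of a convex polygon is attained by an edge-vertex antipodal pair, which makes a brute-force check easy to state. The sketch below is a quadratic illustration of that fact (our code, not the paper's O(n log n) method):

```python
import math

def width_2d(hull):
    """Width of a convex polygon given as a CCW vertex list.
    Brute force O(h^2): for each edge, take the farthest vertex from
    its supporting line; the width is the minimum such distance."""
    h = len(hull)
    best = float('inf')
    for i in range(h):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % h]
        L = math.hypot(x2 - x1, y2 - y1)
        # |cross product| / |edge| = distance from (x, y) to the edge's line
        far = max(abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / L
                  for x, y in hull)
        best = min(best, far)
    return best

print(width_2d([(0, 0), (4, 0), (4, 3), (0, 3)]))   # 3.0, the rectangle's width
```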

Proceedings ArticleDOI
24 Oct 1988
TL;DR: The first provably good approximation algorithm is given and shown to run in polynomial time for the simplified case of a point mass under Newtonian mechanics together with velocity and acceleration bounds.
Abstract: The following problem is considered: given a robot system, find a minimal-time trajectory from a start position and velocity to a goal position and velocity, while avoiding obstacles and respecting dynamic constraints on velocity and acceleration. The simplified case of a point mass under Newtonian mechanics together with velocity and acceleration bounds is considered. The point must be flown from a start to a goal, amid 2-D or 3-D polyhedral obstacles. While exact solutions to this problem are not known, the first provably good approximation algorithm is given and shown to run in polynomial time.

Journal ArticleDOI
TL;DR: A PRAM algorithm for computing the minimum model of a logical query program is given, and it is shown that for programs with the “polynomial fringe property,” this algorithm runs in time that is logarithmic in the input size.
Abstract: We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the “polynomial fringe property,” this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the “linear” and “piecewise linear” classes of logic programs are in NC. Then we examine several nonlinear classes in which the program has a single recursive rule that is an “elementary chain.” We show that certain nonlinear programs are related to GSM mappings of a balanced parentheses language, and that this relationship implies the “polynomial fringe property;” hence such programs are in NC. Finally, we describe an approach for demonstrating that certain logical query programs are log space complete for P, and apply it to both elementary single rule programs and nonelementary programs.
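The minimum model of the simplest "linear" program, transitive closure (path(X,Y) :- edge(X,Y). path(X,Y) :- path(X,Z), edge(Z,Y).), makes the object being computed concrete. Below is a hedged sequential semi-naive sketch (our code; the paper's point is that such linear programs admit logarithmic-time PRAM evaluation, which a sequential loop does not show):

```python
def minimum_model_tc(edges):
    """Semi-naive bottom-up evaluation of:
       path(X,Y) :- edge(X,Y).
       path(X,Y) :- path(X,Z), edge(Z,Y).
    Each round joins only the newly derived facts (the 'delta')
    against edge, until no new facts appear."""
    path = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, w) for (x, z) in delta for (z2, w) in edges if z == z2}
        delta = new - path
        path |= delta
    return path

print(sorted(minimum_model_tc({(1, 2), (2, 3), (3, 4)})))
```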

Journal ArticleDOI
TL;DR: It is shown for arbitrary graphs that a degenerate form of the basic annealing algorithm (obtained by letting “temperature” be a suitably chosen constant) produces matchings with nearly maximum cardinality in polynomial average time.
Abstract: The random, heuristic search algorithm called simulated annealing is considered for the problem of finding the maximum cardinality matching in a graph. It is shown that neither a basic form of the algorithm, nor any other algorithm in a fairly large related class of algorithms, can find maximum cardinality matchings such that the average time required grows as a polynomial in the number of nodes of the graph. In contrast, it is also shown for arbitrary graphs that a degenerate form of the basic annealing algorithm (obtained by letting “temperature” be a suitably chosen constant) produces matchings with nearly maximum cardinality in polynomial average time.
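The degenerate fixed-temperature chain is easy to state as Metropolis dynamics over matchings with stationary weight proportional to exp(|M|/T): toggling an edge in is always accepted, toggling one out is accepted with probability exp(-1/T). A hedged sketch of such a chain (our code, one plausible formalization of the constant-temperature algorithm, not necessarily the paper's exact variant):

```python
import random, math

def anneal_matching(n, edges, steps, T=0.5):
    """Fixed-temperature Metropolis chain on matchings of a graph on
    n vertices, favoring large matchings (stationary weight e^{|M|/T})."""
    matched = [None] * n                     # matched[v] = partner or None
    M = set()
    for _ in range(steps):
        u, v = random.choice(edges)          # propose toggling a random edge
        if (u, v) in M:                      # deletion: accept with prob e^{-1/T}
            if random.random() < math.exp(-1.0 / T):
                M.remove((u, v))
                matched[u] = matched[v] = None
        elif matched[u] is None and matched[v] is None:
            M.add((u, v))                    # addition always accepted
            matched[u], matched[v] = v, u
    return M

random.seed(1)
print(anneal_matching(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)], 10000))
```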

Journal ArticleDOI
01 Jun 1988
TL;DR: A new method is presented for solving Banning's alias-free flow-insensitive side-effect analysis problem, which employs a new data structure, called the binding multi-graph, along with depth-first search to achieve a running time that is linear in the size of the call multi- graph of the program.
Abstract: We present a new method for solving Banning's alias-free flow-insensitive side-effect analysis problem. The algorithm employs a new data structure, called the binding multi-graph, along with depth-first search to achieve a running time that is linear in the size of the call multi-graph of the program. This method can be extended to produce fast algorithms for data-flow problems with more complex lattice structures.

Book ChapterDOI
26 May 1988
TL;DR: A general and simple algorithm is presented which computes the set FP of all free configurations for a polygonal object I which is free to translate and/or to rotate but not to intersect another polygonal object E.
Abstract: A general and simple algorithm is presented which computes the set FP of all free configurations for a polygonal object I (with m edges) which is free to translate and/or to rotate but not to intersect another polygonal object E (with n edges). The worst-case time complexity of the algorithm is O(m^3 n^3 log mn), which is close to optimal. FP is a three-dimensional curved object which can be used to find free motions within the same time bounds. Two types of motion have been studied in some detail. Motion in contact, where I remains in contact with E, is performed by moving along the faces of the boundary of FP. By partitioning FP into prisms, it is possible to compute motions when I never makes contact with E. In this case, the theoretical complexity does not exceed O(m^6 n^6 α(mn)) but it is expected to be much smaller in practice. In both cases, pseudo-optimal motions can be obtained with a complexity increased by a factor of log mn.

Journal ArticleDOI
TL;DR: This paper provides a partial decision procedure that reports useful results and runs in polynomial time as an alternative to a complete, but potentially exponential-time decision procedure.

Proceedings ArticleDOI
24 Oct 1988
TL;DR: It is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) that covers d-space, each of whose simplices intersects O(n/r) hyperplanes.
Abstract: A number of efficient probabilistic algorithms based on the combination of divide-and-conquer and random sampling have been recently discovered. It is shown that all those algorithms can be derandomized with only polynomial overhead. In the process, results of independent interest concerning the covering of hypergraphs are established, and various probabilistic bounds in geometric complexity are improved. For example, given n hyperplanes in d-space and any large enough integer r, it is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) that covers d-space, each of whose simplices intersects O(n/r) hyperplanes. It is also shown how to locate a point among n hyperplanes in d-space in O(log n) query time, using O(n^d) storage and polynomial preprocessing.

Journal ArticleDOI
TL;DR: It is shown that the class of sets of small generalized Kolmogorov complexity is exactly the class of sets which are P-isomorphic to a tally language.
Abstract: P-printable sets arise naturally in the studies of generalized Kolmogorov complexity and data compression, as well as in other areas. We present new characterizations of the P-printable sets and present necessary and sufficient conditions for the existence of sparse sets in P that are not P-printable. As a corollary to one of our results, we show that the class of sets of small generalized Kolmogorov complexity is exactly the class of sets which are P-isomorphic to a tally language.

Journal ArticleDOI
TL;DR: In this article, an algorithm for computing the Wiener index of a tree in linear time is given; whether the index of an arbitrary graph can be calculated without computing the distance matrix remains an open question.
Abstract: The Wiener index of a graph G is equal to the sum of distances between all pairs of vertices of G. It is known that the Wiener index of a molecular graph correlates with certain physical and chemical properties of a molecule. In the mathematical literature, many good algorithms can be found to compute the distances in a graph, and these can easily be adapted for the calculation of the Wiener index. An algorithm that calculates the Wiener index of a tree in linear time is given. It improves an algorithm of Canfield, Robinson and Rouvray. The question remains: is there an algorithm for general graphs that would calculate the Wiener index without calculating the distance matrix? Another algorithm that calculates this index for an arbitrary graph is given.
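For trees the linear-time bound follows from a standard observation: removing an edge splits the tree into components of sizes s and n - s, and exactly s·(n - s) vertex pairs have their shortest path through that edge, so W is the sum of s·(n - s) over all edges. A hedged sketch of this approach (not necessarily the article's exact algorithm):

```python
def wiener_tree(n, edges):
    """Wiener index of a tree on vertices 0..n-1 in O(n):
    each edge contributes s*(n-s), where s is the size of the
    subtree hanging below it."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    order, parent = [], [-1] * n
    stack, seen = [0], [False] * n
    seen[0] = True
    while stack:                      # iterative DFS to get a processing order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    size = [1] * n
    W = 0
    for u in reversed(order):         # children are processed before parents
        if parent[u] != -1:
            size[parent[u]] += size[u]
            W += size[u] * (n - size[u])
    return W

# Path 0-1-2-3: distances sum to (1+2+3) + (1+2) + 1 = 10
print(wiener_tree(4, [(0, 1), (1, 2), (2, 3)]))
```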

Journal ArticleDOI
TL;DR: This paper shows that both the nonseparating independent set problem and the feedback set problem can be solved in polynomial time for graphs with no vertex degree exceeding 3, by reducing the problems to the matroid parity problem.

Journal ArticleDOI
TL;DR: A general polynomial time algorithm is proposed to find small integer solutions to systems of linear congruences and will solve most problems when twice as much information as that necessary to uniquely determine the variables is available.
Abstract: We propose a general polynomial time algorithm to find small integer solutions to systems of linear congruences. We use this algorithm to obtain two polynomial time algorithms for reconstructing the values of variables $x_1, \cdots, x_k$ when we are given some linear congruences relating them together with some bits obtained by truncating the binary expansions of the variables. The first algorithm reconstructs the variables when either the high order bits or the low order bits of the $x_i$ are known. It is essentially optimal in its use of information in the sense that it will solve most problems almost as soon as the variables become uniquely determined by their constraints. The second algorithm reconstructs the variables when an arbitrary window of consecutive bits of the variables is known. This algorithm will solve most problems when twice as much information as that necessary to uniquely determine the variables is available. Two cryptanalytic applications of the algorithms are given: predicting li...

Book ChapterDOI
21 Aug 1988
TL;DR: It is proven that both the discrete log problem and the Diffie-Hellman key exchange scheme are (probabilistically) polynomial-time equivalent if the totient of P-1 has only small prime factors with respect to a (fixed) polynomial in 2 log P.
Abstract: Diffie and Hellman proposed a key exchange scheme in 1976, which afterwards became known by their name in the literature. In the same epoch-making paper, they conjectured that breaking their scheme would be as hard as taking discrete logarithms. This problem has remained open for the multiplicative group modulo a prime P that they originally proposed. Here it is proven that both problems are (probabilistically) polynomial-time equivalent if the totient of P-1 has only small prime factors with respect to a (fixed) polynomial in 2 log P. There is no algorithm known that solves the discrete log problem in probabilistic polynomial time for this case, i.e., where the totient of P-1 is smooth. Consequently, either there exists a (probabilistic) polynomial algorithm to solve the discrete log problem when the totient of P-1 is smooth, or there exist primes (satisfying this condition) for which Diffie-Hellman key exchange is secure.
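For reference, the exchange whose security is at issue takes three modular exponentiations; breaking it means recovering g^{ab} mod P from g^a and g^b. A toy sketch (parameters far too small for real use, and g need not be a true generator for the demo to run):

```python
import random

def diffie_hellman_demo(p, g):
    """Toy Diffie-Hellman exchange modulo a prime p with base g.
    Both parties derive the same shared secret g^(ab) mod p from the
    publicly exchanged values g^a and g^b."""
    a = random.randrange(2, p - 1)          # Alice's secret exponent
    b = random.randrange(2, p - 1)          # Bob's secret exponent
    A, B = pow(g, a, p), pow(g, b, p)       # exchanged in the clear
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob
    return shared_alice

print(diffie_hellman_demo(2**61 - 1, 3))    # 2^61 - 1 is a Mersenne prime
```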