
Showing papers on "Time complexity published in 1999"


Journal ArticleDOI
TL;DR: In this paper, the authors considered factoring integers and finding discrete logarithms, two problems that are generally thought to be hard on classical computers and that have been used as the basis of several proposed cryptosystems.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed to be able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems that are generally thought to be hard on classical computers and that have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, for example, the number of digits of the integer to be factored.
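
The quantum speedup enters only in the order-finding step; the rest of the algorithm is classical number theory. The sketch below (Python, illustrative only) uses a brute-force stand-in for order finding to show how a factor of N is recovered from the order r of a random a modulo N; on a quantum computer, order finding is the polynomial-time part.

import math
import random

def order(a, n):
    # Brute-force multiplicative order of a mod n -- the one step that
    # Shor's quantum algorithm performs in polynomial time.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor(n):
    # Classical reduction from factoring to order finding.
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky draw: a shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:            # a^(r/2) != -1 (mod n)
                return math.gcd(y - 1, n)

print(factor(15))                     # prints 3 or 5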

2,856 citations


Journal ArticleDOI
TL;DR: The development of Fast Marching Methods is reviewed, including their theoretical and numerical underpinnings; details of the computational schemes, including higher-order versions; and demonstrations of the techniques in a collection of different areas.
Abstract: Fast Marching Methods are numerical schemes for computing solutions to the nonlinear Eikonal equation and related static Hamilton-Jacobi equations. Based on entropy-satisfying upwind schemes and fast sorting techniques, they yield consistent, accurate, and highly efficient algorithms. They are optimal in the sense that the computational complexity of the algorithms is O(N log N), where N is the total number of points in the domain. The schemes are of use in a variety of applications, including problems in shape offsetting, computing distances from complex curves and surfaces, shape-from-shading, photolithographic development, computing first arrivals in seismic travel times, construction of shortest geodesics on surfaces, optimal path planning around obstacles, and visibility and reflection calculations. In this paper, we review the development of these techniques, including the theoretical and numerical underpinnings; provide details of the computational schemes, including higher order versions; and demonstrate the techniques in a collection of different areas.
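
To make the O(N log N) structure concrete, here is a minimal first-order Fast Marching sketch for the Eikonal equation |grad T| = 1/F on a 2D grid, written by us for illustration (the paper's schemes include higher-order variants this sketch omits). The heap plays the role of the fast sorting technique.

import heapq
import numpy as np

def fast_march(speed, sources):
    # First-arrival times T with |grad T| = 1/speed on a unit grid;
    # sources is a list of (i, j) seed cells where T = 0.
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, s) for s in sources]
    for s in sources:
        T[s] = 0.0
    heapq.heapify(heap)
    while heap:
        _, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                         T[ni, nj + 1] if nj < nx - 1 else np.inf)
                ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                         T[ni + 1, nj] if ni < ny - 1 else np.inf)
                h = 1.0 / speed[ni, nj]
                a, b = sorted((tx, ty))
                if b - a >= h:        # only the smaller axis contributes
                    t_new = a + h
                else:                 # quadratic upwind update, both axes
                    t_new = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return T

T = fast_march(np.ones((21, 21)), [(10, 10)])   # distance from grid center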

1,339 citations


Journal ArticleDOI
TL;DR: A fast method to localize the level set method of Osher and Sethian and address two important issues intrinsic to the level set method; it reduces the computational effort by one order of magnitude, works in as much generality as the original method, and is conceptually simple and easy to implement.

1,131 citations



Proceedings ArticleDOI
01 May 1999
TL;DR: It is proved that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N^{1-λ}), where N is the number of subscriptions and λ is a closed-form expression that depends on the number and type of attributes.
Abstract: Content-based subscription systems are an emerging alternative to traditional publish-subscribe systems, because they permit more flexible subscriptions along multiple dimensions. In these systems, each subscription is a predicate which may test arbitrary attributes within an event. However, the matching problem for content-based systems — determining for each event the subset of all subscriptions whose predicates match the event — is still an open problem. We present an efficient, scalable solution to the matching problem. Our solution has an expected time complexity that is sub-linear in the number of subscriptions, and it has a space complexity that is linear. Specifically, we prove that for predicates reducible to conjunctions of elementary tests, the expected time to match a random event is no greater than O(N^{1-λ}), where N is the number of subscriptions and λ is a closed-form expression that depends on the number and type of attributes (in some cases, λ = 1/2). We present some optimizations to our algorithms that improve the search time. We also present the results of simulations that validate the theoretical bounds and that show acceptable performance levels for tens of thousands of subscriptions.
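
The flavor of the sub-linear matching structure can be seen in a toy matching trie for subscriptions that are conjunctions of equality tests. This is a sketch in the spirit of the paper's matching tree, with hypothetical attribute names; it is not the authors' exact data structure.

# Attributes are tested in a fixed order; a '*' edge means the
# subscription does not care about that attribute.
ATTRS = ["exchange", "symbol", "price"]
STAR = "*"

def insert(trie, tests, sub_id):
    # tests: dict attr -> required value (a missing attr = don't care).
    node = trie
    for attr in ATTRS:
        node = node.setdefault(tests.get(attr, STAR), {})
    node.setdefault("subs", []).append(sub_id)

def match(trie, event, depth=0):
    # Collect ids of all subscriptions whose tests the event satisfies.
    if depth == len(ATTRS):
        return trie.get("subs", [])
    out = []
    for key in (event[ATTRS[depth]], STAR):   # exact branch and *-branch
        child = trie.get(key)
        if child is not None:
            out += match(child, event, depth + 1)
    return out

trie = {}
insert(trie, {"exchange": "NYSE", "symbol": "IBM"}, sub_id=1)
insert(trie, {"symbol": "IBM", "price": 120}, sub_id=2)
print(match(trie, {"exchange": "NYSE", "symbol": "IBM", "price": 120}))  # [1, 2]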

736 citations


Journal ArticleDOI
01 Mar 1999
TL;DR: A modification of the weight function used in the original version of the alignment program DIALIGN has two important advantages: it can be applied to both globally and locally related sequence sets, and the running time of the program is considerably improved.
Abstract: Motivation: The performance and time complexity of an improved version of the segment-to-segment approach to multiple sequence alignment is discussed. In this approach, alignments are composed from gap-free segment pairs, and the score of an alignment is defined as the sum of so-called weights of these segment pairs. Results: A modification of the weight function used in the original version of the alignment program DIALIGN has two important advantages: it can be applied to both globally and locally related sequence sets, and the running time of the program is considerably improved. The time complexity of the algorithm is discussed theoretically, and the program running time is reported for various test examples.
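
A much-simplified sketch of the segment-pair idea: enumerate gap-free segment pairs ("diagonals") between two sequences and rank them. DIALIGN's actual weight is a statistical significance measure; the exact-match length below merely stands in for it.

def best_diagonals(a, b, min_len=3):
    # Collect gap-free segments of exact matches on every diagonal.
    pairs = []
    for d in range(-(len(a) - 1), len(b)):    # diagonal offset d = j - i
        i, j = max(0, -d), max(0, d)
        while i < len(a) and j < len(b):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            if k >= min_len:
                pairs.append((i, j, k))       # start in a, start in b, length
            i, j = i + k + 1, j + k + 1
    return sorted(pairs, key=lambda p: -p[2])

print(best_diagonals("ACGTACGGT", "TTACGTAGT"))   # [(0, 2, 5), (3, 1, 4)]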

724 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that any polygonal subdivision in the plane can be converted into an "m-guillotine" subdivision whose length is at most $(1+{c\over m})$ times that of the original subdivision, for a small constant c. In particular, a consequence of their main theorem is a simple polynomial-time approximation scheme for geometric instances of several network optimization problems, including the Steiner minimum spanning tree, the traveling salesperson problem (TSP), and the k-MST problem.
Abstract: We show that any polygonal subdivision in the plane can be converted into an "m-guillotine" subdivision whose length is at most $(1+{c\over m})$ times that of the original subdivision, for a small constant c. "m-Guillotine" subdivisions have a simple recursive structure that allows one to search for the shortest of such subdivisions in polynomial time, using dynamic programming. In particular, a consequence of our main theorem is a simple polynomial-time approximation scheme for geometric instances of several network optimization problems, including the Steiner minimum spanning tree, the traveling salesperson problem (TSP), and the k-MST problem.

486 citations


Proceedings ArticleDOI
21 Mar 1999
TL;DR: A linear programming-based algorithm is introduced to estimate the clock skew in network delay measurements; its performance is compared to that of three other algorithms, showing that the algorithm is unbiased and that the sample variance of its skew estimate is smaller than that of existing algorithms.
Abstract: Packet delay and loss traces are frequently used by network engineers, as well as network applications, to analyze network performance. The clocks on the end-systems used to measure the delays, however, are not always synchronized, and this lack of synchronization reduces the accuracy of these measurements. Therefore, estimating and removing relative skews and offsets from delay measurements between sender and receiver clocks are critical to the accurate assessment and analysis of network performance. We introduce a linear programming-based algorithm to estimate the clock skew in network delay measurements and compare it with three other algorithms. We show that our algorithm has a time complexity of O(N), leaves the delay after the skew removal positive, and is robust in the sense that the error margin of the skew estimate is independent of the magnitude of the skew. We use traces of real Internet delay measurements to assess the algorithm, and compare its performance to that of three other algorithms. Furthermore, we show through simulation that our algorithm is unbiased, and that the sample variance of the skew estimate is better (smaller) than existing algorithms.
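
The LP formulation fits in a few lines: find the line alpha + beta*t that lies below all one-way delay samples while minimizing the total vertical distance to them; beta then estimates the relative skew. A sketch assuming SciPy's linprog; the variable names and synthetic data are ours.

import numpy as np
from scipy.optimize import linprog

def skew_line(t, d):
    # Minimize sum(d_i - alpha - beta*t_i) subject to alpha + beta*t_i <= d_i.
    t, d = np.asarray(t, float), np.asarray(d, float)
    n = len(t)
    c = [-n, -t.sum()]                      # constant sum(d) dropped
    A = np.column_stack([np.ones(n), t])    # alpha + beta*t_i <= d_i
    res = linprog(c, A_ub=A, b_ub=d, bounds=[(None, None)] * 2)
    return res.x                            # (alpha, beta)

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 200)
d = 5 + 0.002 * t + rng.exponential(1.0, t.size)   # planted skew: 0.002
print(skew_line(t, d))                             # beta comes out near 0.002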

467 citations


Book ChapterDOI
Miklós Ajtai
11 Jul 1999
TL;DR: It is shown that lattices of the same random class can be generated not only together with a short vector in them, but also together with a short basis, which may make the construction more applicable for cryptographic protocols.
Abstract: A class of random lattices was given in [1] such that (a) a random lattice can be generated in polynomial time together with a short vector in it, and (b) assuming that certain worst-case lattice problems have no polynomial time solutions, there is no polynomial time algorithm which finds a short vector in a random lattice with polynomially large probability. In this paper we show that lattices of the same random class can be generated not only together with a short vector in them, but also together with a short basis. The existence of a known short basis may make the construction more applicable for cryptographic protocols.

410 citations


Journal ArticleDOI
TL;DR: In this article, the RANSAC-based DARCES method is proposed to solve the partially overlapping 3D registration problem without any initial estimation; it can be used even when there are no local features in the 3D data sets.
Abstract: In this paper, we propose a new method, the RANSAC-based DARCES method (data-aligned rigidity-constrained exhaustive search based on random sample consensus), which can solve the partially overlapping 3D registration problem without any initial estimation. For the noiseless case, the basic algorithm of our method can guarantee that the solution it finds is the true one, and its time complexity can be shown to be relatively low. An additional advantage is that our method can be used even when there are no local features in the 3D data sets.
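
A generic RANSAC rigid-registration loop conveys the outer structure of such methods. Note the simplification: this sketch assumes candidate correspondences P[:, k] ~ Q[:, k] are already given, whereas DARCES itself searches for correspondences exhaustively under rigidity (pairwise-distance) constraints.

import numpy as np

def rigid_from_pairs(P, Q):
    # Least-squares rotation R and translation t with R @ P + t ~ Q
    # (Kabsch/SVD); P and Q are 3xN arrays of corresponding points.
    cp, cq = P.mean(1, keepdims=True), Q.mean(1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflection
    R = U @ D @ Vt
    return R, (cq - R @ cp).ravel()

def ransac_rigid(P, Q, iters=200, tol=0.05, seed=0):
    # Fit on minimal 3-point samples; keep the transform with most inliers.
    rng = np.random.default_rng(seed)
    best, best_inl = None, -1
    for _ in range(iters):
        k = rng.choice(P.shape[1], 3, replace=False)
        R, t = rigid_from_pairs(P[:, k], Q[:, k])
        err = np.linalg.norm(R @ P + t[:, None] - Q, axis=0)
        inliers = (err < tol).sum()
        if inliers > best_inl:
            best, best_inl = (R, t), inliers
    return best, best_inl

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 100))
th = 0.3
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
Q = Rz @ P + np.array([[1.0], [2.0], [0.5]])
Q[:, :20] = rng.normal(size=(3, 20))       # 20% outlier correspondences
(R, t), inl = ransac_rigid(P, Q)
print(inl)                                 # close to the 80 true inliers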

399 citations


Journal ArticleDOI
17 Oct 1999
TL;DR: This work provides a novel algorithmic analysis via a model of robust concept learning (closely related to “margin classifiers”), and shows that a relatively small number of examples are sufficient to learn rich concept classes.
Abstract: We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples despite the fact that each example contains a huge amount of information? We provide a novel analysis for a model of robust concept learning (closely related to "margin classifiers"), and show that a relatively small number of examples are sufficient to learn rich concept classes (including threshold functions, Boolean formulae and polynomial surfaces). As a result, we obtain simple intuitive proofs for the generalization bounds of Support Vector Machines. In addition, the new algorithms have several advantages: they are faster, conceptually simpler, and highly resistant to noise. For example, a robust half-space can be PAC-learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general (algorithmic) consequence of the model, that "more robust concepts are easier to learn", is supported by a multitude of psychological studies.
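
The dimension-independence claimed above is the same phenomenon behind the classic perceptron mistake bound: on unit-norm data with margin gamma, at most 1/gamma^2 updates occur no matter how many attributes there are. The perceptron below is used for illustration and is not claimed to be the paper's algorithm; the planted-margin data is synthetic.

import numpy as np

def perceptron(X, y, epochs=1000):
    # Mistake-driven updates; converges once the data are separated.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            break
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
keep = np.abs(X @ w_true) > 0.2           # plant a margin of 0.2
X, y = X[keep], np.sign(X[keep] @ w_true)
w = perceptron(X, y)
print((np.sign(X @ w) == y).mean())       # 1.0 after convergence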

Proceedings ArticleDOI
01 Jun 1999
TL;DR: A deterministic floorplanning algorithm utilizing the structure of the O-tree is developed, with promising performance: an average 16% improvement in wire length and 1% less dead space compared with a previous CPU-intensive cluster refinement method.
Abstract: We present an ordered tree, O-tree, structure to represent non-slicing floorplans. The O-tree uses only n(2 + ⌈lg n⌉) bits for a floorplan of n rectangular blocks. We define an admissible placement as a compacted placement in both the x and y directions. For each admissible placement, we can find an O-tree representation. We show that the number of possible O-tree combinations is O(n! 2^{2n-2} / n^{1.5}). This is very concise compared to a sequence-pair representation, which has O((n!)^2) combinations. The approximate ratio of sequence-pair to O-tree combinations is O(n^2 (n/4e)^n). The complexity of the O-tree is even smaller than that of a binary tree structure for slicing floorplans, which has O(n! 2^{5n-3} / n^{1.5}) combinations. Given an O-tree, it takes only linear time to construct the placement and its constraint graph. We have developed a deterministic floorplanning algorithm utilizing the structure of the O-tree. Empirical results on MCNC benchmarks show promising performance, with an average 16% improvement in wire length and 1% less dead space over the previous CPU-intensive cluster refinement method.
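
To make the encoding concrete, here is a toy decoder for the 2n-bit DFS part of an O-tree: '0' descends to a new child, '1' returns to the parent, and each block starts at its parent's right edge. This sketch, written by us, recovers only the x-coordinates; the paper's construction additionally compacts blocks in y.

def otree_x_positions(bits, blocks, widths):
    # bits: DFS string over n blocks; blocks: labels in DFS order;
    # widths: dict label -> block width. Returns label -> x coordinate.
    x = {}
    stack = [("ROOT", 0.0)]               # (label, right edge)
    it = iter(blocks)
    for b in bits:
        if b == "0":
            label = next(it)
            parent_right = stack[-1][1]
            x[label] = parent_right
            stack.append((label, parent_right + widths[label]))
        else:
            stack.pop()
    return x

bits, blocks = "001101", ["a", "b", "c"]
widths = {"a": 2.0, "b": 1.0, "c": 3.0}
print(otree_x_positions(bits, blocks, widths))
# {'a': 0.0, 'b': 2.0, 'c': 0.0} -- b abuts a; c starts a new branch at x = 0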

Journal ArticleDOI
TL;DR: In this article, the first nontrivial approximation algorithms for the Steiner tree problem and the generalized Steiner network problem on general directed graphs were given, achieving an approximation ratio of O(i?1)k1/i in time O(nik2i) where k is the number of terminals.

Journal ArticleDOI
TL;DR: It is proved that the Steiner tree problem with minimum number of Steiner points and bounded edge-length is NP-complete, and a polynomial-time approximation algorithm is presented whose worst-case performance ratio is 5.

Book ChapterDOI
24 Aug 1999
TL;DR: In this article, the authors present a comprehensive study of the problem of verifying whether a model satisfies a temporal requirement given by an automaton, by developing algorithms for the different cases along with matching lower bounds.
Abstract: Scenario-based specifications such as message sequence charts (MSC) offer an intuitive and visual way of describing design requirements. Such specifications focus on message exchanges among communicating entities in distributed software systems. Structured specifications such as MSC-graphs and Hierarchical MSC-graphs (HMSC) allow convenient expression of multiple scenarios, and can be viewed as an early model of the system. In this paper, we present a comprehensive study of the problem of verifying whether this model satisfies a temporal requirement given by an automaton, by developing algorithms for the different cases along with matching lower bounds. When the model is given as an MSC, model checking can be done by constructing a suitable automaton for the linearizations of the partial order specified by the MSC, and the problem is coNP-complete. When the model is given by an MSC-graph, we consider two possible semantics depending on the synchronous or asynchronous interpretation of concatenating two MSCs. For synchronous model checking of MSC-graphs and HMSCs, we present algorithms whose time complexity is proportional to the product of the size of the description and the cost of processing MSCs at individual vertices. Under the asynchronous interpretation, we prove undecidability of the model checking problem. We then identify a natural requirement of boundedness, give algorithms to check boundedness, and establish asynchronous model checking to be Pspace-complete for bounded MSC-graphs and Expspace-complete for bounded HMSCs.

Book ChapterDOI
17 Jun 1999
TL;DR: New properties for the VERTEX COVER problem are indicated and several new techniques are introduced, which lead to a simpler and improved algorithm of time complexity O(kn + 1.271^k k^2) for the problem.
Abstract: Recently, there has been increasing interest and progress in lowering the worst-case time complexity for well-known NP-hard problems, in particular for the VERTEX COVER problem. In this paper, new properties for the VERTEX COVER problem are indicated and several new techniques are introduced, which lead to a simpler and improved algorithm of time complexity O(kn + 1.271^k k^2) for the problem. Our algorithm also induces improvement on previous algorithms for the INDEPENDENT SET problem on graphs of small degree.
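
The baseline such refinements improve on is the classic bounded search tree: every edge must have an endpoint in the cover, so branch on both choices and decrement the budget k. A minimal sketch of that O(2^k) baseline, not the paper's 1.271^k case analysis:

def vertex_cover(edges, k):
    # Return a cover of size <= k as a set, or None if none exists.
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for pick in (u, v):                   # one endpoint must be chosen
        rest = [(a, b) for (a, b) in edges if pick not in (a, b)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(vertex_cover(edges, 2))             # e.g. {1, 3}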

Journal ArticleDOI
TL;DR: This algorithm can be used to compute the Galois (concept) lattice, the maximal antichains lattice or the Dedekind‐MacNeille completion of a partial order, without increasing time complexity.

Journal ArticleDOI
TL;DR: A unified framework for designing polynomial time approximation schemes (PTASs) for "dense" instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability, is presented.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the parameterized complexity of the MaxSat and MaxCut problems when parameterized above their guaranteed lower bounds, and showed that these problems remain fixed-parameter tractable even under this parameterization.

Book ChapterDOI
04 Mar 1999
TL;DR: This improves over the previously known 1/2-approximation algorithms for maximum weighted matching, which require O(|E| · log(|V|)) steps, where |V| is the number of vertices.
Abstract: A new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented. It calculates a matching with an edge weight of at least 1/2 of the edge weight of a maximum weighted matching. Its time complexity is O(|E|), with |E| being the number of edges in the graph. This improves over the previously known 1/2-approximation algorithms for maximum weighted matching, which require O(|E| · log(|V|)) steps, where |V| is the number of vertices.
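
For contrast with the linear-time result: the simplest 1/2-approximation sorts all edges and picks greedily, which already gives the guarantee but costs O(|E| log |E|); the paper achieves the same factor in O(|E|) by repeatedly taking locally heaviest edges instead of globally heaviest ones. A sketch of the greedy baseline only:

def greedy_matching(edges):
    # edges: list of (weight, u, v). Take the heaviest remaining edge
    # whose endpoints are both still free.
    matched, matching = set(), []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

edges = [(4, "a", "b"), (5, "b", "c"), (3, "c", "d"), (6, "d", "a")]
print(greedy_matching(edges))   # [('d', 'a', 6), ('b', 'c', 5)], weight 11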

Proceedings ArticleDOI
01 May 1999
TL;DR: In this article, the hardness versus randomness trade-offs for a broad class of randomized procedures are established for graph nonisomorphism and bounded round Arthur-Merlin games.
Abstract: We establish hardness versus randomness trade-offs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless the polynomial-time hierarchy collapses. This provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given random bit sequence is not too high. We then apply our derandomization technique to four fundamental complexity theoretic constructions: The Valiant-Vazirani random hashing technique which prunes the number of satisfying assignments of a Boolean formula to one, and related procedures like computing satisfying assignments to Boolean formulas non-adaptively given access to an oracle for satisfiability. The algorithm of Bshouty et al. for learning Boolean circuits. Constructing matrices with high rigidity. Constructing polynomial-size universal traversal sequences. We also show that if linear space requires exponential size circuits, then space bounded randomized computations can be simulated deterministically with only a constant factor overhead in space.

Journal ArticleDOI
TL;DR: The worst-case upper bound 1.5045.. is proved.


Journal ArticleDOI
TL;DR: A polyhedral relaxation of the performance space of stochastic parallel machine scheduling is presented, which generalizes previous results from deterministic scheduling and shows that all employed LPs can be solved in polynomial time by purely combinatorial algorithms.
Abstract: We consider the problem of minimizing the total weighted completion time of a set of jobs with individual release dates which have to be scheduled on identical parallel machines. Job processing times are not known in advance; they are realized on-line according to given probability distributions. The aim is to find a scheduling policy that minimizes the objective in expectation. Motivated by the success of LP-based approaches to deterministic scheduling, we present a polyhedral relaxation of the performance space of stochastic parallel machine scheduling. This relaxation extends earlier relaxations that have been used, among others, by Hall et al. [1997] in the deterministic setting. We then derive constant performance guarantees for priority policies which are guided by optimum LP solutions, and thereby generalize previous results from deterministic scheduling. In the absence of release dates, the LP-based analysis also yields an additive performance guarantee for the WSEPT rule which implies both a worst-case performance ratio and a result on its asymptotic optimality, thus complementing previous work by Weiss [1990]. The corresponding LP lower bound generalizes a previous lower bound from deterministic scheduling due to Eastman et al. [1964], and exhibits a relation between parallel machine problems and corresponding problems with only one fast single machine. Finally, we show that all employed LPs can be solved in polynomial time by purely combinatorial algorithms.
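
The WSEPT rule analyzed here is easy to state: order jobs by weight over expected processing time, and whenever a machine falls idle start the highest-priority job not yet started, with processing times realized only as jobs run. A small simulation sketch without release dates; the job data and samplers are illustrative.

import heapq
import random

def wsept(jobs, m, seed=0):
    # jobs: list of (weight, expected_time, sampler). Returns the total
    # weighted completion time of one simulated run on m machines.
    rng = random.Random(seed)
    order = sorted(jobs, key=lambda j: -j[0] / j[1])   # WSEPT priority
    machines = [0.0] * m                  # next idle time of each machine
    heapq.heapify(machines)
    total = 0.0
    for w, _, sample in order:
        start = heapq.heappop(machines)
        finish = start + sample(rng)      # processing time realized on-line
        total += w * finish
        heapq.heappush(machines, finish)
    return total

jobs = [(3, 2.0, lambda r: r.expovariate(0.5)),   # exponential, mean 2
        (1, 1.0, lambda r: r.expovariate(1.0)),
        (2, 4.0, lambda r: r.uniform(2, 6))]
print(wsept(jobs, m=2))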

Journal ArticleDOI
TL;DR: An overview of a new recursive, divide-and-conquer algorithm for calculating the forward dynamics of a robot mechanism, or general rigid-body system, is presented, together with a detailed description of the simplest case: unbranched kinematic chains.
Abstract: This paper presents a recursive, divide-and-conquer algorithm for calculating the forward dynamics of a robot mechanism, or general rigid-body system, on a parallel computer. It features O(log(n)) time complexity on O(n) processors and is the fastest available algorithm for a computer with a large number of processors and low communications costs. It is an exact, noniterative algorithm and is applicable to mechanisms with any joint type and any topology, including branches and kinematic loops. The algorithm works by recursive application of a formula that constructs the articulated-body equations of motion of an assembly from those of its constituent parts. The inputs to this formula are the equations of motion of two independent subassemblies, plus a description of how they are to be connected, and the output is the equation of motion of the assembly. Starting with a collection of unconnected rigid bodies, the equations of motion of any rigid-body system can be constructed by repeated application of this formula.

Proceedings ArticleDOI
17 Oct 1999
TL;DR: This work studies the problems of makespan minimization (load balancing), knapsack, and bin packing when the jobs have stochastic processing requirements or sizes and obtains quasi-polynomial time approximation schemes for all three problems.
Abstract: We study the problems of makespan minimization (load balancing), knapsack, and bin packing when the jobs have stochastic processing requirements or sizes. If the jobs are all Poisson, we present a 2-approximation for the first problem using Graham's rule, and observe that polynomial time approximation schemes can be obtained for the last two problems. If the jobs are all exponential, we present polynomial time approximation schemes for all three problems. We also obtain quasi-polynomial time approximation schemes for the last two problems if the jobs are Bernoulli variables.
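
Graham's rule referenced above is plain list scheduling: give each job, in list order, to the currently least-loaded machine. A deterministic sketch (the paper's analysis replaces the fixed sizes below with random ones):

import heapq

def graham(sizes, m):
    # Greedy list scheduling; returns (makespan, per-machine assignments).
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    assign = [[] for _ in range(m)]
    for s in sizes:
        load, i = heapq.heappop(loads)    # least-loaded machine first
        assign[i].append(s)
        heapq.heappush(loads, (load + s, i))
    return max(l for l, _ in loads), assign

print(graham([7, 5, 4, 3, 3, 2], m=3))    # makespan 9; the optimum is also 9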

Book ChapterDOI
11 Jul 1999
TL;DR: This paper shows that the situation changes substantially when both relaxations are allowed at once: the problem not only becomes NP-hard, but its optimal cost version also has no approximation algorithm achieving an approximation ratio of N^{1-ε}, where N is the instance size, unless P=NP.
Abstract: The original stable marriage problem requires all men and women to submit a complete and strictly ordered preference list. This is obviously often unrealistic in practice, and several relaxations have been proposed, including the following two common ones: one is to allow an incomplete list, i.e., a man is permitted to accept only a subset of the women and vice versa. The other is to allow a preference list including ties. Fortunately, it is known that both relaxed problems can still be solved in polynomial time. In this paper, we show that the situation changes substantially if we allow both relaxations (incomplete lists and ties) at the same time: the problem not only becomes NP-hard, but the optimal cost version also has no approximation algorithm achieving an approximation ratio of N^{1-ε}, where N is the instance size, unless P=NP.
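
For reference, the tractable case with incomplete lists but no ties is handled by Gale-Shapley deferred acceptance: a proposer skips anyone who finds him unacceptable and may simply end up unmatched. The sketch below uses hypothetical names; it is the combination of incompleteness with ties that destroys this tractability.

def stable_match(men_prefs, women_prefs):
    # prefs: dict person -> strictly ordered, possibly incomplete list.
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_prop = {m: 0 for m in men_prefs}   # next list index to propose to
    fiance = {}                             # woman -> current partner
    free = list(men_prefs)
    while free:
        m = free.pop()
        prefs = men_prefs[m]
        while next_prop[m] < len(prefs):
            w = prefs[next_prop[m]]
            next_prop[m] += 1
            if m not in rank[w]:
                continue                    # w finds m unacceptable
            if w not in fiance:
                fiance[w] = m
                break
            if rank[w][m] < rank[w][fiance[w]]:
                free.append(fiance[w])      # w trades up; old partner freed
                fiance[w] = m
                break
        # if the list is exhausted, m stays unmatched
    return {m: w for w, m in fiance.items()}

men = {"A": ["x", "y"], "B": ["x"], "C": ["y", "x"]}
women = {"x": ["B", "A"], "y": ["A", "C"]}
print(stable_match(men, women))   # {'A': 'y', 'B': 'x'}; C stays unmatched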

Proceedings ArticleDOI
17 Oct 1999
TL;DR: These are the first approximation algorithms with guaranteed error bounds, and the first NP-completeness results, in the protein structure alignment/fold recognition literature for measures of structure similarity of practical interest.
Abstract: We show that calculating contact map overlap (a measure of similarity of protein structures) is NP-hard, but can be solved in polynomial time for several interesting and relevant special cases. We identify an important special case of this problem corresponding to self-avoiding walks, and prove a decomposition theorem and a corollary approximation result for this special case. These are the first approximation algorithms with guaranteed error bounds, and NP-completeness results in the literature in the area of protein structure alignment/fold recognition for measures of structure similarity of practical interest.

Journal ArticleDOI
TL;DR: A linear time algorithm is presented that, for a given graph G, either finds an embedding of G in S or identifies a subgraph of G homeomorphic to a minimal forbidden subgraph for embeddability in S; this yields a constructive proof of the result of Robertson and Seymour that for each closed surface there are only finitely many minimal forbidden subgraphs.
Abstract: For an arbitrary fixed surface S, a linear time algorithm is presented that for a given graph G either finds an embedding of G in S or identifies a subgraph of G that is homeomorphic to a minimal forbidden subgraph for embeddability in S. A side result of the proof of the algorithm is that minimal forbidden subgraphs for embeddability in S cannot be arbitrarily large. This yields a constructive proof of the result of Robertson and Seymour that for each closed surface there are only finitely many minimal forbidden subgraphs. The results and methods of this paper can be used to solve more general embedding extension problems.

Proceedings ArticleDOI
02 Jul 1999
TL;DR: A linear type system with recursion operators for inductive datatypes which ensures that all definable functions are polynomial time computable and improves upon previous such systems in that recursive definitions can be arbitrarily nested.
Abstract: We propose a linear type system with recursion operators for inductive datatypes which ensures that all definable functions are polynomial time computable. The system improves upon previous such systems in that recursive definitions can be arbitrarily nested, in particular no predicativity or modality restrictions are made.