
Showing papers on "Time complexity published in 2000"


Journal ArticleDOI
TL;DR: It is proved that this is also the case for graphs of clique-width at most k, where this complexity measure is associated with hierarchical decompositions of another type, and where logical formulas are no longer allowed to use edge set quantifications.
Abstract: Hierarchical decompositions of graphs are interesting for algorithmic purposes. There are several types of hierarchical decompositions. Tree decompositions are the best known ones. On graphs of tree-width at most k, i.e., that have tree decompositions of width at most k, where k is fixed, every decision or optimization problem expressible in monadic second-order logic has a linear-time algorithm. We prove that this is also the case for graphs of clique-width at most k, where this complexity measure is associated with hierarchical decompositions of another type, and where logical formulas are no longer allowed to use edge set quantifications. We develop applications to several classes of graphs that include cographs and are, like cographs, defined by forbidding subgraphs with "too many" induced paths with four vertices.

881 citations


Book ChapterDOI
14 May 2000
TL;DR: Strong evidence is provided that relinearization and XL can solve randomly generated systems of polynomial equations in subexponential time when m exceeds n by a number that increases slowly with n.
Abstract: The security of many recently proposed cryptosystems is based on the difficulty of solving large systems of quadratic multivariate polynomial equations. This problem is NP-hard over any field. When the number of equations m is the same as the number of unknowns n, the best known algorithms are exhaustive search for small fields, and a Gröbner basis algorithm for large fields. Gröbner basis algorithms have large exponential complexity and cannot solve in practice systems with n ≥ 15. Kipnis and Shamir [9] have recently introduced a new algorithm called "relinearization". The exact complexity of this algorithm is not known, but for sufficiently overdefined systems it was expected to run in polynomial time. In this paper we analyze the theoretical and practical aspects of relinearization. We ran a large number of experiments for various values of n and m, and analysed which systems of equations were actually solvable. We show that many of the equations generated by relinearization are linearly dependent, and thus relinearization is less efficient than one could expect. We then develop an improved algorithm called XL which is both simpler and more powerful than relinearization. For all 0 < ε ≤ 1/2 and m ≥ εn², XL and relinearization are expected to run in polynomial time of approximately n^(O(1/√ε)). Moreover, we provide strong evidence that relinearization and XL can solve randomly generated systems of polynomial equations in subexponential time when m exceeds n by a number that increases slowly with n.

872 citations


28 Jan 2000
TL;DR: In this article, a quantum algorithm for solving instances of the satisfiability problem, based on adiabatic evolution, is given; the evolution of the quantum state is governed by a time-dependent Hamiltonian that interpolates between an initial Hamiltonian, whose ground state is easy to construct, and a final Hamiltonian, whose ground state encodes the satisfying assignment.
Abstract: We give a quantum algorithm for solving instances of the satisfiability problem, based on adiabatic evolution. The evolution of the quantum state is governed by a time-dependent Hamiltonian that interpolates between an initial Hamiltonian, whose ground state is easy to construct, and a final Hamiltonian, whose ground state encodes the satisfying assignment. To ensure that the system evolves to the desired final ground state, the evolution time must be big enough. The time required depends on the minimum energy difference between the two lowest states of the interpolating Hamiltonian. We are unable to estimate this gap in general. We give some special symmetric cases of the satisfiability problem where the symmetry allows us to estimate the gap and we show that, in these cases, our algorithm runs in polynomial time.
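
A minimal numerical sketch of the idea (an illustration only, not the paper's analysis): build the interpolating Hamiltonian H(s) = (1 - s)H_B + s H_P for a toy instance and track the gap between its two lowest eigenvalues, which controls the required evolution time. The 3-qubit register and the cost function counting disagreements with a fixed target assignment are assumptions made for brevity; a real SAT instance would use a clause-violation count.

import numpy as np
import itertools

n = 3                                    # toy register size (an assumption)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X
I2 = np.eye(2)

def x_on(i):
    """sigma_x acting on qubit i of the n-qubit register."""
    ops = [sx if j == i else I2 for j in range(n)]
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

# Beginning Hamiltonian: sum_i (1 - sigma_x^i)/2, ground state = uniform superposition.
H_B = sum(0.5 * (np.eye(2 ** n) - x_on(i)) for i in range(n))

# Problem Hamiltonian (toy stand-in for a SAT cost function): diagonal operator
# counting disagreements with a unique "satisfying assignment".
target = (1, 0, 1)                       # hypothetical satisfying assignment
diag = [sum(b != t for b, t in zip(bits, target))
        for bits in itertools.product((0, 1), repeat=n)]
H_P = np.diag(np.array(diag, dtype=float))

# Sweep the interpolation parameter and record the spectral gap.
gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1.0 - s) * H_B + s * H_P)
    gaps.append(evals[1] - evals[0])
print("minimum gap along the adiabatic path: %.4f" % min(gaps))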

713 citations


Proceedings ArticleDOI
01 Jun 2000
TL;DR: An efficient, flexible, and effective data structure, B*-trees for non-slicing floorplans, is presented, based on ordered binary trees and the admissible placement presented in [1], together with a B*-tree based simulated annealing scheme for floorplan design.
Abstract: We present in this paper an efficient, flexible, and effective data structure, B*-trees, for non-slicing floorplans. B*-trees are based on ordered binary trees and the admissible placement presented in [1]. Inheriting the nice properties of ordered binary trees, B*-trees are very easy to implement and can perform the respective primitive tree operations search, insertion, and deletion in only O(1), O(1), and O(n) time, while existing representations for non-slicing floorplans need at least O(n) time for each of these operations, where n is the number of modules. The correspondence between an admissible placement and its induced B*-tree is 1-to-1 (i.e., no redundancy); further, the transformation between them takes only linear time. Unlike other representations for non-slicing floorplans that need to construct constraint graphs for cost evaluation, the evaluation can be performed on B*-trees and their corresponding placements directly and incrementally. We further show the flexibility of B*-trees by exploring how to handle rotated, pre-placed, soft, and rectilinear modules. Experimental results on MCNC benchmarks show that the B*-tree representation runs about 4.5 times faster, consumes about 60% less memory, and results in smaller silicon area than the O-tree one [1]. We also develop a B*-tree based simulated annealing scheme for floorplan design; the scheme achieves near optimum area utilization even for rectilinear modules.

506 citations


Journal ArticleDOI
13 Jan 2000-Nature
TL;DR: The immobilization and manipulation of combinatorial mixtures of DNA on a support is used to solve an NP-complete problem; a small example of the satisfiability problem (SAT) is considered, in which the values of a set of Boolean variables satisfying certain logical constraints are determined.
Abstract: DNA computing was proposed [1] as a means of solving a class of intractable computational problems in which the computing time can grow exponentially with problem size (the ‘NP-complete’ or non-deterministic polynomial time complete problems). The principle of the technique has been demonstrated experimentally for a simple example of the Hamiltonian path problem [2] (in this case, finding an airline flight path between several cities, such that each city is visited only once [3]). DNA computational approaches to the solution of other problems have also been investigated [4,5,6,7,8,9]. One technique [10,11,12,13] involves the immobilization and manipulation of combinatorial mixtures of DNA on a support. A set of DNA molecules encoding all candidate solutions to the computational problem of interest is synthesized and attached to the surface. Successive cycles of hybridization operations and exonuclease digestion are used to identify and eliminate those members of the set that are not solutions. Upon completion of all the multi-step cycles, the solution to the computational problem is identified using a polymerase chain reaction to amplify the remaining molecules, which are then hybridized to an addressed array. The advantages of this approach are its scalability and potential to be automated (the use of solid-phase formats simplifies the complex repetitive chemical processes, as has been demonstrated in DNA and protein synthesis [14]). Here we report the use of this method to solve an NP-complete problem. We consider a small example of the satisfiability problem (SAT) [2], in which the values of a set of Boolean variables satisfying certain logical constraints are determined.

505 citations


Proceedings ArticleDOI
01 Jul 2000
TL;DR: An algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory is presented, using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error.
Abstract: We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error. The use of quadrics instead of the vertex grading approach in [13] has the additional benefits of requiring less disk space and only a single pass over the model rather than two. The resulting linear time algorithm allows simplification of datasets of arbitrary complexity. In order to handle degenerate quadrics associated with (near) flat regions and regions with zero Gaussian curvature, we present a robust method for solving the corresponding underconstrained least-squares problem. The algorithm is able to detect these degeneracies and handle them gracefully. Key features of the simplification method include a bounded Hausdorff error, low mean geometric error, high simplification speed (up to 100,000 triangles/second reduction), output (but not input) sensitive memory requirements, no disk space overhead, and a running time that is independent of the order in which vertices and triangles occur in the mesh.
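
A hedged sketch of the quadric-based placement step: accumulate the fundamental error quadrics of a cluster's triangles and place the representative vertex by minimizing the summed quadric error, falling back to the cluster centroid along directions where the 3x3 system is rank deficient (the flat or zero-curvature case mentioned above). The truncated pseudo-inverse is a simple stand-in for the paper's constrained least-squares solver; the helper names and the rcond threshold are illustrative choices.

import numpy as np

def face_quadric(p0, p1, p2):
    """Fundamental error quadric of the plane through triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm == 0.0:
        return np.zeros((4, 4))         # degenerate triangle contributes nothing
    n = n / norm
    d = -np.dot(n, p0)
    plane = np.append(n, d)             # plane as (a, b, c, d) with unit normal
    return np.outer(plane, plane)

def representative_vertex(cluster_vertices, cluster_triangles, rcond=1e-3):
    """Place the cluster's vertex by minimizing the summed quadric error.

    Rank-deficient (flat or curvature-free) clusters leave some directions
    unconstrained; the truncated pseudo-inverse keeps those directions at the
    cluster centroid instead of letting the solution drift.
    """
    Q = sum(face_quadric(*tri) for tri in cluster_triangles)
    A, b = Q[:3, :3], -Q[:3, 3]
    c = np.mean(cluster_vertices, axis=0)            # fallback point
    # Solve A (v - c) = b - A c with small singular values truncated.
    return c + np.linalg.pinv(A, rcond=rcond) @ (b - A @ c)

# Toy usage: a flat square split into two triangles (a rank-deficient case).
verts = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
tris = [(verts[0], verts[1], verts[2]), (verts[0], verts[2], verts[3])]
print(representative_vertex(verts, tris))  # flat case: the vertex stays at the centroid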

348 citations


Journal ArticleDOI
TL;DR: A randomized algorithm with a polylogarithmic approximation guarantee for the group Steiner tree problem is given in this paper, running in time O(n^i k^(2i)) in the worst case.

345 citations


Journal ArticleDOI
TL;DR: An algorithm is proposed that uses an estimation of the joint distribution of promising solutions in order to generate new candidate solutions; it is able to solve all but one of the tested problems in linear or close to linear time with respect to the problem size.
Abstract: This paper proposes an algorithm that uses an estimation of the joint distribution of promising solutions in order to generate new candidate solutions. The algorithm is settled into the context of genetic and evolutionary computation and the algorithms based on the estimation of distributions. The proposed algorithm is called the Bayesian Optimization Algorithm (BOA). To estimate the distribution of promising solutions, the techniques for modeling multivariate data by Bayesian networks are used. The BOA identifies, reproduces, and mixes building blocks up to a specified order. It is independent of the ordering of the variables in strings representing the solutions. Moreover, prior information about the problem can be incorporated into the algorithm, but it is not essential. First experiments were done with additively decomposable problems with both nonoverlapping as well as overlapping building blocks. The proposed algorithm is able to solve all but one of the tested problems in linear or close to linear time with respect to the problem size. Except for the maximal order of interactions to be covered, the algorithm does not use any prior knowledge about the problem. The BOA represents a step toward alleviating the problem of identifying and mixing building blocks correctly to obtain good solutions for problems with very limited domain information.
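
A sketch of the select/model/sample loop that the BOA instantiates. For brevity the Bayesian-network model is replaced here by independent per-bit marginals (a UMDA-style simplification), and OneMax stands in for a real fitness function; BOA itself learns a network that captures interactions between variables.

import numpy as np

rng = np.random.default_rng(0)

def onemax(x):
    """Toy fitness function (an assumption): count of ones per individual."""
    return x.sum(axis=1)

def eda(fitness, n_bits=30, pop_size=100, n_select=50, generations=40):
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(generations):
        f = fitness(pop)
        # 1. Select promising solutions (truncation selection).
        promising = pop[np.argsort(f)[-n_select:]]
        # 2. Estimate a distribution over the promising solutions
        #    (independent marginals here; BOA would fit a Bayesian network).
        p = promising.mean(axis=0).clip(1.0 / n_bits, 1.0 - 1.0 / n_bits)
        # 3. Sample new candidate solutions from the model.
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    best = pop[np.argmax(fitness(pop))]
    return best, fitness(best[None])[0]

print(eda(onemax))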

343 citations


Proceedings ArticleDOI
05 Nov 2000
TL;DR: A corner block list, a new efficient topological representation for non-slicing floorplans, is proposed with applications to VLSI floorplan and building block placement; the experimental results demonstrate that the algorithm is quite promising.
Abstract: In this paper, a corner block list, a new efficient topological representation for non-slicing floorplans, is proposed with applications to VLSI floorplan and building block placement. Given a corner block list, it takes only linear time to construct the floorplan. Unlike the O-tree structure, which determines the exact floorplan based on given block sizes, the corner block list defines the floorplan independently of the block sizes. Thus, the structure is better suited for floorplan optimization with various size configurations of each block. Based on this new structure and the simulated annealing technique, an efficient floorplan algorithm is given. Soft blocks and the aspect ratio of the chip are taken into account in the simulated annealing process. The experimental results demonstrate that the algorithm is quite promising.

312 citations


Journal ArticleDOI
TL;DR: The first approximation scheme for maximum multicommodity flow that is independent of the number of commodities k is presented, and the algorithm improves upon the run time of previous algorithms by this factor of k, running in $\mathcal{O}^*(\epsilon^{-2}m^2)$ time.
Abstract: We describe fully polynomial time approximation schemes for various multicommodity flow problems in graphs with m edges and n vertices. We present the first approximation scheme for maximum multicommodity flow that is independent of the number of commodities k, and our algorithm improves upon the run time of previous algorithms by this factor of k, running in ${{\cal O}^*(\epsilon^{-2}m^2)}$ time. For maximum concurrent flow and minimum cost concurrent flow, we present algorithms that are faster than the current known algorithms when the graph is sparse or the number of commodities k is large, i.e., k > m/n. Our algorithms build on the framework proposed by Garg and Konemann [Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science, IEEE, New York, 1998, pp. 300--309]. They are simple, deterministic, and for the versions without costs, they are strongly polynomial. The approximation guarantees are obtained by comparison with dual feasible solutions found by our algorithm. Our maximum multicommodity flow algorithm extends to an approximation scheme for the maximum weighted multicommodity flow, which is faster than those implied by previous algorithms by a factor of k/log W, where W is the maximum weight of a commodity.

310 citations


Book ChapterDOI
15 Jul 2000
TL;DR: In this article, a discrete strategy improvement algorithm is given for constructing winning strategies in parity games, thereby also providing a new solution of the model-checking problem for the modal μ-calculus.
Abstract: A discrete strategy improvement algorithm is given for constructing winning strategies in parity games, thereby providing also a new solution of the model-checking problem for the modal μ-calculus. Known strategy improvement algorithms, as proposed for stochastic games by Hoffman and Karp in 1966, and for discounted payoff games and parity games by Puri in 1995, work with real numbers and require solving linear programming instances involving high precision arithmetic. In the present algorithm for parity games these difficulties are avoided by the use of discrete vertex valuations in which information about the relevance of vertices and certain distances is coded. An efficient implementation is given for a strategy improvement step. Another advantage of the present approach is that it provides a better conceptual understanding and easier analysis of strategy improvement algorithms for parity games. However, so far it is not known whether the present algorithm works in polynomial time. The long standing problem whether parity games can be solved in polynomial time remains open.

Journal ArticleDOI
TL;DR: This paper shows simple dynamic programming algorithms for RNA secondary structure prediction with pseudoknots that output a secondary structure in which the number of base pairs is at least 1 − ε of the optimal, where ε is any constant satisfying 0 < ε < 1.

Journal ArticleDOI
TL;DR: Significantly improving and extending recent results of Kleinberg, data structures are constructed whose size is polynomial in the size of the database, together with search algorithms that run in time nearly linear or nearly quadratic in the dimension.
Abstract: We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to search, given a query vector, for the closest or nearly closest vector in the database. We also address this problem when distances are measured by the L1 norm and in the Hamming cube. Significantly improving and extending recent results of Kleinberg, we construct data structures whose size is polynomial in the size of the database and search algorithms that run in time nearly linear or nearly quadratic in the dimension. (Depending on the case, the extra factors are polylogarithmic in the size of the database.)
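
An illustrative sketch only, not the data structure constructed in the paper: a naive random-projection index that ranks candidates by distance in a low-dimensional sketch space and then verifies the few best candidates exactly. It conveys the general trade of exactness for speed; the projection width k and the candidate count are arbitrary illustrative parameters.

import numpy as np

rng = np.random.default_rng(1)

class ProjectedIndex:
    def __init__(self, points, k=16):
        self.points = points
        d = points.shape[1]
        self.R = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection matrix
        self.sketches = points @ self.R                  # k-dimensional sketches

    def query(self, q, candidates=10):
        # Rank by distance in the projected space, then check the few best
        # candidates exactly in the original space.
        dq = np.linalg.norm(self.sketches - q @ self.R, axis=1)
        cand = np.argsort(dq)[:candidates]
        exact = np.linalg.norm(self.points[cand] - q, axis=1)
        return cand[np.argmin(exact)]

pts = rng.normal(size=(1000, 128))
idx = ProjectedIndex(pts)
q = pts[42] + 0.01 * rng.normal(size=128)
print(idx.query(q))   # very likely 42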

Book ChapterDOI
20 Sep 2000
TL;DR: The relationship between SPQR-trees and triconnected components is described, the incorrectness of the Hopcroft and Tarjan algorithm is shown, and the resulting algorithm is applied to the computation of SPQR-trees.
Abstract: The data structure SPQR-tree represents the decomposition of a biconnected graph with respect to its triconnected components. SPQR-trees have been introduced by Di Battista and Tamassia [8] and, since then, became quite important in the field of graph algorithms. Theoretical papers using SPQR-trees claim that they can be implemented in linear time using a modification of the algorithm by Hopcroft and Tarjan [15] for decomposing a graph into its triconnected components. So far no correct linear time implementation of either triconnectivity decomposition or SPQR-trees is known to us. Here, we show the incorrectness of the Hopcroft and Tarjan algorithm [15], and correct the faulty parts. We describe the relationship between SPQR-trees and triconnected components and apply the resulting algorithm to the computation of SPQR-trees. Our implementation is publicly available in AGD [1].

Journal ArticleDOI
11 Dec 2000
TL;DR: It is proved that any class learnable in the statistical query learning model is learnable from positive statistical queries and instance statistical queries only if a lower bound on the weight of any target concept f can be estimated in polynomial time.
Abstract: In many machine learning settings, labeled examples are difficult to collect while unlabeled data are abundant. Also, for some binary classification problems, positive examples which are elements of the target concept are available. Can these additional data be used to improve accuracy of supervised learning algorithms? We investigate in this paper the design of learning algorithms from positive and unlabeled data only. Many machine learning and data mining algorithms, such as decision tree induction algorithms and naive Bayes algorithms, use examples only to evaluate statistical queries (SQ-like algorithms). Kearns designed the statistical query learning model in order to describe these algorithms. Here, we design an algorithm scheme which transforms any SQ-like algorithm into an algorithm based on positive statistical queries (estimate for probabilities over the set of positive instances) and instance statistical queries (estimate for probabilities over the instance space). We prove that any class learnable in the statistical query learning model is learnable from positive statistical queries and instance statistical queries only if a lower bound on the weight of any target concept f can be estimated in polynomial time. Then, we design a decision tree induction algorithm POSC4.5, based on C4.5, that uses only positive and unlabeled examples and we give experimental results for this algorithm. In the case of imbalanced classes in the sense that one of the two classes (say the positive class) is heavily underrepresented compared to the other class, the learning problem remains open. This problem is challenging because it is encountered in many real-world applications.

Book ChapterDOI
01 Jan 2000
TL;DR: A new general algorithm for computing distance transforms of digital images is presented, which can be used for the computation of the exact Euclidean, Manhattan, and chessboard distance transforms.
Abstract: A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the computation per row (column) is independent of the computation of other rows (columns), the algorithm can be easily parallelized on shared memory computers. The algorithm can be used for the computation of the exact Euclidean, Manhattan (L1 norm), and chessboard (L∞ norm) distance transforms.
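
A sketch of the two-phase scan structure for the Manhattan (L1) case, using the fact that the L1 transform separates into exact 1D forward/backward scans. The paper's exact Euclidean variant keeps the same column-then-row organization but replaces the simple row scans of phase 2 with a lower-envelope computation.

import numpy as np

def l1_distance_transform(binary):
    """binary: 2D array, nonzero marks feature pixels; returns L1 distances."""
    INF = binary.shape[0] + binary.shape[1]
    d = np.where(binary > 0, 0, INF).astype(np.int64)
    rows, cols = d.shape

    # Phase 1: column-wise forward and backward scans.
    for x in range(cols):
        for y in range(1, rows):
            d[y, x] = min(d[y, x], d[y - 1, x] + 1)
        for y in range(rows - 2, -1, -1):
            d[y, x] = min(d[y, x], d[y + 1, x] + 1)

    # Phase 2: row-wise forward and backward scans.
    for y in range(rows):
        for x in range(1, cols):
            d[y, x] = min(d[y, x], d[y, x - 1] + 1)
        for x in range(cols - 2, -1, -1):
            d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d

img = np.zeros((5, 7), dtype=int)
img[2, 3] = 1                      # single feature pixel
print(l1_distance_transform(img))  # L1 distance of every pixel to the feature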

Book ChapterDOI
18 Apr 2000
TL;DR: A novel dimensionality reduction technique is introduced that supports an indexing algorithm that is more than an order of magnitude faster than the previous best known method and has numerous other advantages.
Abstract: We address the problem of similarity search in large time series databases. We introduce a novel dimensionality reduction technique that supports an indexing algorithm that is more than an order of magnitude faster than the previous best known method. In addition to being much faster, our approach has numerous other advantages. It is simple to understand and implement, allows more flexible distance measures including weighted Euclidean queries, and the index can be built in linear time. We call our approach PCA-indexing (Piece-wise Constant Approximation) and experimentally validate it on space telemetry, financial, astronomical, medical and synthetic data.
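
A minimal sketch of the piecewise constant reduction and of a lower-bounding distance that makes an index over the reduced vectors admissible (no false dismissals). The assumption that the series length is divisible by the number of frames is for brevity only.

import numpy as np

def pca_reduce(series, w):
    """Split a length-n series into w equal frames and keep each frame's mean."""
    series = np.asarray(series, dtype=float)
    return series.reshape(w, -1).mean(axis=1)

def index_distance(a_reduced, b_reduced, n, w):
    """Lower bound on the Euclidean distance between the original series."""
    return np.sqrt(n / w) * np.linalg.norm(a_reduced - b_reduced)

n, w = 128, 8
t = np.linspace(0, 4 * np.pi, n)
a, b = np.sin(t), np.sin(t + 0.5)
ra, rb = pca_reduce(a, w), pca_reduce(b, w)
print(index_distance(ra, rb, n, w), np.linalg.norm(a - b))  # first value <= second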

Book ChapterDOI
09 Jul 2000
TL;DR: The reductions used to prove that Max Clique cannot be approximated in polynomial time within n^(1-ε), for any constant ε > 0, unless NP = ZPP are extended and combined with a recent result of Samorodnitsky and Trevisan.
Abstract: It was previously known that Max Clique cannot be approximated in polynomial time within n^(1-ε), for any constant ε > 0, unless NP = ZPP. In this paper, we extend the reductions used to prove this result and combine the extended reductions with a recent result of Samorodnitsky and Trevisan to show that clique cannot be approximated within n^(1-O(1/√(log log n))) unless NP ⊆ ZPTIME(2^(O(log n (log log n)^(3/2)))).

Journal ArticleDOI
TL;DR: These characterizations provide a natural and uniform approach to fully polynomial time approximation schemes and illustrate their strength and generality by deducing from them the existence of FPTASs for a multitude of scheduling problems.
Abstract: We derive results of the following flavor: If a combinatorial optimization problem can be formulated via a dynamic program of a certain structure and if the involved cost and transition functions satisfy certain arithmetical and structural conditions, then the optimization problem automatically possesses a fully polynomial time approximation scheme (FPTAS). Our characterizations provide a natural and uniform approach to fully polynomial time approximation schemes. We illustrate their strength and generality by deducing from them the existence of FPTASs for a multitude of scheduling problems. Many known approximability results follow as corollaries from our main result.
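
As a concrete illustration of the dynamic-programming-plus-rounding phenomenon the paper systematizes, here is the classical FPTAS for 0/1 knapsack obtained by scaling profits and running an exact DP over the scaled values. This is a standard textbook instance, not the paper's general construction.

def knapsack_fptas(profits, weights, capacity, eps):
    """Return a value within a (1 - eps) factor of the optimal knapsack profit."""
    n = len(profits)
    pmax = max(profits)
    K = eps * pmax / n                    # profit scaling factor (eps > 0 assumed)
    scaled = [int(p // K) for p in profits]

    # dp[v] = minimum weight needed to reach scaled profit exactly v.
    INF = float("inf")
    dp = [0.0] + [INF] * sum(scaled)
    for p, w in zip(scaled, weights):
        for v in range(len(dp) - 1, p - 1, -1):
            if dp[v - p] + w < dp[v]:
                dp[v] = dp[v - p] + w

    best_scaled = max(v for v in range(len(dp)) if dp[v] <= capacity)
    return best_scaled * K                # >= (1 - eps) * OPT by the standard argument

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, eps=0.1))  # prints 220.0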

Book ChapterDOI
20 Sep 2000
TL;DR: A fast algorithm (FADE) for the 2D drawing, geometric clustering and multilevel viewing of large undirected graphs is presented; the decomposition tree provides a mechanism to view the hierarchical clustering on various levels of abstraction.
Abstract: A fast algorithm (FADE) for the 2D drawing, geometric clustering and multilevel viewing of large undirected graphs is presented. The algorithm is an extension of the Barnes-Hut hierarchical space decomposition method, which includes edges and multilevel visual abstraction. Compared to the original force directed algorithm, the time overhead is O(e + n log n) where n and e are the numbers of nodes and edges. The improvement is possible since the decomposition tree provides a systematic way to determine the degree of closeness between nodes without explicitly calculating the distance between each node. Different types of regular decomposition trees are introduced. The decomposition tree also represents a hierarchical clustering of the nodes, which improves in a graph theoretic sense as the graph drawing approaches a lower energy state. Finally, the decomposition tree provides a mechanism to view the hierarchical clustering on various levels of abstraction. Larger graphs can be represented more concisely, on a higher level of abstraction, with fewer graphics on screen.

Journal ArticleDOI
TL;DR: This paper presents a lower bound of $\Omega(D+\sqrt n/\log n)$ on the time required for the distributed construction of a minimum-weight spanning tree (MST) in weighted n-vertex networks of diameter $D=\Omega(\log n)$ in the bounded message model.
Abstract: This paper presents a lower bound of $\Omega(D+\sqrt n/\log n)$ on the time required for the distributed construction of a minimum-weight spanning tree (MST) in weighted n-vertex networks of diameter $D=\Omega(\log n)$, in the bounded message model. This establishes the asymptotic near-optimality of existing time-efficient distributed algorithms for the problem, whose complexity is $O(D + \sqrt n \log^* n)$.

Proceedings Article
30 Jul 2000
TL;DR: A new extension of the Davis-Putnam procedure, based on recursively identifying connected constraint-graph components, is presented that substantially improves counting performance on random 3-SAT instances as well as benchmark instances from the SATLIB and Beijing suites.
Abstract: Recent work by Birnbaum & Lozinskii [1999] demonstrated that a clever yet simple extension of the well-known Davis-Putnam procedure for solving instances of propositional satisfiability yields an efficient scheme for counting the number of satisfying assignments (models). We present a new extension, based on recursively identifying connected constraint-graph components, that substantially improves counting performance on random 3-SAT instances as well as benchmark instances from the SATLIB and Beijing suites. In addition, from a structure-based perspective of worst-case complexity, while polynomial time satisfiability checking is known to require only a backtrack search algorithm enhanced with nogood learning, we show that polynomial time counting using backtrack search requires an additional enhancement: good learning.
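
A sketch of the component idea: when the constraint graph of a CNF formula splits into connected components, the model count is the product of the per-component counts. Brute force is used per component to keep the sketch short; the paper plugs the decomposition recursively into a Davis-Putnam style backtracking counter.

from itertools import product

def count_models(clauses, variables):
    """clauses: list of tuples of nonzero ints (DIMACS-style literals)."""
    # Union-find over variables gives the connected components of the constraint graph.
    parent = {v: v for v in variables}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for clause in clauses:
        vs = [abs(lit) for lit in clause]
        for v in vs[1:]:
            parent[find(v)] = find(vs[0])

    components = {}
    for v in variables:
        components.setdefault(find(v), []).append(v)

    total = 1
    for comp in components.values():
        comp_set = set(comp)
        comp_clauses = [c for c in clauses if abs(c[0]) in comp_set]
        count = 0
        for bits in product((False, True), repeat=len(comp)):
            assignment = dict(zip(comp, bits))
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in c) for c in comp_clauses):
                count += 1
        total *= count                      # components are independent
    return total

# (x1 or x2) and (not x1 or x2), with x3 unconstrained: 2 * 2 = 4 models.
print(count_models([(1, 2), (-1, 2)], [1, 2, 3]))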

Journal ArticleDOI
TL;DR: It is shown that the test scheduling decision problem is equivalent to the m-processor open shop scheduling problem and is therefore NP-complete; however, a commonly encountered instance of this problem (m=2) can be solved in polynomial time.
Abstract: We present optimal solutions to the test scheduling problem for core-based systems. Given a set of tasks (test sets for the cores), a set of test resources (e.g., test buses, BIST hardware) and a test access architecture, we determine start times for the tasks such that the total test application time is minimized. We show that the test scheduling decision problem is equivalent to the m-processor open shop scheduling problem and is therefore NP-complete. However, a commonly encountered instance of this problem (m=2) can be solved in polynomial time. For the general case (m>2), we present a mixed-integer linear programming (MILP) model for optimal scheduling and apply it to a representative core-based system using an MILP solver available in the public domain. We also extend the MILP model to allow optimal test set selection from a set of alternatives. Finally, we present an efficient heuristic algorithm for handling larger systems for which the MILP model may be infeasible.

Journal ArticleDOI
TL;DR: It is shown that error estimates reported previously are not entirely satisfactory; sharper and more precise estimates are provided, together with a rigorous analysis of the complexity (O(n log n)) of the algorithm.
Abstract: This paper is concerned with the application of the fast multipole method (FMM) to the Maxwell equations. This application differs in many aspects from other applications such as the N-body problem, Laplace equation, and quantum chemistry, etc. The FMM leads to a significant speed-up in CPU time with a major reduction in the amount of computer memory needed when performing matrix-vector products. This leads to faster resolution of scattering of harmonic plane waves from perfectly conducting obstacles. Emphasis here is on a rigorous mathematical approach to the problem. We focus on the estimation of the error introduced by the FMM and a rigorous analysis of the complexity (O(n log n)) of the algorithm. We show that error estimates reported previously are not entirely satisfactory and provide sharper and more precise estimates.

Book ChapterDOI
13 Dec 2000
TL;DR: In this paper, it is shown that (if P ≠ NP) deterministic P-systems without membrane division are not able to solve NP-complete problems such as satisfiability and the Hamiltonian path problem in polynomial time, while P-systems with only division for elementary membranes suffice to solve these problems in linear time with respect to the input length.
Abstract: A recently introduced variant of P-systems considers membranes which can multiply by division. These systems use two types of division: division for elementary membranes (i.e., membranes not containing other membranes inside) and division for non-elementary membranes. In two recent papers it is shown how to solve the Satisfiability problem and the Hamiltonian Path problem (two well-known NP-complete problems) in linear time with respect to the input length, using both types of division. We show in this paper that P-systems with only division for elementary membranes suffice to solve these two problems in linear time. Is it possible to solve NP-complete problems in polynomial time using P-systems without membrane division? We show, moreover, that (if P ≠ NP) deterministic P-systems without membrane division are not able to solve NP-complete problems in polynomial time.

Journal ArticleDOI
TL;DR: This work analyzes the computational complexity of evaluating policies and of determining whether a sufficiently good policy exists for a Markov decision process, based on a number of confounding factors, including the observability of the system state, the succinctness of the representation, the type of policy, and even the number of actions relative to the number of states.
Abstract: Controlled stochastic systems occur in science, engineering, manufacturing, social sciences, and many other contexts. If the system is modeled as a Markov decision process (MDP) and will run ad infinitum, the optimal control policy can be computed in polynomial time using linear programming. The problems considered here assume that the time that the process will run is finite and based on the size of the input. There are many factors that compound the complexity of computing the optimal policy. For instance, if the controller does not have complete information about the state of the system, or if the system is represented in some very succinct manner, the optimal policy is provably not computable in time polynomial in the size of the input. We analyze the computational complexity of evaluating policies and of determining whether a sufficiently good policy exists for an MDP, based on a number of confounding factors, including the observability of the system state, the succinctness of the representation, the type of policy, and even the number of actions relative to the number of states. In almost every case, we show that the decision problem is complete for some known complexity class. Some of these results are familiar from work by Papadimitriou and Tsitsiklis and others, but some, such as our PL-completeness proofs, are surprising. We include proofs of completeness for natural problems in the as yet little-studied class NP^PP.
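
A small illustration of the tractable baseline case: evaluating a fixed policy on a fully observable, flatly represented finite-horizon MDP is a polynomial-time backward induction. The transition and reward numbers below are made up for the example; the paper's hardness results concern partially observable or succinctly represented variants.

import numpy as np

n_states, n_actions, horizon = 3, 2, 5
rng = np.random.default_rng(0)

# P[a, s, s'] = transition probability, R[s, a] = immediate reward (toy data).
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))
policy = np.array([0, 1, 0])            # action chosen in each state

V = np.zeros(n_states)                  # value with 0 steps to go
for _ in range(horizon):
    V = np.array([R[s, policy[s]] + P[policy[s], s] @ V for s in range(n_states)])
print("expected return of the policy from each start state:", V)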

Journal ArticleDOI
TL;DR: It is shown how the concept of work functions, used previously mostly for the analysis of deterministic algorithms, can be applied, in a systematic fashion, to the randomized case, and a new H_k-competitive algorithm for paging is provided.

Journal ArticleDOI
01 Jun 2000
TL;DR: A randomized algorithm is presented that runs in nearly linear time and outputs a linear arrangement whose bandwidth is within a polylogarithmic multiplicative factor of optimal; it is based on a new notion called volume respecting embeddings.
Abstract: A linear arrangement of an n-vertex graph is a one-to-one mapping of its vertices to the integers {1, ..., n}. The bandwidth of a linear arrangement is the maximum difference between mapped values of adjacent vertices. The problem of finding a linear arrangement with smallest possible bandwidth is NP-hard. We present a randomized algorithm that runs in nearly linear time and outputs a linear arrangement whose bandwidth is within a polylogarithmic multiplicative factor of optimal. Our algorithm is based on a new notion, called volume respecting embeddings, which is a natural extension of small distortion embeddings of Bourgain and of Linial, London and Rabinovich.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the problem of finding a language-constrained simple path between a source and a given destination is NP-hard, even when the language L is restricted to fixed simple regular languages and to very simple classes of graphs.
Abstract: Given an alphabet $\Sigma$, a (directed) graph G whose edges are weighted and $\Sigma$-labeled, and a formal language $L\subseteq\Sigma^*$, the formal-language-constrained shortest/simple path problem consists of finding a shortest (simple) path p in G complying with the additional constraint that $l(p) \in L$. Here l(p) denotes the unique word obtained by concatenating the $\Sigma$-labels of the edges along the path p. The main contributions of this paper include the following: We show that the formal-language-constrained shortest path problem is solvable efficiently in polynomial time when L is restricted to be a context-free language (CFL). When L is specified as a regular language we provide algorithms with improved space and time bounds. In contrast, we show that the problem of finding a simple path between a source and a given destination is NP-hard, even when L is restricted to fixed simple regular languages and to very simple classes of graphs (e.g., complete grids). For the class of treewidth-bounded graphs, we show that (i) the problem of finding a regular-language-constrained simple path between source and destination is solvable in polynomial time and (ii) the extension to finding CFL-constrained simple paths is NP-complete. Our results extend the previous results in [SIAM J. Comput., 24 (1995), pp. 1235--1258; Proceedings of the 76th Annual Meeting of the Transportation Research Board, 1997; and Proceedings of the 9th ACM SIGACT-SIGMOD-SIGART Symposium on Database Systems, 1990, pp. 230--242]. Several additional extensions and applications of our results in the context of transportation problems are presented. For instance, as a corollary of our results, we obtain a polynomial-time algorithm for the best k-similar path problem studied in [Proceedings of the 76th Annual Meeting of the Transportation Research Board, 1997]. The previous best algorithm was given by [Proceedings of the 76th Annual Meeting of the Transportation Research Board, 1997] and takes exponential time in the worst case.
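
A hedged sketch of the basic polynomial-time construction for the regular-language case: run Dijkstra on the product of the graph with a deterministic automaton for L, so that a search state is a (vertex, automaton-state) pair. The paper develops algorithms with sharper space and time bounds; the toy graph, labels, and automaton below are invented for illustration.

import heapq

def constrained_shortest_path(edges, dfa_delta, dfa_start, dfa_accept, source, target):
    """edges: dict u -> list of (v, label, weight); dfa_delta: (state, label) -> state."""
    dist = {(source, dfa_start): 0.0}
    heap = [(0.0, source, dfa_start)]
    while heap:
        d, u, q = heapq.heappop(heap)
        if d > dist.get((u, q), float("inf")):
            continue                           # stale queue entry
        if u == target and q in dfa_accept:
            return d                           # first accepting pop is optimal
        for v, label, w in edges.get(u, []):
            q2 = dfa_delta.get((q, label))
            if q2 is None:
                continue                       # this edge label would leave the language
            nd = d + w
            if nd < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = nd
                heapq.heappush(heap, (nd, v, q2))
    return None

# Toy instance: edge labels 'a'/'b', L = a*b (any number of a-edges, then one b-edge).
edges = {"s": [("x", "a", 1.0), ("t", "b", 10.0)],
         "x": [("x", "a", 1.0), ("t", "b", 2.0)]}
dfa_delta = {(0, "a"): 0, (0, "b"): 1}
print(constrained_shortest_path(edges, dfa_delta, 0, {1}, "s", "t"))  # 3.0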

Journal ArticleDOI
25 Jun 2000
TL;DR: In this paper, it was shown that the minimum distance d of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless NP = RP.
Abstract: We show that the minimum distance d of a linear code is not approximable to within any constant factor in random polynomial time (RP), unless nondeterministic polynomial time (NP) equals RP. We also show that the minimum distance is not approximable to within an additive error that is linear in the block length n of the code. Under the stronger assumption that NP is not contained in random quasi-polynomial time (RQP), we show that the minimum distance is not approximable to within the factor 2^(log^(1-ε) n), for any ε > 0. Our results hold for codes over any finite field, including binary codes. In the process, we show that it is hard to find approximately nearest codewords even if the number of errors exceeds the unique decoding radius d/2 by only an arbitrarily small fraction εd. We also prove the hardness of the nearest codeword problem for asymptotically good codes, provided the number of errors exceeds (2/3)d. Our results for the minimum distance problem strengthen (though using stronger assumptions) a previous result of Vardy (1997) who showed that the minimum distance cannot be computed exactly in deterministic polynomial time (P), unless P = NP. Our results are obtained by adapting proofs of analogous results for integer lattices due to Ajtai (1998) and Micciancio (see SIAM J. Computing, vol. 30, no. 6, p. 2008-2035, 2001). A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.