
Showing papers in "Algorithmica in 1999"


Journal ArticleDOI
Kevin Atteson
TL;DR: An upper bound on the amount of data necessary to reconstruct the topology with high confidence is demonstrated by finding conditions under which these methods will determine the correct tree topology and showing that these perform as well as possible in a certain sense.
Abstract: We analyze the performance of the popular class of neighbor-joining methods of phylogeny reconstruction. In particular, we find conditions under which these methods will determine the correct tree topology and show that they perform as well as possible in a certain sense. We also give indications of the performance of these methods when the conditions necessary to show that they determine the entire tree topology correctly do not hold. We use these results to demonstrate an upper bound on the amount of data necessary to reconstruct the topology with high confidence.
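
For concreteness, one agglomeration step of standard neighbor joining picks the pair of taxa minimizing the Saitou–Nei Q-criterion; a minimal sketch (illustrative of the class of methods analyzed, not of the paper's proofs):

```python
import numpy as np

def nj_pick_pair(D):
    """One neighbor-joining selection step: return the pair (i, j)
    minimizing Q(i, j) = (n - 2) * D[i, j] - r[i] - r[j], where r[i]
    is the i-th row sum of the estimated distance matrix D."""
    n = D.shape[0]
    r = D.sum(axis=1)
    best, pair = float("inf"), None
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * D[i, j] - r[i] - r[j]
            if q < best:
                best, pair = q, (i, j)
    return pair
```

Atteson's conditions bound how far the estimated distances may deviate from the true tree distances while this selection still recovers the correct topology.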

220 citations


Journal ArticleDOI
TL;DR: The algorithm is based on the simulation of a nondeterministic finite automaton built from the pattern, using the text as input, and it is shown that the algorithms are among the fastest for typical text searching, being the fastest in some cases.
Abstract: We present a new algorithm for on-line approximate string matching. The algorithm is based on the simulation of a nondeterministic finite automaton built from the pattern and using the text as input. This simulation uses bit operations on a RAM machine with word length w = Ω(log n) bits, where n is the text size. This is essentially similar to the model used in Wu and Manber's work, although we improve the search time by packing the automaton states differently. The running time achieved is O(n) for small patterns (i.e., whenever mk = O(log n)), where m is the pattern length and k is the number of allowed errors.
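
For background, a minimal sketch of the Wu–Manber-style bit-parallel simulation that this algorithm improves upon (the paper's different state packing is not reproduced here); bit i of R[j] is set iff pattern[:i+1] matches a suffix of the text read so far with at most j edit errors:

```python
def approx_search(pattern, text, k):
    """Report end positions in text where pattern matches with <= k
    edit errors, via bit-parallel simulation of the k-error NFA."""
    m = len(pattern)
    B = {}                                    # per-character match masks
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    R = [(1 << j) - 1 for j in range(k + 1)]  # j deletions match a length-j prefix
    hits = []
    for pos, c in enumerate(text):
        mask = B.get(c, 0)
        old = R[0]                            # R[j-1] before this step
        R[0] = ((old << 1) | 1) & mask
        for j in range(1, k + 1):
            prev = R[j]
            # match | substitution | deletion | insertion
            R[j] = (((prev << 1) | 1) & mask) | ((old | R[j - 1]) << 1) | old | 1
            old = prev
        if R[k] & (1 << (m - 1)):
            hits.append(pos)
    return hits

print(approx_search("abc", "xxabxcxx", 1))    # -> [3, 4, 5]
```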

175 citations


Journal ArticleDOI
TL;DR: The design and implementation of a practical parallel algorithm for Delaunay triangulation that works well on general distributions, achieves significantly better speedups over good sequential code, does not assume a uniform distribution of points, and is widely portable due to its use of MPI as a communication mechanism.
Abstract: This paper describes the design and implementation of a practical parallel algorithm for Delaunay triangulation that works well on general distributions. Although there have been many theoretical parallel algorithms for the problem, and some implementations based on bucketing that work well for uniform distributions, there has been little work on implementations for general distributions. We use the well-known reduction of 2D Delaunay triangulation to finding the 3D convex hull of points on a paraboloid. Based on this reduction we developed a variant of the Edelsbrunner and Shi 3D convex hull algorithm, specialized for the case when the point set lies on a paraboloid. This simplification reduces the work required by the algorithm (number of operations) from O(n log² n) to O(n log n). The depth (parallel time) is O(log³ n) on a CREW PRAM. The algorithm is simpler than previous O(n log n)-work parallel algorithms, leading to smaller constants. Initial experiments using a variety of distributions showed that our parallel algorithm was within a factor of 2 in work from the best sequential algorithm. Based on these promising results, the algorithm was implemented using C and an MPI-based toolkit. Compared with previous work, the resulting implementation achieves significantly better speedups over good sequential code, does not assume a uniform distribution of points, and is widely portable due to its use of MPI as a communication mechanism. Results are presented for the IBM SP2, Cray T3D, SGI Power Challenge, and DEC AlphaCluster.
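
The lifting reduction at the core of the algorithm is easy to demonstrate sequentially; a sketch using SciPy's convex hull (illustrative only, not the parallel implementation described above):

```python
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_via_lifting(pts2d):
    """Delaunay triangulation via the classical lifting map: lift each
    point (x, y) to (x, y, x^2 + y^2) on the paraboloid; the downward-
    facing facets of the 3D convex hull project to Delaunay triangles."""
    pts = np.asarray(pts2d, dtype=float)
    lifted = np.c_[pts, (pts ** 2).sum(axis=1)]
    hull = ConvexHull(lifted)
    lower = hull.equations[:, 2] < 0   # facets whose outward normal points down
    return hull.simplices[lower]

triangles = delaunay_via_lifting(np.random.rand(20, 2))
```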

103 citations


Journal ArticleDOI
TL;DR: The fastest known deterministic algorithm for this problem runs in time O(n² log n + b₀ min(n², m log n)), where b₀ is the number of batches which do not result in a new component, as discussed by the authors.
Abstract: We are given a set 𝒯 = {T₁, T₂, ..., Tₖ} of rooted binary trees, each Tᵢ leaf-labeled by a subset L[Tᵢ] ⊆ {1, 2, ..., n}. If T is a tree on {1, 2, ..., n}, we let T|L denote the minimal subtree of T induced by the nodes of L and all their ancestors. The consensus tree problem asks whether there exists a tree T* such that, for every i, T*|L[Tᵢ] is homeomorphic to Tᵢ. We present algorithms which test if a given set of trees has a consensus tree and, if so, construct one. The deterministic algorithm takes time min(O(N√n), O(N + n² log n)), where N = Σᵢ |Tᵢ|, and uses linear space. The randomized algorithm takes time O(N log³ n) and uses linear space. The previous best for this problem was a 1981 O(Nn) algorithm by Aho et al. Our faster deterministic algorithm uses a new efficient algorithm for the following interesting dynamic graph problem: given a graph G with n nodes and m edges and a sequence of b batches of one or more edge deletions, then, after each batch, either find a new component that has just been created or determine that there is no such component. For this problem, we have a simple algorithm with running time O(n² log n + b₀ min(n², m log n)), where b₀ is the number of batches which do not result in a new component. For our particular application, b₀ ≤ 1. If all edges are deleted, then the best previously known deterministic algorithm requires time O(m√n) to solve this problem. We also present two applications of these consensus tree algorithms which solve other problems in computational evolutionary biology.
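
The 1981 algorithm of Aho et al. cited above is the classical BUILD procedure; a sketch of it for the special case of rooted triples ab|c, meaning a and b are closer to each other than either is to c (the encoding and names are illustrative assumptions):

```python
def build(leaves, triples):
    """Return a rooted consensus tree as nested tuples consistent with
    every triple (a, b, c) read as ab|c, or None if none exists."""
    if len(leaves) == 1:
        return next(iter(leaves))
    parent = {x: x for x in leaves}          # union-find over current leaves
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b, c in triples:                  # a, b must share a proper subtree
        if a in parent and b in parent and c in parent:
            parent[find(a)] = find(b)
    comps = {}
    for x in leaves:
        comps.setdefault(find(x), set()).add(x)
    if len(comps) == 1:
        return None                          # cannot split the leaves: inconsistent
    children = []
    for comp in comps.values():
        sub = build(comp, [t for t in triples if all(x in comp for x in t)])
        if sub is None:
            return None
        children.append(sub)
    return tuple(children)

print(build({"a", "b", "c", "d"}, [("a", "b", "c"), ("c", "d", "a")]))
```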

102 citations


Journal ArticleDOI
TL;DR: This paper proposes a complete array of computer algebra and symbolic computational geometry methods for modeling the rigidity constraints, formulating the problems in algebraic terms and, lastly, visualizing the computed conformations.
Abstract: A relatively new branch of computational biology has been emerging as an effort to supplement traditional techniques of large scale search in drug design by structure-based methods, in order to improve efficiency and guarantee completeness. This paper studies the geometric structure of cyclic molecules, in particular the enumeration of all possible conformations, which is crucial in finding the energetically favorable geometries, and the identification of all degenerate conformations. Recent advances in computational algebra are exploited, including distance geometry, sparse polynomial theory, and matrix methods for numerically solving nonlinear multivariate polynomial systems. Moreover, we propose a complete array of computer algebra and symbolic computational geometry methods for modeling the rigidity constraints, formulating the problems in algebraic terms and, lastly, visualizing the computed conformations. The use of computer algebra systems and of public domain software is illustrated, in addition to more specialized programs developed by the authors, which are also freely available. Throughout our discussion, we show the relevance of successful paradigms and algorithms from geometry and robot kinematics to computational biology.

89 citations


Journal ArticleDOI
TL;DR: An O(mχ(G) + n log n) time ε-approximate algorithm (ε < 2) is presented to solve the following NP-complete problem: given an interval graph G, find a node p-coloring of G whose cost is minimal.
Abstract: In this paper we study the following NP-complete problem: given an interval graph G = (V,E), find a node p-coloring $\langle V_1, V_2, \ldots, V_p\rangle$ such that the cost $\xi(\langle V_1, V_2, \ldots, V_p \rangle) = \sum^p_{i=1} i|V_i|$ is minimal, where $\langle V_1, V_2, \ldots, V_p \rangle$ denotes a partition of V whose subsets are ordered by nonincreasing cardinality. We present an O(mχ(G) + n log n) time ε-approximate algorithm (ε < 2) to solve the problem, where n, m, and χ(G) are the number of nodes of the interval graph, its number of cliques, and its chromatic number, respectively. The algorithm is shown to solve the problem exactly on some classes of interval graphs, namely, the proper and the containment interval graphs, and the intersection graphs of sets of ``short'' intervals. The problem of determining the minimum number of colors needed to achieve the minimum $\xi(\langle V_1, V_2, \ldots, V_p \rangle)$ over all p-colorings of G is also addressed.

80 citations


Journal ArticleDOI
TL;DR: The BSPRAM model is used to simplify the description of the algorithms, and new memory-efficient BSP algorithms both for standard and for fast matrix multiplication are proposed.
Abstract: The model of bulk-synchronous parallel (BSP) computation is an emerging paradigm of general-purpose parallel computing. Its modification, the BSPRAM model, allows one to combine the advantages of distributed and shared-memory style programming. In this paper we study the BSP memory complexity of matrix multiplication. We propose new memory-efficient BSP algorithms both for standard and for fast matrix multiplication. The BSPRAM model is used to simplify the description of the algorithms. The communication and synchronization complexity of our algorithms is slightly higher than that of known time-efficient BSP algorithms. The current time-efficient and new memory-efficient algorithms are connected by a continuous tradeoff.

77 citations


Journal ArticleDOI
TL;DR: A general framework for the isotonic regression algorithm is proposed and a new algorithm is presented for this case, which can be regarded as a generalization of the PAV algorithm of Ayer et al.
Abstract: The isotonic regression problem has applications in statistics, operations research, and image processing. In this paper a general framework for the isotonic regression algorithm is proposed. Under this framework, we discuss the isotonic regression problem in the case where the directed graph specifying the order restriction is a directed tree with n vertices. A new algorithm is presented for this case, which can be regarded as a generalization of the PAV algorithm of Ayer et al. Using a simple tree structure such as the binomial heap, the algorithm can be implemented in O(n log n) time, improving the previously best known O(n²) time algorithm. We also present linear time algorithms for special cases where the directed graph is a path or a star.
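
For reference, a sketch of the PAV algorithm of Ayer et al. in its simplest setting, a path (linear order), which the tree algorithm above generalizes:

```python
def pav(y, w=None):
    """Pool Adjacent Violators on a path: the nondecreasing fit
    minimizing sum_i w[i] * (fit[i] - y[i])^2. Maintains blocks of
    (weighted mean, weight, length), merging while out of order."""
    w = w or [1.0] * len(y)
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append((yi, wi, 1))
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append(((w1 * m1 + w2 * m2) / wt, wt, c1 + c2))
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

print(pav([1, 3, 2, 4]))   # -> [1, 2.5, 2.5, 4]
```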

75 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that, on input two arrangements of k pebbles on a tree with n vertices, decides in time O(n) whether the two arrangements are reachable from one another.
Abstract: We consider the following pebble motion problem. We are given a tree T with n vertices and two arrangements $\cal R$ and $\cal S$ of k distinct pebbles on its vertices, where a move shifts a pebble to an adjacent unoccupied vertex; we show how to decide in O(n) time whether $\cal S$ is reachable from $\cal R$.

72 citations


Journal ArticleDOI
TL;DR: This paper defines notions of risk and reward that are natural extensions of classical competitive analysis and then illustrates these ideas using the ski-rental problem to derive an optimal risk-tolerant algorithm.
Abstract: Competitive analysis is concerned with minimizing a relative measure of performance. When applied to financial trading strategies, competitive analysis leads to the development of strategies with minimum relative performance risk. This approach is too inflexible. Many investors are interested in managing their risk: they may be willing to increase their risk for some form of reward. They may also have some forecast of the future. In this paper we extend competitive analysis to provide a framework in which investors can develop optimal trading strategies based on their risk tolerance and forecast. We first define notions of risk and reward that are natural extensions of classical competitive analysis and then illustrate our ideas using the ski-rental problem. Finally, we analyze a financial game using the risk-reward framework, and, in particular, derive an optimal risk-tolerant algorithm.

67 citations


Journal ArticleDOI
TL;DR: It is shown here, for the leasing problem, that the interest rate factor reduces the uncertainty involved, and the optimal deterministic competitive ratio is 1 + (1 + i)(1 − 1/k)(1 − k(i/(1 + i))), a decreasing function of the interest rate i (for all reasonable values of i).
Abstract: Consider an on-line player who needs some equipment (e.g., a computer) for an initially unknown number of periods. At the start of each period it is determined whether the player will need the equipment during the current period and the player has two options: to pay a leasing fee c and rent the equipment for the period, or to buy it for a larger amount P. The total cost incurred by the player is the sum of all leasing fees and perhaps one purchase. The above problem, called the leasing problem (in computer science folklore it is known as the ski-rental problem), has received considerable attention in the economic literature. Using the competitive ratio as a performance measure, this paper is concerned with determining the optimal competitive on-line policy for the leasing problem. For the simplest version of the leasing problem (as described above) it is known and readily derived that the optimal deterministic competitive performance is achieved by leasing for the first k − 1 times and then buying, where k = P/c. This strategy pays at most 2 − 1/k times the optimal off-line cost. When considering alternative financial transactions one must consider their net present value. Thus, accounting for the interest rate is an essential feature of any reasonable financial model. In this paper we determine both deterministic and randomized optimal on-line leasing strategies while accounting for the interest rate factor. It is shown here, for the leasing problem, that the interest rate factor reduces the uncertainty involved. We find that the optimal deterministic competitive ratio is 1 + (1 + i)(1 − 1/k)(1 − k(i/(1 + i))), a decreasing function of the interest rate i (for all reasonable values of i). For some applications, realistic values of interest rates result in relatively low competitive ratios. By using randomization the on-line player can further boost up the performance. In particular, against an oblivious adversary the on-line player can attain a strictly smaller competitive ratio of 2 − ((k/(k − 1))^γ − 2)/((k/(k − 1))^γ − 1), where γ = ln(1 − k(1 − 1/(1 + i)))/ln(1/(1 + i)). Here again, this competitive ratio strictly decreases with i. We also study the leasing problem against a distributional adversary called "Nature." This adversary chooses the probability distribution of the number of leasing periods and announces this distribution before the on-line player chooses a strategy. Although at the outset this adversary appears to be weaker than the oblivious adversary, it is shown that the optimal competitive ratio against Nature equals the optimal ratio against the oblivious adversary.
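
A quick numeric sanity check of the deterministic ratio quoted above (formula transcribed from the abstract): as i approaches 0 it recovers the classical 2 − 1/k bound, and it decreases as the interest rate grows.

```python
def det_ratio(k, i):
    """Optimal deterministic competitive ratio for leasing with
    interest rate i per period and k = P/c (from the abstract)."""
    return 1 + (1 + i) * (1 - 1 / k) * (1 - k * (i / (1 + i)))

k = 100
print(det_ratio(k, 1e-12))            # ~ 2 - 1/k = 1.99
for i in (0.001, 0.005, 0.009):
    print(i, det_ratio(k, i))         # strictly decreasing in i
```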

Journal ArticleDOI
TL;DR: Some areas of computer-aided drug design that are important to computational chemists but are also rich in algorithmic problems are described, both from the computational chemistry and the computer science literature.
Abstract: The rational approach to pharmaceutical drug design begins with an investigation of the relationship between chemical structure and biological activity. Information gained from this analysis is used to aid the design of new, or improved, drugs. Primary considerations during this investigation are the geometric and chemical characteristics of the molecules. Computational chemists who are involved in rational drug design routinely use an array of programs to compute, among other things, molecular surfaces and molecular volume, models of receptor sites, dockings of ligands inside protein cavities, and geometric invariants among different molecules that exhibit similar activity. There is a pressing need for efficient and accurate solutions to the above problems. Often, limiting assumptions need to be made in order to make the calculations tractable. Also, the amount of data processed when searching for a potential drug is currently very large and is only expected to grow larger in the future. This paper describes some areas of computer-aided drug design that are important to computational chemists but are also rich in algorithmic problems. It surveys recent work in these areas both from the computational chemistry and the computer science literature.

Journal ArticleDOI
TL;DR: In this paper, competitive analysis on access graphs is used to study the relative performance of the two best known paging algorithms, LRU and FIFO, and the authors prove the conjecture of Borodin et al. that the competitive ratio of LRU on each access graph is at most that of FIFO.
Abstract: In the paging problem we have to manage a two-level memory system, in which the first level has short access time but can hold only up to k pages, while the second level is very large but slow. We use competitive analysis to study the relative performance of the two best known algorithms for paging, LRU and FIFO. Sleator and Tarjan proved that the competitive ratio of LRU and FIFO is k . In practice, however, LRU is known to perform much better than FIFO. It is believed that the superiority of LRU can be attributed to locality of reference exhibited in request sequences. In order to study this phenomenon, Borodin et al. [2] refined the competitive approach by introducing the concept of access graphs. They conjectured that the competitive ratio of LRU on each access graph is less than or equal to the competitive ratio of FIFO. We prove this conjecture in this paper.
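
The practical gap between the two policies is easy to reproduce; a small illustrative simulation (not from the paper) on a request sequence with locality of reference, here a random walk on a path, i.e., a line access graph:

```python
import random
from collections import OrderedDict, deque

def faults(seq, k, policy):
    """Count page faults of LRU or FIFO with k fast-memory frames."""
    if policy == "LRU":
        cache, misses = OrderedDict(), 0
        for p in seq:
            if p in cache:
                cache.move_to_end(p)       # refresh recency
            else:
                misses += 1
                if len(cache) == k:
                    cache.popitem(last=False)
                cache[p] = True
        return misses
    cache, order, misses = set(), deque(), 0
    for p in seq:
        if p not in cache:                 # FIFO ignores hits entirely
            misses += 1
            if len(cache) == k:
                cache.discard(order.popleft())
            cache.add(p)
            order.append(p)
    return misses

walk, v = [], 0
for _ in range(100000):                    # random walk on a 10-vertex path
    v = max(0, min(9, v + random.choice((-1, 1))))
    walk.append(v)
print(faults(walk, 4, "LRU"), faults(walk, 4, "FIFO"))  # LRU typically faults less
```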

Journal ArticleDOI
TL;DR: In this paper, the authors present a general framework for distributed array redistribution algorithms, based on a generalized circulant matrix formalism of the redistribution problem and a general purpose distributed memory model of the parallel machine.
Abstract: The block-cyclic data distribution is commonly used to organize array elements over the processors of a coarse-grained distributed memory parallel computer. In many scientific applications, the data layout must be reorganized at run-time in order to enhance locality and reduce remote memory access overheads. In this paper we present a general framework for developing array redistribution algorithms. Using this framework, we have developed efficient algorithms that redistribute an array from one block-cyclic layout to another. Block-cyclic redistribution consists of index set computation, wherein the destination locations for individual data blocks are calculated, and data communication, wherein these blocks are exchanged between processors. The framework treats both these operations in a uniform and integrated way. We have developed efficient and distributed algorithms for index set computation that do not require any interprocessor communication. To perform data communication in a conflict-free manner, we have developed direct, indirect, and hybrid algorithms. In the direct algorithm, a data block is transferred directly to its destination processor. In an indirect algorithm, data blocks are moved from source to destination processors through intermediate relay processors. The hybrid algorithm is a combination of the direct and indirect algorithms. Our framework is based on a generalized circulant matrix formalism of the redistribution problem and a general purpose distributed memory model of the parallel machine. Our algorithms sustain excellent performance over a wide range of problem and machine parameters. We have implemented our algorithms using MPI, to allow for easy portability across different HPC platforms. Experimental results on the IBM SP-2 and the Cray T3D show superior performance over previous approaches. When the block size of the cyclic data layout changes by a factor of K, the redistribution can be performed in O(log K) communication steps. This is true even when K is a prime number. In contrast, previous approaches take O(K) communication steps for redistribution. Our framework can be used for developing scalable redistribution libraries, for efficiently implementing parallelizing compiler directives, and for developing parallel algorithms for various applications. Redistribution algorithms are especially useful in signal processing applications, where the data access patterns change significantly between computational phases. They are also necessary in linear algebra programs, to perform matrix transpose operations.
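
The ownership computation underlying such redistributions fits in a few lines; a simplified sketch (hypothetical helper, not the paper's index set algorithm) of which source/destination processor pairs must communicate when the block size grows by a factor of K:

```python
def owner(i, b, P):
    """Processor owning global element i under a block-cyclic(b)
    distribution over P processors."""
    return (i // b) % P

# Redistributing cyclic(b) -> cyclic(K*b): element i moves from
# owner(i, b, P) to owner(i, K*b, P). The paper schedules this
# communication conflict-free in O(log K) steps.
b, K, P, n = 2, 4, 3, 48
pairs = {(owner(i, b, P), owner(i, K * b, P)) for i in range(n)}
print(sorted(pairs))
```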

Journal ArticleDOI
TL;DR: The results show that the MODE algorithm provides dramatic advantages over the direct approach to density evaluation; for example, it is shown using a modest computing platform that on-line density updates and queries for 1 million points in two dimensions take 8 days to compute with the direct approach versus 40 seconds with the MODE approach.
Abstract: Nonparametric density estimation has broad applications in computational finance, especially in cases where high frequency data are available. However, the technique is often intractable, given the run times necessary to evaluate a density. We present a new and efficient algorithm based on multipole techniques. Given the n kernels that estimate the density, current methods take O(n) time directly to sum the kernels to perform a single density query. In an on-line algorithm where points are continually added to the density, the cumulative O(n²) running time for n queries makes it very costly, if not impractical, to compute the density for large n. Our new Multipole-accelerated On-line Density Estimation (MODE) algorithm is general in that it can be applied to any kernel (in arbitrary dimensions) that admits a Taylor series expansion. The running time for a density query reduces to O(log n) or even constant time, depending on the kernel chosen, and, hence, the cumulative running time is reduced to O(n log n) or O(n), respectively. Our results show that the MODE algorithm provides dramatic advantages over the direct approach to density evaluation. For example, we show using a modest computing platform that on-line density updates and queries for 1 million points and two dimensions take 8 days to compute using the direct approach versus 40 seconds with the MODE approach.
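
For contrast, the direct per-query kernel sum that MODE replaces with a truncated multipole (Taylor) expansion; a one-dimensional Gaussian-kernel sketch (illustrative, not the paper's code):

```python
import math

def kde_direct(query, points, h):
    """Direct kernel density estimate at one query point: O(n) work,
    the cost MODE reduces to O(log n) or O(1) per query."""
    n = len(points)
    total = sum(math.exp(-0.5 * ((query - x) / h) ** 2) for x in points)
    return total / (n * h * math.sqrt(2 * math.pi))

print(kde_direct(0.3, [0.1, 0.25, 0.4, 0.9], h=0.2))
```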

Journal ArticleDOI
John H. Reif
TL;DR: Techniques for executing lengthy computations using short DNA strands by more or less conventional biotechnology engineering techniques within a small number of lab steps are described, in the context of well defined abstract models of biomolecular computation.
Abstract: This paper is concerned with the development of techniques for massively parallel computation at the molecular scale, which we refer to as molecular parallelism. While this may at first appear to be purely science fiction, Adleman [Ad1] has already employed molecular parallelism in the solution of the Hamiltonian path problem, and successfully tested his techniques in a lab experiment on DNA for a small graph. Lipton [L] showed that finding the satisfying inputs to a Boolean expression of size n can be done in O(n) lab steps using DNA of length O(n log n) base pairs. This recent work by Adleman and Lipton in molecular parallelism considered only the solution of NP search problems, and provided no way of quickly executing lengthy computations by purely molecular means; the number of lab steps depended linearly on the size of the simulated expression. See [Re3] for further recent work on molecular parallelism and see [Re4] for an extensive survey of molecular parallelism. Our goal is to execute lengthy computations quickly by the use of molecular parallelism. We wish to execute these biomolecular computations using short DNA strands by more or less conventional biotechnology engineering techniques within a small number of lab steps. This paper describes techniques for achieving this goal, in the context of well defined abstract models of biomolecular computation. Although our results are of theoretical consequence only, due to the large amount of molecular parallelism (i.e., large test tube volume) required, we believe that our theoretical models and results may be a basis for more practical later work, just as was done in the area of parallel computing. We propose two abstract models of biomolecular computation. The first, the Parallel Associative Memory (PAM) model, is a very high-level model which includes a Parallel Associative Matching (PA-Match) operation, that appears to improve the power of molecular parallelism beyond the operations previously considered by Lipton [L]. We give some simulations of conventional sequential and parallel computational models by our PAM model. Each of the simulations uses strings of length O(s) over an alphabet of size O(s) (which correspond to DNA of length O(s log s) base pairs). Using O(s log s) PAM operations that are not PA-Match (or O(s) operations assuming a ligation operation) and t PA-Match operations, we can: 1. simulate a nondeterministic Turing Machine computation with space bound s and time bound 2^O(s), with t = O(s); 2. simulate a CREW PRAM with time bound D, with M memory cells, and processor bound P, where here s = O(log(PM)) and t = O(D+s); 3. find the satisfying inputs to a Boolean circuit constructible in s space with n inputs, unbounded fan-out, and depth D, where here t = O(D+s). We also propose a Recombinant DNA (RDNA) model which is a low-level model that allows operations that are abstractions of very well understood recombinant DNA operations and provides a representation, which we call the complex, for the relevant structural properties of DNA. The PA-Match operation for lengthy strings of length s cannot be feasibly implemented by recombinant DNA techniques directly by a single step of complementary pairing in DNA; nevertheless we show this Matching operation can be simulated in the RDNA model with O(s) slowdown by multiple steps of complementary pairing of substrings of length 2 (corresponding to logarithmic length DNA subsequences).
Each of the other operations of the PAM model can be executed in our RDNA model without slowdown. We further show that, with a further O(s)/log(1/ε) slowdown, the simulations can be done correctly with probability 1/2 even if certain recombinant DNA operations (e.g., Separation) can err with probability ε. We also observe that efficient simulations can be done by PRAMs, and thus Turing Machines, of our molecular models.

Journal ArticleDOI
TL;DR: It is shown that DNA chemistry allows one to simulate large semi-unbounded fan-in Boolean circuits with a logarithmic slowdown in computation time, and that for the class NC¹ the slowdown can be reduced to a constant.
Abstract: We demonstrate that DNA computers can simulate Boolean circuits with a small overhead. Boolean circuits embody the notion of massively parallel signal processing and are frequently encountered in many parallel algorithms. Many important problems such as sorting, integer arithmetic, and matrix multiplication are known to be computable by small size Boolean circuits much faster than by ordinary sequential digital computers. This paper shows that DNA chemistry allows one to simulate large semi-unbounded fan-in Boolean circuits with a logarithmic slowdown in computation time. Also, for the class NC¹, the slowdown can be reduced to a constant. In this algorithm we have encoded the inputs, the Boolean AND gates, and the OR gates to DNA oligonucleotide sequences. We operate on the gates and the inputs by standard molecular techniques of sequence-specific annealing, ligation, separation by size, amplification, sequence-specific cleavage, and detection by size. Additional steps of amplification are not necessary for NC¹ circuits. The feasibility of the DNA algorithm has been successfully tested on a small circuit by actual biochemical experiments.

Journal ArticleDOI
TL;DR: An intrinsic generalization of the suffix tree, designed to index a string of length n that has a natural partitioning into m multicharacter substrings or words; when the alphabet is small, the word suffix tree can be built in sublinear time.
Abstract: We discuss an intrinsic generalization of the suffix tree, designed to index a string of length n which has a natural partitioning into m multicharacter substrings or words. This word suffix tree represents only the m suffixes that start at word boundaries. These boundaries are determined by delimiters, whose definition depends on the application. Since traditional suffix tree construction algorithms rely heavily on the fact that all suffixes are inserted, construction of a word suffix tree is nontrivial, in particular when only O(m) construction space is allowed. We solve this problem, presenting an algorithm with O(n) expected running time. In general, construction cost is Ω(n) due to the need of scanning the entire input. In applications that require strict node ordering, an additional cost of sorting O(m') characters arises, where m' is the number of distinct words. In either case, this is a significant improvement over previously known solutions. Furthermore, when the alphabet is small, we may assume that the n characters in the input string occupy o(n) machine words. We illustrate that this can allow a word suffix tree to be built in sublinear time.
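
The space saving comes from indexing only the word-aligned suffixes; a naive sketch using a sorted array over word starts (illustrative only; the paper builds an actual tree in O(m) space and O(n) expected time):

```python
def word_suffix_array(text, delimiter=" "):
    """Sorted array of the m suffixes starting at word boundaries,
    rather than all n character suffixes."""
    starts = [0] + [i + 1 for i, c in enumerate(text) if c == delimiter]
    return sorted(starts, key=lambda s: text[s:])

text = "to be or not to be"
sa = word_suffix_array(text)
# sa is sorted by suffix, so word-aligned matches of a pattern form a
# contiguous range findable by binary search; here, occurrences of "be":
print([s for s in sa if text[s:].startswith("be")])   # -> [16, 3]
```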

Journal ArticleDOI
TL;DR: A proof of NP-hardness for a lattice protein folding model whose instances contain protein sequences defined with a fixed, finite alphabet that contains 12 amino acid types is described.
Abstract: We describe a proof of NP-hardness for a lattice protein folding model whose instances contain protein sequences defined with a fixed, finite alphabet that contains 12 amino acid types. This lattice model represents a protein's conformation as a self-avoiding path that is embedded on the three-dimensional cubic lattice. A contact potential is used to determine the energy of a sequence in a given conformation; a pair of amino acids contributes to the conformational energy only if they are adjacent on the lattice. This result overcomes a significant weakness of previous intractability results, which do not examine protein folding models that have a finite alphabet of amino acids together with physically interesting conformations.
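
The contact potential described above is straightforward to evaluate for a fixed conformation; a sketch (names and encoding are illustrative) with residues placed at integer coordinates of the cubic lattice:

```python
def contact_energy(seq, coords, potential):
    """Energy of a lattice conformation: residues i and j that are not
    chain neighbors contribute potential[seq[i]][seq[j]] whenever their
    lattice positions are at unit (Manhattan) distance."""
    assert len(set(coords)) == len(coords), "path must be self-avoiding"
    E = 0.0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):   # skip chain neighbors
            if sum(abs(a - b) for a, b in zip(coords[i], coords[j])) == 1:
                E += potential[seq[i]][seq[j]]
    return E

# toy 2-letter example on a bent chain; the paper's model has 12 types
pot = {"H": {"H": -1.0, "P": 0.0}, "P": {"H": 0.0, "P": 0.0}}
print(contact_energy("HPPH",
                     [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], pot))  # -1.0
```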

Journal ArticleDOI
TL;DR: If all possible capital investments obey the rule that lower production costs require higher capital investments, then an algorithm with constant competitive ratio is presented; otherwise an algorithm is given which is O(min{1+log C, 1+log log P, 1+log M}) competitive, where C is the ratio between the highest and the lowest capital costs.
Abstract: We deal with the problem of making capital investments in machines for manufacturing a product. Opportunities for investment occur over time; every such option consists of a capital cost for a new machine and a resulting productivity gain, i.e., a lower production cost for one unit of product. The goal is that of minimizing the total production costs and capital costs when future demand for the product being produced and investment opportunities are unknown. This can be viewed as a generalization of the ski-rental problem and related to the mortgage problem [3]. If all possible capital investments obey the rule that lower production costs require higher capital investments, then we present an algorithm with constant competitive ratio. If new opportunities may be strictly superior to previous ones (in terms of both capital cost and production cost), then we give an algorithm which is O(min{1+log C, 1+log log P, 1+log M}) competitive, where C is the ratio between the highest and the lowest capital costs, P is the ratio between the highest and the lowest production costs, and M is the number of investment opportunities. We also present a lower bound on the competitive ratio of any on-line algorithm for this case, which is Ω(min{log C, log log P / log log log P, log M / log log M}). This shows that the competitive ratio of our algorithm is tight (up to constant factors) as a function of C, and not far from the best achievable as a function of P and M.

Journal ArticleDOI
TL;DR: It is shown that computing the path distance width of a graph is NP-hard, but both path and tree distance width can be computed in O(n^{k+1}) time when they are bounded by a constant k.
Abstract: In this paper we study the GRAPH ISOMORPHISM problem on graphs of bounded treewidth, bounded degree, or bounded bandwidth. GRAPH ISOMORPHISM can be solved in polynomial time for graphs of bounded treewidth, pathwidth, or bandwidth, but the exponent depends on the treewidth, pathwidth, or bandwidth. Thus, we look for special cases where ``fixed parameter tractable'' polynomial time algorithms can be established. We introduce some new and natural graph parameters: the (rooted) path distance width, which is a restriction of bandwidth, and the (rooted) tree distance width, which is a restriction of treewidth. We give algorithms that solve GRAPH ISOMORPHISM in O(n²) time for graphs with bounded rooted path distance width, and in O(n³) time for graphs with bounded rooted tree distance width. Additionally, we show that computing the path distance width of a graph is NP-hard, but both path and tree distance width can be computed in O(n^{k+1}) time, when they are bounded by a constant k; the rooted path or tree distance width can be computed in O(ne) time. Finally, we study the relationships between the newly introduced parameters and other existing graph parameters.

Journal ArticleDOI
TL;DR: This paper presents a new efficient incremental algorithm for maintaining a solution to a system of difference constraints as constraints are added, modified, or deleted, and the algorithm determines if the new system is feasible and updates its solution.
Abstract: Difference constraint systems consisting of inequalities of the form $x_i - x_j \leq b_{i,j}$ occur in many applications, most notably those involving temporal reasoning. Often, it is necessary to maintain a solution to such a system as constraints are added, modified, and deleted. Existing algorithms handle modifications by solving the resulting system anew each time, which is inefficient. The best known algorithm to determine if a system of difference constraints is feasible (i.e., if it has a solution) and to compute a solution runs in Θ(mn) time, where n is the number of variables and m is the number of constraints. This paper presents a new efficient incremental algorithm for maintaining a solution to a system of difference constraints. As constraints are added, modified, or deleted, the algorithm determines if the new system is feasible and updates its solution. When the system becomes infeasible, the algorithm continues to process changes until it becomes feasible again, at which point a feasible solution will be produced. The algorithm processes the addition of a constraint in time O(m + n log n) and the removal of a constraint in constant time when the original system is feasible. More precisely, additions are processed in time O(||Δ|| + |Δ| log |Δ|), where |Δ| is the number of variables whose values are changed to compute the new feasible solution, and ||Δ|| is the number of constraints involving the variables whose values are changed. When the original system is infeasible, the algorithm processes any change in O(m + n log n) amortized time. The new algorithm can also be used to check for the existence of negative cycles in dynamic graphs.
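
For context, the static Θ(mn) feasibility check mentioned above is Bellman-Ford on the constraint graph (an edge j → i of weight b_{i,j} per constraint, plus an implicit source at distance 0 to every vertex); a sketch of that baseline, not of the paper's incremental algorithm:

```python
def feasible(n, constraints):
    """Solve {x_i - x_j <= b} or detect infeasibility by Bellman-Ford.
    constraints: list of (i, j, b). Returns an assignment or None."""
    dist = [0.0] * n                    # virtual source: every vertex at 0
    for _ in range(n):                  # n relaxation passes suffice
        changed = False
        for i, j, b in constraints:     # relax edge j -> i of weight b
            if dist[j] + b < dist[i]:
                dist[i] = dist[j] + b
                changed = True
        if not changed:
            return dist                 # stable: dist satisfies all constraints
    return None                         # negative cycle: system infeasible

print(feasible(3, [(0, 1, 2), (1, 2, -1), (2, 0, -1)]))  # -> [0.0, -2.0, -1.0]
```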

Journal ArticleDOI
TL;DR: These are the first known polynomial-time approximation algorithms for the latin square completion problem that achieve nontrivial worst-case performance guarantees.
Abstract: In this paper we investigate the problem of computing the maximum number of entries which can be added to a partially filled latin square. The decision version of this question is known to be NP-complete. We present two approximation algorithms for the optimization version of this question. We first prove that the greedy algorithm achieves a factor of 1/3. We then use insights derived from the linear relaxation of an integer program to obtain an algorithm based on matchings that achieves a better performance guarantee of 1/2. These are the first known polynomial-time approximation algorithms for the latin square completion problem that achieve nontrivial worst-case performance guarantees. Our study is motivated by applications to lightpath assignment and switch configuration in wavelength routed multihop optical networks.
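
The factor-1/3 greedy analyzed above simply keeps adding any entry that stays legal until the square is maximal; a sketch (scan order arbitrary; 0 marks an empty cell, symbols are 1..n):

```python
def greedy_complete(square):
    """Greedily add legal entries to a partial n x n latin square,
    returning the number of entries added. Any maximal greedy run
    achieves at least 1/3 of the optimal number of additions."""
    n = len(square)
    added, progress = 0, True
    while progress:
        progress = False
        for r in range(n):
            for c in range(n):
                if square[r][c]:
                    continue
                used = set(square[r]) | {square[i][c] for i in range(n)}
                free = [s for s in range(1, n + 1) if s not in used]
                if free:
                    square[r][c] = free[0]
                    added += 1
                    progress = True
    return added

grid = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
print(greedy_complete(grid), grid)
```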

Journal ArticleDOI
TL;DR: On-line algorithms are presented for load balancing with job departures and reassignment costs: a competitive ratio of 3.5981 against current load for identical machines, and a competitive ratio of 32 with a reassignment factor of 79.4 for related machines.
Abstract: We consider the following load balancing problem. Jobs arrive on-line and must be assigned to one of m machines, thereby increasing the load on that machine by a certain weight. Jobs also depart on-line. The goal is to minimize the maximum load on any machine, the load being defined as the sum of the weights of the jobs assigned to the machine divided by the machine capacity. The scheduler also has the option of preempting a job and reassigning it to another machine. Whenever a job is assigned or reassigned to a machine, the on-line algorithm incurs a reassignment cost depending on the job. For arbitrary reassignment costs and identical machines, we present an on-line algorithm with a competitive ratio of 3.5981 against current load, i.e., the maximum load at any time is less than 3.5981 times the lowest achievable load at that time. Our algorithm also incurs a reassignment cost less than 6.8285 times the cost of assigning all the jobs. For arbitrary reassignment costs and related machines we present an algorithm with a competitive ratio of 32 and a reassignment factor of 79.4. We also describe algorithms with better performance guarantees for some special cases of the problem.

Journal ArticleDOI
TL;DR: It is proved that both Interval Sandwich and Intervalizing Colored Graphs are polynomial when either (1) the input graph degree and the solution graph clique size are bounded, or (2) the solutiongraph degree is bounded.
Abstract: The problems of Interval Sandwich (IS) and Intervalizing Colored Graphs (ICG) have received a lot of attention recently, due to their applicability to DNA physical mapping problems with ambiguous data. Most of the results obtained so far on the problems were hardness results. Here we study the problems under assumptions of sparseness, which hold in the biological context. We prove that both problems are polynomial when either (1) the input graph degree and the solution graph clique size are bounded, or (2) the solution graph degree is bounded. In particular, this implies that ICG is polynomial on bounded degree graphs for every fixed number of colors, in contrast with the recent result of Bodlaender and de Fluiter.

Journal ArticleDOI
TL;DR: Algorithms are presented to compute, in constant time per string, all p-distinct (resp. b-distinct) strings of length n formed using exactly k letters, and it is shown how to compute all elements p'[k,n] and b'[k,n].
Abstract: This paper discusses how to count and generate strings that are ``distinct'' in two senses: p-distinct and b-distinct. Two strings x on alphabet A and x' on alphabet A' are said to be p-distinct iff they represent distinct ``patterns''; that is, iff there exists no one-to-one mapping from A to A' that transforms x into x'. Thus aab and baa are p-distinct while aab and ddc are p-equivalent. On the other hand, x and x' are said to be b-distinct iff they give rise to distinct border (failure function) arrays: thus aab with border array 010 is b-distinct from aba with border array 001. The number of p-distinct (resp. b-distinct) strings of length n formed using exactly k different letters is the [k,n] entry in an infinite p' (resp. b') array. Column sums p[n] and b[n] in these arrays give the number of distinct strings of length n. We present algorithms to compute, in constant time per string, all p-distinct (resp. b-distinct) strings of length n formed using exactly k letters, and we also show how to compute all elements p'[k,n] and b'[k,n]. These ideas and results have application to the efficient generation of appropriate test data sets for many string algorithms.
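
The border array that defines b-equivalence is the classical failure function from Knuth-Morris-Pratt matching; a short sketch reproducing the aab/aba example above:

```python
def border_array(s):
    """border[i] = length of the longest proper border (simultaneous
    prefix and suffix) of s[:i+1]; the KMP failure function."""
    border = [0] * len(s)
    for i in range(1, len(s)):
        k = border[i - 1]
        while k and s[i] != s[k]:
            k = border[k - 1]
        border[i] = k + 1 if s[i] == s[k] else 0
    return border

print(border_array("aab"), border_array("aba"))   # -> [0, 1, 0] [0, 0, 1]
```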

Journal ArticleDOI
TL;DR: These heuristics identify a small number of subsets with few, geometrically close, terminals using minimum spanning trees and other well-known structures from computational geometry: Delaunay triangulations, Gabriel graphs, relative neighborhood graphs, and higher-order Voronoi diagrams.
Abstract: We present a class of O(n log n) heuristics for the Steiner tree problem in the Euclidean plane. These heuristics identify a small number of subsets with few, geometrically close, terminals using minimum spanning trees and other well-known structures from computational geometry: Delaunay triangulations, Gabriel graphs, relative neighborhood graphs, and higher-order Voronoi diagrams. Full Steiner trees of all these subsets are sorted according to some appropriately chosen measure of quality. A tree spanning all terminals is constructed using greedy concatenation. New heuristics are compared with each other and with heuristics from the literature by performing extensive computational experiments on both randomly generated and library problem instances.

Journal ArticleDOI
TL;DR: This article considers a natural extension of the subtree-transfer distance, called the linear-cost subtree-transfer distance, and studies the complexity and efficient approximation algorithms for this distance as well as its relationship to the nni distance.
Abstract: Different phylogenetic trees for the same group of species are often produced either by procedures that use diverse optimality criteria [16] or from different genes [12] in the study of molecular evolution. Comparing these trees to find their similarities and dissimilarities (i.e., distance ) is thus an important issue in computational molecular biology. Several distance metrics including the nearest neighbor interchange (nni) distance and the subtree-transfer distance have been proposed and extensively studied in the literature. This article considers a natural extension of the subtree-transfer distance, called the linear-cost subtree-transfer distance, and studies the complexity and efficient approximation algorithms for this distance as well as its relationship to the nni distance. The linear-cost subtree-transfer model seems more suitable than the (unit-cost) subtree-transfer model in some applications. The following is a list of our results:

Journal ArticleDOI
TL;DR: The following are proved: the original art gallery problem is NP-hard for the very restricted class of street polygons, but the vision point problem can be solved efficiently for the class of street polygons.
Abstract: We consider a restricted version of the art gallery problem within simple polygons in which the guards are required to lie on a given one-dimensional object, a watchman route. We call this problem the vision point problem.

Journal ArticleDOI
TL;DR: The value of a perpetual American put option is in many cases a good approximation of the value of an otherwise identical n-period American put option, and certain types of path-dependent options can be valued exactly in polynomial time.
Abstract: As increasingly large volumes of sophisticated options are traded in world financial markets, determining a ``fair'' price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this article we show that pricing an arbitrary path-dependent option is #P-hard. We show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these we design deterministic polynomial-time approximate algorithms. We show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation of the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, our algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis we derive large-deviation results for random walks that may be of independent interest.
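
For concreteness, the backward induction used by binomial valuation codes, sketched for an n-period American put (CRR-style parameters are illustrative; this is the standard model, not the paper's approximation algorithms):

```python
def american_put(S0, K, u, d, r, n):
    """Price an n-period American put in the binomial model.
    Risk-neutral probability p = ((1 + r) - d) / (u - d); at each node
    take the max of immediate exercise and discounted continuation."""
    p = ((1 + r) - d) / (u - d)
    # terminal payoffs, indexed by number of up-moves j
    V = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for step in range(n - 1, -1, -1):
        for j in range(step + 1):
            cont = (p * V[j + 1] + (1 - p) * V[j]) / (1 + r)
            V[j] = max(cont, K - S0 * u**j * d**(step - j))
    return V[0]

print(american_put(S0=100, K=100, u=1.1, d=0.9, r=0.02, n=50))
```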