
Showing papers on "Greedy algorithm published in 2000"


Journal ArticleDOI
TL;DR: A new greedy alignment algorithm is introduced with particularly good performance and it is shown that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data.
Abstract: For aligning DNA sequences that differ only by sequencing errors, or by equivalent errors from other sources, a greedy algorithm can be much faster than traditional dynamic programming approaches and yet produce an alignment that is guaranteed to be theoretically optimal. We introduce a new greedy alignment algorithm with particularly good performance and show that it computes the same alignment as does a certain dynamic programming algorithm, while executing over 10 times faster on appropriate data. An implementation of this algorithm is currently used in a program that assembles the UniGene database at the National Center for Biotechnology Information.
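A minimal sketch of the greedy idea such aligners build on: furthest-reaching diagonal extension with free runs of matches, counting only insertions and deletions. The function name and this O(ND)-style formulation are illustrative, not the paper's exact algorithm:

```python
def greedy_edit_distance(a, b):
    # Furthest-reaching D-path sketch: on each diagonal k = x - y, track the
    # furthest x reachable with d edits, extending greedily through free
    # matches ("snakes"). Returns the minimal number of insertions/deletions.
    n, m = len(a), len(b)
    fr = {1: 0}  # furthest-reaching x on each diagonal
    for d in range(n + m + 1):
        new = {}
        for k in range(-d, d + 1, 2):
            # best of: step right from diagonal k-1, step down from k+1
            x = max(fr.get(k - 1, -1) + 1, fr.get(k + 1, -1))
            y = x - k
            while x < n and y < m and a[x] == b[y]:  # snake: free matches
                x += 1
                y += 1
            new[k] = x
            if x >= n and y >= m:
                return d
        fr = new
    return n + m
```

For sequences that differ only by a few sequencing errors, d stays small and the loop touches far fewer cells than a full dynamic programming table.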

4,628 citations


Proceedings ArticleDOI
24 Apr 2000
TL;DR: A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces by incrementally building two rapidly-exploring random trees rooted at the start and the goal configurations.
Abstract: A simple and efficient randomized algorithm is presented for solving single-query path planning problems in high-dimensional configuration spaces. The method works by incrementally building two rapidly-exploring random trees (RRTs) rooted at the start and the goal configurations. The trees each explore space around them and also advance towards each other through the use of a simple greedy heuristic. Although originally designed to plan motions for a human arm (modeled as a 7-DOF kinematic chain) for the automatic graphic animation of collision-free grasping and manipulation tasks, the algorithm has been successfully applied to a variety of path planning problems. Computed examples include generating collision-free motions for rigid objects in 2D and 3D, and collision-free manipulation motions for a 6-DOF PUMA arm in a 3D workspace. Some basic theoretical analysis is also presented.
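A toy 2-D sketch of the bidirectional idea, assuming a fixed [0, 10] x [0, 10] workspace and a user-supplied collision test; the names, the connection test, and path-recovery omission are illustrative simplifications of the paper's method:

```python
import random

def rrt_connect(start, goal, is_free, step=1.0, iters=500, seed=0):
    # Bidirectional RRT sketch: two trees rooted at the start and goal
    # alternately extend toward random samples, and each new node greedily
    # pulls the other tree toward it.
    rng = random.Random(seed)
    trees = [{start: None}, {goal: None}]  # node -> parent

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def extend(tree, q):
        p = min(tree, key=lambda v: dist(v, q))  # nearest tree node
        d = dist(p, q)
        if d < 1e-9:
            return None
        new = (p[0] + step * (q[0] - p[0]) / d, p[1] + step * (q[1] - p[1]) / d)
        if is_free(new):
            tree[new] = p
            return new
        return None

    for i in range(iters):
        a, b = trees[i % 2], trees[(i + 1) % 2]
        q_new = extend(a, (rng.uniform(0, 10), rng.uniform(0, 10)))
        if q_new is not None:
            if min(dist(v, q_new) for v in b) <= step:
                return True  # trees meet (path recovery omitted)
            extend(b, q_new)  # greedy connect step toward the new node
    return False
```

The greedy heuristic is the final `extend(b, q_new)`: after one tree grows, the other immediately grows toward the freshly added node rather than toward a random sample.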

3,102 citations


Journal ArticleDOI
TL;DR: An implementation of the Lin–Kernighan heuristic, one of the most successful methods for generating optimal or near-optimal solutions for the symmetric traveling salesman problem (TSP), is described.

1,462 citations


Book ChapterDOI
TL;DR: This paper gives simple greedy approximation algorithms for the optimization problems of finding subgraphs maximizing these notions of density for undirected and directed graphs, and answers an open question about the complexity of the optimization problem for directed graphs.
Abstract: We study the problem of finding highly connected subgraphs of undirected and directed graphs. For undirected graphs, the notion of density of a subgraph we use is the average degree of the subgraph. For directed graphs, a corresponding notion of density was introduced recently by Kannan and Vinay. It is designed to quantify how highly connected the substructures of a sparse directed graph, such as the web graph, are. We study the optimization problems of finding subgraphs maximizing these notions of density for undirected and directed graphs. This paper gives simple greedy approximation algorithms for these optimization problems. We also answer an open question about the complexity of the optimization problem for directed graphs.
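For the undirected case, the greedy peeling procedure (repeatedly delete a minimum-degree vertex and remember the densest intermediate subgraph) can be sketched as follows; density here is |E|/|V|, i.e. half the average degree, and the function name is illustrative:

```python
def densest_subgraph(adj):
    # Greedy peeling sketch (a 2-approximation for maximum density):
    # repeatedly remove a minimum-degree vertex, keeping the best prefix.
    adj = {v: set(ns) for v, ns in adj.items()}
    edges = sum(len(ns) for ns in adj.values()) // 2
    best_density, best = 0.0, set(adj)
    while adj:
        density = edges / len(adj)
        if density >= best_density:
            best_density, best = density, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        edges -= len(adj[v])
        del adj[v]
    return best, best_density
```

On a clique K4 with a pendant vertex attached, peeling removes the pendant first and correctly returns the K4 as the densest subgraph.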

523 citations


Journal ArticleDOI
16 May 2000
TL;DR: This paper proposes three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic that incorporates novel optimizations that improve efficiency greatly.
Abstract: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.

414 citations


Journal ArticleDOI
TL;DR: The use of several possible enhancements to GAs is investigated and illustrated using the Quadratic Assignment Problem, one of the hardest nuts in the field of combinatorial optimization; the overall performance of the GA for the QAP improves with the use of greedy methods, but not with their overuse.

336 citations


01 Jan 2000
TL;DR: A greedy algorithm for learning a Gaussian mixture which is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set.

294 citations


Journal ArticleDOI
TL;DR: Convergence theorems are proved and estimates for the rate of approximation by means of these algorithms are given; the convergence and the estimates apply to approximation from an arbitrary dictionary in a Hilbert space.
Abstract: Theoretical greedy type algorithms are studied: a Weak Greedy Algorithm, a Weak Orthogonal Greedy Algorithm, and a Weak Relaxed Greedy Algorithm. These algorithms are defined by weaker assumptions than their analogs: the Pure Greedy Algorithm, the Orthogonal Greedy Algorithm, and the Relaxed Greedy Algorithm. The weaker assumptions make these new algorithms better suited to practical implementation. We prove the convergence theorems and also give estimates for the rate of approximation by means of these algorithms. The convergence and the estimates apply to approximation from an arbitrary dictionary in a Hilbert space.
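A sketch of the weak greedy selection rule in a finite-dimensional Hilbert space, assuming unit-norm atoms. The weakness parameter t relaxes the optimal choice exactly as described above (t = 1 recovers the Pure Greedy Algorithm); the first-atom-meeting-the-threshold tie-break is an arbitrary illustrative choice:

```python
def weak_greedy(f, dictionary, t=0.5, steps=50):
    # Weak Greedy Algorithm sketch in R^n with the Euclidean inner product:
    # at each step, pick any atom whose correlation with the residual is at
    # least t times the best achievable correlation.
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    residual = list(f)
    approx = [0.0] * len(f)
    for _ in range(steps):
        corrs = [abs(dot(residual, g)) for g in dictionary]
        best = max(corrs)
        if best < 1e-12:
            break
        # weak choice: first atom meeting the relaxed threshold t * best
        i = next(j for j, c in enumerate(corrs) if c >= t * best)
        c = dot(residual, dictionary[i])  # atoms assumed to have unit norm
        residual = [r - c * g for r, g in zip(residual, dictionary[i])]
        approx = [a + c * g for a, g in zip(approx, dictionary[i])]
    return approx, residual
```

With an orthonormal dictionary the residual is driven to zero in at most n steps regardless of t, which is the easy case of the convergence theorems above.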

222 citations


Proceedings ArticleDOI
26 Mar 2000
TL;DR: A distributed database coverage heuristic (DDCH) is introduced, which is equivalent to the centralized greedy algorithm for virtual backbone generation, but only requires local information exchange and local computation.
Abstract: In this paper, we present the implementation issues of a virtual backbone that supports the operations of the uniform quorum system (UQS) and the randomized database group (RDG) mobility management schemes in an ad hoc network. The virtual backbone comprises nodes that are dynamically selected to contain databases that store the location information of the network nodes. Together with the UQS and RDG schemes, the virtual backbone allows both dynamic database residence and dynamic database access, which provide a high degree of location data availability and reliability. We introduce a distributed database coverage heuristic (DDCH), which is equivalent to the centralized greedy algorithm for virtual backbone generation, but only requires local information exchange and local computation. We show how DDCH can be employed to dynamically maintain the structure of the virtual backbone, along with database merging, as the network topology changes. We also provide a means to maintain connectivity among the virtual backbone nodes. We discuss optimization issues of DDCH through simulations. Simulation results suggest that the cost of ad hoc mobility management with a virtual backbone can be far below that of conventional link-state routing.
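The centralized greedy algorithm that DDCH is designed to match can be sketched as a greedy dominating-set computation: repeatedly add the node that covers the most not-yet-dominated nodes. Adjacency sets stand in for one-hop radio neighborhoods, and the function name is illustrative:

```python
def greedy_dominating_set(adj):
    # Centralized greedy sketch for backbone (dominating set) selection:
    # each chosen node dominates itself and its neighbors; pick the node
    # covering the largest number of still-uncovered nodes.
    uncovered = set(adj)
    backbone = []
    while uncovered:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
        backbone.append(v)
        uncovered -= {v} | adj[v]
    return backbone
```

On a star topology the hub alone is chosen; on a path, alternating interior nodes suffice.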

212 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider quasi-greedy conditional bases in a wide range of Banach spaces and compare the greedy algorithm for the multidimensional Haar system with the optimal m-term approximation for this system.

200 citations


Journal ArticleDOI
TL;DR: A class of efficient numerical algorithms is presented that iteratively select small subsets of the interpolation points and refine the current approximate solution there; convergence turns out to be linear, and the technique can be generalized to positive definite linear systems in general.
Abstract: For the solution of large sparse linear systems arising from interpolation problems using compactly supported radial basis functions, a class of efficient numerical algorithms is presented. They iteratively select small subsets of the interpolation points and refine the current approximative solution there. Convergence turns out to be linear, and the technique can be generalized to positive definite linear systems in general. A major feature is that the approximations tend to have only a small number of nonzero coefficients, and in this sense the technique is related to greedy algorithms and best n-term approximation.

Journal ArticleDOI
TL;DR: A new and efficient composite heuristic is proposed for the pickup and delivery traveling salesman problem, which is composed of two phases: a solution construction phase including a local optimization component and a deletion and re-insertion improvement phase.

Journal ArticleDOI
TL;DR: A framework for automatic landmark identification is presented, based on an algorithm for corresponding the boundaries of two shapes; a binary tree of corresponded pairs of shapes is used to generate landmarks automatically on each of a set of example shapes.
Abstract: A framework for automatic landmark identification is presented based on an algorithm for corresponding the boundaries of two shapes. The auto-landmarking framework employs a binary tree of corresponded pairs of shapes to generate landmarks automatically on each of a set of example shapes. The landmarks are used to train statistical shape models, known as point distribution models. The correspondence algorithm locates a matching pair of sparse polygonal approximations, one for each of a pair of boundaries by minimizing a cost function, using a greedy algorithm. The cost function expresses the dissimilarity in both the shape and representation error (with respect to the defining boundary) of the sparse polygons. Results are presented for three classes of shape which exhibit various types of nonrigid deformation.

Journal ArticleDOI
TL;DR: The backward greedy algorithm is shown to be optimal for the subset selection problem in the sense that it selects the "correct" subset of columns from A if the perturbation of the data vector b is small enough.
Abstract: The following linear inverse problem is considered: Given a full column rank m × n data matrix A, and a length m observation vector b, find the best least-squares solution to A x = b with at most r < n nonzero components. The backward greedy algorithm computes a sparse solution to A x = b by removing greedily columns from A until r columns are left. A simple implementation based on a QR downdating scheme using Givens rotations is described. The backward greedy algorithm is shown to be optimal for the subset selection problem in the sense that it selects the "correct" subset of columns from A if the perturbation of the data vector b is small enough. The results generalize to any other norm of the residual.

Journal ArticleDOI
01 Dec 2000
TL;DR: Using the D-optimality criterion to minimize the workpiece positioning errors, two different greedy algorithms are developed for force-closure fixturing in the point set domain.
Abstract: Addresses the problem of fixture synthesis for 3-D workpieces with a set of discrete locations on the workpiece surface as a point set of candidates for locator and clamp placement. A sequential optimization approach is presented in order to reduce the complexity associated with an exhaustive search. The approach is based on a concept of optimum experimental design, while the optimization focuses on the fixture performance of workpiece localization accuracy. In using the D-optimality criterion to minimize the workpiece positioning errors, two different greedy algorithms are developed for force-closure fixturing in the point set domain. Both 2-D and 3-D examples are presented to illustrate the effectiveness of the synthesis approach.

Journal ArticleDOI
01 Nov 2000
TL;DR: In this article, a greedy off-line textual substitution approach is proposed for text compression or structural inference, where a substring W is identified such that replacing all instances of W in x except one by a suitable pair of pointers yields the highest possible contraction of x; the process is then repeated on the contracted text string until substrings capable of producing contractions can no longer be found.
Abstract: Greedy off-line textual substitution refers to the following approach to compression or structural inference. Given a long text string x, a substring W is identified such that replacing all instances of W in x except one by a suitable pair of pointers yields the highest possible contraction of x; the process is then repeated on the contracted text string until substrings capable of producing contractions can no longer be found. This paper examines computational issues arising in the implementation of this paradigm and describes some applications and experiments.
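A toy sketch of the greedy substitution rounds described above. The fixed-width token standing in for the "pair of pointers" and the gain accounting are illustrative simplifications, not the paper's encoding:

```python
def greedy_substitution(text, min_len=2, rounds=10):
    # Off-line greedy substitution sketch: in each round, find the
    # substring whose replacement (all occurrences but the first by a
    # 3-character token) saves the most characters, then contract.
    for r in range(rounds):
        best_gain, best_w = 0, None
        n = len(text)
        for length in range(min_len, n // 2 + 1):
            counts = {}
            for i in range(n - length + 1):
                w = text[i:i + length]
                counts[w] = counts.get(w, 0) + 1
            for w, k in counts.items():
                if k < 2:
                    continue
                gain = (k - 1) * (len(w) - 3)  # chars saved (token is 3 chars)
                if gain > best_gain:
                    best_gain, best_w = gain, w
        if best_w is None:
            break  # no substring yields a contraction; stop
        token = "<%d>" % r  # illustrative stand-in for a pointer pair
        first = text.index(best_w)
        head = text[:first + len(best_w)]
        text = head + text[first + len(best_w):].replace(best_w, token)
    return text
```

Each round is quadratic in this naive form; the paper's computational contribution is precisely about doing this selection efficiently on long texts.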

Journal ArticleDOI
TL;DR: Based on a formal definition of covering problems, which includes all the fundamental problems above and others, a modified greedy algorithm is developed which for Vertex Cover gives an expected performance ratio ≤ 2.
Abstract: We present a simple and unified approach for developing and analyzing approximation algorithms for covering problems. We illustrate this on approximation algorithms for the following problems: Vertex Cover, Set Cover, Feedback Vertex Set, Generalized Steiner Forest, and related problems. The main idea can be phrased as follows: iteratively, pay two dollars (at most) to reduce the total optimum by one dollar (at least), so the rate of payment is no more than twice the rate of the optimum reduction. This implies a total payment (i.e., approximation cost) ≤ twice the optimum cost. Our main contribution is based on a formal definition for covering problems, which includes all the above fundamental problems and others. We further extend the Bafna et al. extension of the Local-Ratio theorem. Our extension eventually yields a short generic r -approximation algorithm which can generate most known approximation algorithms for most covering problems. Another extension of the Local-Ratio theorem to randomized algorithms gives a simple proof of Pitt's randomized approximation for Vertex Cover. Using this approach, we develop a modified greedy algorithm, which for Vertex Cover gives an expected performance ratio ≤ 2 .
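The "pay at most two dollars to reduce the optimum by one dollar" rule yields the classic 2-approximation for Vertex Cover in a few lines. This is a sketch of that one idea, not the paper's generic r-approximation algorithm:

```python
def vertex_cover_2approx(edges):
    # For each edge not yet covered, pay for both endpoints; any optimal
    # cover must contain at least one of them, so the total cost is at
    # most twice the optimum.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On a triangle the algorithm takes both endpoints of the first edge, which covers the remaining two edges; on a 4-path it may pay 4 against an optimum of 2, matching the factor-2 guarantee.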

Journal ArticleDOI
TL;DR: An integer approximation algorithm capable of generating integer values of the load fractions in time O(m), where m is the number of processors in the network, is proposed; the upper bound on the suboptimal solution generated by the algorithm is shown to lie within a radius given by the sum of the computation and communication delays.
Abstract: Optimal distribution of divisible loads in bus networks is considered in this paper. The problem of minimizing the processing time is investigated by including all the overhead components that could penalize the performance of the system, in addition to the inherent communication and computation delays. These overheads are considered to be constant additive factors to the respective communication and computation components. Closed-form solution for the processing time is derived and the influence of overheads on the optimal processing time is analyzed. We derive a necessary and sufficient condition for the existence of the optimal processing time. We then study the effect of changing the load distribution sequence on the time performance. Through rigorous analysis, an optimal sequence to distribute the load among the processors is identified, whenever it exists. In case such an optimal sequence fails to exist, we present a greedy algorithm to obtain a suboptimal sequence based on some important properties of the overhead factors. Then, the effect of granularity of the data that is divisible is considered in the analysis for the case of homogeneous networks. An integer approximation algorithm capable of generating integer values of the load fractions in time O(m), where m is the number of processors in the network, is proposed. We then show that the upper bound on the suboptimal solution generated by our algorithm lies within a radius given by the sum of the computation and communication delays. Several numerical examples are presented to illustrate the concepts.

Proceedings ArticleDOI
29 Oct 2000
TL;DR: A framework for scalable streaming media delivery is proposed, that involves a novel scheduling algorithm called Expected runtime Distortion Based Scheduling (EDBS) which decides the order in which packets should be transmitted in order to improve client playback quality in the presence of channel losses.
Abstract: Scalable, or layered, media representation appears to be more suitable for transmission over the current heterogeneous networks. In this paper we study the problem of scalable layered streaming media delivery over a lossy channel. The goal is to find an optimal transmission policy to achieve the best playback quality at the client end. The problem involves some trade-offs such as time-constrained delivery and data dependencies. For example, a layer should be dropped before transmission if it already has a delay such that it cannot be played before its scheduled time. Moreover, less important layers that are near their playback time may also be dropped or delayed in order to save bandwidth for other layers with a high priority. We propose a framework for scalable streaming media delivery that involves a novel scheduling algorithm called Expected runtime Distortion Based Scheduling (EDBS), which decides the order in which packets should be transmitted in order to improve client playback quality in the presence of channel losses. A fast greedy search algorithm is presented that achieves almost the same performance as an exhaustive search technique (98% of the time it results in the same schedule) with very low complexity and is applicable to real-time applications.

Proceedings ArticleDOI
20 Sep 2000
TL;DR: It is proved that the greedy algorithm always gives the optimal switching activity on the instruction bus in the horizontal case, while the vertical case is NP-hard, for which a heuristic algorithm is proposed.
Abstract: In this paper, we investigate compiler transformation techniques for the problem of scheduling VLIW instructions, aimed at reducing the power consumption on the instruction bus. The scheduling can be categorized into two types: horizontal and vertical. For the horizontal case, we propose a bipartite-matching scheme. We prove that our greedy algorithm always gives the optimal switching activity of the instruction bus. In the vertical case, we prove that the problem is NP-hard, and propose a heuristic algorithm. Experimental results show an average 13% improvement with a 4-way issue architecture and an average 20% improvement with an 8-way issue architecture in instruction-bus power consumption, as compared with conventional list scheduling, for an extensive set of benchmarks.

Journal ArticleDOI
TL;DR: A relationship with the partial solution given by the LP-relaxation of the GAP is found, and the conditions under which the algorithm is asymptotically optimal in a probabilistic sense are derived.

Journal ArticleDOI
TL;DR: The problem of scheduling jobs with release dates on a single-batch processor in order to minimize the makespan is considered, and a greedy heuristic for the general problem is shown to have a best-possible performance bound of 2.

Proceedings Article
10 Jul 2000
TL;DR: This paper explores the application of multi-objective Genetic Algorithms (mGAs) to rural land use planning, a spatial allocation problem; the strengths and weaknesses of the underlying framework and of each representation are identified.
Abstract: This paper explores the application of multi-objective Genetic Algorithms (mGAs) to rural land use planning, a spatial allocation problem. Two mGAs are proposed. Both share an underlying structure of: fitness assignment using Pareto-dominance ranking, niche induction and an individual replacement strategy. They are differentiated by their representations: a fixed-length genotype composed of genes that map directly to a land parcel's use and a variable-length, order-dependent representation making allocations indirectly via a greedy algorithm. The latter representation requires additional breeding operators to be defined and post-processing of the genotype structure to identify and remove duplicate genotypes. The two mGAs are compared on a real land use planning problem and the strengths and weaknesses of the underlying framework and each representation are identified.

Proceedings Article
11 Apr 2000
TL;DR: This work provides algorithms for reasoning with partitions of axioms in propositional and first-order logic and provides a greedy algorithm that automatically decomposes a given theory into partitions, exploiting the parameters that influence the efficiency of computation.
Abstract: We investigate the problem of reasoning with partitions of related logical axioms. Our motivation is two-fold. First, we are concerned with how to reason effectively with multiple knowledge bases that have overlap in content. Second, and more fundamentally, we are concerned with how to exploit structure inherent in a set of logical axioms to induce a partitioning of the axioms that will lead to an improvement in the efficiency of reasoning. To this end, we provide algorithms for reasoning with partitions of axioms in propositional and first-order logic. Craig’s interpolation theorem serves as a key to proving completeness of these algorithms. We analyze the computational benefit of our algorithms and detect those parameters of a partitioning that influence the efficiency of computation. These parameters are the number of symbols shared by a pair of partitions, the size of each partition, and the topology of the partitioning. Finally, we provide a greedy algorithm that automatically decomposes a given theory into partitions, exploiting the parameters that influence the efficiency of computation.

Journal ArticleDOI
TL;DR: A new fast and efficient reconfiguration algorithm is proposed and empirical study shows that the new algorithm indeed produces good results in terms of the percentages of harvest and degradation of VLSI/WSI arrays.
Abstract: This paper considers the problem of reconfiguring two-dimensional degradable VLSI/WSI arrays under the constraint of row and column rerouting. The goal of the reconfiguration problem is to derive a fault-free subarray T from the defective host array such that the dimensions of T are larger than some specified minimum. This problem has been shown to be NP-complete under various switching and routing constraints. However, we show that a special case of the reconfiguration problem is optimally solvable in linear time. Using this result, a new fast and efficient reconfiguration algorithm is proposed. Empirical study shows that the new algorithm indeed produces good results in terms of the percentages of harvest and degradation of VLSI/WSI arrays.

Proceedings ArticleDOI
10 Apr 2000
TL;DR: A simple and fast greedy heuristic that yields good solutions when the system is predominantly read-oriented and an extended genetic algorithm that rapidly adapts to the dynamically changing characteristics such as the frequency of reads and writes for particular objects are proposed.
Abstract: Creating replicas of frequently accessed objects across a read-intensive network can result in large bandwidth savings which, in turn, can lead to reduction in user response time. In contrast, data replication in the presence of writes incurs extra cost due to multiple updates. The set of sites at which an object is replicated constitutes its replication scheme. Finding an optimal replication scheme that minimizes the amount of network traffic given read and write frequencies for various objects, is NP-complete in general. We propose two heuristics to deal with this problem for static read and write patterns. The first is a simple and fast greedy heuristic that yields good solutions when the system is predominantly read-oriented. The second is a genetic algorithm that through an efficient exploration of the solution space provides better solutions for cases where the greedy heuristic does not perform well. We also propose an extended genetic algorithm that rapidly adapts to the dynamically changing characteristics such as the frequency of reads and writes for particular objects.

Journal ArticleDOI
TL;DR: It is shown that the bounds, which can be efficiently computed, provide an excellent estimate of the error probabilities over the entire range of the signal-to-noise ratio (SNR) E_b/N_0.
Abstract: We consider a Bonferroni-type lower bound due to Kounias (1968) on the probability of a finite union. The bound is expressed in terms of only the individual and pairwise event probabilities; however, it suffers from requiring an exponentially complex search for its direct implementation. We address this problem by presenting a practical algorithm for its evaluation. This bound is applied together with two other bounds, a recent lower bound (the KAT bound) and a greedy algorithm implementation of an upper bound due to Hunter (1976), to examine the symbol error (P_a) and bit error (P_b) probabilities of an uncoded communication system used in conjunction with M-ary phase-shift keying (PSK)/quadrature amplitude (QAM) (PSK/QAM) modulations and maximum a posteriori (MAP) decoding over additive white Gaussian noise (AWGN) channels. It is shown that the bounds, which can be efficiently computed, provide an excellent estimate of the error probabilities over the entire range of the signal-to-noise ratio (SNR) E_b/N_0. The new algorithmic bound and the greedy bound are particularly impressive as they agree with the simulation results even during very severe channel conditions.
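A sketch of a Hunter-style upper bound computed greedily, assuming the standard form P(A_1 ∪ ... ∪ A_n) ≤ Σ P(A_i) − Σ_{(i,j)∈T} P(A_i ∩ A_j), maximized over spanning trees T. The greedy step is Kruskal's maximum spanning tree on pairwise intersection probabilities; input names are illustrative:

```python
def hunter_upper_bound(p, p2):
    # p: individual event probabilities; p2: symmetric matrix of pairwise
    # intersection probabilities. Build a maximum spanning tree greedily
    # (Kruskal with union-find) and subtract its weight from sum(p).
    n = len(p)
    edges = sorted(((p2[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bound = sum(p)
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:  # edge joins two components: take it greedily
            parent[ri] = rj
            bound -= w
    return bound
```

For three independent events of probability 0.5 (pairwise intersections 0.25), the bound is 1.5 − 0.5 = 1.0, against a true union probability of 0.875.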

Journal ArticleDOI
TL;DR: It is proved that for any f from the closure of the convex hull of D the error of m-term approximation by the WCGA admits an explicit rate estimate, and similar results are obtained for the Weak Relaxed Greedy Algorithm and its modification.
Abstract: We study efficiency of approximation and convergence of two greedy type algorithms in uniformly smooth Banach spaces. The Weak Chebyshev Greedy Algorithm (WCGA) is defined for an arbitrary dictionary D and provides nonlinear m-term approximation with regard to D. This algorithm is defined inductively, with the mth step consisting of two basic substeps: (1) selection of an mth element ϕ_m^c from D, and (2) construction of an m-term approximant G_m^c. We include the name of Chebyshev in the name of this algorithm because at substep (2) the approximant G_m^c is chosen as the best approximant from Span(ϕ_1^c, ..., ϕ_m^c). The term Weak Greedy Algorithm indicates that at each substep (1) we choose ϕ_m^c as an element of D that satisfies some condition which is "t_m-times weaker" than the condition for ϕ_m^c to be optimal (t_m = 1). We obtain error estimates for Banach spaces with modulus of smoothness ρ(u) ≤ γu^q, 1 < q ≤ 2.

Journal ArticleDOI
TL;DR: A new heuristic method for the Traveling Salesman Problem with Time Windows is presented, based on the solution of an auxiliary problem: an assignment problem with an ad hoc objective function is solved to obtain a solution close enough to a feasible solution of the original problem.
Abstract: The aim of this paper is to present a new heuristic method for the Traveling Salesman Problem with Time Windows, based on the solution of an auxiliary problem. The idea is to solve an assignment problem with an ad hoc objective function to obtain a solution close enough to a feasible solution of the original problem. Given this solution, made up of a long main tour containing the depot and a few small subtours, it is easy to insert all the subtours into the main path using a greedy insertion procedure. The algorithm described applies the proposed constructive scheme and then uses a local search procedure to improve the initial solution. The computational results show the effectiveness of this approach.
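The greedy insertion step described above can be sketched as a cheapest-insertion splice (time-window feasibility checks omitted); the distance function is user-supplied and the names are illustrative:

```python
def cheapest_insertion(tour, nodes, dist):
    # Greedy insertion sketch: splice each node into the cyclic tour at
    # the position that adds the least extra travel cost.
    tour = list(tour)
    for c in nodes:
        n = len(tour)
        best = min(range(n), key=lambda i: dist(tour[i], c)
                   + dist(c, tour[(i + 1) % n])
                   - dist(tour[i], tour[(i + 1) % n]))
        tour.insert(best + 1, c)
    return tour
```

Inserting a point lying on an existing tour edge incurs zero extra cost, so the greedy rule places it exactly on that edge.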