
Showing papers on "Greedy algorithm published in 1990"


Proceedings ArticleDOI
24 Jun 1990
TL;DR: This research demonstrates that bipartite weighted matching is a very good solution to the data path allocation problem, one that can also take interconnection cost into account.
Abstract: We propose a graph-theoretic approach for the data path allocation problem. We decompose the problem into three subproblems: (1) register allocation, (2) operation assignment, and (3) connection allocation. The first two subproblems are modeled as two bipartite weighted matching problems and solved using the Hungarian Method [Pap82]. The third subproblem is solved using a greedy method. While previous studies disagree over which of subproblems (1) and (2) should be done first, we show that, by taking the other into consideration while performing one, equally satisfactory results can be obtained. We have implemented two programs, LYRA and ARYL, to solve the subproblems in different orders, namely, “(1), (2), then (3)” and “(2), (1), then (3)”, respectively. The matching paradigm allows us to take a more global approach toward the problem than previous work does. For register allocation, our approach is the first to guarantee minimal usage of registers while taking the interconnection cost into account. For all the benchmarks from the literature, both LYRA and ARYL produced designs as good as, if not better than, those by others in a very short time. This research demonstrates that the bipartite weighted matching algorithm is indeed a very good solution for the data path allocation problem.
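The register-allocation step above reduces to minimum-cost bipartite assignment. As a minimal sketch of that formulation (a brute-force stand-in for the Hungarian Method, over a made-up cost matrix, not the paper's implementation):

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustive minimum-cost bipartite assignment: map each value (row)
    to a distinct register (column), minimizing total interconnection cost.
    A brute-force stand-in for the O(n^3) Hungarian Method."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Hypothetical 3x3 cost matrix: entry [i][j] is the wiring cost of
# keeping value i in register j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = min_cost_assignment(cost)  # value i -> register assignment[i]
```

The Hungarian Method computes the same optimum in polynomial time; brute force is used here only to keep the sketch self-contained.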

185 citations


Book ChapterDOI
01 Oct 1990
TL;DR: A genetic algorithm is presented that solves traveling salesman problems with up to 442 cities to optimality.
Abstract: We present a genetic algorithm that solves traveling salesman problems with up to 442 cities to optimality. Muhlenbein et al. [MGK 88], [MK 89] have proposed a genetic algorithm for the traveling salesman problem which generates very good, but not optimal, solutions for traveling salesman problems with 442 and 531 cities. We have improved this approach by refining all basic components of that genetic algorithm. For our experimental investigations we used the traveling salesman problems TSP(i) with i cities for i = 137, 202, 229, 318, 431, 442, 666, which were solved to optimality in [CP 80], [GH 89].
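A toy version of a genetic algorithm for the TSP (order crossover plus swap mutation, on a hypothetical 6-city instance; the paper's improved operators are far more sophisticated) might look like:

```python
import math
import random

def tour_len(tour, dist):
    """Length of a closed tour under the given distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def crossover(p1, p2):
    """Order crossover (OX): copy a random slice from p1, fill the rest
    with the remaining cities in the order they appear in p2."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=30, generations=200):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_len(t, dist))
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            c = crossover(p1, p2)
            if random.random() < 0.2:             # mutation: swap two cities
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=lambda t: tour_len(t, dist))

# Six cities on a circle; the optimal tour visits them in angular order.
random.seed(1)
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6)) for k in range(6)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best = ga_tsp(dist)
```

Because the elite half of each population survives unchanged, the best tour found never worsens from one generation to the next.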

164 citations


Journal ArticleDOI
E. B. Baum
TL;DR: A new, fast algorithm is given for learning unions of half spaces in fixed dimension, suggesting a generalization that would naively avoid a credit assignment problem (CAP) and learn in time polynomial in the dimension; it is proved, however, that no such approach to evading the CAP can work.

126 citations


Journal ArticleDOI
TL;DR: In this paper, a variant of the greedy algorithm for weight functions defined on the system of m-subsets of a given set E is studied, and those classes of weight functions for which this algorithm works are completely characterized.

117 citations


Proceedings ArticleDOI
01 May 1990
TL;DR: Overall, this paper finds that certain greedy algorithms perform surprisingly well on average, and shows that the maximum size of a queue over a time span of T steps is O(e) with high probability.
Abstract: In this paper, we analyze the average case behavior of greedy routing algorithms on arrays under a variety of assumptions. Overall, we find that certain greedy algorithms perform surprisingly well on average. For example, given an N x N array or torus where every node starts with one packet headed for a random destination, we show that some (but not all) greedy store-and-forward algorithms route every packet to its destination with only O(log N) delay per packet and maximum queue size 4 with probability near 1. Moreover, the expected delay per packet is only a small constant, independent of N. We also extend the analysis to a steady state model of routing in which packets enter the network at random times. Provided that the overall arrival rate of packets to the network is less than 100% of the network capacity, we show that any packet encounters at most O(log N) delay with high probability. In addition, we show that the maximum size of a queue over a time span of T steps is O(e) with high probability. The results can also be extended to analyze the average case behavior of cut-through (or flit-serial) routing under lighter loading.

116 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal multicast tree (OMT) was proposed for interprocessor communication in distributed-memory multiprocessors and a greedy multicast algorithm was proposed to guarantee a minimized message delivery time.

104 citations


Journal ArticleDOI
01 May 1990
TL;DR: The algorithm is a generalization of an algorithm for graph optimal isomorphism, and its potential for engineering application is demonstrated by a simple structural pattern recognition problem and a plant allocation and distribution problem.
Abstract: An algorithm for finding the optimal monomorphism between two attributed graphs is proposed. The problem is formulated as a tree search problem. To guide the search the branch-and-bound heuristic approach is adopted, using an efficient consistent lower bounded estimate for the evaluation function of the cost associated with the optimal solution path in the search tree. The algorithm is a generalization of an algorithm for graph optimal isomorphism. The algorithm's potential for engineering application is demonstrated by a simple structural pattern recognition problem and a plant allocation and distribution problem.

90 citations


Book ChapterDOI
01 Jan 1990
TL;DR: In this article, an algorithm for automated construction of a sparse Bayesian network given an unstructured probabilistic model and causal domain information from an expert is developed and implemented.
Abstract: An algorithm for automated construction of a sparse Bayesian network given an unstructured probabilistic model and causal domain information from an expert has been developed and implemented. The goal is to obtain a network that explicitly reveals as much information regarding conditional independence as possible. The network is built incrementally adding one node at a time. The expert's information and a greedy heuristic that tries to keep the number of arcs added at each step to a minimum are used to guide the search for the next node to add. The probabilistic model is a predicate that can answer queries about independencies in the domain. In practice the model can be implemented in various ways. For example, the model could be a statistical independence test operating on empirical data or a deductive prover operating on a set of independence statements about the domain.

49 citations


Journal ArticleDOI
01 Aug 1990-Infor
TL;DR: Efficient implementations of greedy-type procedures for approximate solution of Steiner Tree problems with arbitrary nonnegative lengths on the edges are discussed, along with a special subclass of problems (meeting the so-called “nonadjacency condition”).
Abstract: The optimum Steiner Tree problem in a (nondirected) graph is known to belong to the class of NP-hard problems. However large scale instances (typically hundreds or thousands of nodes) arise in such important applications as optimal communication network design, or VLSI routing. There is therefore a strong need for efficient heuristics. We discuss in this paper efficient implementations of greedy-type procedures for approximate solution of Steiner Tree problems with arbitrary nonnegative lengths on the edges. A special subclass of problems (meeting the so-called “nonadjacency condition”) is also introduced and studied. Various ways of improving the computational efficiency of the basic greedy algorithm are examined, among which the use of reoptimization procedures and proper exploitation of a supermodularity property of an associated set function; thanks to the latter the total number of minimum spanning tree computations can be significantly reduced according to the “accelerated greedy ...
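One classic member of this family of greedy procedures is the shortest-path (path-growing) heuristic; a minimal sketch on a hypothetical graph (not necessarily the authors' exact procedure, and without their reoptimization and supermodularity speedups):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path lengths and predecessors from src in a weighted graph."""
    dist = {v: float("inf") for v in adj}
    prev = {v: None for v in adj}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def greedy_steiner(adj, terminals):
    """Path heuristic: grow the tree by repeatedly attaching the terminal
    that is cheapest to reach from any node already in the tree.
    (Runs Dijkstra from every tree node each round; fine for a sketch.)"""
    tree_nodes = {terminals[0]}
    tree_edges = set()
    remaining = set(terminals[1:])
    while remaining:
        best = None                       # (cost, terminal, predecessor map)
        for u in tree_nodes:
            dist, prev = dijkstra(adj, u)
            for t in remaining:
                if best is None or dist[t] < best[0]:
                    best = (dist[t], t, prev)
        _, t, prev = best
        v = t                             # splice the shortest path into the tree
        while prev[v] is not None:
            tree_edges.add(frozenset((v, prev[v])))
            tree_nodes.add(v)
            v = prev[v]
        remaining.discard(t)
    return tree_edges

# Star graph: terminals a, b, c, non-terminal hub s; the heuristic
# discovers that routing through the Steiner node s is cheapest.
adj = {
    "a": [("s", 1), ("b", 3), ("c", 3)],
    "b": [("s", 1), ("a", 3), ("c", 3)],
    "c": [("s", 1), ("a", 3), ("b", 3)],
    "s": [("a", 1), ("b", 1), ("c", 1)],
}
tree = greedy_steiner(adj, ["a", "b", "c"])
```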

48 citations


Journal ArticleDOI
TL;DR: This paper presents the results of statistical experiments comparing the Greedy algorithm with the Threshold algorithm and concludes that the Greedy algorithm is an attractive alternative to the Threshold algorithm.

42 citations


Journal ArticleDOI
TL;DR: An algorithm is introduced that efficiently matches parts of boundaries of two-dimensional objects in order to assemble apictorial jigsaw puzzles using Weiner's string matching technique combined with compact position trees to find the longest shared pattern between two strings.
Abstract: We introduce an algorithm that efficiently matches (fits together) parts of boundaries of two-dimensional objects in order to assemble apictorial jigsaw puzzles. A rotation-independent shape encoding allows us to find the best (longest) match between two shapes in time proportional to the sum of the lengths of their representations. In order to find this match, we use Weiner's string matching technique combined with compact position trees to find, in linear time, the longest shared pattern between two strings. The shape matching procedure is then used by two greedy algorithms to assemble the apictorial jigsaw puzzles.
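The longest-shared-pattern step can be illustrated with a quadratic-time dynamic program (the paper uses Weiner's suffix-tree construction to do this in linear time); boundary shapes are assumed here to be encoded as strings of curvature symbols:

```python
def longest_common_substring(s, t):
    """Longest contiguous pattern shared by s and t, via O(|s||t|) DP.
    A simple stand-in for the linear-time suffix-structure approach."""
    best_len, best_end = 0, 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                cur[j] = prev[j - 1] + 1          # extend the common run
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return s[best_end - best_len:best_end]

# Two hypothetical boundary encodings (L = left turn, R = right turn):
# the longest stretch where the pieces fit together is "RRLR".
match = longest_common_substring("LLRRLRL", "RRLR")
```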

Proceedings ArticleDOI
01 Jan 1990
TL;DR: Techniques for solving geometric closest-point and farthest-point query problems, in the presence of deletions, are presented, including efficient implementations of classical greedy heuristics for minimum-weight matching and maximum-weight matching.
Abstract: We present techniques for solving geometric closest-point and farthest-point query problems in the presence of deletions. Applications include efficient implementations of classical greedy heuristics for minimum-weight matching (where our result improves on that of Bentley and Saxe) and maximum-weight matching.


Journal ArticleDOI
TL;DR: A necessary and sufficient condition for a greedy algorithm to find a maximal basis for a fuzzy matroid is established.

Proceedings ArticleDOI
01 Jan 1990
TL;DR: The characterization and the algorithm are based on Hoffman’s notion of Monge sequences, as defined for the special case when no shippings are disallowed, and on an antimatroid interpretation of this notion.
Abstract: We study transportation problems in which shipping between certain sources and destinations is disallowed. Given such a problem we seek a permutation of the decision variables which, when used by the greedy algorithm, which maximizes each variable in turn according to the order prescribed by the permutation, provides an optimal solution for all feasible supply and demand vectors. We give a necessary and sufficient condition under which a permutation satisfies that requirement, and devise an efficient algorithm which constructs such a permutation or determines that none exists. Our characterization and the algorithm are based on Hoffman’s notion of Monge sequences, as defined for the special case when no shippings are disallowed, and on an antimatroid interpretation of this notion. The running time of our algorithm is better than that of the best known algorithms for solving the transportation problem, both for sparse and for dense problems. Having constructed such a permutation, a solution of any problem with that cost matrix can be obtained in linear time.
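A sketch of the greedy rule itself, on an illustrative 2x2 instance whose cost matrix satisfies the Monge condition, so that the lexicographic (northwest-corner) order happens to be an optimal permutation:

```python
def greedy_transport(supply, demand, order):
    """Greedy for the transportation problem: maximize each variable
    x[i][j] in turn, in the order given by the permutation `order`
    of (source, destination) pairs."""
    s, d = list(supply), list(demand)
    x = [[0] * len(demand) for _ in supply]
    for i, j in order:
        x[i][j] = min(s[i], d[j])     # ship as much as is still possible
        s[i] -= x[i][j]
        d[j] -= x[i][j]
    return x

# cost satisfies the Monge condition c[0][0] + c[1][1] <= c[0][1] + c[1][0],
# so the lexicographic order of variables is a Monge sequence here.
cost = [[1, 2], [2, 2]]
order = [(0, 0), (0, 1), (1, 0), (1, 1)]
x = greedy_transport([3, 2], [2, 3], order)
objective = sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))
```

The paper's contribution is finding such an order (or proving none exists) when some shipments are forbidden; the greedy evaluation itself, as above, is linear in the number of variables.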

Journal ArticleDOI
TL;DR: This paper evaluates several hypercube embedding heuristics, including simulated annealing, local search, greedy, and recursive mincut bipartitioning, and proposes a new greedy heuristic, a new Kernighan-Lin style heuristic, and some new features to enhance local search.
Abstract: The hypercube embedding problem, a restricted version of the general mapping problem, is the problem of mapping a set of communicating processes to a hypercube multiprocessor. The goal is to find a mapping that minimizes the length of the paths between communicating processes. Unfortunately the hypercube embedding problem has been shown to be NP-hard. Thus many heuristics have been proposed for hypercube embedding. This paper evaluates several hypercube embedding heuristics, including simulated annealing, local search, greedy, and recursive mincut bipartitioning. In addition to known heuristics, we propose a new greedy heuristic, a new Kernighan-Lin style heuristic, and some new features to enhance local search. We then assess variations of these strategies (e.g., different neighborhood structures) and combinations of them (e.g., greedy as a front end of iterative improvement heuristics). The asymptotic running times of the heuristics are given, based on efficient implementations using a priority-queue data structure.
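As a sketch of the "greedy as a front end" idea on a hypothetical instance (not the paper's heuristic in detail): place processes one at a time on the free hypercube node that minimizes Hamming distance to already-placed communication partners:

```python
def hamming(a, b):
    """Hypercube distance between two node labels."""
    return bin(a ^ b).count("1")

def embedding_cost(edges, placement):
    """Total path length over all communicating process pairs."""
    return sum(hamming(placement[u], placement[v]) for u, v in edges)

def greedy_embed(num_procs, edges, dim):
    """Greedily place each process on the free node closest to its
    already-placed communication partners."""
    nbrs = {p: [] for p in range(num_procs)}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    placement, free = {}, set(range(2 ** dim))
    for p in range(num_procs):
        best = min(free, key=lambda node: sum(
            hamming(node, placement[q]) for q in nbrs[p] if q in placement))
        placement[p] = best
        free.remove(best)
    return placement

# Four processes communicating in a ring, mapped onto a 2-cube:
# the greedy placement realizes every edge as a single hypercube hop.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
placement = greedy_embed(4, edges, 2)
```

A placement like this would then serve as the starting point for an iterative improvement heuristic such as Kernighan-Lin, as the paper suggests.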

Proceedings ArticleDOI
01 Jan 1990
TL;DR: The RCTSP algorithm is used to optimally schedule a processing facility involving sequence dependent transition costs and an aggregate due date on all job completion times.
Abstract: The Resource Constrained Traveling Salesman Problem (RCTSP) is introduced and an optimal algorithm for its solution is presented. The RCTSP is shown to subsume the Prize Collecting TSP and Orienteering Problem. Computational results are presented for sequential and parallel computations for problems containing up to 200 cities. The RCTSP algorithm is used to optimally schedule a processing facility involving sequence dependent transition costs and an aggregate due date on all job completion times. A penalty is incurred for each job not completed by the aggregate due date.

Proceedings ArticleDOI
23 May 1990
TL;DR: A heuristic, called the neighborhood decoupling (ND) algorithm, is described, which makes more efficient use of minterms truncated to the highest logic value.
Abstract: A heuristic, called the neighborhood decoupling (ND) algorithm, is described. It first selects a minterm and then selects an implicant, a two-step process employed in previous heuristics. The approach taken closely resembles the G.W. Dueck and D.M. Miller (1987) heuristic; however, it makes more efficient use of minterms truncated to the highest logic value. The authors present the algorithm, discuss its implementation, show that it performs consistently better than others, and explain the reason for its improved performance. >

Journal ArticleDOI
TL;DR: This paper considers a family of subsets of a finite set, each of cardinality n, as feasible solutions of a combinatorial optimization problem, and shows that such problems can be solved efficiently if the n-sum problem can be solved efficiently.

Journal ArticleDOI
TL;DR: A natural parallel version of the classical greedy algorithm for finding a maximal independent set in a graph is considered, and its expected running time on random graphs of arbitrary edge density is proved to be O(log n).
Abstract: We consider a natural parallel version of the classical greedy algorithm for finding a maximal independent set in a graph. This version was studied by Coppersmith, Raghavan, and Tompa, who conjectured that its expected running time on random graphs of arbitrary edge density is O(log n). We prove that conjecture.
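The sequential algorithm being parallelized is simply the following (a sketch with an arbitrary fixed vertex order; the parallel version adds, in each round, all vertices that precede their remaining neighbors in that order):

```python
def greedy_mis(adj):
    """Classical greedy: scan vertices in a fixed order and add each one
    none of whose neighbors is already in the independent set."""
    independent = set()
    for v in sorted(adj):
        if not any(u in independent for u in adj[v]):
            independent.add(v)
    return independent

# A path 0-1-2-3-4: the greedy scan picks the alternating set {0, 2, 4}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
mis = greedy_mis(adj)
```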

Book ChapterDOI
01 Jul 1990
TL;DR: This work presents the first quadratic-time algorithm for the greedy triangulation of a finite planar point set, and the first linear-time algorithm for the greedy triangulation of a convex polygon.
Abstract: We present the first quadratic-time algorithm for the greedy triangulation of a finite planar point set, and the first linear-time algorithm for the greedy triangulation of a convex polygon.
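The greedy triangulation itself is easy to state (the contribution above is computing it fast): sort all point pairs by length and accept each segment that crosses no earlier one. A naive cubic-time sketch, assuming points in general position:

```python
from itertools import combinations
import math

def ccw(a, b, c):
    """Twice the signed area of triangle abc."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(p, q, r, s):
    """True if segments pq and rs properly cross (shared endpoints don't
    count). Assumes no three input points are collinear."""
    if len({p, q, r, s}) < 4:
        return False
    return (ccw(p, q, r) > 0) != (ccw(p, q, s) > 0) and \
           (ccw(r, s, p) > 0) != (ccw(r, s, q) > 0)

def greedy_triangulation(points):
    """Accept point pairs in order of increasing length, skipping any
    segment that crosses a previously accepted one."""
    pairs = sorted(combinations(points, 2), key=lambda e: math.dist(*e))
    accepted = []
    for p, q in pairs:
        if not any(crosses(p, q, r, s) for r, s in accepted):
            accepted.append((p, q))
    return accepted

# Unit square: the four sides and one diagonal survive; the second
# diagonal is rejected because it crosses the first.
tri = greedy_triangulation([(0, 0), (1, 0), (0, 1), (1, 1)])
```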

Journal ArticleDOI
TL;DR: An adaptive algorithm that iteratively improves upon a given initial bisection of a graph is presented and its performance is compared with that of the well-known Kernighan-Lin method on many random graphs with large numbers of vertices.
Abstract: The graph bisectioning problem has several applications in VLSI layout, such as floorplanning and module placement. A sufficient condition for optimality of a given bisection is presented. This condition leads to an algorithm that always finds an optimal bisection for a certain class of graphs. A greedy approach is then used to develop a more powerful heuristic. On small random graphs with up to 20 vertices, one of the greedy algorithms generated the optimal bisection in each case considered. For very large graphs with 300 vertices or more, the algorithm generated bisections with costs within 30% of a lower bound previously derived. An adaptive algorithm that iteratively improves upon a given initial bisection of a graph is presented. Its performance is compared with that of the well-known Kernighan-Lin method on many random graphs with large numbers of vertices. The results indicate that the new adaptive heuristic produces bisections with costs within 2% of those produced by the Kernighan-Lin method (the costs were actually lower in about 70% of the cases) with a three times faster computation speed in most cases.
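A minimal sketch of greedy iterative improvement of a bisection (single best-swap hill climbing on a hypothetical graph; Kernighan-Lin additionally accepts tentative non-improving swaps within a pass):

```python
def cut_size(edges, side):
    """Number of edges crossing the partition."""
    return sum(1 for u, v in edges if side[u] != side[v])

def greedy_improve_bisection(edges, part_a, part_b):
    """Repeatedly apply the single swap (u, v) that most reduces the cut;
    stop at a local optimum."""
    a, b = set(part_a), set(part_b)
    side = {v: 0 for v in a}
    side.update({v: 1 for v in b})
    while True:
        cur = cut_size(edges, side)
        best = None                      # (gain, u, v)
        for u in a:
            for v in b:
                side[u], side[v] = 1, 0  # try the swap ...
                gain = cur - cut_size(edges, side)
                side[u], side[v] = 0, 1  # ... and undo it
                if gain > 0 and (best is None or gain > best[0]):
                    best = (gain, u, v)
        if best is None:
            return a, b
        _, u, v = best
        a.remove(u); b.remove(v); a.add(v); b.add(u)
        side[u], side[v] = 1, 0

# Two triangles joined by one edge; start from a poor bisection with cut 5.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
a, b = greedy_improve_bisection(edges, {0, 1, 3}, {2, 4, 5})
```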

Journal ArticleDOI
TL;DR: Experimental results, agreeing with theoretical analysis, show that the OCR algorithm behaves quite well in average cases; an optimal solution is obtained for the Deutsch difficult case in 5.5 min of CPU time.
Abstract: An algorithm known as optimal channel routing (OCR) is proposed which finds an optimal solution for the channel routing problem in VLSI design. The algorithm is an A* algorithm with good heuristics and dominance rules for terminating unnecessary nodes in the searching tree. Experimental results, agreeing with theoretical analysis, show that it behaves quite well in average cases. An optimal solution is obtained for the Deutsch difficult case in 5.5 min of CPU time with the algorithm implemented in Pascal and run on a VAX 11/750 computer.

Journal ArticleDOI
01 Oct 1990-Networks
TL;DR: It is proved that the asymptotic growth rate of the weight of such a greedy matching is exactly βn^((d-α)/d), where β is a positive constant that depends on the parameters α and d.
Abstract: The worst-case behavior of greedy matchings of n points in the unit d-cube, where d ≥ 2, is analyzed. The weighting function is taken to be the αth power of Euclidean distance, where 0 < α < d. It is proved that the asymptotic growth rate of the weight of such a greedy matching is exactly βn^((d-α)/d), where β is a positive constant that depends on the parameters α and d. Included in the analysis is a minimax theorem equating the worst-case behaviors of matchings resulting from greedy algorithms that, when ordering edges for the greedy process, break ties in different ways.

Book ChapterDOI
24 Sep 1990
TL;DR: This paper investigates the simple class of greedy scheduling algorithms, namely, algorithms that always forward a packet if they can, and proves that for various “natural” classes of routes, the time required to complete the transmission of a set of packets is bounded by the sum of the number of packets and the maximal route length, for any greedy algorithm.
Abstract: Scheduling packets to be forwarded over a link is an important subtask of the routing process both in parallel computing and in communication networks. This paper investigates the simple class of greedy scheduling algorithms, namely, algorithms that always forward a packet if they can. It is first proved that for various “natural” classes of routes, the time required to complete the transmission of a set of packets is bounded by the sum of the number of packets and the maximal route length, for any greedy algorithm (including the arbitrary scheduling policy). Next, tight time bounds of Θ(n) are proved for a specific greedy algorithm on the class of shortest paths in n-vertex networks. Finally it is shown that when the routes are arbitrary, the time achieved by various “natural” greedy algorithms can be as bad as Ω(n^1.5), when O(n) packets have to be forwarded on an n-vertex network.
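The packets-plus-route-length bound is easy to see on a single path, since a greedy edge never idles once it starts forwarding. A simulation sketch of this special case (a hypothetical single-path setting, not the paper's general route classes):

```python
from collections import deque

def greedy_forward(num_packets, path_len):
    """All packets start queued at node 0 of a path with `path_len` edges;
    each step, every edge greedily forwards one queued packet. Returns the
    step at which the last packet reaches the end of the path."""
    queues = [deque() for _ in range(path_len + 1)]
    queues[0].extend(range(num_packets))
    arrived, t = 0, 0
    while arrived < num_packets:
        t += 1
        # process edges back to front so each packet moves one hop per step
        for i in range(path_len - 1, -1, -1):
            if queues[i]:
                pkt = queues[i].popleft()
                if i + 1 == path_len:
                    arrived += 1
                else:
                    queues[i + 1].append(pkt)
    return t

# Matches the bound: (number of packets) + (route length) - 1 steps.
finish = greedy_forward(5, 3)
```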

Proceedings ArticleDOI
03 Apr 1990
TL;DR: The proposed neural network does not require zero autoconnections, which is one of the major drawbacks of the Hopfield network, and the proposed network with sequential update is shown to converge.
Abstract: A modified Hopfield network model for image restoration is presented. The proposed neural network does not require zero autoconnections, which is one of the major drawbacks of the Hopfield network. A new number-representation scheme for implementing the proposed network is given. The proposed network with sequential update is shown to converge. The sufficient conditions for convergence of n-simultaneous updates are also given. When the image-restoration problem does not satisfy the convergence conditions, a greedy algorithm which guarantees convergence (at the expense of the image quality) is used.

Proceedings ArticleDOI
M.J. Kaelbling, David M. Ogle
02 Jan 1990
TL;DR: A probabilistic model is presented for quantifying the cost of monitoring a set of conditions when data collection is done by sampling or tracing, and a simple greedy algorithm that finds good solutions is presented.
Abstract: A method is presented for reducing communication costs in parallel and distributed systems that use message passing to transmit monitoring information. As groundwork, a hierarchical model of monitoring is reviewed and an existing, sample, distributed environment is briefly described. A probabilistic model is presented for quantifying the cost of monitoring a set of conditions when data collection is done by sampling or tracing. With the model one can select an optimal set of conditions to trace in order to minimize the amount of intercommunication. Because finding an optimal set is difficult, a simple greedy algorithm that finds good solutions is presented, and an empirical analysis of its performance is given.

Journal ArticleDOI
TL;DR: It is shown that if $2n$ points are randomly chosen uniformly in $[0,1]$, then the expected length of the matching given by the greedy algorithm is $\Theta(\log n)$.
Abstract: The problem of finding a perfect matching of small total length in a complete graph whose vertices are points in the interval [0,1] is considered. The greedy heuristic for this problem repeatedly picks the two closest unmatched points x and y, and adds the edge $xy$ to the matching. It is shown that if $2n$ points are randomly chosen uniformly in $[0,1]$, then the expected length of the matching given by the greedy algorithm is $\Theta(\log n)$. This compares unfavourably with the length of the shortest perfect matching, which is always less than 1.
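On a line, the closest unmatched pair is always adjacent in sorted order, which gives a simple implementation; a sketch, plus a small hypothetical instance where greedy already loses to the optimum:

```python
def greedy_matching_1d(points):
    """Repeatedly match the two closest unmatched points on a line;
    return the total matching length."""
    pts = sorted(points)
    total = 0.0
    while len(pts) > 1:
        # the closest pair in a sorted list is some adjacent pair
        gap, i = min((pts[k + 1] - pts[k], k) for k in range(len(pts) - 1))
        total += gap
        del pts[i:i + 2]                  # remove the matched pair
    return total

# Greedy grabs the short middle edge (0.4, 0.5) and is then forced to pay
# for (0, 1): total 1.1, while the matching {(0, 0.4), (0.5, 1)} costs 0.9.
greedy_total = greedy_matching_1d([0.0, 0.4, 0.5, 1.0])
```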

Proceedings ArticleDOI
24 Jun 1990
TL;DR: An algorithm to decompose the routing area into a set of straight channels and switchboxes such that the number of switchboxes in the decomposition is minimized is presented.
Abstract: In this paper we study the problem of routing region definition in the VLSI building-block layout design style. We present an algorithm to decompose the routing area into a set of straight channels and switchboxes such that the number of switchboxes in the decomposition is minimized. Our algorithm is based on a graph-theoretic approach that makes use of an efficient polynomial time optimal algorithm for computing minimum clique covers of triangulated graphs. Experimental results indicate our algorithm performs well. We compared our algorithm with a previously known greedy approach and an exhaustive search optimal algorithm. For all the test problems we considered, our algorithm consistently outperformed the greedy algorithm, and it produced optimal solutions in almost all cases.

Journal Article
TL;DR: The problem is a variation of the weighted set-covering problem which requires the minimum-cost cover to be self-covering, and it is shown that direct extension of the well-known greedy heuristic for SCP can have an arbitrarily large error in the worst case.
Abstract: The problem is a variation of the weighted set-covering problem (SCP) which requires the minimum-cost cover to be self-covering. It is shown that direct extension of the well-known greedy heuristic for SCP can have an arbitrarily large error in the worst case. It remains an open question whether there exists a greedy heuristic with a finite error bound.
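For reference, the well-known greedy heuristic for plain weighted SCP that the paper extends, on a hypothetical instance (for plain SCP this heuristic is logarithmically approximate; the result above is that its direct extension to the self-covering variant has no finite worst-case bound):

```python
def greedy_set_cover(universe, subsets, cost):
    """Classical greedy for weighted set covering: repeatedly pick the set
    with the lowest cost per newly covered element. Assumes the subsets
    can cover the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = min((k for k in range(len(subsets)) if subsets[k] & uncovered),
                key=lambda k: cost[k] / len(subsets[k] & uncovered))
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen

# Hypothetical instance over the universe {1..5}.
subsets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
cost = [3, 2, 4, 1]
chosen = greedy_set_cover({1, 2, 3, 4, 5}, subsets, cost)
```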