
Showing papers on "Greedy algorithm published in 1986"


Proceedings ArticleDOI
02 Jul 1986
TL;DR: Spare-allocation algorithms based on graph-theoretic analysis are presented which provide highly efficient and flexible reconfiguration, and the underlying optimal reconfiguration problem is shown to be NP-complete.
Abstract: The issue of yield degradation due to physical failures in large memory and processor arrays is of significant importance to semiconductor manufacturers. One method of increasing the yield for iterated arrays of memory cells or processing elements is by incorporating spare rows and columns in the die or wafer which can be programmed into the array. This paper addresses the issue of computer-aided design approaches to optimal reconfiguration of such arrays. The paper presents the first formal analysis of the problem. The complexity of optimal reconfiguration is shown to be NP-complete for rectangular arrays utilizing spare rows and columns. In contrast to previously proposed exhaustive search and greedy algorithms, this paper develops a heuristic branch and bound approach based on the complexity analysis, which allows for flexible and highly efficient reconfiguration. Initial screening is performed by a bipartite graph matching algorithm.
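The bipartite screening step mentioned above can be illustrated with a small sketch: each faulty cell becomes an edge between its row and its column, and by König's theorem the minimum number of row/column replacements equals the size of a maximum matching. The fault pattern below is an illustrative assumption, not an example from the paper.

```python
def max_matching(faults, rows, cols):
    """Maximum matching in the bipartite fault graph (Kuhn's algorithm).
    faults: set of (row, col) faulty cells."""
    adj = {r: [c for c in cols if (r, c) in faults] for r in rows}
    match = {}  # col -> row currently matched to it

    def augment(r, seen):
        # Try to find an augmenting path starting at row r.
        for c in adj[r]:
            if c not in seen:
                seen.add(c)
                if c not in match or augment(match[c], seen):
                    match[c] = r
                    return True
        return False

    return sum(augment(r, set()) for r in rows)

faults = {(0, 0), (0, 1), (1, 0), (2, 2)}
# By Konig's theorem the minimum number of row/column replacements equals
# the matching size; if it exceeds spare_rows + spare_cols, repair fails.
print(max_matching(faults, rows=range(3), cols=range(3)))
```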

142 citations


Book
01 Jun 1986
TL;DR: In this paper, the partitioning and allocation of a database over the processor nodes of a network is performed in a computationally feasible manner using a greedy heuristic.
Abstract: In a distributed database system the partitioning and allocation of the database over the processor nodes of the network can be a critical aspect of the database design effort. In this paper we develop and evaluate algorithms that perform this task in a computationally feasible manner. The network we consider is characterized by communication bandwidth that is high relative to the processing and input/output capacities of its processors. Such a balance is typical if the processors are connected via busses or local networks. The common constraint that transactions have a specific root node no longer exists, so that there are more distribution choices. However, a poor distribution leads to less efficient computation, higher costs, and higher loads in the nodes or in the communication network, so that the system may not be able to handle the required set of transactions. Our approach is to first split the database into fragments which constitute appropriate units for allocation. The fragments to be allocated are selected based on maximal-benefit criteria using a greedy heuristic. The assignment to processor nodes uses a first-fit algorithm. The complete algorithm, called GFF, is stated in a procedural form. The complexity of the problem and of its candidate solutions is analyzed and several interesting relationships are proven. Alternate benefit metrics are considered, since the execution cost of the allocation procedure varies by orders of magnitude with the alternatives of benefit evaluation. A mixed benefit evaluation strategy is eventually proposed. A model for evaluation is presented. Two of the strategies are experimentally evaluated, and the reported results support the discussion. The approach should be suitable for other cases where resources have to be allocated subject to resource constraints.
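The greedy selection plus first-fit assignment pattern that GFF combines can be sketched as follows; the benefit values, fragment sizes, and node capacities are illustrative assumptions, not data from the paper, and a real allocator would need a policy for fragments that fit nowhere, which this sketch simply skips.

```python
def gff_allocate(benefits, sizes, capacities):
    """Pick fragments in order of decreasing benefit (greedy), then place
    each on the first node with enough remaining capacity (first-fit)."""
    remaining = list(capacities)
    assignment = {}
    # Greedy step: consider fragments by maximal benefit first.
    for frag in sorted(benefits, key=benefits.get, reverse=True):
        # First-fit step: scan nodes in a fixed order.
        for node, cap in enumerate(remaining):
            if sizes[frag] <= cap:
                assignment[frag] = node
                remaining[node] -= sizes[frag]
                break
    return assignment

benefits = {"f1": 9, "f2": 7, "f3": 4}
sizes = {"f1": 5, "f2": 5, "f3": 2}
print(gff_allocate(benefits, sizes, capacities=[6, 6]))
```

Here f1 and f2 consume most of both nodes, so f3 is left unassigned; handling such leftovers is where the paper's alternate benefit metrics come in.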

133 citations


Journal ArticleDOI
TL;DR: Two greedy algorithms with linear expected time complexity are proposed for determining a lower bound on the optimal value, using a cascade of surrogate relaxations of the original problem whose sizes decrease step by step.

83 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the greedy algorithm introduced in [1] and [5] to perform the parallel QR decomposition of a dense rectangular matrix of size m×n is optimal.
Abstract: We show that the greedy algorithm introduced in [1] and [5] to perform the parallel QR decomposition of a dense rectangular matrix of size m×n is optimal. Then we assume that m/n² tends to zero as m and n go to infinity, and prove that the complexity of such a decomposition is asymptotically 2n, when an unlimited number of processors is available.

61 citations


Proceedings Article
01 Jan 1986
TL;DR: The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered, and it was shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by any of the other three strategies.
Abstract: The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm, which is guaranteed to yield the best static solution; (2) the static binary dissection method, which is very fast but suboptimal; (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy; and (4) the predictive dynamic load balancing heuristic, which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by any of the other three strategies.
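The greedy strategy in the paper is a fully polynomial time approximation scheme; as a simpler illustration of greedy static assignment, the classic list-scheduling rule (assign each task, largest first, to the currently least-loaded processor) can be sketched:

```python
import heapq

def greedy_balance(task_costs, n_procs):
    """Assign each task to the least-loaded processor so far (greedy
    list scheduling). Returns the processor index chosen per task."""
    heap = [(0.0, p) for p in range(n_procs)]  # (load, processor)
    heapq.heapify(heap)
    assignment = [None] * len(task_costs)
    # Considering tasks by decreasing cost (LPT order) tightens the bound.
    for i in sorted(range(len(task_costs)), key=lambda i: -task_costs[i]):
        load, p = heapq.heappop(heap)
        assignment[i] = p
        heapq.heappush(heap, (load + task_costs[i], p))
    return assignment

print(greedy_balance([7, 3, 5, 2, 4], 2))
```

This is the plain greedy, not the paper's approximation scheme, but it shows the static assignment pattern the four strategies compete on.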

48 citations


Journal ArticleDOI
TL;DR: In this paper, hybrid algorithms are developed and tested against two common forms of the greedy heuristic for solving minimum cardinality set covering problems (MCSCP); empirical results for 60 large randomly generated problems indicate that one algorithm performed better than the others.
Abstract: Minimum cardinality set covering problems (MCSCP) tend to be more difficult to solve than weighted set covering problems because the cost or weight associated with each variable is the same. Since MCSCP is NP-complete, large problem instances are commonly solved using some form of a greedy heuristic. In this paper hybrid algorithms are developed and tested against two common forms of the greedy heuristic. Although all the algorithms tested have the same worst-case bounds provided by Ho [7], empirical results for 60 large randomly generated problems indicate that one algorithm performed better than the others.
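The standard greedy heuristic for set covering repeatedly takes the set that covers the most still-uncovered elements. A minimal sketch (the instance below is illustrative, not from the paper):

```python
def greedy_cover(universe, sets):
    """Greedy set cover: returns indices of the chosen sets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Greedy choice: set covering the most still-uncovered elements.
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}]
print(greedy_cover({1, 2, 3, 4, 5}, sets))
```

Because every set has unit cost in MCSCP, "most newly covered elements" is the whole selection criterion, which is why ties and hybrid refinements matter so much in practice.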

34 citations


Journal ArticleDOI
TL;DR: The problem is to locate a maximum-weight set of facilities such that no two are closer than a given distance from each other; the unweighted version is equivalent to the maximum independent set problem in graph theory.
Abstract: The problem is to locate a maximum-weight set of facilities such that no two are closer than a given distance from each other. The unweighted version is equivalent to the maximum independent set problem in graph theory. This paper presents four greedy heuristics and shows that they all have bad worst-case behavior. Empirically, however, these heuristics perform quite well on relatively large, randomly generated test problems.
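One natural greedy for this dispersion problem considers candidate sites by decreasing weight, accepting a site only if it is far enough from every site accepted so far. The coordinates and weights below are illustrative assumptions, and this is only one of several possible greedy orderings, not necessarily one of the paper's four.

```python
import math

def greedy_dispersion(sites, min_dist):
    """sites: list of (weight, x, y); returns indices of chosen sites,
    considered in decreasing weight order."""
    order = sorted(range(len(sites)), key=lambda i: -sites[i][0])
    chosen = []
    for i in order:
        _, xi, yi = sites[i]
        # Accept only if at least min_dist from every accepted site.
        if all(math.hypot(xi - sites[j][1], yi - sites[j][2]) >= min_dist
               for j in chosen):
            chosen.append(i)
    return chosen

sites = [(5, 0, 0), (4, 1, 0), (3, 3, 0), (2, 0, 3)]
print(greedy_dispersion(sites, min_dist=2.0))
```

Site 1 is rejected because it lies within distance 2 of the heavier site 0, which is exactly the kind of local decision that produces the bad worst cases the paper analyzes.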

28 citations


01 Mar 1986
TL;DR: The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered and it was shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.
Abstract: The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm which is guaranteed to yield the best static solution, (2) the static binary dissection method which is very fast but sub-optimal, (3) the greedy algorithm, a static fully polynomial time approximation scheme, which estimates the optimal solution to arbitrary accuracy, and (4) the predictive dynamic load balancing heuristic which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by either of the other three strategies.

25 citations


Journal ArticleDOI
TL;DR: In this paper, a branch and bound model with penalty tour building is developed for solving traveling salesman and transportation routing problems; the algorithm for determining the optimal solution is general enough to solve symmetric and asymmetric single and multiple traveling salesman problems (STS and MTS).

23 citations


01 Sep 1986
TL;DR: This paper studies Clark Thompson's heuristic experimentally and finds that it gives solutions about 9% shorter than minimum spanning trees on medium size problems (40-100 nodes).
Abstract: Clark Thompson recently suggested a very natural "greedy" heuristic for the rectilinear Steiner problem (RSP), analogous to Kruskal's algorithm for the minimum spanning tree problem. We study this heuristic by comparing the solutions it finds with rectilinear minimum spanning trees. We first prove a theoretical result on instances of RSP consisting of a large number of random points in the unit square: Thompson's heuristic produces a tree whose expected length is some fraction shorter than that of a minimum spanning tree. The second part of this paper studies Thompson's heuristic experimentally and finds that it gives solutions about 9% shorter than minimum spanning trees on medium size problems (40-100 nodes). This performance is very similar to that of other RSP heuristics described in the literature.
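The baseline the heuristic is measured against, a rectilinear minimum spanning tree, can be computed with Kruskal's algorithm, which Thompson's greedy extends by also introducing Steiner points. A small sketch with illustrative coordinates:

```python
from itertools import combinations

def rectilinear_mst_length(points):
    """Length of a minimum spanning tree under Manhattan (rectilinear)
    distance, via Kruskal's algorithm with union-find."""
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted(
        (abs(ax - bx) + abs(ay - by), i, j)
        for (i, (ax, ay)), (j, (bx, by)) in combinations(enumerate(points), 2)
    )
    total = 0
    for w, i, j in edges:  # shortest edges first, skipping cycles
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            total += w
    return total

print(rectilinear_mst_length([(0, 0), (2, 0), (1, 2)]))
```

On these three points the MST has length 5, while adding a Steiner point at (1, 0) yields a rectilinear Steiner tree of length 4; savings of that kind are what the heuristic pursues.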

18 citations


01 Jun 1986
TL;DR: The concept of a P-complete algorithm is introduced to capture what it means for an algorithm to be inherently sequential, and a number of sequential greedy algorithms are P- complete, including the greedy algorithm for finding a path in a graph.
Abstract: This thesis addresses a number of theoretical issues in parallel computation. There are many open questions relating to what can be done with parallel computers and what are the most effective techniques to use to develop parallel algorithms. We examine various problems in hope of gaining insight to the general questions. One topic that is investigated is the relationship between sequential and parallel algorithms. We introduce the concept of a P-complete algorithm to capture what it means for an algorithm to be inherently sequential. We show that a number of sequential greedy algorithms are P-complete, including the greedy algorithm for finding a path in a graph. However, an algorithm being P-complete does not necessarily mean that the problem is difficult. In some cases, the natural sequential algorithm is P-complete but a different technique gives a fast parallel algorithm. This shows that it is necessary to use different techniques for parallel computation than are used for sequential computation. We give fast parallel algorithms for a number of simple graph theory problems. The algorithms illustrate a number of different techniques that are useful for parallel algorithms. The most important results are that the maximal path problem can be solved in RNC and that a depth first search tree can be constructed in approximately √n parallel time, where n is the number of vertices. This shows that substantial speed up is possible on both of these problems using parallelism. The final topic that we address is parallel approximation of P-complete problems. P-complete problems probably cannot be solved by fast parallel algorithms. We give a number of results on approximating P-complete problems with parallel algorithms that are similar to results on approximating NP-complete problems with sequential algorithms. We give upper and lower bounds on the degree of approximation that is possible for some problems. We also investigate the role that numbers play in P-complete problems, showing that some P-complete problems remain difficult even if the numbers are small.

Journal ArticleDOI
01 Jun 1986-Order
TL;DR: Various problems concerning greedy and super greedy linear extensions are shown to be NP-complete, as is the problem of determining that an ordered set is not greedy.
Abstract: Various problems concerning greedy and super greedy linear extensions are shown to be NP-complete. In particular, the problem, due to Cogis, of determining that an ordered set is not greedy is NP-complete, as is the problem, due to Rival and Zaguia, of determining whether an ordered set has a greedy linear extension satisfying certain additional constraints.

Book ChapterDOI
01 Jun 1986
TL;DR: An algorithm for constructing shortest common superstrings for a given set R of strings is developed, based on the Knuth-Morris-Pratt string matching procedure and on the greedy heuristic for finding longest Hamiltonian paths in weighted graphs.
Abstract: An algorithm for constructing shortest common superstrings for a given set R of strings is developed, based on the Knuth-Morris-Pratt string matching procedure and on the greedy heuristic for finding longest Hamiltonian paths in weighted graphs. The algorithm runs in O(mn + m² log m) steps, where m is the number of strings in R and n is the total length of these strings. The compression in the common superstring constructed by the algorithm is shown to be at least half of the compression in a shortest superstring.
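The greedy idea is to repeatedly merge the pair of strings with the largest suffix-prefix overlap. A plain quadratic sketch of that idea, not the paper's KMP-based O(mn + m² log m) implementation:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the pair with the largest overlap."""
    strings = list(strings)
    while len(strings) > 1:
        k, i, j = max(
            (overlap(a, b), i, j)
            for i, a in enumerate(strings)
            for j, b in enumerate(strings) if i != j
        )
        merged = strings[i] + strings[j][k:]  # overlap counted once
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)]
        strings.append(merged)
    return strings[0]

print(greedy_superstring(["abcde", "cdefg", "efghi"]))
```

Each merge is an edge choice in the overlap graph, which is why the analysis runs through longest Hamiltonian paths.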

Journal ArticleDOI
TL;DR: This work considers hereditary systems (such as matroids) where the underlying elements have independent random costs, and investigates the cost of the base picked by the greedy algorithm.
Abstract: We consider hereditary systems (such as matroids) where the underlying elements have independent random costs, and investigate the cost of the base picked by the greedy algorithm.

Journal ArticleDOI
TL;DR: Characteristic structural properties are established for those greedoids where the greedy algorithm selects a maximal word for every linear objective function defined on them.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the relationship between the minimum and the maximum traveling salesman problem and proposed a method based on the idea of applying heuristics for the maximum TSP to the minimum TSP.

Journal ArticleDOI
TL;DR: This work provides an efficient 2-quasi-greedy algorithm where a minimum weight base is constrained to have a fixed number of elements from two disjoint sets, and gives theorems making it possible to jump over certain adjacent states, further increasing efficiency.

01 Mar 1986
TL;DR: In this article, a nonlinear knapsack problem is modeled as a generalized assignment problem and the objective is to minimize the sum over all cells of a weighted squared deviation from the reenlistment target in each cell.
Abstract: : Selective Reenlistment Bonuses (SRBs) are offered to improve retention in designated military occupational specialties (MOSs) for specified years-of-service intervals (zones). The amount of the bonus is set by assigning an 'SRB Multiplier' for each MOS and zone combination (cell). Determination of multipliers is modeled as a nonlinear knapsack problem which is then linearized to a generalized assignment problem. The objective is to minimize the sum over all cells of a weighted squared deviation from the reenlistment target in each cell. Lagrangian relaxation provides lower bounds and feasible solutions. The best feasible solution is improved using a greedy heuristic to apportion unexpended funds. A FORTRAN 77 computer program implements the procedure. Data for FY86 yields a 0-1 integer program with 4795 binary variables and 980 constraints. A solution within .01% of optimality is obtained on an IBM 3033AP in 1.7 seconds and on an IBM PC in about four minutes. Keywords: Math programming; Integer programming; Knapsack problem; Selective reenlistment bonus; Lagrangian relaxation; Generalized assignment problem; Theses.

Book ChapterDOI
01 Jan 1986
TL;DR: It is proved that the sequence of Boltzmann distributions for T→0 converges to a distribution that is uniform over the optimal solutions, while all other feasible solutions are reached with probability 0.
Abstract: A general principle was elaborated to apply thermodynamically motivated strategies to NP-complete subgraph optimization problems with given placing requirements for subsets of vertices. Such problems arise, for example, in the areas of automation equipment, integrated circuit, and computer network layout, where wiring problems are important. The thermodynamically motivated strategy is a stochastic optimization algorithm based on simulated annealing of ideal gases. For fixed temperature T it is a finite homogeneous Markov chain, which converges to the Boltzmann distribution under certain assumptions. It is proved that the sequence of Boltzmann distributions for T→0 converges to a distribution that is uniform over the optimal solutions, while all other feasible solutions are reached with probability 0. In addition, some remarks on the speed of convergence are given.
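The fixed-temperature Markov chain is driven by the Metropolis accept/reject rule; a generic simulated-annealing sketch with a toy cost function and a geometric cooling schedule (both illustrative assumptions, not the paper's model):

```python
import math
import random

def anneal(cost, neighbor, start, t0=10.0, cooling=0.95, steps=2000, seed=1):
    """Minimize cost() by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    state, t = start, t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        # Metropolis rule: always accept improvements; accept uphill
        # moves with probability exp(-delta / t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        t *= cooling  # cooling drives T -> 0
    return state

cost = lambda x: (x - 7) ** 2          # toy objective, minimum at x = 7
neighbor = lambda x, rng: x + rng.choice([-1, 1])
print(anneal(cost, neighbor, start=0))
```

At a fixed t the accept/reject rule defines the homogeneous Markov chain the paper studies; as t shrinks, uphill moves die out and the chain settles on the optimum.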