
Showing papers on "Greedy algorithm" published in 1998


Proceedings ArticleDOI
01 Jan 1998
TL;DR: It is shown that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos, and Aardal can be used to obtain an approximation guarantee of 2.408, and a lower bound of 1.463 is proved on the best possible approximation ratio.
Abstract: A fundamental facility location problem is to choose the location of facilities, such as industrial plants and warehouses, to minimize the cost of satisfying the demand for some commodity. There are associated costs for locating the facilities, as well as transportation costs for distributing the commodities. We assume that the transportation costs form a metric. This problem is commonly referred to as the uncapacitated facility location problem. Applications to bank account location and clustering, as well as many related pieces of work, are discussed by Cornuejols, Nemhauser, and Wolsey. Recently, the first constant factor approximation algorithm for this problem was obtained by Shmoys, Tardos, and Aardal. We show that a simple greedy heuristic combined with the algorithm by Shmoys, Tardos, and Aardal can be used to obtain an approximation guarantee of 2.408. We discuss a few variants of the problem, demonstrating better approximation factors for restricted versions of the problem. We also show that the problem is max SNP-hard. However, the inapproximability constants derived from the max SNP hardness are very close to one. By relating this problem to Set Cover, we prove a lower bound of 1.463 on the best possible approximation ratio, assuming NP ⊄ DTIME[n^{O(log log n)}].
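
The abstract does not spell out the greedy step itself; as a rough illustration only, the following sketch shows the generic greedy-augmentation idea for uncapacitated facility location: repeatedly open the facility whose opening cost is outweighed by the resulting drop in total connection cost. The input format (open_cost, dist) and the stopping rule are assumptions, not the paper's exact procedure.

```python
def greedy_augment(open_cost, dist, opened=None):
    """Generic greedy augmentation for uncapacitated facility location.

    open_cost[i] - cost of opening facility i (assumed input format)
    dist[j][i]   - metric connection cost from client j to facility i
    Repeatedly opens the facility with the largest net saving, i.e. the
    reduction in total connection cost minus its opening cost.
    """
    opened = set(opened or [])
    # cheapest current connection cost per client (infinite if nothing is open yet)
    best = [min((dist[j][i] for i in opened), default=float("inf"))
            for j in range(len(dist))]
    while True:
        gains = {}
        for i in range(len(open_cost)):
            if i in opened:
                continue
            saving = sum(max(0.0, best[j] - dist[j][i]) for j in range(len(dist)))
            gains[i] = saving - open_cost[i]
        if not gains or max(gains.values()) <= 0:
            break                      # no remaining facility pays for itself
        i_star = max(gains, key=gains.get)
        opened.add(i_star)
        best = [min(best[j], dist[j][i_star]) for j in range(len(dist))]
    return opened, sum(best)
```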

689 citations


Proceedings ArticleDOI
01 Jan 1998
TL;DR: These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, the solution of random k-SAT formulas using the pure literal rule, and the greedy algorithm for matchings in random graphs.
Abstract: We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, the solution of random k-SAT formulas using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above.

386 citations


Proceedings ArticleDOI
30 Mar 1998
TL;DR: The author studies the performance of four mapping algorithms and concludes that the use of intelligent mapping algorithms is beneficial, even when the expected time for completion of a job is not deterministic.
Abstract: The author studies the performance of four mapping algorithms. The four algorithms include two naive ones: opportunistic load balancing (OLB), and limited best assignment (LBA), and two intelligent greedy algorithms: an O(nm) greedy algorithm, and an O(n/sup 2/m) greedy algorithm. All of these algorithms, except OLB, use expected run-times to assign jobs to machines. As expected run-times are rarely deterministic in modern networked and server-based systems, he first uses experimentation to determine some plausible run-time distributions. Using these distributions, he next executes simulations to determine how the mapping algorithms perform. Performance comparisons show that the greedy algorithms produce schedules that, when executed, perform better than naive algorithms, even though the exact run-times are not available to the schedulers. He concludes that the use of intelligent mapping algorithms is beneficial, even when the expected time for completion of a job is not deterministic.
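
The abstract names an O(nm) and an O(n²m) greedy mapper without giving their details. A common reading, shown below purely as an assumption (not necessarily the author's exact pair of algorithms), is that the faster one assigns jobs in arrival order to the machine minimizing their completion time, while the slower, Min-Min-style one repeatedly picks the job/machine pair with the overall earliest completion time.

```python
def greedy_onm(etc):
    """O(n*m) greedy: assign jobs in the given order to the machine that
    yields the earliest completion time.  etc[j][m] = expected run-time
    of job j on machine m (hypothetical input format)."""
    ready = [0.0] * len(etc[0])          # machine ready times
    schedule = []
    for j, row in enumerate(etc):
        m = min(range(len(row)), key=lambda k: ready[k] + row[k])
        ready[m] += row[m]
        schedule.append((j, m))
    return schedule, max(ready)

def greedy_on2m(etc):
    """O(n^2*m) greedy (Min-Min style): repeatedly pick the unassigned job
    whose best achievable completion time is smallest and assign it there."""
    ready = [0.0] * len(etc[0])
    remaining = set(range(len(etc)))
    schedule = []
    while remaining:
        j, m = min(((j, m) for j in remaining for m in range(len(ready))),
                   key=lambda jm: ready[jm[1]] + etc[jm[0]][jm[1]])
        ready[m] += etc[j][m]
        remaining.discard(j)
        schedule.append((j, m))
    return schedule, max(ready)
```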

260 citations


Proceedings Article
01 Jan 1998
TL;DR: This paper presents a new mesh reduction algorithm which clearly reflects this meta scheme and efficiently generates decimated, high-quality meshes while observing global error bounds; it considers most of the suggested algorithms as generic templates, leaving the freedom to plug in specific instances of predicates.
Abstract: The decimation of highly detailed meshes has emerged as an important issue in many computer graphics related fields. A whole library of different algorithms has been proposed in the literature. By carefully investigating such algorithms, we can derive a generic structure for mesh reduction schemes which is analogous to a class of greedy algorithms for heuristic optimization. Particular instances of this algorithmic template allow adaptation to specific target applications. We present a new mesh reduction algorithm which clearly reflects this meta scheme and efficiently generates decimated, high-quality meshes while observing global error bounds.

Introduction. In several areas of computer graphics and geometric modeling, the representation of surface geometry by polygonal meshes is a well-established standard. However, the complexity of the object models has increased much faster than the throughput of today's graphics hardware. Hence, in order to be able to display and modify geometric objects within reasonable response times, it is necessary to reduce the amount of data by removing redundant information from triangle meshes. A precise definition of the term redundancy in this context obviously depends on the application for which the decimated mesh is to be used. Technically speaking, the most important aspect is the approximation error, i.e., the modified mesh has to stay within a prescribed tolerance to the original data. From an optical point of view, local flatness of the mesh might be a better indicator for redundancy. It is natural that applications as different as rendering and finite element analysis put their emphasis also on the preservation of different aspects in the simplified geometric shape. In the last years, a host of proposed algorithms for mesh reduction has been applied successfully to level of detail generation [14, 2], progressive transmission [6], and reverse engineering [1]. See [15] for an overview of some relevant literature. We consider most of the suggested algorithms as generic templates leaving the freedom to plug in specific instances of predicates. For example, each algorithm is based on a scalar-valued oracle which indicates the degree of redundancy of a particular vertex, edge, or triangle. Depending on the target application, different choices for this oracle are appropriate, but this does not affect the algorithmic structure of the scheme. On the most abstract level, there are two different basic approaches to find a coarser approximation of a given polygonal mesh. The one is to build the new mesh without necessarily inheriting the topology of the original and the other is to obtain the new mesh by (iteratively) modifying the original without changing the topology. Having a topologically simplified model of the original mesh is useful in applications where the topology itself does not carry crucial information. For example, when rendering remote objects, small holes can be removed without affecting the quality, but for a finite element simulation on the same object the holes might be important to obtain reliable results. In this paper we will analyze incremental mesh reduction, i.e., algorithms that reduce the mesh complexity by the iterative application of simple topological operations instead of completely reorganizing the mesh. We will identify the slots where custom-tailored predicates or operators can be inserted and will give recommendations when to use which. We then present an original mesh reduction algorithm based on these considerations. The algorithm is fast according to Schroeder's recent definition [17], yet allows global error control with respect to the geometric Hausdorff distance. The scheme is validated in the result section by showing and discussing some examples.

Relevant algorithmic aspects. The topology-preserving mesh reduction schemes typically use a simple operation which removes a small submesh and retriangulates the remaining hole. Some schemes use local optimization to find the best retriangulation. To control the decimation process, a scalar-valued predicate induces a priority ordering on the set of candidates for being removed. This predicate can be based purely on distance measures between the original and the reduced mesh or it can additionally take local flatness into account. This macroscopic description matches most of the known incremental mesh reduction schemes. Due to the overwhelming variety of different algorithms that have been proposed in the literature, there are several authors who attempted to identify important features and classify the different approaches accordingly [16, 15, 3]. We do not want to add another survey, but we just give an abridged overview. We will focus on three fundamental ingredients that are necessary (and sufficient) to build your own mesh reduction algorithm. The ingredients are a topological operator to modify the mesh locally, a distance measure to check whether the maximum tolerance is not violated, and a fairness criterion to evaluate the quality of the current mesh.

Topological operators. The classical scheme of [18] removes a single vertex v and retriangulates its crown. Thus, in every step, a patch of n triangles (where n is the valence of v) is replaced by a new patch with n − 2 triangles. In general, a local edge-swapping optimization is necessary to guarantee a reasonable quality of the retriangulated patch. In [6], edges pq are collapsed into a new vertex r, which removes two triangles from the mesh. This operation can also be understood as submesh removal and retriangulation. In this case the local connectivity of the retriangulation is fixed, but the optimal location for r is determined by a local energy minimization heuristic. We could cut out larger submeshes from the original mesh, but this would require a more sophisticated treatment of special cases. A nice property of the basic vertex-removal and edge-collapse operators is that consistency preservation is easy to guarantee: we just have to check the injectivity of the crown of the vertex v or the edge pq, respectively. The rejection of all operations that would lead to complex vertices or edges is the reason why most incremental schemes do not change the global topology of a mesh. Our observation when testing different reduction schemes on a variety of meshed models is that the underlying topological operator on which an algorithm is based does not have a significant impact on the results. The quality of the resulting mesh turns out to be much more sensitive to the criteria which decide where to apply the next reduction operation. Hence, we recommend making the topological operator itself as simple as possible, i.e., eliminating all geometric degrees of freedom. Concluding from these considerations, we suggest the use of what we call the half-edge collapse. A common way to store orientable triangle meshes is the half-edge structure [13], where an undirected edge pq is represented by two directed halves p → q and q → p. Collapsing the half-edge p → q means to pull the vertex q into p and to remove the triangles that have become singular. This topological operator's major advantage is that it does not contain any unset degrees of freedom which would have to be determined by local optimization. If we treat the two half-edge mates as separate entities, then the only decision is whether a particular collapse is to be performed or not. Moreover, the reduction operation does not "invent" new geometry by letting some heuristic decide about the position of r. The vertices of the decimated mesh are always a proper subset of the original vertices. The half-edge collapse can be understood as a vertex removal without the freedom of choosing the triangulation, or as an edge collapse without the freedom of setting the position of the new vertex. Figure 1 shows the submeshes involved in the basic topological operations.
Figure 1: Vertex removal, edge collapse, and half-edge collapse.
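
The generic structure described above is essentially a priority-driven greedy loop over candidate collapses. The skeleton below is only a sketch of that loop; the mesh data structure and the three plug-in predicates (collapse_cost, is_legal, apply_collapse) are hypothetical interfaces standing in for the application-specific oracle, consistency check, and topological operator.

```python
import heapq
from itertools import count

def greedy_decimate(mesh, half_edges, collapse_cost, is_legal, apply_collapse,
                    max_error):
    """Skeleton of the generic greedy mesh-reduction loop described above.

    collapse_cost(mesh, he)  -> scalar oracle value (distance/fairness mix)
    is_legal(mesh, he)       -> consistency and global-error check
    apply_collapse(mesh, he) -> performs the collapse, returns affected half-edges
    All three are application-specific plug-ins (hypothetical interfaces).
    """
    tie = count()                          # tie-breaker so the heap never compares edges
    heap = [(collapse_cost(mesh, he), next(tie), he) for he in half_edges]
    heapq.heapify(heap)
    while heap:
        cost, _, he = heapq.heappop(heap)
        if cost > max_error or not is_legal(mesh, he):
            continue                       # stale, illegal, or over-tolerance candidate
        for other in apply_collapse(mesh, he):
            heapq.heappush(heap, (collapse_cost(mesh, other), next(tie), other))
    return mesh
```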

235 citations


Journal ArticleDOI
TL;DR: In this article, a greedy heuristic and a genetic algorithm are proposed for solving the integrated problem of an inventory-level-dependent demand inventory model with product assortment and shelf-space allocation.

229 citations


Proceedings ArticleDOI
01 Jan 1998
TL;DR: It is proved that an optimal cyclic schedule for the general problem exists, and the NP-hardness of the problem is established, and an efficient algorithm for finding a near-optimal solution to the nonlinear program is presented.
Abstract: We study the problem of scheduling activities of several types under the constraint that at most a fixed number of activities can be scheduled in any single time slot. Any given activity type is associated with a service cost and an operating cost that increases linearly with the number of time slots since the last service of this type. The problem is to find an optimal schedule that minimizes the long-run average cost per time slot. Applications of such a model are the scheduling of maintenance service to machines, multi-item replenishment of stock, and minimizing the mean response time in Broadcast Disks. Broadcast Disks recently gained a lot of attention because they were used to model backbone communications in wireless systems, Teletext systems, and Web caching in satellite systems. The first contribution of this paper is the definition of a general model that combines into one several important previous models. We prove that an optimal cyclic schedule for the general problem exists, and we establish the NP-hardness of the problem. Next, we formulate a nonlinear program that relaxes the optimal schedule and serves as a lower bound on the cost of an optimal schedule. We present an efficient algorithm for finding a near-optimal solution to the nonlinear program. We use this solution to obtain several approximation algorithms: (1) a 9/8-approximation for a variant of the problem that models the Broadcast Disks application; the algorithm uses some properties of "Fibonacci sequences", and using this sequence we present a 1.57-approximation algorithm for the general problem; (2) a simple randomized algorithm and a simple deterministic greedy algorithm for the problem, both of which we prove achieve an approximation factor of 2. To the best of our knowledge this is the first worst-case analysis of a widely used greedy heuristic for this problem.
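
The abstract does not state the greedy rule precisely. One natural rule, used here only as an assumption for illustration, is to service in every slot the activity types whose accumulated operating cost since their last service is currently largest.

```python
def greedy_schedule(service_cost, rate, slots, capacity=1):
    """Hedged sketch of a greedy maintenance/broadcast schedule.

    service_cost[i] - cost of servicing activity type i
    rate[i]         - per-slot growth of type i's operating cost
    In every slot, service the `capacity` types whose accumulated operating
    cost since their last service is largest (one plausible greedy rule;
    the paper's exact rule and cost accounting may differ).
    """
    n = len(rate)
    since = [0] * n                        # slots since last service
    total = 0.0
    schedule = []
    for _ in range(slots):
        total += sum(rate[i] * since[i] for i in range(n))   # operating costs this slot
        chosen = sorted(range(n), key=lambda i: rate[i] * since[i],
                        reverse=True)[:capacity]
        for i in chosen:
            total += service_cost[i]
            since[i] = 0
        for i in range(n):
            if i not in chosen:
                since[i] += 1
        schedule.append(chosen)
    return schedule, total / slots         # long-run average cost per slot
```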

227 citations


Proceedings ArticleDOI
29 Mar 1998
TL;DR: The planning tool prototype ICEPT (Integrated Cellular network Planning Tool), which is based on the application of a new discrete population model for the traffic description, the demand node concept, is presented and a first result from a real world planning case is shown.
Abstract: This paper presents a demand-based engineering method for designing radio networks of cellular mobile communication systems. The proposed procedure is based on a forward-engineering method, the integrated approach to cellular network planning and is facilitated by the application of a new discrete population model for the traffic description, the demand node concept. The use of the concept enables the formulation of the transmitter locating task as a maximal coverage location problem (MCLP), which is well known in economics for modeling and solving facility location problems. For the network optimization task, we introduced the set cover base station positioning algorithm (SCBPA), which is based on a greedy heuristic for solving the MCLP problem. Furthermore, we present the planning tool prototype ICEPT (Integrated Cellular network Planning Tool), which is based on these ideas and show a first result from a real world planning case.
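
The SCBPA is described as a greedy heuristic for the MCLP; the sketch below shows the standard greedy coverage step such a heuristic is built on, with the input format (candidate sites, covered demand nodes, traffic weights) assumed for illustration.

```python
def greedy_mclp(candidate_cover, demand_weight, num_sites):
    """Greedy heuristic for the maximal coverage location problem (MCLP),
    in the spirit of the SCBPA described above (interface is an assumption).

    candidate_cover[s] - set of demand-node ids covered by candidate site s
    demand_weight[d]   - traffic weight of demand node d
    num_sites          - number of base stations that may be placed
    """
    covered, chosen = set(), []
    for _ in range(num_sites):
        best, best_gain = None, 0.0
        for s, nodes in candidate_cover.items():
            if s in chosen:
                continue
            gain = sum(demand_weight[d] for d in nodes - covered)
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:                   # no remaining site adds coverage
            break
        chosen.append(best)
        covered |= candidate_cover[best]
    return chosen, covered
```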

203 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible.
Abstract: In this paper, we consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible. Each subset selected must satisfy some structural constraints. We analyze the quality of a k-stage covering algorithm that relies, at each stage, on greedily selecting a subset that gives maximum improvement in terms of overall coverage. We show that such greedily constructed solutions are guaranteed to be within a factor of 1 − 1/e of the optimal solution. In some cases, selecting a best solution at each stage may itself be difficult; we show that if a β-approximate best solution is chosen at each stage, then the overall solution constructed is guaranteed to be within a factor of 1 − 1/e^β of the optimal. Our results also yield a simple proof that the number of subsets used by the greedy approach to achieve entire coverage of the universal set is within a logarithmic factor of the optimal number of subsets. Examples of problems that fall into the family of general covering problems considered, and for which the algorithmic results apply, are discussed. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 615–627, 1998
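
A minimal sketch of the k-stage scheme, assuming a best_subset_oracle plug-in that returns a (possibly only β-approximately) best feasible subset at each stage; with an exact oracle the covered weight is within 1 − 1/e of optimal, and with a β-approximate oracle within 1 − 1/e^β, as stated above.

```python
def k_stage_greedy(universe_weight, best_subset_oracle, k):
    """k-stage greedy covering scheme from the abstract above (sketch only).

    universe_weight           - dict element -> weight
    best_subset_oracle(unc)   - returns a feasible subset that (approximately)
                                maximizes the covered weight of `unc`
    Names and interfaces are illustrative assumptions.
    """
    uncovered = dict(universe_weight)
    picked = []
    for _ in range(k):
        subset = best_subset_oracle(uncovered)
        if not subset:
            break
        picked.append(subset)
        for e in subset:
            uncovered.pop(e, None)         # mark newly covered elements
    covered = sum(universe_weight.values()) - sum(uncovered.values())
    return picked, covered
```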

202 citations


Journal ArticleDOI
TL;DR: Three algorithms are presented: a constructive algorithm, a randomized greedy algorithm, and a very simple tabu search procedure used in a Branch-and-Cut procedure that successfully solved large CVRP instances to optimality.

162 citations


Journal ArticleDOI
TL;DR: This paper derives new approaches for applying Lagrangian methods in discrete space, shows that an equilibrium is reached when a feasible assignment to the original problem is found and presents heuristic algorithms to look for equilibrium points, and proposes a new discrete Lagrange-multiplier-based global-search method (DLM) for solving satisfiability problems.
Abstract: Satisfiability is a class of NP-complete problems that model a wide range of real-world applications. These problems are difficult to solve because they have many local minima in their search space, often trapping greedy search methods that utilize some form of descent. In this paper, we propose a new discrete Lagrange-multiplier-based global-search method (DLM) for solving satisfiability problems. We derive new approaches for applying Lagrangian methods in discrete space, we show that an equilibrium is reached when a feasible assignment to the original problem is found, and we present heuristic algorithms to look for equilibrium points. Our method and analysis provide a theoretical foundation and generalization of local search schemes that optimize the objective alone and penalty-based schemes that optimize the constraints alone. In contrast to local search methods that restart from a new starting point when a search reaches a local trap, the Lagrange multipliers in DLM provide a force to lead the search out of a local minimum and move it in the direction provided by the Lagrange multipliers. In contrast to penalty-based schemes that rely only on the weights of violated constraints to escape from local minima, DLM also uses the value of an objective function (in this case the number of violated constraints) to provide further guidance. The dynamic shift in emphasis between the objective and the constraints, depending on their relative values, is the key to Lagrangian methods. One of the major advantages of DLM is that it has very few algorithmic parameters to be tuned by users. Besides, the search procedure can be made deterministic and the results reproducible. We demonstrate our method by applying it to solve an extensive set of benchmark problems archived in DIMACS of Rutgers University. DLM often performs better than the best existing methods and can achieve an order-of-magnitude speed-up for some problems.
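
A minimal sketch of the discrete Lagrangian idea for SAT, assuming DIMACS-style clauses; the descent rule, multiplier increment, and parameter values here are illustrative choices, not the authors' exact DLM settings.

```python
import random

def dlm_sat(clauses, n_vars, max_flips=100000, seed=0):
    """Sketch of a discrete Lagrange-multiplier search for SAT.

    clauses: list of clauses, each a list of signed ints (DIMACS style),
             e.g. [1, -3] means (x1 OR NOT x3).
    Descends on L(x) = #unsat(x) + sum of multipliers of unsatisfied clauses;
    when no flip improves L, the multipliers of violated clauses are raised.
    """
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n_vars + 1)]     # 1-indexed assignment
    lam = [1.0] * len(clauses)                              # Lagrange multipliers

    def unsat(assign):
        return [i for i, c in enumerate(clauses)
                if not any((assign[l] if l > 0 else not assign[-l]) for l in c)]

    for _ in range(max_flips):
        violated = unsat(x)
        if not violated:
            return x                                        # satisfying assignment found
        def L(assign):
            u = unsat(assign)
            return len(u) + sum(lam[i] for i in u)
        base = L(x)
        best_v, best_delta = None, 0.0
        for v in {abs(l) for i in violated for l in clauses[i]}:
            x[v] = not x[v]
            delta = L(x) - base
            x[v] = not x[v]
            if delta < best_delta:
                best_v, best_delta = v, delta
        if best_v is not None:
            x[best_v] = not x[best_v]                       # greedy descent step
        else:
            for i in violated:                              # stuck: raise multipliers
                lam[i] += 1.0
    return None
```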

147 citations


Journal ArticleDOI
TL;DR: A variation of a greedy algorithm is presented that can be used in a wide range of test assembly problems and selects items to have a locally optimal fit to a moving set of average criterion values.
Abstract: Numerous algorithms and heuristics have been introduced that allow test developers to simultaneously generate multiple test forms that match qualitative constraints, such as content blueprints, and quantitative targets, such as test information functions. A variation of a greedy algorithm is presented here that can be used in a wide range of test assembly problems. The algorithm selects items to have a locally optimal fit to a moving set of average criterion values. A normalization procedure is used to allow the heuristic to work simultaneously with numerous qualitative and quantitative constraints. A complex sample application is demonstrated.
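
A small sketch of the moving-average idea: pick, at each step, the item that keeps the running averages of all (normalized) criteria closest to their targets. The item/target representation and the normalization by target magnitude are assumptions for illustration, not the published heuristic's exact formulas.

```python
def greedy_assemble(items, targets, n_items):
    """Illustrative greedy test-assembly step (interfaces are assumptions).

    items   - list of dicts mapping criterion name -> value for each item
    targets - dict mapping criterion name -> desired average value
    Selects items one at a time so the running averages of all criteria
    track the targets; each criterion is normalized by its target magnitude
    so qualitative and quantitative constraints become comparable.
    """
    chosen, sums = [], {c: 0.0 for c in targets}
    pool = list(range(len(items)))
    for k in range(1, n_items + 1):
        def misfit(i):
            return sum(abs((sums[c] + items[i][c]) / k - targets[c])
                       / (abs(targets[c]) or 1.0) for c in targets)
        best = min(pool, key=misfit)
        pool.remove(best)
        chosen.append(best)
        for c in targets:
            sums[c] += items[best][c]
    return chosen
```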

Book ChapterDOI
27 Sep 1998
TL;DR: An Ant Colony Optimisation (ACO) algorithm for the Shortest Common Supersequence (SCS) problem, which has applications in production system planning, mechanical engineering and molecular biology is introduced.
Abstract: In this paper we introduce an Ant Colony Optimisation (ACO) algorithm for the Shortest Common Supersequence (SCS) problem, which has applications in production system planning, mechanical engineering and molecular biology. The ACO algorithm is used to find good parameters for a heuristic for the SCS problem. An island model with several populations of ants is used for the ACO algorithm. Besides, we introduce a lookahead function which makes the decisions of the ants dependent on the state reached after the decision.

Journal ArticleDOI
TL;DR: In this paper, it was shown that simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter.
Abstract: In this paper we consider the problem of learning a linear threshold function (a half-space in n dimensions, also called a "perceptron"). Methods for solving this problem generally fall into two categories. In the absence of noise, this problem can be formulated as a Linear Program and solved in polynomial time with the Ellipsoid Algorithm or Interior Point methods. Alternatively, simple greedy algorithms such as the Perceptron Algorithm are often used in practice and have certain provable noise-tolerance properties; but their running time depends on a separation parameter, which quantifies the amount of "wiggle room" available for a solution, and can be exponential in the description length of the input. In this paper we show how simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter. Suitably combining these hypotheses results in a polynomial-time algorithm for learning linear threshold functions in the PAC model in the presence of random classification noise. (Also, a polynomial-time algorithm for learning linear threshold functions in the Statistical Query model of Kearns.) Our algorithm is based on a new method for removing outliers in data. Specifically, for any set S of points in R^n, each given to b bits of precision, we show that one can remove only a small fraction of S so that in the remaining set T, for every vector v, max_{x ∈ T} (v · x)^2 ≤ poly(n, b) · E_{x ∈ T} (v · x)^2; i.e., for any hyperplane through the origin, the maximum distance (squared) from a point in T to the plane is at most polynomially larger than the average. After removing these outliers, we are able to show that a modified version of the Perceptron Algorithm finds a weak hypothesis in polynomial time, even in the presence of random classification noise.
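
For readers unfamiliar with the greedy method referenced here, the sketch below is the standard Perceptron update rule; the paper's modified, outlier-filtered variant is not reproduced.

```python
def perceptron(samples, labels, max_passes=100):
    """Standard Perceptron algorithm (not the paper's modified variant).

    samples - list of feature vectors (lists of floats)
    labels  - list of +1/-1 labels
    Returns a weight vector w; the hypothesis is sign(w . x).
    """
    n = len(samples[0])
    w = [0.0] * n
    for _ in range(max_passes):
        updated = False
        for x, y in zip(samples, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]   # greedy correction step
                updated = True
        if not updated:
            break                     # all examples correctly classified
    return w
```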

Journal ArticleDOI
TL;DR: This paper proposes a greedy algorithm and provides a heuristic, based on regular cycles for all but one activity type, with a guaranteed worst-case bound, and investigates properties of an optimal solution and shows that there is always a cyclic optimal policy.

Journal ArticleDOI
TL;DR: This work studies the average performance of a simple greedy algorithm for finding a matching in a sparse random graph Gn, c/n, where c>0 is constant and gives significantly improved estimates of the errors made by the algorithm.
Abstract: We study the average performance of a simple greedy algorithm for finding a matching in a sparse random graph G_{n, c/n}, where c>0 is constant. The algorithm was first proposed by Karp and Sipser [Proceedings of the Twenty-Second Annual IEEE Symposium on Foundations of Computing, 1981, pp. 364–375]. We give significantly improved estimates of the errors made by the algorithm. For the supercritical case where c > e, with high probability the algorithm produces a matching which is within n^{1/5+o(1)} of maximum size. © 1998 John Wiley & Sons, Inc. Random Struct. Alg., 12, 111–177, 1998
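
For reference, the Karp-Sipser rule the paper analyzes is easy to state: while edges remain, match a pendant (degree-1) vertex with its neighbour if one exists, otherwise match a uniformly random edge. The rendering below is a simplified sketch that assumes integer vertex labels.

```python
import random

def karp_sipser(adj, seed=0):
    """Karp-Sipser style greedy matching (simplified sketch).

    adj: dict vertex -> set of neighbours; vertices are assumed to be ints
    so edges can be listed once with v < u.  Tie-breaking details are
    illustrative assumptions, not the analyzed algorithm's exact form.
    """
    rng = random.Random(seed)
    g = {v: set(nbrs) for v, nbrs in adj.items()}   # working copy
    matching = []

    def remove(v):
        for u in g.pop(v, set()):
            g[u].discard(v)

    while True:
        pendant = next((v for v, nbrs in g.items() if len(nbrs) == 1), None)
        if pendant is not None:                     # phase 1: match degree-1 vertices
            u = next(iter(g[pendant]))
            matching.append((pendant, u))
            remove(pendant)
            remove(u)
            continue
        edges = [(v, u) for v, nbrs in g.items() for u in nbrs if v < u]
        if not edges:
            return matching
        v, u = rng.choice(edges)                    # phase 2: random edge
        matching.append((v, u))
        remove(v)
        remove(u)
```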

Proceedings ArticleDOI
01 Nov 1998
TL;DR: Off-line (bulk) loading of R-trees is useful to improve node utilization and query performance; the proposed algorithm is the method of choice for data with skew in locations, areas, or aspect ratios.
Abstract: Off-line (bulk) loading of R-trees is useful to improve node utilization and query performance. We present an algorithm for bulk loading R-trees which differs from previous ones in two aspects: (a) it partitions input data into subtrees in a top-down fashion (based on the fact that splits close to the root are likely to have a greater impact on performance); (b) at each tree level, it considers all cuts orthogonal to the coordinate axes that result in packed trees and greedily picks those optimizing an arbitrary cost function. Extensive experimentation with both real and synthetic data indicates that for region data our algorithm requires up to three times fewer disk accesses than other algorithms. It is the method of choice for data with skew in locations, areas, or aspect ratios. Such data is common in practice. The greedy split at each level can be sketched as follows:
Let n = number of input rectangles
Let S = maximum number of rectangles per subtree
Let M = maximum number of entries per node
Let f(r1, r2) be the "user-supplied" cost function
If n < S, return {stop condition}
For each dimension d
  For each ordering considered in this dimension d
    For i from 1 to ⌈n/M⌉ − 1
      Let B0 = bounding rectangle of the first i·S rectangles
      Let B1 = bounding rectangle of the other rectangles
      Remember i if f(B0, B1) is better valued
Split the input set and orderings at the best position.
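
A rough Python rendering of the per-level greedy cut selection in the pseudocode above, assuming 2-D rectangles given as (xlo, ylo, xhi, yhi) tuples and a user-supplied cost function; the exact set of orderings the paper considers is not reproduced here.

```python
def mbr(rects):
    """Minimum bounding rectangle of a list of (xlo, ylo, xhi, yhi) tuples."""
    return (min(r[0] for r in rects), min(r[1] for r in rects),
            max(r[2] for r in rects), max(r[3] for r in rects))

def best_split(rects, per_subtree, cost):
    """Greedy choice of one orthogonal cut, per the pseudocode above (sketch).

    Tries, for each axis ordering, every cut that keeps the left part a
    multiple of `per_subtree` rectangles, and returns the cut minimizing
    cost(bounding_box_left, bounding_box_right).
    """
    best = None
    for axis in (0, 1):                      # order by x-centre, then by y-centre
        order = sorted(rects, key=lambda r: (r[axis] + r[axis + 2]) / 2)
        for i in range(1, len(order) // per_subtree):
            cut = i * per_subtree
            c = cost(mbr(order[:cut]), mbr(order[cut:]))
            if best is None or c < best[0]:
                best = (c, axis, order[:cut], order[cut:])
    return best
```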

Book ChapterDOI
Kai Ming Ting
23 Sep 1998
TL;DR: The algorithm incorporating the instance-weighting method is found to be better than the original algorithm in terms of total misclassification costs, the number of high cost errors and tree size in two-class datasets.
Abstract: We introduce an instance-weighting method to induce cost-sensitive trees in this paper. It is a generalization of the standard tree induction process where only the initial instance weights determine the type of tree to be induced—minimum error trees or minimum high cost error trees. We demonstrate that it can be easily adapted to an existing tree learning algorithm. Previous research gave insufficient evidence to support the fact that the greedy divide-and-conquer algorithm can effectively induce a truly cost-sensitive tree directly from the training data. We provide this empirical evidence in this paper. The algorithm incorporating the instance-weighting method is found to be better than the original algorithm in terms of total misclassification costs, the number of high cost errors and tree size in two-class datasets. The instance-weighting method is simpler and more effective in implementation than a previous method based on altered priors.
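
One simple weighting scheme consistent with the description (shown as an assumption, not checked against the paper's exact definition) scales each instance's weight by its class's misclassification cost and renormalizes so the total weight still equals the number of training instances; minimizing weighted error then approximates minimizing expected cost.

```python
from collections import Counter

def cost_sensitive_weights(labels, cost):
    """Plausible instance-weighting scheme for cost-sensitive tree induction.

    labels - list of class labels, one per training instance
    cost   - dict class -> cost of misclassifying that class
    Each instance gets a weight proportional to its class cost, normalized
    so the weights sum to the number of instances.
    """
    n = len(labels)
    counts = Counter(labels)
    z = sum(cost[c] * counts[c] for c in counts)
    return [cost[y] * n / z for y in labels]

# Example: make errors on class 'pos' five times as expensive as on 'neg'.
weights = cost_sensitive_weights(['pos', 'neg', 'neg', 'neg'],
                                 {'pos': 5.0, 'neg': 1.0})
```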

Book ChapterDOI
27 Sep 1998
TL;DR: The experimental results obtained for non-geometric graphs show that the proposed memetic algorithm (MA) is superior to any other heuristic known to us, and for the geometric graphs considered, only the initialization phase of the MA is required to find (near) optimum solutions.
Abstract: In this paper, two types of fitness landscapes of the graph bipartitioning problem are analyzed, and a memetic algorithm — a genetic algorithm incorporating local search — that finds near-optimum solutions efficiently is presented. A search space analysis reveals that the fitness landscapes of geometric and non-geometric random graphs differ significantly, and within each type of graph there are also differences with respect to the epistasis of the problem instances. As suggested by the analysis, the performance of the proposed memetic algorithm based on Kernighan-Lin local search is better on problem instances with high epistasis than with low epistasis. Further analytical results indicate that a combination of a recently proposed greedy heuristic and Kernighan-Lin local search is likely to perform well on geometric graphs. The experimental results obtained for non-geometric graphs show that the proposed memetic algorithm (MA) is superior to any other heuristic known to us. For the geometric graphs considered, only the initialization phase of the MA is required to find (near) optimum solutions.

Book ChapterDOI
17 Dec 1998
TL;DR: The Steiner tree problem in graphs is studied for the case when vertices as well as edges have weights associated with them, and a greedy approximation algorithm based on “spider decompositions” is developed.
Abstract: In this paper we study the Steiner tree problem in graphs for the case when vertices as well as edges have weights associated with them. A greedy approximation algorithm based on spider decompositions was developed by Klein and Ravi for this problem. This algorithm provides a worst-case approximation ratio of 2 ln κ, where κ is the number of terminals. However, the best known lower bound on the approximation ratio is (1 − o(1)) ln κ, assuming that NP ⊄ DTIME[n^{O(log log n)}], by a reduction from set cover. We show that for the unweighted case we can obtain an approximation factor of ln κ. For the weighted case we develop a new decomposition theorem and generalize the notion of spiders to branch-spiders, which are used to design a new algorithm with a worst-case approximation factor of 1.5 ln κ. We then generalize the method to yield an approximation factor of (1.35 + ε) ln κ, for any constant ε > 0. These algorithms, although polynomial, are not very practical due to their high running time, since we need to repeatedly find many minimum-weight matchings in each iteration. We also develop a simple greedy algorithm that is practical and has a worst-case approximation factor of 1.6103 ln κ. The techniques developed for this algorithm imply a method of approximating node-weighted network design problems defined by 0-1 proper functions as well. These new ideas also lead to improved approximation guarantees for the problem of finding a minimum node-weighted connected dominating set. The previous best approximation guarantee for this problem was 3 ln n, due to Guha and Khuller. By a direct application of the methods developed in this paper we are able to develop an algorithm with an approximation factor of (1.35 + ε) ln n for any fixed ε > 0.

Journal ArticleDOI
TL;DR: The core algorithm used in an implementation of a scheduler currently being installed in a major Asian railway is described, which extends previous work on a greedy heuristic for scheduling trains to provide a powerful and practically useful method that is fast enough for real-time use in many cases.
Abstract: This paper describes the core algorithm used in an implementation of a scheduler currently being installed in a major Asian railway. It extends previous work on a greedy heuristic for scheduling trains, to provide a powerful and practically useful method that is fast enough for real-time use in many cases. Real-world railway systems have constraints that do not fit easily into a simple mathematical formulation. The algorithm described here makes it straightforward to incorporate many such realistic features.

Journal ArticleDOI
TL;DR: In this paper, the estimation algorithm for hinging hyperplane (HH) models is analyzed and it is shown that it is a special case of a Newton algorithm applied to a sum of squared error criterion.
Abstract: This correspondence concerns the estimation algorithm for hinging hyperplane (HH) models, a piecewise-linear model for approximating functions of several variables, suggested in Breiman (1993). The estimation algorithm is analyzed and it is shown that it is a special case of a Newton algorithm applied to a sum of squared error criterion. This insight is then used to suggest possible improvements of the algorithm so that convergence to a local minimum can be guaranteed. In addition, the way of updating the parameters in the HH model is discussed. In Breiman, a stepwise updating procedure is proposed where only a subset of the parameters are changed in each step. This connects closely to some previously suggested greedy algorithms and these greedy algorithms are discussed and compared to a simultaneous updating of all parameters.

Journal ArticleDOI
TL;DR: In this paper, an operation-sequence-based method for forming flow-line manufacturing cells is proposed to find the minimum-cost set of flowline cells that is capable of producing the desired part mix.
Abstract: In this paper we study the generalized grouping problem of cellular manufacturing. We propose an operation-sequence-based method for forming flow-line manufacturing cells. Process planning in the form of selection of the machine for each operation is included in the problem formulation. Input requirements include the set of operation requirements for each part type, and operation capabilities for all available machine types. The objective is to find the minimum-cost set of flow-line cells that is capable of producing the desired part mix. A similarity coefficient based on the longest common operation subsequence between part types is defined and used to group parts into independent, flow-line families. An algorithm is developed for finding a composite operation supersequence for each family. Given machine options for each operation in this sequence, the optimal machine sequence and capacity for each cell is then found by solving a shortest path problem on an augmented graph. The method is shown to be effi...
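
As a small illustration of the similarity coefficient mentioned above, the sketch below computes the longest common operation subsequence of two routing sequences; the normalization by the shorter sequence length is an assumption, not necessarily the paper's exact coefficient.

```python
def lcs_length(a, b):
    """Length of the longest common (not necessarily contiguous)
    subsequence of two operation sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def sequence_similarity(a, b):
    """Similarity coefficient based on the longest common operation
    subsequence, normalized by the shorter sequence (assumed definition)."""
    return lcs_length(a, b) / min(len(a), len(b))

# Example: two part types sharing the operation subsequence turn -> drill -> grind.
print(sequence_similarity(['turn', 'drill', 'mill', 'grind'],
                          ['turn', 'drill', 'grind']))   # -> 1.0
```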

Book ChapterDOI
TL;DR: A parallel GRASP for the Steiner problem in graphs is described; GRASP iterations are distributed among the processors on a demand-driven basis in order to improve load balancing, and the procedure was implemented using the Message Passing Interface library on an IBM SP2 machine.
Abstract: A greedy randomized adaptive search procedure (GRASP) is a metaheuristic for combinatorial optimization. Given an undirected graph with weights associated with its nodes, the Steiner tree problem consists in finding a minimum weight subgraph spanning a given subset of (terminal) nodes of the original graph. In this paper, we describe a parallel GRASP for the Steiner problem in graphs. We review basic concepts of GRASP: construction and local search algorithms. The implementation of a sequential GRASP for the Steiner problem in graphs is described in detail. Feasible solutions are characterized by their non-terminal nodes. A randomized version of Kruskal's algorithm for the minimum spanning tree problem is used in the construction phase. Local search is based on insertions and eliminations of nodes to/from the current solution. Parallelization is done through the distribution of the GRASP iterations among the processors on a demand-driven basis, in order to improve load balancing. The parallel procedure was implemented using the Message Passing Interface library on an IBM SP2 machine. Computational experiments on benchmark problems are reported.
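
A generic GRASP skeleton matching the construction/local-search description above; construct, local_search, and cost are problem-specific plug-ins (assumed interfaces), and the restricted-candidate-list helper shows the randomized greedy choice typically used inside the construction phase, with alpha an illustrative parameter.

```python
import random

def grasp(construct, local_search, cost, iterations, seed=0):
    """Generic GRASP loop: repeat a randomized greedy construction followed
    by local search, and keep the best solution found."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        solution = local_search(construct(rng))
        if best is None or cost(solution) < cost(best):
            best = solution
    return best

def randomized_greedy_pick(candidates, key, rng, alpha=0.3):
    """Restricted-candidate-list step of a GRASP construction: choose
    uniformly among candidates whose greedy value is within a fraction
    `alpha` of the best one."""
    scored = [(key(c), c) for c in candidates]
    lo = min(v for v, _ in scored)
    hi = max(v for v, _ in scored)
    rcl = [c for v, c in scored if v <= lo + alpha * (hi - lo)]
    return rng.choice(rcl)
```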

Proceedings ArticleDOI
21 Apr 1998
TL;DR: The goal of the paper is to solve the problem about local optimal solutions by introducing a measure of diversity of populations using the concept of information entropy and obtain a best approximate solution of the TSP by using this entropy-based GA.
Abstract: The traveling salesman problem (TSP) is used as a paradigm for a wide class of problems having complexity due to the combinatorial explosion. The TSP has become a target for the genetic algorithm (GA) community, because it is probably the central problem in combinatorial optimization and many new ideas in combinatorial optimization have been tested on the TSP. However, when using a GA to solve TSPs, we frequently obtain a locally optimal solution rather than a best approximate solution. The goal of the paper is to address this problem of local optimal solutions by introducing a measure of the diversity of populations based on the concept of information entropy. Thus, we can obtain a best approximate solution of the TSP by using this entropy-based GA.
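
One simple way to quantify the population diversity the paper argues for (shown only as an assumption; the paper's exact entropy measure may differ) is the information entropy of the edge distribution across the population's tours.

```python
from collections import Counter
from math import log

def population_entropy(tours):
    """Information entropy of the edge distribution in a TSP population:
    low entropy means the tours share most of their edges (low diversity)."""
    edges = Counter()
    for tour in tours:
        for a, b in zip(tour, tour[1:] + tour[:1]):     # edges of the closed tour
            edges[frozenset((a, b))] += 1
    total = sum(edges.values())
    return -sum((c / total) * log(c / total, 2) for c in edges.values())

# Two identical tours give minimal entropy; a diverse pair raises it.
print(population_entropy([[0, 1, 2, 3], [0, 1, 2, 3]]))   # 2.0
print(population_entropy([[0, 1, 2, 3], [0, 2, 1, 3]]))   # 2.5
```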

Book ChapterDOI
31 Jan 1998
TL;DR: In this article, the authors study the greedy expansion of real numbers in a Pisot number base and prove that every sufficiently small positive rational number has a purely periodic greedy expansion in a Pisot unit base under a certain finiteness condition.
Abstract: We study the greedy expansion of real numbers in a Pisot number base. We show certain criteria for finiteness, periodicity, and pure periodicity. Further, it is proved that every sufficiently small positive rational number has a purely periodic greedy expansion in a Pisot unit base under a certain finiteness condition.
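
For background, the greedy (β-)expansion referred to here is the standard one generated by the β-transformation, stated in the usual notation:

```latex
% Greedy expansion of x in [0,1) in base \beta > 1 (here \beta a Pisot number).
T_\beta(x) = \beta x - \lfloor \beta x \rfloor, \qquad
d_i = \big\lfloor \beta\, T_\beta^{\,i-1}(x) \big\rfloor, \qquad
x = \sum_{i \ge 1} d_i \, \beta^{-i}.
```

The expansion is finite when the digit sequence (d_i) ends in zeros, periodic when it is eventually periodic, and purely periodic when it repeats from the first digit; the paper's criteria concern when these cases occur for Pisot bases.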

Journal ArticleDOI
TL;DR: A new heuristic for the BiQAP is proposed, a greedy randomized adaptive search procedure (GRASP).

Journal ArticleDOI
TL;DR: This paper presents the details of a GA and discusses the main characteristics of an assembly line balancing problem that is typical in the clothing industry and explains how such problems can be formulated for genetic algorithms to solve.
Abstract: Assembly line balancing problems that occur in real world situations are dynamic and are fraught with various sources of uncertainties such as the performance of workers and the breakdown of machinery. This is especially true in the clothing industry. The problem cannot normally be solved deterministically using existing techniques. Recent advances in computing technology, especially in the area of computational intelligence, however, can be used to alleviate this problem. For example, some techniques in this area can be used to restrict the search space in a combinatorial problem, thus opening up the possibility of obtaining better results. Among the different computational intelligence techniques, genetic algorithms (GA) is particularly suitable. GAs are probabilistic search methods that employ a search technique based on ideas from natural genetics and evolutionary principles. In this paper, we present the details of a GA and discuss the main characteristics of an assembly line balancing problem that is typical in the clothing industry. We explain how such problems can be formulated for genetic algorithms to solve. To evaluate the appropriateness of the technique, we have carried out some experiments. Our results show that the GA approach performs much better than the use of a greedy algorithm, which is used by many factory supervisors to tackle the assembly line balancing problem.

Journal ArticleDOI
TL;DR: A few greedy algorithms and other heuristic methods, some based on tabu search and evolutionary algorithms, are suggested, resulting in new, record-breaking codes.
Abstract: Many of the fundamental coding problems can be represented as graph problems. These problems are often intrinsically difficult and unsolved even if the code length is relatively small. With the motivation to improve lower bounds on the sizes of constant weight codes and asymmetric codes, we suggest a few greedy algorithms and other heuristic methods, which result in new, record-breaking codes. Some of the heuristics used are based on tabu search and evolutionary algorithms. Tables of new codes are presented.
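
For context, the simplest greedy construction in this setting is lexicographic: scan candidate words in a fixed order and keep each one that respects the minimum distance to all words kept so far. The sketch below does this for binary constant weight codes; it is only a baseline of the kind the paper's heuristics improve upon, not one of the record-breaking constructions.

```python
from itertools import combinations

def greedy_constant_weight_code(n, w, d):
    """Greedy (lexicographic) construction of a binary constant weight code
    with length n, weight w and minimum Hamming distance d.  Words are
    stored as sets of one-positions; the Hamming distance of two such
    words equals the size of their symmetric difference."""
    code = []
    for support in combinations(range(n), w):
        word = set(support)
        if all(len(word ^ c) >= d for c in code):
            code.append(word)
    return code

# Example: size of the greedy baseline code for n=8, w=4, d=4.
print(len(greedy_constant_weight_code(8, 4, 4)))
```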

Proceedings ArticleDOI
04 Jan 1998
TL;DR: These techniques, easily extendible to the routing of staircase channels, yield efficient solutions to detailed routing in general floorplans, thus establishing the superiority of MD-routing over classical strategies.
Abstract: New techniques are presented for routing L-shaped channels, switchboxes and staircases in the 2-layer Manhattan-diagonal (MD) model with tracks in horizontal, vertical and ±45° directions. First, a simple O(l·d) time algorithm is proposed which routes any L-shaped channel with length l, density d and no cyclic vertical constraints, in w (d ≤ w ≤ d+1) tracks. Next, an O(l·w) time greedy method for routing an L-shaped channel with cyclic vertical constraints is described. Then, the switchbox routing problem in the MD model is solved elegantly. These techniques, easily extendible to the routing of staircase channels, yield efficient solutions to detailed routing in general floorplans. Experimental results show significantly lower via-count and reduced wire length, thus establishing the superiority of MD-routing over classical strategies.

Book ChapterDOI
27 Sep 1998
TL;DR: A Genetic Algorithm for solving the minimum span frequency assignment problem (MSFAP) is described; it produces optimal solutions to several practical problem instances and compares favourably to simulated annealing and tabu search algorithms.
Abstract: We describe a Genetic Algorithm (GA) for solving the minimum span frequency assignment problem (MSFAP). The MSFAP involves assigning frequencies to each transmitter in a region, subject to a number of constraints being satisfied, such that the span, i.e. the range of frequencies used, is minimized. The technique involves finding an ordering of the transmitters for use in a sequential (greedy) assignment process. Results are given which show that our GA produces optimal solutions to several practical problem instances, and compares favourably to simulated annealing and tabu search algorithms.
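
The sequential (greedy) assignment that the GA's orderings feed into can be sketched as follows; the constraint interface (a required channel separation for each transmitter pair) is an assumption chosen for illustration.

```python
def sequential_assignment(order, constraint):
    """Sequential (greedy) frequency assignment used to evaluate an ordering.

    order            - list of transmitter ids, e.g. produced by the GA
    constraint(i, j) - required channel separation between transmitters i and j
    Each transmitter gets the lowest channel compatible with all previously
    assigned ones; the span is the largest channel used.
    """
    assigned = {}
    for t in order:
        ch = 0
        while any(abs(ch - assigned[u]) < constraint(t, u) for u in assigned):
            ch += 1                      # try the next-lowest channel
        assigned[t] = ch
    return assigned, max(assigned.values(), default=0)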