
Showing papers on "Greedy algorithm" published in 2001


Journal ArticleDOI
TL;DR: An OS-directed dynamic power management technique is proposed in which an embedded micro-operating system reduces sensor-node energy consumption by exploiting both sleep-state and active power management.
Abstract: We propose an OS-directed power management technique to improve the energy efficiency of sensor nodes. Dynamic power management (DPM) is an effective tool in reducing system power consumption without significantly degrading performance. The basic idea is to shut down devices when not needed and wake them up when necessary. DPM, in general, is not a trivial problem. If the energy and performance overheads of sleep-state transitions were negligible, then a simple greedy algorithm that makes the system enter the deepest sleep state when idling would be perfect. However, in reality, sleep-state transitioning has the overhead of storing processor state and turning off power. Waking up also takes a finite amount of time. Therefore, implementing the correct policy for sleep-state transitioning is critical for DPM success. Our power-aware methodology uses an embedded micro-operating system to reduce node energy consumption by exploiting both sleep-state and active power management.
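
As a concrete illustration of the break-even reasoning above, here is a minimal sketch of such a greedy policy. It is not the paper's implementation; the state names, powers, and transition overheads are invented for the example.

```python
# Hypothetical sketch of a greedy sleep-state policy: enter the state with the
# lowest total energy over the predicted idle period, charging each state its
# transition overhead. All states and numbers below are invented.

from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    power_mw: float        # power drawn while resident in this state
    transition_ms: float   # time to enter and leave the state
    transition_mj: float   # energy spent on the transition itself

# Deeper states draw less power but cost more to enter and leave.
STATES = [
    SleepState("idle",       power_mw=10.0, transition_ms=0.0,  transition_mj=0.0),
    SleepState("standby",    power_mw=2.0,  transition_ms=5.0,  transition_mj=0.5),
    SleepState("deep_sleep", power_mw=0.1,  transition_ms=50.0, transition_mj=1.0),
]

def choose_state(predicted_idle_ms: float) -> SleepState:
    """Greedily pick the state minimizing total energy over the idle period."""
    def energy_mj(s: SleepState) -> float:
        if s.transition_ms > predicted_idle_ms:
            return float("inf")  # transition cannot be amortized at all
        sleep_ms = predicted_idle_ms - s.transition_ms
        return s.transition_mj + s.power_mw * sleep_ms / 1000.0
    return min(STATES, key=energy_mj)

print(choose_state(3.0).name)    # short idle -> "idle"
print(choose_state(500.0).name)  # long idle  -> "deep_sleep"
```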

747 citations


Journal ArticleDOI
TL;DR: A hybrid genetic algorithm is presented for the container loading problem with boxes of different sizes and a single container; it uses specific genetic operators based on an integrated greedy heuristic to generate offspring.

303 citations


Journal ArticleDOI
05 Oct 2001
TL;DR: An algorithm that combines rough set theory with greedy heuristics for feature selection is proposed, selecting features that do not damage the performance of induction.
Abstract: Practical machine learning algorithms are known to degrade in performance (prediction accuracy) when faced with many features (sometimes the term attribute is used instead of feature) that are not necessary for rule discovery. To cope with this problem, many methods for selecting a subset of features have been proposed. Two typical ones are the filter approach, which selects a feature subset in a preprocessing step, and the wrapper approach, which selects an optimal feature subset from the space of possible subsets using the induction algorithm itself as part of the evaluation function. Although the filter approach is faster, it is somewhat blind in that the performance of induction is not considered. The wrapper approach, on the other hand, can obtain optimal feature subsets, but it is harder to use because of its time and space complexity. In this paper, we propose an algorithm that uses rough set theory with greedy heuristics for feature selection. Feature selection proceeds as in the filter approach, but the evaluation criterion is related to the performance of induction. That is, we select the features that do not damage the performance of induction.
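
A minimal sketch of the greedy selection loop this describes, with the rough-set evaluation criterion abstracted into a generic `score` callback (the callback and the stopping threshold are stand-ins, not the paper's exact measure):

```python
# Greedy forward feature selection: repeatedly add the feature that most
# improves an evaluation score. `score` stands in for the paper's rough-set
# based criterion; it is a placeholder here.

from typing import Callable, Sequence, Set

def greedy_select(features: Sequence[str],
                  score: Callable[[Set[str]], float],
                  min_gain: float = 1e-6) -> Set[str]:
    selected: Set[str] = set()
    best = score(selected)
    while True:
        gains = {f: score(selected | {f}) - best
                 for f in features if f not in selected}
        if not gains:
            break
        f, g = max(gains.items(), key=lambda kv: kv[1])
        if g <= min_gain:          # no feature helps any more: stop
            break
        selected.add(f)
        best += g
    return selected

# Toy usage: the score simply counts how many "useful" features are included.
useful = {"a", "c"}
print(sorted(greedy_select(["a", "b", "c"], lambda s: len(s & useful))))  # ['a', 'c']
```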

295 citations


Proceedings ArticleDOI
29 May 2001
TL;DR: The problem of traffic grooming to reduce the number of transceivers in optical networks is shown to be equivalent to a certain traffic maximization problem; an intuitive interpretation of this equivalence is given and used to derive a greedy algorithm for transceiver minimization.
Abstract: We study the problem of traffic grooming to reduce the number of transceivers in optical networks. We show that this problem is equivalent to a certain traffic maximization problem. We give an intuitive interpretation of this equivalence and use this interpretation to derive a greedy algorithm for transceiver minimization. We discuss implementation issues and present computational results comparing the heuristic solutions with the optimal solutions for several small example networks. For larger networks, the heuristic solutions are compared with known bounds on the optimal solution obtained using integer programming tools.

164 citations


Proceedings ArticleDOI
14 Oct 2001
TL;DR: This work proposes a heuristic for allocation in combinatorial auctions that provides excellent solutions for problems with over 1000 items and 10,000 bids, achieving an average approximation error of less than 1%.
Abstract: We propose a heuristic for allocation in combinatorial auctions. We first run an approximation algorithm on the linear programming relaxation of the combinatorial auction. We then run a sequence of greedy algorithms, starting with the order on the bids determined by the approximate linear program and continuing in a hill-climbing fashion using local improvements in the order of bids. We have implemented the algorithm and have tested it on the complete corpus of instances provided by Vohra and de Vries as well as on instances drawn from the distributions of Leyton-Brown, Pearson, and Shoham. Our algorithm typically runs two to three orders of magnitude faster than the reported running times of Vohra and de Vries, while achieving an average approximation error of less than 1%. This algorithm can provide, in less than a minute of CPU time, excellent solutions for problems with over 1000 items and 10,000 bids. We thus believe that combinatorial auctions for most purposes face no practical computational hurdles.
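
The greedy core of such a heuristic is simple; the paper's contribution lies in choosing and hill-climbing over the bid order, which is not shown in this illustrative sketch:

```python
# One greedy pass over bids in a given order: accept a bid whenever all the
# items it requests are still unallocated. The ordering would come from the
# approximate LP solution and subsequent local improvements.

from typing import FrozenSet, List, Tuple

Bid = Tuple[FrozenSet[str], float]  # (items requested, offered price)

def greedy_allocate(bids_in_order: List[Bid]) -> Tuple[List[Bid], float]:
    taken: set = set()
    accepted: List[Bid] = []
    revenue = 0.0
    for items, price in bids_in_order:
        if items.isdisjoint(taken):        # every requested item still free
            taken |= items
            accepted.append((items, price))
            revenue += price
    return accepted, revenue

bids = [(frozenset({"x", "y"}), 10.0),
        (frozenset({"y", "z"}), 8.0),
        (frozenset({"z"}), 3.0)]
_, rev = greedy_allocate(bids)
print(rev)  # 13.0 -- the first bid blocks the second; the third still fits
```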

163 citations


Journal ArticleDOI
TL;DR: Some new conditions that arise naturally in the study of the Thresholding Greedy Algorithm are introduced for bases of Banach spaces and a complete duality theory for greedy bases is obtained.
Abstract: Some new conditions that arise naturally in the study of the Thresholding Greedy Algorithm are introduced for bases of Banach spaces. We relate these conditions to best n-term approximation and we study their duality theory. In particular, we obtain a complete duality theory for greedy bases.
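
For context, a standard formulation of the Thresholding Greedy Algorithm and of greediness of a basis (textbook definitions, not quoted from the paper):

```latex
% For x = \sum_n a_n(x) e_n in a Banach space X with basis (e_n), the
% Thresholding Greedy Algorithm keeps the m largest coefficients:
\[
  G_m(x) = \sum_{n \in \Lambda_m} a_n(x)\, e_n, \qquad |\Lambda_m| = m, \quad
  \min_{n \in \Lambda_m} |a_n(x)| \ \ge\ \max_{k \notin \Lambda_m} |a_k(x)| .
\]
% The basis is greedy if this is uniformly comparable to the best m-term
% approximation error:
\[
  \|x - G_m(x)\| \ \le\ C\, \sigma_m(x), \qquad
  \sigma_m(x) = \inf\Big\{ \Big\| x - \sum_{n \in \Lambda} c_n e_n \Big\| :
  |\Lambda| \le m,\ c_n \ \text{scalars} \Big\}.
\]
```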

159 citations


Journal ArticleDOI
TL;DR: An integer programming formulation is presented for the problem of batching and scheduling on certain kinds of batch processors; a lower bound is generated from a partial LP relaxation, a polynomial algorithm solves a special case, and a set of heuristics is tested on the general problem.
Abstract: This paper discusses the problem of batching and scheduling of certain kinds of batch processors. Examples of these processors include heat treatment facilities, particularly in the steel and ceramics industries, as well as a variety of operations in the manufacture of integrated circuits. In general, for our problem there is a set of jobs waiting to be processed. Each job is associated with a given family and has a weight or delay cost and a volume. The scheduler must organize jobs into batches in which each batch consists of jobs from a single family and in which the total volume of jobs in a batch does not exceed the capacity of the processor. The scheduler must then sequence all the batches. The processing time for a batch depends only on the family and not on the number or the volume of jobs in the batch. The objective is to minimize the mean weighted flow time. The paper presents an integer programming formulation for this problem, generates a lower bound from a partial LP relaxation, provides a polynomial algorithm to solve a special case, and tests a set of heuristics on the general problem. The ability to pack jobs into batches is the key to efficient solutions and is the basis of the different solution procedures in this paper. The heuristics include a greedy heuristic, a successive knapsack heuristic, and a generalized assignment heuristic. Optimal solutions are obtained by complete enumeration for small problems. The conclusions of the computational study show that the successive knapsack and generalized assignment heuristics perform better than the greedy. The generalized assignment heuristic does slightly better than the successive knapsack heuristic in some cases, but the latter is substantially faster and more robust. For problems with few jobs, the generalized assignment heuristic and the knapsack heuristic almost always provide optimal solutions. For problems with more jobs, we compare the heuristic solutions' values to lower bounds; the computational work suggests that the heuristics continue to provide solutions that are optimal or close to optimal. The study also shows that the volume of the job relative to the capacity of the facility and the number of jobs in a family affect the performance of the heuristics, whereas the number of families does not. Finally, we give a worst-case analysis of the greedy heuristic.
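
As a hedged illustration of the batch-then-sequence structure (this is not the paper's greedy, successive knapsack, or generalized assignment heuristic), one might pack each family's jobs in descending weight-to-volume order and then sequence the batches by Smith's rule, which is optimal once the batches are fixed:

```python
# Illustrative batch-then-sequence sketch. Batches contain jobs of a single
# family and respect the capacity; batch processing time depends only on the
# family; batches are sequenced by total weight / processing time.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Job:
    family: str
    weight: float   # delay cost
    volume: float

def build_schedule(jobs: List[Job], capacity: float,
                   proc_time: Dict[str, float]) -> List[Tuple[str, List[Job]]]:
    batches: List[Tuple[str, List[Job]]] = []
    by_family: Dict[str, List[Job]] = {}
    for j in jobs:
        by_family.setdefault(j.family, []).append(j)
    for fam, fam_jobs in by_family.items():
        fam_jobs.sort(key=lambda j: j.weight / j.volume, reverse=True)
        current: List[Job] = []
        used = 0.0
        for j in fam_jobs:               # first-fit packing in greedy order
            if current and used + j.volume > capacity:
                batches.append((fam, current))
                current, used = [], 0.0
            current.append(j)
            used += j.volume
        if current:
            batches.append((fam, current))
    # Smith's rule on the fixed batches: heavy, short batches first.
    batches.sort(key=lambda b: sum(j.weight for j in b[1]) / proc_time[b[0]],
                 reverse=True)
    return batches

jobs = [Job("heat", 4, 2), Job("heat", 1, 3), Job("glaze", 5, 4)]
for fam, batch in build_schedule(jobs, 4.0, {"heat": 2.0, "glaze": 1.0}):
    print(fam, [(j.weight, j.volume) for j in batch])
```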

158 citations


Book ChapterDOI
05 Jan 2001
TL;DR: A preliminary report on the first broad-based experimental comparison of modern heuristics for the asymmetric traveling salesman problem (ATSP) finds that no single algorithm is dominant over all instance classes, although for each class the best tours are found either by Zhang's algorithm or by an iterated variant on Kanellakis-Papadimitriou.
Abstract: The purpose of this paper is to provide a preliminary report on the first broad-based experimental comparison of modern heuristics for the asymmetric traveling salesman problem (ATSP). There are currently three general classes of such heuristics: classical tour construction heuristics such as Nearest Neighbor and the Greedy algorithm; local search algorithms based on re-arranging segments of the tour, as exemplified by the Kanellakis-Papadimitriou algorithm [KP80]; and algorithms based on patching together the cycles in a minimum cycle cover, the best of which are variants on an algorithm proposed by Zhang [Zha93]. We test implementations of the main contenders from each class on a variety of instance types, introducing a variety of new random instance generators modeled on real-world applications of the ATSP. Among the many tentative conclusions we reach is that no single algorithm is dominant over all instance classes, although for each class the best tours are found either by Zhang's algorithm or by an iterated variant on Kanellakis-Papadimitriou.
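
For reference, the classical Greedy tour-construction heuristic tested in comparisons like this one can be sketched as follows (a generic implementation, not the authors' code):

```python
# Greedy arc-selection for the ATSP: scan arcs in increasing cost order and
# accept an arc if it keeps every out-degree and in-degree at most one and
# does not close a cycle before all n cities are linked.

import math
from typing import Dict, List

def greedy_atsp(dist: List[List[float]]) -> List[int]:
    n = len(dist)
    arcs = sorted((dist[i][j], i, j)
                  for i in range(n) for j in range(n) if i != j)
    succ: Dict[int, int] = {}
    pred: Dict[int, int] = {}
    for _, i, j in arcs:
        if i in succ or j in pred:
            continue                      # degree constraint violated
        k, length = j, 1                  # would i -> j close a short cycle?
        while k in succ:
            k, length = succ[k], length + 1
        if k == i and length < n:
            continue
        succ[i], pred[j] = j, i
        if len(succ) == n:                # final arc closes the full tour
            break
    tour, city = [0], succ[0]             # read the tour off the successors
    while city != 0:
        tour.append(city)
        city = succ[city]
    return tour

d = [[math.inf, 1, 4], [2, math.inf, 1], [1, 3, math.inf]]
print(greedy_atsp(d))  # [0, 1, 2]
```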

137 citations


Proceedings ArticleDOI
01 May 2001
TL;DR: It is shown that the optimum cost-delay trade-off (Pareto) curve in Mariposa's framework can be approximated fast within any desired accuracy; a polynomial algorithm is also presented for the general multiobjective query optimization problem that approximates the optimum cost-delay trade-off arbitrarily well.
Abstract: The optimization of queries in distributed database systems is known to be subject to delicate trade-offs. For example, the Mariposa database system allows users to specify a desired delay-cost tradeoff (that is, to supply a decreasing function u(d), specifying how much the user is willing to pay in order to receive the query results within time d); Mariposa divides a query graph into horizontal “strides,” analyzes each stride, and uses a greedy heuristic to find the “best” plan for all strides. We show that Mariposa's greedy heuristic can be arbitrarily far from the desired optimum. Applying a recent approach in multiobjective optimization algorithms to this problem, we show that the optimum cost-delay trade-off (Pareto) curve in Mariposa's framework can be approximated fast within any desired accuracy. We also present a polynomial algorithm for the general multiobjective query optimization problem, which approximates the optimum cost-delay trade-off arbitrarily well (without the restriction of Mariposa's heuristic stride subdivision).

132 citations


Journal ArticleDOI
TL;DR: An approximation algorithm for the weighted k-set packing problem is presented that combines the two paradigms by starting with an initial greedy solution and then repeatedly choosing the best possible local improvement; this yields the first asymptotic improvement over the straightforward approximation ratio of k.
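
A toy sketch of the greedy-plus-local-improvement combination follows, with two simplifying assumptions that are not from the paper: the greedy phase orders sets by weight per element, and each improvement swaps in a single set (the paper's improvement steps are more general):

```python
# Weighted set packing: greedy construction followed by best-improvement
# local search that swaps one candidate set in, evicting whatever it
# intersects, whenever that raises the total weight.

from typing import Dict, FrozenSet, List

def greedy_then_improve(w: Dict[FrozenSet[int], float]) -> List[FrozenSet[int]]:
    chosen: List[FrozenSet[int]] = []
    for s in sorted(w, key=lambda s: w[s] / len(s), reverse=True):
        if all(s.isdisjoint(t) for t in chosen):
            chosen.append(s)
    while True:
        moves = []
        for s in w:
            if s in chosen:
                continue
            conflicts = [t for t in chosen if not s.isdisjoint(t)]
            moves.append((w[s] - sum(w[t] for t in conflicts), s, conflicts))
        gain, s, conflicts = max(moves, key=lambda m: m[0],
                                 default=(0.0, None, []))
        if gain <= 0:
            return chosen
        chosen = [t for t in chosen if t not in conflicts] + [s]

w = {frozenset({1}): 2.0, frozenset({1, 2, 3}): 4.5, frozenset({4}): 1.0}
print(greedy_then_improve(w))  # [frozenset({4}), frozenset({1, 2, 3})]
```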

115 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: It is proved that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm, and the competitive ratio of any online algorithm for a uniform bounded delay buffer is bounded away from 1, independent of the delay size.
Abstract: We consider two types of buffering policies that are used in network switches supporting QoS (Quality of Service). In the FIFO type, packets must be released in the order they arrive; the difficulty in this case is the limited buffer space. In the bounded-delay type, each packet has a maximum delay time by which it must be released, or otherwise it is lost. We study the cases where the incoming streams overload the buffers, resulting in packet loss. In our model, each packet has an intrinsic value; the goal is to maximize the total value of packets transmitted. Our main contribution is a thorough investigation of the natural greedy algorithms in various models. For the FIFO model we prove tight bounds on the competitive ratio of the greedy algorithm that discards the packets with the lowest value. We also prove that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm. This algorithm can be as much as 1.5 times better than the standard tail-drop policy, which drops the latest packets. In the bounded delay model we show that the competitive ratio of any online algorithm for a uniform bounded delay buffer is bounded away from 1, independent of the delay size. We analyze the greedy algorithm in the general case and in three special cases: delay bound 2; link bandwidth 1; and only two possible packet values. Finally, we consider the off-line scenario. We give efficient optimal algorithms and study the relation between the bounded-delay and FIFO models in this case.
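
The FIFO policy identified as the best greedy variant, dropping the earliest packet among the lowest-valued ones, can be sketched as follows (an illustrative reading, not the authors' code):

```python
# Preemptive greedy FIFO admission: on arrival at a full buffer, drop the
# earliest packet among those of minimal value -- possibly the arrival itself.

from collections import deque

def admit(buffer: deque, packet_value: float, capacity: int) -> None:
    """`buffer` holds packet values in FIFO order and is mutated in place."""
    if len(buffer) < capacity:
        buffer.append(packet_value)
        return
    low = min(min(buffer), packet_value)
    if packet_value == low:
        return                 # the arrival is (one of) the cheapest: drop it
    buffer.remove(low)         # deque.remove deletes the earliest occurrence
    buffer.append(packet_value)

q = deque()
for v in [5, 1, 3, 1, 9]:      # capacity 3
    admit(q, v, 3)
print(list(q))                 # [5, 3, 9]: both 1-valued packets were dropped
```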

Proceedings ArticleDOI
06 Jul 2001
TL;DR: This work presents a primal-dual based constant factor approximation algorithm for this clustering problem, a simple greedy algorithm that achieves a logarithmic approximation and also applies when the distance function is asymmetric, and an incremental clustering algorithm that maintains a solution whose cost is at most a constant factor times that of the optimal, with a constant factor blowup in the number of clusters.
Abstract: We study the problem of clustering points in a metric space so as to minimize the sum of cluster diameters. Significantly improving on previous results, we present a primal-dual based constant factor approximation algorithm for this problem. We present a simple greedy algorithm that achieves a logarithmic approximation which also applies when the distance function is asymmetric. The previous best known result obtained a logarithmic approximation with a constant factor blowup in the number of clusters. We also obtain an incremental clustering algorithm that maintains a solution whose cost is at most a constant factor times that of optimal with a constant factor blowup in the number of clusters.

Proceedings ArticleDOI
01 May 2001
TL;DR: This work presents a general model for schedules with pipelining, shows that finding a valid schedule with minimum cost is NP-hard, and presents a greedy heuristic for finding good schedules.
Abstract: Database systems frequently have to execute a set of related queries, which share several common subexpressions. Multi-query optimization exploits this, by finding evaluation plans that share common results. Current approaches to multi-query optimization assume that common subexpressions are materialized. Significant performance benefits can be had if common subexpressions are pipelined to their uses, without being materialized. However, plans with pipelining may not always be realizable with limited buffer space, as we show. We present a general model for schedules with pipelining, and present a necessary and sufficient condition for determining validity of a schedule under our model. We show that finding a valid schedule with minimum cost is NP-hard. We present a greedy heuristic for finding good schedules. Finally, we present a performance study that shows the benefit of our algorithms on batches of queries from the TPCD benchmark.

Book ChapterDOI
18 Aug 2001
TL;DR: A natural greedy algorithm for the metric uncapacitated facility location problem is presented, and the method of dual fitting is used to analyze its approximation ratio, which turns out to be 1.861.
Abstract: We present a natural greedy algorithm for the metric uncapacitated facility location problem and use the method of dual fitting to analyze its approximation ratio, which turns out to be 1.861. The running time of our algorithm is O(m log m), where m is the total number of edges in the underlying complete bipartite graph between cities and facilities. We use our algorithm to improve recent results for some variants of the problem, such as the fault tolerant and outlier versions. In addition, we introduce a new variant which can be seen as a special case of the concave cost version of this problem.
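
A simplified sketch of a star-based greedy for this problem (it reuses opened facilities at zero additional opening cost, but omits the paper's dual-fitting analysis and its O(m log m) implementation):

```python
# Uncapacitated facility location: repeatedly open the "star" (a facility plus
# some unconnected cities) with minimum cost per city. For each facility the
# best star is a prefix of its cities sorted by distance.

from typing import List, Set

def greedy_ufl(open_cost: List[float], d: List[List[float]]) -> List[int]:
    """d[i][j]: distance from facility i to city j. Returns city -> facility."""
    n_fac, n_city = len(open_cost), len(d[0])
    unconnected: Set[int] = set(range(n_city))
    opened: Set[int] = set()
    assign = [-1] * n_city
    while unconnected:
        best = None  # (cost per city, facility, cities of the star)
        for i in range(n_fac):
            fee = 0.0 if i in opened else open_cost[i]
            cities = sorted(unconnected, key=lambda j: d[i][j])
            total = fee
            for k, j in enumerate(cities, start=1):
                total += d[i][j]
                if best is None or total / k < best[0]:
                    best = (total / k, i, cities[:k])
        _, i, star = best
        opened.add(i)
        for j in star:
            assign[j] = i
            unconnected.discard(j)
    return assign

costs = [3.0, 1.0]
dists = [[1.0, 1.0, 5.0],   # facility 0
         [4.0, 4.0, 1.0]]   # facility 1
print(greedy_ufl(costs, dists))  # [0, 0, 1]
```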

Journal ArticleDOI
TL;DR: Several heuristic procedures are designed and tested for the field technician scheduling problem, namely a greedy heuristic, a local search algorithm, and a greedy randomized adaptive search procedure (GRASP); results indicate that GRASP is the most effective but requires more CPU time.
Abstract: This paper addresses a field technician scheduling problem faced by many service providers in the telecommunications industry. The problem is to assign a set of jobs, at different locations with time windows, to a group of field technicians with different job skills. Such a problem can be viewed as a generalization of the well-known vehicle routing problem with time windows, since technician skills need to be matched with job types. We designed and tested several heuristic procedures for solving the problem, namely a greedy heuristic, a local search algorithm, and a greedy randomized adaptive search procedure (GRASP). Our computational results indicate that GRASP is the most effective among them but requires more CPU time. However, the unique structure of GRASP allows us to exploit parallelism to achieve linear speed-up with respect to the number of machines used.
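
The GRASP structure described here is generic enough to sketch with the problem-specific parts supplied as callbacks (the callbacks are placeholders, not the paper's construction or neighborhood):

```python
# Generic GRASP skeleton: each iteration builds a solution with a randomized
# greedy construction, polishes it with local search, and the best solution
# over all iterations is kept.

import random
from typing import Callable, TypeVar

S = TypeVar("S")

def grasp(construct: Callable[[random.Random], S],
          local_search: Callable[[S], S],
          cost: Callable[[S], float],
          iterations: int = 100,
          seed: int = 0) -> S:
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        candidate = local_search(construct(rng))
        if best is None or cost(candidate) < cost(best):
            best = candidate
    return best
```

Because the iterations are independent, they can be farmed out to separate machines, which is the source of the linear speed-up mentioned in the abstract.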

Journal ArticleDOI
TL;DR: In this paper, a fixed charge facility location model with coverage restrictions is developed, minimizing cost while maintaining an appropriate level of service in identifying facility locations, and two Lagrangian relaxation based heuristics are presented and tested.
Abstract: This paper develops a fixed charge facility location model with coverage restrictions, minimizing cost while maintaining an appropriate level of service in identifying facility locations. Further, it discusses the insights that can be gained using the model. Two Lagrangian relaxation based heuristics are presented and tested. Both heuristics use a greedy adding algorithm to calculate upper bounds and subgradient optimization to calculate lower bounds. While both procedures are capable of generating good solutions, one is computationally superior.

Proceedings ArticleDOI
25 Nov 2001
TL;DR: This paper proposes a location-aided power-aware routing protocol that dynamically makes local routing decisions so that a near-optimal power-efficient end-to-end route is formed for forwarding data packets.
Abstract: In multi-hop wireless ad-hoc networks, designing energy-efficient routing protocols is critical since nodes are power-constrained. However, it is also an inherently hard problem due to two important factors: first, the nodes may be mobile, requiring the energy-efficient routing protocol to be fully distributed and adaptive to the current states of nodes; second, the wireless links may be uni-directional due to asymmetric power configurations of adjacent nodes. In this paper, we propose a location-aided power-aware routing protocol that dynamically makes local routing decisions so that a near-optimal power-efficient end-to-end route is formed for forwarding data packets. The protocol is fully distributed, exploiting only location information of neighboring nodes at each routing node. Through rigorous theoretical analysis of our distributed protocol based on greedy algorithms, we are able to derive critical global properties with respect to end-to-end energy-efficient routes. Finally, preliminary simulation results are presented to verify the performance of our protocol.

Proceedings ArticleDOI
22 Apr 2001
TL;DR: An algorithm based on simulated annealing (SA) for the solution of the resulting problem of location area planning is proposed and the quality of the SA technique is investigated by comparing its results to greedy search and random generation methods.
Abstract: Location area (LA) planning plays an important role in cellular networks because of the trade-off caused by paging and registration signaling. The upper bound on the size of an LA is the service area of a mobile switching center (MSC). In that extreme case, the cost of paging is at its maximum, but no registration is needed. On the other hand, if each cell is an LA, the paging cost is minimal, but the registration cost is the largest. In general, the most important component of these costs is the load on the signaling resources. Between the extremes lie one or more partitions of the MSC service area that minimize the total cost of paging and registration. In this paper, we try to find an optimal method for determining the location areas. For that purpose, we use the available network information to formulate a realistic optimization problem. We propose an algorithm based on simulated annealing (SA) for the solution of the resulting problem. Then, we investigate the quality of the SA technique by comparing its results to greedy search and random generation methods.
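
A generic simulated-annealing skeleton of the kind applied here; the `neighbor` and `cost` callbacks stand in for the paper's move (e.g. re-assigning a border cell to an adjacent LA) and its paging-plus-registration cost, and all parameters are illustrative:

```python
# Simulated annealing: accept improving moves always and worsening moves with
# Boltzmann probability exp(-delta/t), under a geometric cooling schedule.

import math
import random
from typing import Callable, TypeVar

S = TypeVar("S")

def anneal(initial: S,
           neighbor: Callable[[S, random.Random], S],
           cost: Callable[[S], float],
           t0: float = 100.0,
           alpha: float = 0.95,
           steps_per_temp: int = 50,
           t_min: float = 1e-3,
           seed: int = 0) -> S:
    rng = random.Random(seed)
    current = best = initial
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            cand = neighbor(current, rng)
            delta = cost(cand) - cost(current)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = cand
            if cost(current) < cost(best):
                best = current
        t *= alpha
    return best
```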

Journal ArticleDOI
TL;DR: A computationally efficient way of incorporating look-ahead into fuzzy decision tree induction by jointly optimizing the node splitting criterion and the classifiability of instances along each branch of the node is presented.
Abstract: Decision tree induction is typically based on a top-down greedy algorithm that makes locally optimal decisions at each node. Due to the greedy and local nature of the decisions made at each node, there is considerable possibility of instances at the node being split along branches such that instances along some or all of the branches require a large number of additional nodes for classification. In this paper, we present a computationally efficient way of incorporating look-ahead into fuzzy decision tree induction. Our algorithm is based on establishing the decision at each internal node by jointly optimizing the node splitting criterion (information gain or gain ratio) and the classifiability of instances along each branch of the node. Simulation results confirm that the use of the proposed look-ahead method leads to smaller decision trees and, as a consequence, better test performance.
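
One crisp (non-fuzzy) way to sketch the joint optimization, offered only to illustrate one level of look-ahead; the paper's actual criterion uses fuzzy memberships and a classifiability measure, and the discount factor below is an invented stand-in:

```python
# Score a candidate split by its own information gain plus a discounted
# estimate of the best gain attainable inside each child (one-step look-ahead).

import math
from typing import List, Sequence, Tuple

Row = Tuple[List[float], int]   # (feature vector, class label)

def entropy(labels: Sequence[int]) -> float:
    n = len(labels)
    counts = (list(labels).count(v) for v in set(labels))
    return -sum((c / n) * math.log2(c / n) for c in counts)

def gain(rows: List[Row], feat: int, thr: float) -> float:
    left = [y for x, y in rows if x[feat] <= thr]
    right = [y for x, y in rows if x[feat] > thr]
    if not left or not right:
        return 0.0
    ys, n = [y for _, y in rows], len(rows)
    return (entropy(ys) - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

def lookahead_score(rows: List[Row], feat: int, thr: float,
                    cands: List[Tuple[int, float]], lam: float = 0.5) -> float:
    score = gain(rows, feat, thr)
    for keep in (lambda x: x[feat] <= thr, lambda x: x[feat] > thr):
        child = [(x, y) for x, y in rows if keep(x)]
        if len(child) > 1:
            score += lam * max((gain(child, f, t) for f, t in cands), default=0.0)
    return score

rows = [([0.0], 0), ([1.0], 0), ([2.0], 1), ([3.0], 1)]
cands = [(0, 0.5), (0, 1.5), (0, 2.5)]
print(max(cands, key=lambda c: lookahead_score(rows, c[0], c[1], cands)))
# (0, 1.5): the split whose children are already pure
```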

Journal ArticleDOI
TL;DR: It is shown that the optimal solution to the replenishment decision can be efficiently derived from a greedy algorithm, and that inspection-rework is optimally applied to a single source identified by the algorithm.
Abstract: We study a production-inventory system with multiple unreliable supply sources. Through inspection and rework, the system can improve the quality of the units received from the supply sources. There are two interleaved decisions: the replenishment quantities from the sources and the inspection-rework quantities among the units received. We show the optimal solution to the replenishment decision can be efficiently derived from a greedy algorithm, and inspection-rework is optimally applied to a single source identified by the algorithm. Furthermore, in the case of linear cost functions, it is optimal to place orders from two supply sources, i.e., dual sourcing. The results extend to the infinite-horizon case, where an order-up-to policy is optimal. The model also readily adapts to situations in which the supply imperfection takes the form of a reduced delivery quantity (yield loss).

Journal ArticleDOI
TL;DR: In this article, the authors extended the applicability of the greedy approach to wider classes of problems and gave new approximate solutions for two different types of problems, namely, finding the spanning tree of minimum weight among those whose diameter is bounded by D.

Journal ArticleDOI
TL;DR: This work introduces a novel method of data partitioning based on artificial ants that is shown to perform better than recursive partitioning on three well-studied data sets.
Abstract: Among the multitude of learning algorithms that can be employed for deriving quantitative structure-activity relationships, regression trees have the advantage of being able to handle large data sets, dynamically perform the key feature selection, and yield readily interpretable models. A conventional method of building a regression tree model is recursive partitioning, a fast greedy algorithm that works well in many, but not all, cases. This work introduces a novel method of data partitioning based on artificial ants. This method is shown to perform better than recursive partitioning on three well-studied data sets.

Proceedings Article
12 Mar 2001
TL;DR: In this paper, a polynomial-time optimal algorithm, based on efficient min-cost network-flow computations, is proposed to insert the maximum number of buffers into the free space between the circuit blocks.
Abstract: The problem of planning the locations of a large number of buffers is of utmost importance in deep submicron VLSI design. Recently, Cong et al. in [1] proposed an algorithm to directly address this problem. Given a placement of circuit blocks, a key step in [1] is to use the free space between the circuit blocks for inserting as many buffers as possible. This step is very important because if all buffers can be inserted into existing spaces, no expansion of chip area would be needed. An effective greedy heuristic was used in [1] for this step. In this paper, we give a polynomial-time optimal algorithm for solving the problem of inserting the maximum number of buffers into the free space between the circuit blocks. In the case where the "costs" of placing a buffer at different locations are different, we can guarantee to insert the maximum number of buffers with minimum total cost. Our algorithm is based on efficient min-cost network-flow computations.

Book ChapterDOI
28 Aug 2001
TL;DR: This paper gives approximation algorithms for the minimum test collection problem in the case when all the tests have a small cardinality, significantly improving the performance guarantee achievable by the greedy algorithm.
Abstract: The minimum test collection problem is defined as follows. Given a ground set S and a collection C of tests (subsets of S), find the minimum subcollection C′ of C such that for every pair of elements (x, y) in S there exists a test in C′ that contains exactly one of x and y. It is well known that the greedy algorithm gives a 1 + 2 ln n approximation for the test collection problem where n = |S|, the size of the ground set. In this paper, we show that this algorithm is close to the best possible, namely that there is no o(log n)-approximation algorithm for the test collection problem unless P = NP. We give approximation algorithms for this problem in the case when all the tests have a small cardinality, significantly improving the performance guarantee achievable by the greedy algorithm. In particular, for instances with test sizes at most k we derive an O(log k) approximation. We show APX-hardness of the version with test sizes at most two, and present an approximation algorithm with ratio 7/6 + ε for any fixed ε > 0.
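
The greedy algorithm carrying the 1 + 2 ln n guarantee mentioned above can be stated directly (a generic implementation):

```python
# Greedy test collection: repeatedly pick the test that separates the largest
# number of still-unseparated pairs of ground-set elements.

from itertools import combinations
from typing import List, Set, Tuple

def greedy_test_collection(ground: Set[str], tests: List[Set[str]]) -> List[Set[str]]:
    unseparated: Set[Tuple[str, str]] = set(combinations(sorted(ground), 2))
    chosen: List[Set[str]] = []
    while unseparated:
        def separated_by(t: Set[str]) -> Set[Tuple[str, str]]:
            return {(x, y) for (x, y) in unseparated if (x in t) != (y in t)}
        best = max(tests, key=lambda t: len(separated_by(t)))
        gained = separated_by(best)
        if not gained:
            raise ValueError("instance is not separable by the given tests")
        chosen.append(best)
        unseparated -= gained
    return chosen

S = {"a", "b", "c"}
C = [{"a"}, {"b"}, {"a", "b"}]
print(greedy_test_collection(S, C))  # two tests suffice, e.g. [{'a'}, {'b'}]
```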

Proceedings ArticleDOI
07 May 2001
TL;DR: It is concluded that the modified matching pursuit (MMP) algorithm offers the best compromise between performance and complexity using these search techniques.
Abstract: Matching pursuit (MP) uses a greedy search to construct a subset of vectors, from a larger set, which best represents a signal of interest. We extend this search for the best subset by keeping the K vectors which maximize the selection criterion at each iteration. This is termed the MP:K algorithm and represents a suboptimal search through the tree of all possible subsets where each node is limited to having K children. As a more suboptimal search, we can use the M-L search to select a subset of dictionary vectors, leading to the MP:M-L algorithm. We compare the computation and storage requirements for three variants of the MP algorithm using these searches. Through simulations, the significantly improved performance obtained using the MP:K and MP:M-L algorithms is demonstrated. We conclude that the modified matching pursuit (MMP) algorithm offers the best compromise between performance and complexity using these search techniques.
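
A hedged sketch of the beam-search idea behind MP:K (one plausible reading of the search; the paper's exact expansion and pruning rules may differ, and plain MP is the special case K = 1):

```python
# Beam-search matching pursuit: each kept candidate is expanded with its K best
# atoms (largest |<residual, atom>|) and the K expansions with the smallest
# residual norm survive to the next iteration.

import numpy as np

def mp_k(signal: np.ndarray, dictionary: np.ndarray, n_atoms: int, K: int):
    """`dictionary` has unit-norm atoms as columns. Returns chosen indices."""
    beam = [(signal.copy(), ())]               # (residual, chosen index tuple)
    for _ in range(n_atoms):
        children = []
        for r, idx in beam:
            scores = np.abs(dictionary.T @ r)
            for a in np.argsort(scores)[-K:]:  # this node's K best atoms
                if a in idx:
                    continue
                coef = dictionary[:, a] @ r
                children.append((r - coef * dictionary[:, a], idx + (int(a),)))
        children.sort(key=lambda c: np.linalg.norm(c[0]))
        beam = children[:K]                    # prune back to beam width K
    return beam[0][1]

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 40))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 17]
print(mp_k(x, D, n_atoms=2, K=4))  # likely recovers atoms 3 and 17
```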

Journal ArticleDOI
TL;DR: In this article, the authors consider the online load balancing problem with m identical machines (servers) and a sequence of jobs; they show that for the sum of the squares of the loads the greedy algorithm performs within 4/3 of the optimum, and that no online algorithm achieves a better competitive ratio.
Abstract: We consider the on-line load balancing problem where there are m identical machines (servers) and a sequence of jobs. The jobs arrive one by one and should be assigned to one of the machines in an on-line fashion. The goal is to minimize the sum (over all machines) of the squares of the loads, instead of the traditional maximum load. We show that for the sum of the squares the greedy algorithm performs within 4/3 of the optimum, and no on-line algorithm achieves a better competitive ratio. Interestingly, we show that the performance of Greedy is not monotone in the number of machines. More specifically, the competitive ratio is 4/3 for any number of machines divisible by 3 but strictly less than 4/3 in all the other cases (although it approaches 4/3 for a large number of machines). To prove that Greedy is optimal, we show a lower bound of 4/3 for any algorithm for three machines. Surprisingly, we provide a new on-line algorithm that performs within 4/3 − δ of the optimum, for some fixed δ > 0, for any sufficiently large number of machines. This implies that the asymptotic competitive ratio of our new algorithm is strictly better than the competitive ratio of any possible on-line algorithm. Such a phenomenon is not known to occur for the classic maximum load problem. Minimizing the sum of the squares is equivalent to minimizing the load vector with respect to the l_2 norm. We extend our techniques and analyze the exact competitive ratio of Greedy with respect to the l_p norm. This ratio turns out to be 2 − Θ((ln p)/p). We show that Greedy is optimal for two machines but design an algorithm whose asymptotic competitive ratio is better than the ratio of Greedy.
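
For the sum-of-squares objective, the natural greedy rule is equivalent to placing each arriving job on a currently least-loaded machine, since the increase 2*l*w + w^2 is minimized by the smallest load l. A minimal sketch (toy example, not from the paper):

```python
# Online greedy for minimizing the sum of squared machine loads: assign each
# arriving job to a machine with the currently smallest load.

import heapq
from typing import List

def greedy_l2(jobs: List[float], m: int) -> List[float]:
    heap = [(0.0, i) for i in range(m)]   # (load, machine id)
    loads = [0.0] * m
    for w in jobs:
        load, i = heapq.heappop(heap)     # least-loaded machine
        loads[i] = load + w
        heapq.heappush(heap, (loads[i], i))
    return loads

loads = greedy_l2([3, 3, 2, 2, 2], m=3)
print(sorted(loads), sum(l * l for l in loads))
# [3.0, 4.0, 5.0] 50.0 -- the offline optimum (4, 4, 4) costs 48,
# so Greedy is within the 4/3 bound here
```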

Book ChapterDOI
TL;DR: This paper focuses on the problem of how to lay out multicast sessions so as to cover a set of links of interest within a network, and defines two variations of this layout (cover) problem that differ in what it means for a link to be covered.
Abstract: There has been considerable activity recently to develop monitoring and debugging tools for a multicast session (tree). With these tools in mind, we focus on the problem of how to lay out multicast sessions so as to cover a set of links of interest within a network. We define two variations of this layout (cover) problem that differ in what it means for a link to be covered. We then focus on the minimum cost problem, to determine the minimum cost set of trees that cover the links in question. We show that, with few exceptions, the minimum cost problems are NP-hard and that even finding an approximation within a certain factor is NP-hard. One exception is when the underlying network topology is a tree. For this case, we demonstrate an efficient algorithm that finds the optimal solution. We also present several computationally efficient heuristics and their evaluation through simulation. We find that two heuristics, a greedy heuristic that combines sets of trees with three or fewer receivers, and a heuristic based on our tree algorithm, both perform reasonably well. The remainder of the paper applies our techniques to the vBNS network and randomly generated networks, examining the effectiveness of the different heuristics.

Proceedings ArticleDOI
07 Oct 2001
TL;DR: An integrated greedy heuristic that simultaneously deals with the assignment and the sequencing subproblems is developed to solve the general case with more than two jobs.
Abstract: The job shop scheduling problem (JSP) deals with the sequencing operations of a set of jobs on a set of machines with minimum cost. The flexible job shop scheduling problem (FJSP) is a generalization of the JSP, which is concerned with both the assignment of machines to operations and the sequencing of the operations on the assigned machines. The paper first presents an extension of the geometric approach for solving a two-job shop problem, when there is one flexible job and the second job is a job shop job. Based on this extension and the notion of the combined job, an integrated greedy heuristic that simultaneously deals with the assignment and the sequencing subproblems is developed to solve the general case with more than two jobs. The results obtained by the greedy heuristic on existing benchmarks from the literature are promising.

Journal ArticleDOI
TL;DR: A tight bound is given on the approximation ratio of a greedy heuristic (a discrete analog of the steepest descent algorithm) for the problem of minimizing a supermodular set function, a special case of which is the NP-hard p-median problem.

Journal ArticleDOI
TL;DR: In this article, the authors prove one lower estimate for the rate of convergence of the Pure Greedy Algorithm with regard to a general dictionary, and another lower estimate for the Weak Greedy Algorithm with a special weakness sequence.
Abstract: We prove one lower estimate for the rate of convergence of the Pure Greedy Algorithm with regard to a general dictionary and another lower estimate for the rate of convergence of the Weak Greedy Algorithm with a special weakness sequence τ = {t}, 0 < t < 1, with regard to a general dictionary. The second lower estimate combined with the known upper estimate gives the right (in the sense of order) dependence of the exponent in the rate of convergence on the parameter t as t → 0.
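
For reference, the standard definitions of the two algorithms whose rates are estimated (textbook formulations, not quoted from the paper):

```latex
% In a Hilbert space H with dictionary D (norm-one elements with dense span),
% the Pure Greedy Algorithm sets r_0 = f and, for m >= 1, picks
\[
  g_m \in \operatorname*{arg\,max}_{g \in D} |\langle r_{m-1}, g \rangle|,
  \qquad
  r_m = r_{m-1} - \langle r_{m-1}, g_m \rangle\, g_m .
\]
% The Weak Greedy Algorithm with weakness sequence \tau = (t_m) relaxes the
% selection step, requiring only
\[
  |\langle r_{m-1}, g_m \rangle| \ \ge\ t_m \sup_{g \in D} |\langle r_{m-1}, g \rangle| .
\]
```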