
Showing papers on "Greedy algorithm published in 2002"


Book ChapterDOI
20 Aug 2002
TL;DR: It is proved that DISCOVER finds, without redundancy, all relevant candidate networks (whose size can be data-bound) by exploiting the structure of the schema, and that the selection of the optimal execution plan (the way to reuse common subexpressions) is NP-complete.
Abstract: DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.
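The greedy Plan Generator can be sketched abstractly: repeatedly materialize the shared join subexpression with the greatest estimated saving, until no subexpression is reused enough to pay for itself. A minimal Python sketch under an assumed saving function (hypothetical data structures, not DISCOVER's actual cost model):

    # Hypothetical sketch of greedy common-subexpression reuse in the spirit
    # of DISCOVER's Plan Generator (illustrative only, not the paper's code).

    def greedy_plan(candidate_networks, saving):
        """candidate_networks: list of sets of join subexpressions.
        saving(expr, k): assumed estimate of the cost saved by materializing
        expr once and reusing it across k candidate networks."""
        exprs = {e for net in candidate_networks for e in net}
        materialized = []
        while True:
            best, best_gain = None, 0.0
            for e in exprs - set(materialized):
                k = sum(e in net for net in candidate_networks)
                if k >= 2 and saving(e, k) > best_gain:
                    best, best_gain = e, saving(e, k)
            if best is None:
                return materialized   # nothing left worth sharing
            materialized.append(best)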

892 citations


Book
21 Aug 2002
TL;DR: In this article, the authors integrate two areas of computer science, namely data mining and evolutionary algorithms, to discover comprehensible, interesting knowledge, which is potentially useful for intelligent decision making.
Abstract: From the Publisher: This book integrates two areas of computer science, namely data mining and evolutionary algorithms. Both these areas have become increasingly popular in the last few years, and their integration is currently an active research area. In general, data mining consists of extracting knowledge from data. In particular, in this book we emphasize the importance of discovering comprehensible, interesting knowledge, which is potentially useful for intelligent decision making. In a nutshell, the motivation for applying evolutionary algorithms to data mining is that evolutionary algorithms are robust search methods which perform a global search in the space of candidate solutions. In contrast, most rule induction methods perform a local, greedy search in the space of candidate rules. Intuitively, the global search of evolutionary algorithms can discover interesting rules and patterns that would be missed by the greedy search performed by most rule induction methods.

608 citations


Proceedings ArticleDOI
19 May 2002
TL;DR: A simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61 and proving a lower bound of 1+2/e on the approximability of the k-median problem.
Abstract: We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61. We use this algorithm to find better approximation algorithms for the capacitated facility location problem with soft capacities and for a common generalization of the k-median and facility location problems. We also prove a lower bound of 1+2/e on the approximability of the k-median problem. At the end, we present a discussion about the techniques we have used in the analysis of our algorithm, including a computer-aided method for proving bounds on the approximation factor.
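The flavor of such algorithms can be shown with a short sketch of the textbook "cheapest star" greedy, in which a facility's opening cost is zeroed once it is opened; this is in the spirit of, but not identical to, the paper's 1.61-approximation:

    # Textbook "cheapest star" greedy for metric uncapacitated facility
    # location (a sketch, not the paper's exact algorithm or analysis).
    # f[i] is the opening cost of facility i; d[i][j] the distance to client j.

    def greedy_facility_location(f, d):
        n_fac, n_cli = len(f), len(d[0])
        open_cost = list(f)                 # zeroed once a facility is opened
        unconnected = set(range(n_cli))
        opened, assignment = set(), {}
        while unconnected:
            best = None                     # (cost per client, facility, clients)
            for i in range(n_fac):
                cand = sorted(unconnected, key=lambda j: d[i][j])
                total = open_cost[i]
                for k, j in enumerate(cand, start=1):
                    total += d[i][j]
                    if best is None or total / k < best[0]:
                        best = (total / k, i, cand[:k])
            _, i, clients = best
            opened.add(i)
            open_cost[i] = 0                # reusing an open facility is free
            for j in clients:
                assignment[j] = i
                unconnected.discard(j)
        return opened, assignment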

487 citations


Proceedings ArticleDOI
24 Jun 2002
TL;DR: This paper presents an algorithm for generating attack graphs using model checking as a subroutine, and provides a formal characterization of this problem, proving that it is polynomially equivalent to the minimum hitting set problem and presenting a greedy algorithm with provable bounds.
Abstract: An attack graph is a succinct representation of all paths through a system that end in a state where an intruder has successfully achieved his goal. Today Red Teams determine the vulnerability of networked systems by drawing gigantic attack graphs by hand. Constructing attack graphs by hand is tedious, error-prone, and impractical for large systems. By viewing an attack as a violation of a safety property, we can use off-the-shelf model checking technology to produce attack graphs automatically: a successful path from the intruder's viewpoint is a counterexample produced by the model checker. In this paper we present an algorithm for generating attack graphs using model checking as a subroutine. Security analysts use attack graphs for detection, defense and forensics. In this paper we present a minimization analysis technique that allows analysts to decide which minimal set of security measures would guarantee the safety of the system. We provide a formal characterization of this problem: we prove that it is polynomially equivalent to the minimum hitting set problem and we present a greedy algorithm with provable bounds. We also present a reliability analysis technique that allows analysts to perform a simple cost-benefit trade-off depending on the likelihoods of attacks. By interpreting attack graphs as Markov Decision Processes we can use the value iteration algorithm to compute the probabilities of intruder success for each attack in the graph.
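The standard greedy for minimum hitting set repeatedly chooses the element (here, a security measure) that hits the most remaining sets (attack paths); it carries the usual logarithmic guarantee. A minimal sketch with a hypothetical input representation:

    # Standard greedy for minimum hitting set (a sketch; input encoding is
    # an assumption, not the paper's implementation).

    def greedy_hitting_set(sets_to_hit):
        """sets_to_hit: list of sets of measures; returns a small collection
        of measures intersecting every set."""
        unhit = [set(s) for s in sets_to_hit if s]
        chosen = set()
        while unhit:
            # Pick the measure hitting the largest number of remaining sets.
            elements = set().union(*unhit)
            best = max(elements, key=lambda e: sum(e in s for s in unhit))
            chosen.add(best)
            unhit = [s for s in unhit if best not in s]
        return chosen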

444 citations


Posted Content
TL;DR: The method of dual fitting and the idea of factor-revealing LP are formalized and used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem.
Abstract: In this paper, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n^3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.

409 citations


Journal ArticleDOI
TL;DR: This paper presents a method for solving the multi-depot location-routing problem (MDLRP) in which several unrealistic assumptions are relaxed, and suggests parameter settings throughout the solution procedure for obtaining quick and favorable solutions.

372 citations


Book ChapterDOI
03 Apr 2002
TL;DR: Techniques that are useful for the detection of dense subgraphs (quasi-cliques) in massive sparse graphs whose vertex set, but not the edge set, fits in RAM are described.
Abstract: We describe techniques that are useful for the detection of dense subgraphs (quasi-cliques) in massive sparse graphs whose vertex set, but not the edge set, fits in RAM. The algorithms rely on efficient semi-external memory algorithms used to preprocess the input and on greedy randomized adaptive search procedures (GRASP) to extract the dense subgraphs. A software platform was put together allowing graphs with hundreds of millions of nodes to be processed. Computational results illustrate the effectiveness of the proposed methods.
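A bare-bones illustration of GRASP for this task: a greedy randomized construction phase grows a vertex set by drawing from a restricted candidate list, and a local search phase tries improving swaps. This sketch is in-memory only and omits the paper's semi-external-memory preprocessing; the adjacency representation is an assumption:

    import random

    # Minimal GRASP sketch for dense subgraphs (illustrative only).
    # adj: dict mapping each vertex to the set of its neighbors.

    def density(adj, S):
        edges = sum(len(adj[v] & S) for v in S) // 2
        pairs = len(S) * (len(S) - 1) // 2
        return edges / pairs if pairs else 0.0

    def grasp_dense_subgraph(adj, size, alpha=0.3, iters=50):
        best, best_d = None, -1.0
        for _ in range(iters):
            # Construction: grow from a random seed, each step choosing
            # randomly from a restricted candidate list (RCL) of the
            # vertices best connected to the current set.
            S = {random.choice(list(adj))}
            while len(S) < size:
                cand = sorted((v for v in adj if v not in S),
                              key=lambda v: len(adj[v] & S), reverse=True)
                rcl = cand[:max(1, int(alpha * len(cand)))]
                S.add(random.choice(rcl))
            # Local search: take the first improving 1-swap, repeat until
            # a local optimum is reached.
            improved = True
            while improved:
                improved = False
                for v, u in ((v, u) for v in list(S) for u in adj if u not in S):
                    T = (S - {v}) | {u}
                    if density(adj, T) > density(adj, S):
                        S, improved = T, True
                        break
            if density(adj, S) > best_d:
                best, best_d = set(S), density(adj, S)
        return best, best_d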

355 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the problem of managing the new generation of Agile Earth Observing Satellites (AEOS) and present different methods which have been investigated in order to solve a simplified version of the complete problem.

326 citations


Journal ArticleDOI
TL;DR: In this paper, a greedy algorithm for learning a Gaussian mixture is proposed, which uses a combination of global and local search each time a new component is added to the mixture and achieves solutions superior to EM with k components in terms of the likelihood of a test set.
Abstract: Learning a Gaussian mixture with a local algorithm like EM can be difficult because (i) the true number of mixing components is usually unknown, (ii) there is no generally accepted method for parameter initialization, and (iii) the algorithm can get trapped in one of the many local maxima of the likelihood function. In this paper we propose a greedy algorithm for learning a Gaussian mixture which tries to overcome these limitations. In particular, starting with a single component and adding components sequentially until a maximum number k, the algorithm is capable of achieving solutions superior to EM with k components in terms of the likelihood of a test set. The algorithm is based on recent theoretical results on incremental mixture density estimation, and uses a combination of global and local search each time a new component is added to the mixture.
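The structure of such greedy mixture learning can be shown in a simplified one-dimensional sketch: start with one Gaussian, repeatedly insert a component where the current mixture fits worst, and refine with a few EM steps. This only illustrates the overall loop; the paper's component-insertion search is more careful:

    import numpy as np

    # Simplified 1-D greedy mixture learning (illustrative; the insertion
    # heuristic here is an assumption, not the paper's method).

    def gauss(x, mu, s2):
        return np.exp(-(x - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

    def em_steps(x, w, mu, s2, iters=20):
        for _ in range(iters):
            r = w * gauss(x[:, None], mu, s2) + 1e-12   # responsibilities (n, k)
            r /= r.sum(axis=1, keepdims=True)
            nk = r.sum(axis=0)
            w = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            s2 = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        return w, mu, s2

    def greedy_gmm(x, k_max):
        w, mu, s2 = np.array([1.0]), np.array([x.mean()]), np.array([x.var() + 1e-6])
        while len(w) < k_max:
            # Insert a component at the sample the mixture explains worst.
            lik = (w * gauss(x[:, None], mu, s2)).sum(axis=1)
            k = len(w)
            w = np.append(w * k / (k + 1), 1.0 / (k + 1))
            mu = np.append(mu, x[np.argmin(lik)])
            s2 = np.append(s2, x.var())
            w, mu, s2 = em_steps(x, w, mu, s2)
        return w, mu, s2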

295 citations


Journal ArticleDOI
TL;DR: In this article, the problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem, where the multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve.
Abstract: The problem of designing spatially cohesive nature reserve systems that meet biodiversity objectives is formulated as a nonlinear integer programming problem. The multiobjective function minimises a combination of boundary length, area and failed representation of the biological attributes we are trying to conserve. The task is to reserve a subset of sites that best meet this objective. We use data on the distribution of habitats in the Northern Territory, Australia, to show how simulated annealing and a greedy heuristic algorithm can be used to generate good solutions to such large reserve design problems, and to compare the effectiveness of these methods.
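A greedy heuristic of the kind compared in the paper can be sketched as follows: repeatedly add the site covering the most unmet biological attributes per unit cost. This simplified sketch ignores the boundary-length term of the paper's objective:

    # Greedy site selection for reserve design (a sketch; the paper's
    # objective also penalizes boundary length, omitted here).

    def greedy_reserve(sites, targets):
        """sites: dict site -> (cost, set of attributes present).
        targets: set of attributes that must be represented."""
        unmet, chosen = set(targets), []
        remaining = set(sites)
        while unmet and remaining:
            # Pick the site covering the most unmet attributes per unit cost.
            site = max(remaining, key=lambda s: len(sites[s][1] & unmet) / sites[s][0])
            if not sites[site][1] & unmet:
                break                      # no remaining site helps
            chosen.append(site)
            unmet -= sites[site][1]
            remaining.discard(site)
        return chosen, unmet               # unmet is empty iff all targets covered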

269 citations


Book ChapterDOI
TL;DR: This algorithm uses an idea of cost scaling, a greedy algorithm of Jain, Mahdian and Saberi, and a greedy augmentation procedure of Charikar, Guha and Khuller to solve the uncapacitated metric facility location problem.
Abstract: In this paper we present a 1.52-approximation algorithm for the uncapacitated metric facility location problem. This algorithm uses an idea of cost scaling, a greedy algorithm of Jain, Mahdian and Saberi, and a greedy augmentation procedure of Charikar, Guha and Khuller. We also present a 2.89-approximation for the capacitated metric facility location problem with soft capacities.

Journal ArticleDOI
TL;DR: Computational experiments show that the greedy algorithm and the nearest neighbor algorithm (NN), popular choices for tour construction heuristics, work at an acceptable level for the Euclidean TSP, but produce very poor results for the general Symmetric and Asymmetric TSP (STSP and ATSP).
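For reference, the nearest neighbor construction heuristic is only a few lines; a Euclidean sketch with points as (x, y) tuples:

    import math

    # Nearest-neighbor tour construction, one of the heuristics evaluated
    # in the paper (Euclidean sketch).

    def nearest_neighbor_tour(points, start=0):
        unvisited = set(range(len(points))) - {start}
        tour = [start]
        while unvisited:
            prev = tour[-1]
            nxt = min(unvisited, key=lambda j: math.dist(points[prev], points[j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour     # the tour closes by returning from tour[-1] to tour[0]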

Journal ArticleDOI
TL;DR: An efficient genetic algorithm (GA) to solve the traveling salesman problem with precedence constraints is presented and the key concept is a topological sort (TS), which is defined as an ordering of vertices in a directed graph.

Journal ArticleDOI
TL;DR: An algorithm is presented that builds homogeneous blocks of identically oriented items and generates the desired block arrangements; the solutions provided by the greedy heuristic are improved by a tree search.

Journal ArticleDOI
TL;DR: It is found that topology-informed replica placement methods can achieve average client latencies which are within a factor of 1.1-1.2 of the greedy algorithm, but only if the placement method is designed carefully.

Proceedings ArticleDOI
06 Jan 2002
TL;DR: In this paper, the problem of scheduling jobs that are given as groups of non-intersecting segments on the real line is formulated as the problem of finding a maximum weight independent set in a t-interval graph.
Abstract: We consider the problem of scheduling jobs that are given as groups of non-intersecting segments on the real line. Each job J_j is associated with an interval, I_j, which consists of up to t segments, for some t ≥ 1, and a positive weight, w_j; two jobs are in conflict if any of their segments intersect. Such jobs show up in a wide range of applications, including the transmission of continuous-media data, allocation of linear resources (e.g. bandwidth in linear processor arrays), and in computational biology/geometry. The objective is to schedule a subset of non-conflicting jobs of maximum total weight. In a single machine environment, our problem can be formulated as the problem of finding a maximum weight independent set in a t-interval graph (the special case of t = 1 is an ordinary interval graph). We show that, for t ≥ 2, this problem is APX-hard, even for highly restricted instances. Our main result is a 2t-approximation algorithm for general instances, based on a novel fractional version of the Local Ratio technique. Previously, the problem was considered only for proper union graphs, a restricted subclass of t-interval graphs, and the approximation factor achieved was (2t - 1 + 1/2t). A bi-criteria polynomial time approximation scheme (PTAS) is developed for the subclass of t-union graphs. In the online case, we consider uniform weight jobs that consist of at most two segments. We show that when the resulting 2-interval graph is proper, a simple greedy algorithm is 3-competitive, while any randomized algorithm has competitive ratio at least 2.5. For general instances, we give a randomized O(log^2 R)-competitive (or O((log R)^{2+ε})-competitive) algorithm, where R is the known (unknown) ratio between the longest and the shortest segment in the input sequence.
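The simple greedy from the online uniform-weight setting is easy to state: accept an arriving job iff none of its segments intersects a segment of an already accepted job. A minimal sketch with jobs given as lists of (start, end) segments:

    # Online greedy for uniform-weight segment jobs (a sketch of the rule
    # analyzed in the paper; the input encoding is an assumption).

    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    def greedy_accept(jobs):
        accepted, taken = [], []        # taken: segments of accepted jobs
        for job in jobs:                # jobs arrive online in this order
            if all(not overlaps(s, t) for s in job for t in taken):
                accepted.append(job)
                taken.extend(job)
            # else: reject irrevocably (online model)
        return accepted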

Journal ArticleDOI
TL;DR: In this article, the authors further improve the performance of the GFG algorithm by reducing its average hop count, in part by adding a sooner-back procedure for earlier escape from FACE mode.
Abstract: Several localized position-based routing algorithms for wireless networks were described recently. In the greedy routing algorithm (which has performance close to the shortest path algorithm, if successful), the sender or node S currently holding the message m forwards m to the one of its neighbors that is closest to the destination. The algorithm fails if S does not have any neighbor that is closer to the destination than S. The FACE algorithm guarantees the delivery of m if the network, modeled by a unit graph, is connected. The GFG algorithm combines the greedy and FACE algorithms. The greedy algorithm is applied as long as possible, until delivery or a failure. In case of failure, the algorithm switches to FACE until a node closer to the destination than the last failure node is found, at which point the greedy algorithm is applied again. Past traffic does not need to be memorized at nodes. In this paper we further improve the performance of the GFG algorithm by reducing its average hop count. First we improve the FACE algorithm by adding a sooner-back procedure for earlier escape from FACE mode. Then we perform a shortcut procedure at each forwarding node S: node S uses the local information available to calculate as many hops as possible and forwards the packet directly to the last known hop instead of forwarding it to the next hop. The second improvement is based on the concept of dominating sets. Each node in the network is classified as internal or not, based on the geographic positions of its neighboring nodes. The internal nodes define a connected dominating set: each node must be either internal or directly connected to an internal node, and the internal nodes are connected. We apply several existing definitions of internal nodes, namely the concepts of intermediate, inter-gateway and gateway nodes. We propose to run GFG routing, enhanced by the shortcut procedure, on the dominating set, except possibly for the first and last hops. The performance of the proposed algorithms is measured by comparing their average hop count with the hop counts of the basic GFG algorithm and the benchmark shortest path algorithm; very significant improvements were obtained for low-degree graphs. More precisely, we obtain a localized routing algorithm that guarantees delivery and has a very low excess hop count compared to the shortest path algorithm. The experimental data show that the length of the additional path (in excess of the shortest path length) can be reduced to about half of that of the existing GFG algorithm.
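The greedy forwarding step at the heart of GFG is compact: forward to the neighbor closest to the destination, and declare failure at a local minimum (the point where GFG would switch to FACE). A sketch assuming hypothetical pos and nbrs maps:

    import math

    # Basic greedy geographic forwarding (the GREEDY half of GFG; the FACE
    # recovery mode is omitted). pos: node -> (x, y); nbrs: node -> neighbors.

    def greedy_route(pos, nbrs, src, dst, max_hops=10_000):
        path, cur = [src], src
        while cur != dst and len(path) <= max_hops:
            nxt = min(nbrs[cur], key=lambda v: math.dist(pos[v], pos[dst]),
                      default=None)
            if nxt is None or math.dist(pos[nxt], pos[dst]) >= math.dist(pos[cur], pos[dst]):
                return None   # local minimum: GFG switches to FACE here
            path.append(nxt)
            cur = nxt
        return path if cur == dst else None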

Journal ArticleDOI
TL;DR: Two simple randomized approximation algorithms are described, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum of two linear programming relaxations of the problem.
Abstract: We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent $\alpha$-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least $e/(e-1) \approx 1.5819$. Both algorithms may be derandomized; their deterministic versions run in O(n^2) time. The randomized algorithms also apply to the on-line setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.
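The α-point idea can be illustrated in code: build a preemptive schedule by always running the available job with the largest weight-to-processing-time ratio, record each job's α-point (the time its α-fraction of processing completes), then sequence jobs nonpreemptively in α-point order. A simplified single-α sketch, not the paper's exact LP-based construction:

    import heapq

    # Illustrative alpha-point scheduling for 1|r_j|sum w_j C_j (a sketch;
    # the preemptive rule and rounding here are simplifications).

    def alpha_point_schedule(jobs, alpha=0.5):
        """jobs: list of (release, proc, weight) tuples."""
        n = len(jobs)
        by_release = sorted(range(n), key=lambda j: jobs[j][0])
        rem = [p for _, p, _ in jobs]
        apoint, avail, t, i, finished = [None] * n, [], 0.0, 0, 0
        while finished < n:
            if not avail:                          # idle until the next release
                t = max(t, jobs[by_release[i]][0])
            while i < n and jobs[by_release[i]][0] <= t:
                j = by_release[i]
                heapq.heappush(avail, (-jobs[j][2] / jobs[j][1], j))
                i += 1
            j = avail[0][1]
            nxt = jobs[by_release[i]][0] if i < n else float("inf")
            run = min(rem[j], nxt - t)             # run j until done or next release
            need = rem[j] - jobs[j][1] * (1 - alpha)
            if apoint[j] is None and run >= need:
                apoint[j] = t + need               # alpha-fraction of j completes here
            rem[j] -= run
            t += run
            if rem[j] <= 1e-12:
                heapq.heappop(avail)
                finished += 1
        # Nonpreemptive schedule in order of alpha-points.
        order = sorted(range(n), key=lambda j: apoint[j])
        t, completion = 0.0, {}
        for j in order:
            t = max(t, jobs[j][0]) + jobs[j][1]
            completion[j] = t
        return order, completion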

Posted Content
Neal E. Young
TL;DR: In this article, the authors explore how to avoid the time bottleneck for randomized rounding algorithms for packing and covering linear programs (either mixed integer linear programs or linear programs with no negative coefficients).
Abstract: Randomized rounding is a standard method, based on the probabilistic method, for designing combinatorial approximation algorithms. In Raghavan's seminal paper introducing the method (1988), he writes: "The time taken to solve the linear program relaxations of the integer programs dominates the net running time theoretically (and, most likely, in practice as well)." This paper explores how this bottleneck can be avoided for randomized rounding algorithms for packing and covering problems (linear programs, or mixed integer linear programs, having no negative coefficients). The resulting algorithms are greedy algorithms, and are faster and simpler to implement than standard randomized-rounding algorithms. This approach can also be used to understand Lagrangian-relaxation algorithms for packing/covering linear programs: such algorithms can be viewed as (derandomized) randomized-rounding schemes.
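The classic greedy for set cover, a canonical covering problem, shows the kind of algorithm that emerges: it matches the logarithmic guarantee of randomized rounding without solving the LP. A minimal sketch assuming a feasible instance:

    # Classic greedy set cover (a sketch of the flavor of algorithm the
    # paper derives; not the paper's code).

    def greedy_set_cover(universe, subsets, weight):
        """subsets: dict name -> set of elements; the union must cover universe."""
        uncovered, cover = set(universe), []
        while uncovered:
            # Lowest weight per newly covered element.
            s = min((s for s in subsets if subsets[s] & uncovered),
                    key=lambda s: weight[s] / len(subsets[s] & uncovered))
            cover.append(s)
            uncovered -= subsets[s]
        return cover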

Journal ArticleDOI
Feng Cheng, Markus Ettl, Grace Lin, David D. Yao
TL;DR: A nonlinear optimization model with multiple constraints, reflecting the service levels offered to different market segments is developed, and an exact algorithm for the important case of demand in each market segment having (at least) one unique component is developed.
Abstract: This study is motivated by a process-reengineering problem in personal computer (PC) manufacturing, i.e., to move from a build-to-stock operation that is centered around end-product inventory towards a configure-to-order (CTO) operation that eliminates end-product inventory. In fact, CTO has made irrelevant the notion of preconfigured machine types and focuses instead on maintaining the right amount of inventory at the components. CTO appears to be the ideal operational model that provides both mass customization and a quick response time to order fulfillment. To quantify the inventory-service trade-off in the CTO environment, we develop a nonlinear optimization model with multiple constraints, reflecting the service levels offered to different market segments. To solve the optimization problem, we develop an exact algorithm for the important case of demand in each market segment having (at least) one unique component, and a greedy heuristic for the general (nonunique component) case. Furthermore, we show how to use sensitivity analysis, along with simulation, to fine-tune the solutions. The performance of the model and the solution approach is examined by extensive numerical studies on realistic problem data. We present the major findings in applying our model to study the inventory-service impacts in the reengineering of a PC manufacturing process.
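A generic marginal-allocation greedy of the kind often used for such multi-item service problems can convey the idea (illustrative only; the paper's exact model and heuristic differ): repeatedly give one unit of stock to the component with the best marginal service gain until the budget is spent.

    # Generic marginal-allocation greedy (a sketch; service_gain is a
    # hypothetical model-dependent function, not the paper's).

    def marginal_allocation(components, service_gain, budget):
        """service_gain(c, level): marginal gain of one more unit at c."""
        level = {c: 0 for c in components}
        for _ in range(budget):
            c = max(components, key=lambda c: service_gain(c, level[c]))
            level[c] += 1
        return level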

Posted Content
TL;DR: In this article, the authors consider the problem of cutting a set of edges on a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk, minimizing either the total number of cut edges or their total length.
Abstract: We consider the problem of cutting a set of edges on a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk, minimizing either the total number of cut edges or their total length. We show that this problem is NP-hard, even for manifolds without boundary and for punctured spheres. We also describe an algorithm with running time n^{O(g+k)}, where n is the combinatorial complexity, g is the genus, and k is the number of boundary components of the input surface. Finally, we describe a greedy algorithm that outputs a O(log^2 g)-approximation of the minimum cut graph in O(g^2 n log n) time.

Book ChapterDOI
18 Nov 2002
TL;DR: The Simulated Annealing scheduler is compared to an Ad-Hoc Greedy scheduler used in earlier experiments; the comparison exposes some assumptions built into the Ad-Hoc scheduler and some problems with the Performance Model being used.
Abstract: Generating high quality schedules for distributed applications on a Computational Grid is a challenging problem. Some experiments using Simulated Annealing as a scheduling mechanism for a ScaLAPACK LU solver on a Grid are described. The Simulated Annealing scheduler is compared to an Ad-Hoc Greedy scheduler used in earlier experiments. The Simulated Annealing scheduler exposes some assumptions built into the Ad-Hoc scheduler and some problems with the Performance Model being used.

Book
25 Feb 2002
TL;DR: This book discusses algorithms, graph theory, and the importance of exploration in solving optimization problems.
Abstract (table of contents):
1 Basics I: Graphs: 1.1 Introduction to graph theory. 1.2 Excursion: Random graphs.
2 Basics II: Algorithms: 2.1 Introduction to algorithms. 2.2 Excursion: Fibonacci heaps and amortized time.
3 Basics III: Complexity: 3.1 Introduction to complexity theory. 3.2 Excursion: More NP-complete problems.
4 Special Terminal Sets: 4.1 The shortest path problem. 4.2 The minimum spanning tree problem. 4.3 Excursion: Matroids and the greedy algorithm.
5 Exact Algorithms: 5.1 The enumeration algorithm. 5.2 The Dreyfus-Wagner algorithm. 5.3 Excursion: Dynamic programming.
6 Approximation Algorithms: 6.1 A simple algorithm with performance ratio 2. 6.2 Improving the time complexity. 6.3 Excursion: Machine scheduling.
7 More on Approximation Algorithms: 7.1 Minimum spanning trees in hypergraphs. 7.2 Improving the performance ratio I. 7.3 Excursion: The complexity of optimization problems.
8 Randomness Helps: 8.1 Probabilistic complexity classes. 8.2 Improving the performance ratio II. 8.3 An almost always optimal algorithm. 8.4 Excursion: Primality and cryptography.
9 Limits of Approximability: 9.1 Reducing optimization problems. 9.2 APX-completeness. 9.3 Excursion: Probabilistically checkable proofs.
10 Geometric Steiner Problems: 10.1 A characterization of rectilinear Steiner minimum trees. 10.2 The Steiner ratios. 10.3 An almost linear time approximation scheme. 10.4 Excursion: The Euclidean Steiner problem.
Symbol Index.

Journal ArticleDOI
TL;DR: In this article, the authors studied the problem of scheduling activities of several types under the constraint that, at most, a fixed number of activities can be scheduled in any single time slot.
Abstract: We study the problem of scheduling activities of several types under the constraint that at most a fixed number of activities can be scheduled in any single time slot. Any given activity type is associated with a service cost and an operating cost that increases linearly with the number of time slots since the last service of this type. The problem is to find an optimal schedule that minimizes the long-run average cost per time slot. Applications of such a model are the scheduling of maintenance service to machines, multi-item replenishment of stock, and minimizing the mean response time in Broadcast Disks. Broadcast Disks recently gained a lot of attention because they were used to model backbone communications in wireless systems, Teletext systems, and Web caching in satellite systems. The first contribution of this paper is the definition of a general model that combines into one several important previous models. We prove that an optimal cyclic schedule for the general problem exists, and we establish the NP-hardness of the problem. Next, we formulate a nonlinear program that relaxes the optimal schedule and serves as a lower bound on the cost of an optimal schedule. We present an efficient algorithm for finding a near-optimal solution to the nonlinear program. We use this solution to obtain several approximation algorithms: (1) a 9/8-approximation for a variant of the problem that models the Broadcast Disks application; the algorithm uses some properties of "Fibonacci sequences," and using this sequence we present a 1.57-approximation algorithm for the general problem; (2) a simple randomized algorithm and a simple deterministic greedy algorithm for the problem, both of which we prove achieve an approximation factor of 2. To the best of our knowledge this is the first worst-case analysis of a widely used greedy heuristic for this problem.
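One widely used greedy rule for this setting picks, in each slot, the item whose accumulated expected waiting cost is largest; a sketch (not necessarily the exact variant analyzed in the paper):

    # Common greedy rule for broadcast scheduling (illustrative scoring
    # function; the paper's model includes service costs more generally).

    def greedy_broadcast(p, cost, horizon):
        """p[i]: request probability of item i; cost[i]: its broadcast cost."""
        last = [0] * len(p)             # slot of each item's last broadcast
        schedule = []
        for t in range(1, horizon + 1):
            # Score grows with demand and with time since the last broadcast.
            i = max(range(len(p)), key=lambda i: p[i] * (t - last[i]) ** 2 / cost[i])
            schedule.append(i)
            last[i] = t
        return schedule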

Proceedings Article
01 Aug 2002
TL;DR: A double-loop algorithm, guaranteed to converge to a local minimum of a Bethe free energy is derived, and it is shown that stable fixed points of (damped) expectation propagation correspond to local minima of this free energy, but that the converse need not be the case.
Abstract: We describe expectation propagation for approximate inference in dynamic Bayesian networks as a natural extension of Pearl's exact belief propagation. Expectation propagation is a greedy algorithm, converges in many practical cases, but not always. We derive a double-loop algorithm, guaranteed to converge to a local minimum of a Bethe free energy. Furthermore, we show that stable fixed points of (damped) expectation propagation correspond to local minima of this free energy, but that the converse need not be the case. We illustrate the algorithms by applying them to switching linear dynamical systems and discuss implications for approximate inference in general Bayesian networks.

Journal ArticleDOI
TL;DR: A greedy heuristic and two local search algorithms, 1-opt local search and k-opt local search, are proposed for the unconstrained binary quadratic programming problem (BQP); these heuristics offer great potential for incorporation into more sophisticated meta-heuristics.
Abstract: In this paper, a greedy heuristic and two local search algorithms, 1-opt local search and k-opt local search, are proposed for the unconstrained binary quadratic programming problem (BQP). These heuristics are well suited for incorporation into meta-heuristics such as evolutionary algorithms. Their performance is compared for 115 problem instances. All methods are capable of producing high-quality solutions in short time. In particular, the greedy heuristic is able to find near-optimum solutions a few percent below the best-known solutions, and the local search procedures are sufficient to find the best-known solutions of all problem instances with n ≤ 100. The k-opt local searches even find the best-known solutions for all problems of size n ≤ 250 and for 11 out of 15 instances of size n = 500 in all runs. For larger problems (n = 500, 1000, 2500), the heuristics appear to be capable of finding near-optimum solutions quickly. Therefore, the proposed heuristics—especially the k-opt local search—offer a great potential for incorporation in more sophisticated meta-heuristics.
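The 1-opt idea is simple to sketch: repeatedly apply the single bit flip with the largest positive gain. Started from the zero vector, the same loop doubles as a greedy construction. A NumPy sketch for the maximization form max x'Qx (sign conventions and tie handling are assumptions; the paper's instances may be stated as minimization):

    import numpy as np

    # Greedy construction plus 1-opt local search for the BQP (a sketch in
    # the spirit of the paper's heuristics, not its exact implementation).

    def one_opt_bqp(Q, x=None):
        n = Q.shape[0]
        x = np.zeros(n, dtype=int) if x is None else x.copy()
        while True:
            flip = 1 - 2 * x            # +1 for a 0->1 flip, -1 for 1->0
            # Objective change from flipping each bit (valid for asymmetric Q).
            gains = flip * (np.diag(Q) + (Q + Q.T) @ x - 2 * np.diag(Q) * x)
            k = int(np.argmax(gains))
            if gains[k] <= 0:
                return x, float(x @ Q @ x)   # local optimum w.r.t. single flips
            x[k] = 1 - x[k]             # greedily take the best improving flip

Each accepted flip strictly increases the objective, which is bounded, so the loop terminates at a 1-opt local optimum.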

Journal ArticleDOI
TL;DR: A heuristics-based approach to the problem, which is referred to as the greedy approach, and describes algorithms to first find such advantageous directions as well as results showing the improvement of performance that can be derived from greedy toolpaths.
Abstract: In earlier work, we introduced the concept of time-optimal toolpaths, modeled the behavior and constraints of machining, and formulated the optimization problem mathematically. The question was by what toolpath it would be possible to machine a surface in minimum time—while considering the kinematic performance of a machine, the speed limits of the motors and the surface finish requirements. The time-optimal problem is a difficult one, and does not generally yield a closed-form solution. Here we present a heuristics-based approach to the problem, which we refer to as the greedy approach. The performance envelope of the machine at a point on the surface is very anisotropic, and material can be removed much more rapidly in some directions than in other directions. The greedy approach seeks the directions of the best performance. We describe algorithms to first find such advantageous directions. We then show how they can be fitted by a continuous vector field. We also show how toolpaths with the proper side-steps can be generated from this field. We end with results showing the improvement of performance that can be derived from greedy toolpaths.

Journal ArticleDOI
TL;DR: An efficient approximation algorithm consisting of two separate but integrated steps: multicast routing and wavelength assignment is developed and it is proved that the problem of optimal wavelength assignment on a multicast tree is not NP-hard.
Abstract: The next generation of multimedia applications such as video conferencing and HDTV has raised tremendous challenges for network design, both in bandwidth and service. As wavelength-division-multiplexing (WDM) networks have emerged as a promising candidate for future networks with large bandwidth, supporting efficient multicast in WDM networks becomes essential. Different from the IP layer, the cost of multicast at the WDM layer involves not only bandwidth (wavelength) cost, but also wavelength conversion cost and light splitting cost. It is well known that the optimal multicast problem in WDM networks is NP-hard. In this paper, we develop an efficient approximation algorithm consisting of two separate but integrated steps: multicast routing and wavelength assignment. We prove that the problem of optimal wavelength assignment on a multicast tree is not NP-hard; in fact, an optimal wavelength assignment algorithm with complexity of O(NW) is presented. Simulation results have revealed that the optimal wavelength assignment beats greedy algorithms by a large margin in networks using many wavelengths on each link, such as dense wavelength-division-multiplexing (DWDM) networks. Our proposed heuristic multicast routing algorithm takes into account both the cost of using wavelengths on links and the cost of wavelength conversion. The resulting multicast tree is derived from the optimal lightpaths used for unicast.

Journal Article
TL;DR: This work considers the more general problem of strings being represented by a singly linked list and being able to apply these operations to the pointer associated with a vertex as well as the character associated with the vertex, and shows that this problem is NP-complete.
Abstract: The traditional edit-distance problem is to find the minimum number of insert-character and delete-character (and sometimes change-character) operations required to transform one string into another. Here we consider the more general problem of strings being represented by a singly linked list (one character per node) and being able to apply these operations to the pointer associated with a vertex as well as the character associated with the vertex. That is, in O(1) time, not only can characters be inserted or deleted, but also substrings can be moved or deleted. We limit our attention to the ability to move substrings and leave substring deletions for future research. Note that O(1) time substring move operations imply O(1) substring exchange operations as well, a form of transformation that has been of interest in molecular biology. We show that this problem is NP-complete, show that a recursive sequence of moves can be simulated with at most a constant factor increase by a non-recursive sequence, and present a polynomial time greedy algorithm for non-recursive moves with a worst-case log factor approximation to optimal. The development of this greedy algorithm shows how to reduce moves of substrings to moves of characters, and how to convert moves of characters to only inserts and deletes of characters.

Journal ArticleDOI
TL;DR: The polyhedral foundation of the PCL framework is developed, based on the structural and algorithmic properties of a new polytope associated with an accessible set system (an extended polymatroid), and PCL-indexability is interpreted as a form of the classic economics law of diminishing marginal returns.
Abstract: This paper develops a polyhedral approach to the design, analysis, and computation of dynamic allocation indices for scheduling binary-action (engage/rest) Markovian stochastic projects which can change state when rested (restless bandits (RBs)), based on partial conservation laws (PCLs). This extends previous work by the author [J. Nino-Mora (2001): Restless bandits, partial conservation laws and indexability. Adv. Appl. Probab. 33, 76–98], where PCLs were shown to imply the optimality of index policies with a postulated structure in stochastic scheduling problems, under admissible linear objectives, and they were deployed to obtain simple sufficient conditions for the existence of Whittle's (1988) RB index (indexability), along with an adaptive-greedy index algorithm. The new contributions include: (i) we develop the polyhedral foundation of the PCL framework, based on the structural and algorithmic properties of a new polytope associated with an accessible set system (an extended polymatroid); (ii) we present new dynamic allocation indices for RBs, motivated by an admission control model, which extend Whittle's and have a significantly increased scope; (iii) we deploy PCLs to obtain both sufficient conditions for the existence of the new indices (PCL-indexability), and a new adaptive-greedy index algorithm; (iv) we interpret PCL-indexability as a form of the classic economics law of diminishing marginal returns, and characterize the index as an optimal marginal cost rate; we further solve a related optimal constrained control problem; (v) we carry out a PCL-indexability analysis of the motivating admission control model, under time-discounted and long-run average criteria; this gives, under mild conditions, a new index characterization of optimal threshold policies; and (vi) we apply the latter to present new heuristic index policies for two hard queueing control problems: admission control and routing to parallel queues, and scheduling a multiclass make-to-stock queue with lost sales, both under state-dependent holding cost rates and birth-death dynamics.