
Showing papers on "Greedy algorithm published in 2004"


Journal ArticleDOI
TL;DR: This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries and develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal.
Abstract: This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho's basis pursuit (BP) paradigm can recover the optimal representation of an exactly sparse signal. It leverages this theory to show that both OMP and BP succeed for every sparse input signal from a wide class of dictionaries. These quasi-incoherent dictionaries offer a natural generalization of incoherent dictionaries, and the cumulative coherence function is introduced to quantify the level of incoherence. This analysis unifies all the recent results on BP and extends them to OMP. Furthermore, the paper develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal. From there, it argues that OMP is an approximation algorithm for the sparse problem over a quasi-incoherent dictionary. That is, for every input signal, OMP calculates a sparse approximant whose error is only a small factor worse than the minimal error that can be attained with the same number of terms.
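As a concrete illustration of the greedy selection and orthogonal projection steps the abstract describes, here is a minimal sketch of OMP; it assumes a dictionary matrix D with unit-norm columns, and all variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def omp(D, x, n_atoms):
    """Greedily select n_atoms columns of D to approximate the signal x."""
    residual = x.copy()
    support, coeffs = [], None
    for _ in range(n_atoms):
        # Greedy step: pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        # Orthogonal step: re-fit coefficients over all chosen atoms at once.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return support, coeffs
```

The re-fitting over the whole support at every iteration is what distinguishes OMP from plain matching pursuit, which only updates the newest coefficient.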

3,865 citations


Book ChapterDOI
01 Jan 2004
TL;DR: In the previous three chapters, various classic problem-solving methods, including dynamic programming, branch and bound, and local search algorithms, as well as some modern heuristic methods like simulated annealing and tabu search, were discussed; some of these techniques were seen to be deterministic, while others were probabilistic.
Abstract: In the previous three chapters we discussed various classic problem-solving methods, including dynamic programming, branch and bound, and local search algorithms, as well as some modern heuristic methods like simulated annealing and tabu search. Some of these techniques were seen to be deterministic. Essentially you “turn the crank” and out pops the answer. For these methods, given a search space and an evaluation function, some would always return the same solution (e.g., dynamic programming), while others could generate different solutions based on the initial configuration or starting point (e.g., a greedy algorithm or the hill-climbing technique). Still other methods were probabilistic, incorporating random variation into the search for optimal solutions. These methods (e.g., simulated annealing) could return different final solutions even when given the same initial configuration. No two trials with these algorithms could be expected to take exactly the same course. Each trial is much like a person’s fingerprint: although there are broad similarities across fingerprints, no two are exactly alike.

416 citations


Proceedings ArticleDOI
26 Apr 2004
TL;DR: In this article, three approximation algorithms for a variation of the set k-cover problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized, are presented.
Abstract: Wireless sensor networks (WSNs) are emerging as an effective means for environment monitoring. This paper investigates a strategy for energy efficient monitoring in WSNs that partitions the sensors into covers, and then activates the covers iteratively in a round-robin fashion. This approach takes advantage of the overlap created when many sensors monitor a single area. Our work builds upon previous work by Slijepcevic and Potkonjak (2001), where the model is first formulated. We have designed three approximation algorithms for a variation of the set k-cover problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. The first algorithm is randomized and partitions the sensors, in expectation, within a fraction 1 - (1/e) (≈ 0.63) of the optimum. We present two other deterministic approximation algorithms. One is a distributed greedy algorithm with a 1/2 approximation ratio and the other is a centralized greedy algorithm with a 1 - (1/e) approximation ratio. We show that it is NP-complete to guarantee better than 15/16 of the optimal coverage, indicating that all three algorithms perform well with respect to the best approximation algorithm possible in polynomial time, assuming P ≠ NP. Simulations indicate that in practice, the deterministic algorithms perform far above their worst case bounds, consistently covering more than 72% of what is covered by an optimum solution. Simulations also indicate that the increase in longevity is proportional to the amount of overlap amongst the sensors. The algorithms are fast, easy to use, and according to simulations, significantly increase the longevity of sensor networks. The randomized algorithm in particular seems quite practical.
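For intuition, here is a minimal sketch of one simple greedy assignment in the spirit of the algorithms above: each sensor joins the cover where it adds the most not-yet-covered areas. This illustrates the greedy idea rather than the paper's exact centralized or distributed algorithm; names are illustrative.

```python
def greedy_k_cover(sensors, k):
    """sensors: dict mapping sensor id -> set of area ids it monitors."""
    covered = [set() for _ in range(k)]   # areas already covered in each cover
    assignment = {}
    for sid, areas in sensors.items():
        # Greedy step: place the sensor where its marginal coverage is largest.
        best = max(range(k), key=lambda j: len(areas - covered[j]))
        assignment[sid] = best
        covered[best] |= areas
    return assignment

# Example: three sensors partitioned into two covers.
print(greedy_k_cover({1: {"a", "b"}, 2: {"a"}, 3: {"b"}}, 2))
```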

386 citations


Journal ArticleDOI
TL;DR: This work considers the problem of clustering dynamic point sets in a metric space and proposes a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications.
Abstract: Motivated by applications such as document and image classification in information retrieval, we consider the problem of clustering dynamic point sets in a metric space. We propose a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval application, and which should also be useful in other applications. The goal is to efficiently maintain clusters of small diameter as new points are inserted. We analyze several natural greedy algorithms and demonstrate that they perform poorly. We propose new deterministic and randomized incremental clustering algorithms which have a provably good performance, and which we believe should also perform well in practice. We complement our positive results with lower bounds on the performance of incremental algorithms. Finally, we consider the dual clustering problem where the clusters are of fixed diameter, and the goal is to minimize the number of clusters.
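To make the setting concrete, here is a minimal sketch of one "natural" greedy insertion rule of the kind the paper analyzes (and shows can perform poorly): a new point joins the first cluster it fits into without exceeding a diameter bound, otherwise it opens a new cluster. This is illustrative only, not the paper's provably good algorithm.

```python
def greedy_insert(point, clusters, max_diameter, dist):
    """clusters: list of lists of points; dist: the metric on points."""
    for c in clusters:
        # Greedy step: join the first cluster whose diameter stays bounded.
        if all(dist(point, q) <= max_diameter for q in c):
            c.append(point)
            return
    clusters.append([point])  # no cluster fits: open a new one
```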

331 citations


Journal ArticleDOI
TL;DR: A new decentralized honey bee algorithm which dynamically allocates servers to satisfy request loads, and is compared against an omniscient optimality algorithm, a conventional greedy algorithm, and an algorithm that computes omnisciently the optimal static allocation.
Abstract: Internet centers host services for e-banks, e-auctions and other clients. Hosting centers then must allocate servers among clients to maximize revenue. The limited number of servers, costs of reallocating servers, and unpredictability of requests make server allocation optimization difficult. Based on the many similarities between server and honey bee colony forager allocation, we propose a new decentralized honey bee algorithm which dynamically allocates servers to satisfy request loads. We compare it against an omniscient optimality algorithm, a conventional greedy algorithm, and an algorithm that computes omnisciently the optimal static allocation. We evaluate performance on simulated request streams and commercial trace data. Our algorithm performs better than static or greedy for highly variable request loads, but greedy can outperform it under low variability. Honey bee forager allocation, though suboptimal for static food sources, may possess a counterbalancing responsiveness to food source variability.

289 citations


Proceedings ArticleDOI
01 Jan 2004
TL;DR: An efficient real-time algorithm that solves the data association problem and is capable of initiating and terminating a varying number of tracks, which shows remarkable performance compared to the greedy algorithm and the multiple hypothesis tracker under extreme conditions.
Abstract: In this paper, we consider the general multiple-target tracking problem in which an unknown number of targets appears and disappears at random times and the goal is to find the tracks of targets from noisy observations. We propose an efficient real-time algorithm that solves the data association problem and is capable of initiating and terminating a varying number of tracks. We take the data-oriented, combinatorial optimization approach to the data association problem but avoid the enumeration of tracks by applying a sampling method called Markov chain Monte Carlo (MCMC). The MCMC data association algorithm can be viewed as a "deferred logic" method since its decision about forming a track is based on both current and past observations. At the same time, it can be viewed as an approximation to the optimal Bayesian filter. The algorithm shows remarkable performance compared to the greedy algorithm and the multiple hypothesis tracker (MHT) under extreme conditions, such as a large number of targets in a dense environment, low detection probabilities, and high false alarm rates.
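The sampling loop underlying such an approach can be sketched generically. The following Metropolis skeleton assumes a symmetric proposal over track hypotheses; the actual proposal moves (track birth, death, split, merge, and so on) and the posterior are specific to the paper, so everything named here is an illustrative assumption.

```python
import math
import random

def mcmc_map(initial, propose, log_posterior, steps):
    """Metropolis sampling over track hypotheses; returns the best state seen."""
    state, best = initial, initial
    for _ in range(steps):
        candidate = propose(state)  # e.g. a local change to one track
        delta = log_posterior(candidate) - log_posterior(state)
        # Accept with the Metropolis ratio (symmetric proposal assumed).
        if random.random() < math.exp(min(0.0, delta)):
            state = candidate
        if log_posterior(state) > log_posterior(best):
            best = state
    return best
```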

284 citations


Book ChapterDOI
01 Jan 2004
TL;DR: This chapter surveys centralized algorithms for constructing a connected dominating set (CDS), including Guha and Khuller's algorithm, Ruan's algorithm, Cheng's greedy algorithm, Min's algorithm, and Butenko's algorithm.
Abstract: 3 Centralized CDS Construction: 3.1 Guha and Khuller's Algorithm; 3.2 Ruan's Algorithm; 3.3 Cheng's Greedy Algorithm; 3.4 Min's Algorithm; 3.5 Butenko's Algorithm.

282 citations


Proceedings ArticleDOI
17 Oct 2004
TL;DR: A stochastic variant of the NP-hard 0/1 knapsack problem in which item values are deterministic and item sizes are independent random variables with known, arbitrary distributions is considered.
Abstract: We consider a stochastic variant of the NP-hard 0/1 knapsack problem in which item values are deterministic and item sizes are independent random variables with known, arbitrary distributions. Items are placed in the knapsack sequentially, and the act of placing an item in the knapsack instantiates its size. Our goal is to compute a solution "policy" that maximizes the expected value of items placed in the knapsack, and we consider both non-adaptive policies (that designate a priori a fixed sequence of items to insert) and adaptive policies (that can make dynamic choices based on the instantiated sizes of items placed in the knapsack thus far). We show that adaptivity provides only a constant-factor improvement by demonstrating a greedy non-adaptive algorithm that approximates the optimal adaptive policy within a factor of 7. We also design an adaptive polynomial-time algorithm which approximates the optimal adaptive policy within a factor of 5 + ε, for any constant ε > 0.
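The flavor of a greedy non-adaptive policy can be shown in a few lines: insert items in decreasing order of value per unit of expected size. This simple ratio rule is only illustrative; the ordering analyzed in the paper is more refined.

```python
def greedy_order(items):
    """items: list of (value, expected_size) pairs; returns an insertion order."""
    return sorted(items, key=lambda it: it[0] / it[1], reverse=True)

# Example: value/E[size] ratios are 5, 6, and 2, so the middle item goes first.
print(greedy_order([(10, 2.0), (6, 1.0), (8, 4.0)]))
```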

250 citations


Proceedings ArticleDOI
13 Jun 2004
TL;DR: Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal.
Abstract: Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e.g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)-greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions. The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n / log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n / log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal.
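One NoN-greedy step can be sketched as follows, assuming each node knows its neighbors and their neighbors, plus a distance function on node identifiers; names are illustrative.

```python
def non_greedy_step(target, neighbors, dist):
    """neighbors: dict mapping each neighbor w to w's own neighbor list.
    Returns the neighbor to forward to."""
    best_w, best_z = None, None
    for w, second_hops in neighbors.items():
        # Consider w itself and everything reachable through w in two hops.
        for z in [w] + list(second_hops):
            if best_z is None or dist(z, target) < dist(best_z, target):
                best_w, best_z = w, z
    return best_w  # plain greedy would instead minimize dist(w, target) directly
```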

234 citations


Journal ArticleDOI
TL;DR: It is proved that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm, and the competitive ratio of any on-line algorithm for a uniform bounded-delay buffer is bounded away from 1, independent of the delay size.
Abstract: We consider two types of buffering policies that are used in network switches supporting Quality of Service (QoS). In the FIFO type, packets must be transmitted in the order in which they arrive; the constraint in this case is the limited buffer space. In the bounded-delay type, each packet has a maximum delay time by which it must be transmitted, or otherwise it is lost. We study the case of overloads resulting in packet loss. In our model, each packet has an intrinsic value, and the goal is to maximize the total value of transmitted packets. Our main contribution is a thorough investigation of some natural greedy algorithms in various models. For the FIFO model we prove tight bounds on the competitive ratio of the greedy algorithm that discards packets with the lowest value when an overflow occurs. We also prove that the greedy algorithm that drops the earliest packets among all low-value packets is the best greedy algorithm. This algorithm can be as much as 1.5 times better than the tail-drop greedy policy, which drops the latest lowest-value packets. In the bounded-delay model we show that the competitive ratio of any on-line algorithm for a uniform bounded-delay buffer is bounded away from 1, independent of the delay size. We analyze the greedy algorithm in the general case and in three special cases: delay bound 2, link bandwidth 1, and only two possible packet values. Finally, we consider the off-line scenario. We give efficient optimal algorithms and study the relation between the bounded-delay and FIFO models in this case.
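The winning FIFO policy from the abstract, dropping the earliest among the lowest-value packets on overflow, can be sketched directly; the packet representation and names are illustrative.

```python
from collections import deque

def enqueue(buffer, packet, capacity):
    """buffer: deque of (arrival_index, value) kept in FIFO order."""
    buffer.append(packet)
    if len(buffer) > capacity:  # overflow: shed exactly one packet
        low = min(value for _, value in buffer)
        # Drop the earliest packet among all packets of minimal value.
        victim = next(p for p in buffer if p[1] == low)
        buffer.remove(victim)
```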

194 citations


Journal ArticleDOI
TL;DR: For the min sum vertex cover version of the problem, it is shown that it can be approximated within a ratio of 2, and is NP-hard to approximate within some constant ρ > 1.
Abstract: The input to the min sum set cover problem is a collection of n sets that jointly cover m elements. The output is a linear order on the sets, namely, in every time step from 1 to n exactly one set is chosen. For every element, this induces a first time step by which it is covered. The objective is to find a linear arrangement of the sets that minimizes the sum of these first time steps over all elements. We show that a greedy algorithm approximates min sum set cover within a ratio of 4. This result was implicit in work of Bar-Noy, Bellare, Halldorsson, Shachnai, and Tamir (1998) on chromatic sums, but we present a simpler proof. We also show that for every ε > 0, achieving an approximation ratio of 4 − ε is NP-hard. For the min sum vertex cover version of the problem (which comes up as a heuristic for speeding up solvers of semidefinite programs) we show that it can be approximated within a ratio of 2, and is NP-hard to approximate within some constant ρ > 1.
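The 4-approximate greedy rule analyzed above is the familiar one: at each step, pick the set covering the most still-uncovered elements. A minimal sketch, with illustrative names:

```python
def greedy_min_sum_set_cover(sets):
    """sets: list of Python sets; returns a linear order on their indices."""
    uncovered = set().union(*sets)
    remaining = dict(enumerate(sets))
    order = []
    while uncovered:
        # Greedy step: the set with the most newly covered elements goes next.
        i = max(remaining, key=lambda j: len(remaining[j] & uncovered))
        order.append(i)
        uncovered -= remaining.pop(i)
    return order + list(remaining)  # sets covering nothing new go last, any order
```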

Proceedings ArticleDOI
24 May 2004
TL;DR: This work proposes a new greedy geographic routing algorithm called Bounded Voronoi Greedy Forwarding (BVGF) that allows sensing-covered networks to achieve an asymptotic network dilation lower than 4.62 as long as the communication range is at least twice the sensing range.
Abstract: Greedy geographic routing is attractive in wireless sensor networks due to its efficiency and scalability. However, greedy geographic routing may incur long routing paths or even fail due to routing voids on random network topologies. We study greedy geographic routing in an important class of wireless sensor networks that provide sensing coverage over a geographic area (e.g., surveillance or object tracking systems). Our geometric analysis and simulation results demonstrate that existing greedy geographic routing algorithms can successfully find short routing paths based on local states in sensing-covered networks. In particular, we derive theoretical upper bounds on the network dilation of sensing-covered networks under greedy geographic routing algorithms. Furthermore, we propose a new greedy geographic routing algorithm called Bounded Voronoi Greedy Forwarding (BVGF) that allows sensing-covered networks to achieve an asymptotic network dilation lower than 4.62 as long as the communication range is at least twice the sensing range. Our results show that simple greedy geographic routing is an effective routing scheme in many sensing-covered networks.
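For reference, the baseline greedy geographic forwarding rule that BVGF refines is only a few lines: forward to the neighbor closest to the destination, and fail at a routing void. A minimal sketch under illustrative names:

```python
import math

def greedy_forward(my_pos, neighbors, dest):
    """my_pos, dest: (x, y) coordinates; neighbors: dict node id -> (x, y)."""
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    best = min(neighbors, key=lambda n: d(neighbors[n], dest), default=None)
    if best is None or d(neighbors[best], dest) >= d(my_pos, dest):
        return None  # routing void: no neighbor makes progress toward dest
    return best
```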

Posted Content
TL;DR: A variant of the obvious sequential greedy algorithm, that computes a weighted matching at most a factor 2 away from the maximum, is easily distributed and yields the best known distributed approximation algorithm for this problem so far.
Abstract: Wattenhofer et al. [WW04] derive a complicated distributed algorithm to compute a weighted matching of an arbitrary weighted graph, that is at most a factor 5 away from the maximum weighted matching of that graph. We show that a variant of the obvious sequential greedy algorithm [Pre99], that computes a weighted matching at most a factor 2 away from the maximum, is easily distributed. This yields the best known distributed approximation algorithm for this problem so far.
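The sequential greedy matching referred to above is simple enough to state in full: scan edges by decreasing weight and keep every edge whose endpoints are both still free, which yields a matching within a factor 2 of maximum weight. A minimal sketch:

```python
def greedy_matching(edges):
    """edges: list of (weight, u, v) tuples; returns a factor-2 matching."""
    matched, matching = set(), []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched |= {u, v}
    return matching
```

The paper's contribution is distributing a variant of this rule; the sequential form above only conveys the factor-2 idea.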

Journal ArticleDOI
TL;DR: In this paper, a mathematical programming model for optimal highway pavement rehabilitation planning is presented, which minimizes the life cycle cost for a finite horizon by solving the problem of multiple rehabilitation activities on multiple facilities.
Abstract: This paper presents a mathematical programming model for optimal highway pavement rehabilitation planning which minimizes the life-cycle cost for a finite horizon. It extends previous research in this area by solving the problem of multiple rehabilitation activities on multiple facilities, with realistic empirical models of deterioration and rehabilitation effectiveness. The formulation is based on discrete control theory. A nonlinear pavement performance model and integer decision variables are incorporated into a mixed-integer nonlinear program (MINLP). Two solution approaches, a branch-and-bound algorithm and a greedy heuristic, are proposed for this model. It is shown that the heuristic results provide a good approximation to the exact optima, but with much lower computational costs.

Proceedings ArticleDOI
15 Nov 2004
TL;DR: It seems that "greedy" algorithms, such as SPAM, SRIDHCR, and TDS, do not perform particularly well for supervised clustering and seem to terminate prematurely too often.
Abstract: This work centers on a novel data mining technique we term supervised clustering. Unlike traditional clustering, supervised clustering assumes that the examples are classified and has the goal of identifying class-uniform clusters that have high probability densities. Four representative-based algorithms for supervised clustering are introduced: a greedy algorithm with random restart, named SRIDHCR, that seeks solutions by inserting and removing single objects from the current solution; SPAM (a variation of the clustering algorithm PAM); an evolutionary computing algorithm named SCEC; and a fast medoid-based top-down splitting algorithm, named TDS. The four algorithms were evaluated using a benchmark consisting of four UCI machine learning data sets. In general, it seems that "greedy" algorithms, such as SPAM, SRIDHCR, and TDS, do not perform particularly well for supervised clustering and seem to terminate prematurely too often. We also briefly describe the applications of supervised clustering.
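The insert/remove hill climbing at the heart of SRIDHCR can be sketched generically; the fitness function over sets of representatives is the paper's, so it is left abstract here, and all names are illustrative.

```python
def hill_climb_step(candidates, current, fitness):
    """Try every single-object insert or remove; return the best neighbor."""
    best, best_fit = current, fitness(current)
    for obj in candidates:
        trial = current - {obj} if obj in current else current | {obj}
        if len(trial) >= 2:  # keep at least two representatives
            f = fitness(trial)
            if f > best_fit:
                best, best_fit = trial, f
    return best  # equals `current` when no single change improves fitness
```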

Journal ArticleDOI
TL;DR: It is shown that this problem of cutting a subset of the edges of a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk is NP-hard in general, even for manifolds without boundary and for punctured spheres.
Abstract: We consider the problem of cutting a subset of the edges of a polyhedral manifold surface, possibly with boundary, to obtain a single topological disk, minimizing either the total number of cut edges or their total length. We show that this problem is NP-hard in general, even for manifolds without boundary and for punctured spheres. We also describe an algorithm with running time n^{O(g+k)}, where n is the combinatorial complexity, g is the genus, and k is the number of boundary components of the input surface. Finally, we describe a greedy algorithm that outputs a O(log^2 g)-approximation of the minimum cut graph in O(g^2 n log n) time.

Journal ArticleDOI
TL;DR: The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting solution (that will be improved by a local search or another heuristic).

Journal ArticleDOI
TL;DR: A new greedy algorithm for surface reconstruction from unorganized point sets that achieves topologically correct reconstruction in most cases and can handle surfaces with complex topology, boundaries, and nonuniform sampling.
Abstract: In this paper, we present a new greedy algorithm for surface reconstruction from unorganized point sets. Starting from a seed facet, a piecewise linear surface is grown by adding Delaunay triangles one by one. The most plausible triangles are added first and in such a way as to prevent the appearance of topological singularities. The output is thus guaranteed to be a piecewise linear orientable manifold, possibly with boundary. Experiments show that this method is very fast and achieves topologically correct reconstruction in most cases. Moreover, it can handle surfaces with complex topology, boundaries, and nonuniform sampling.

Journal ArticleDOI
TL;DR: It is proved that the problem of finding a broadcast tree that minimizes energy cost is NP-hard, and three centralized heuristic algorithms are proposed: a shortest path tree heuristic, a greedy heuristic, and a node-weighted Steiner tree-based heuristic.
Abstract: In this paper, we discuss energy efficient broadcast in ad hoc wireless networks. The problem of our concern is: given an ad hoc wireless network, find a broadcast tree such that the energy cost of the broadcast tree is minimized. Each node in the network is assumed to have a fixed level of transmission power. We first prove that the problem is NP-hard and propose three heuristic algorithms, namely, shortest path tree heuristic, greedy heuristic, and node weighted Steiner tree-based heuristic, which are centralized algorithms. The approximation ratio of the node weighted Steiner tree-based heuristic is proven to be (1 + 2 ln(n - 1)). Extensive simulations have been conducted and the results have demonstrated the efficiency of the proposed algorithms.
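A greedy heuristic for this setting can be sketched in the style of weighted set cover: repeatedly let the reached node with the best coverage-per-power ratio transmit. This illustrates the greedy idea under assumed names and is not necessarily the paper's exact rule.

```python
def greedy_broadcast(source, power, reach):
    """power: node -> fixed transmit power; reach: node -> set of nodes it reaches."""
    reached, transmitters = {source}, []
    while True:
        # Marginal gain per unit energy for every reached node that could transmit.
        gains = {u: len(reach[u] - reached) / power[u]
                 for u in reached if reach[u] - reached}
        if not gains:
            break  # no transmission adds coverage: the tree is complete
        u = max(gains, key=gains.get)
        transmitters.append(u)
        reached |= reach[u]
    return transmitters
```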

Proceedings ArticleDOI
24 Oct 2004
TL;DR: A new stereo algorithm is proposed that uses colour segmentation to allow the handling of large untextured regions and precise localization of depth boundaries; results obtained indicate that the proposed algorithm can compete with the state-of-the-art.
Abstract: We propose a new stereo algorithm which uses colour segmentation to allow the handling of large untextured regions and precise localization of depth boundaries. Each segment is modelled as a plane. Robustness of the depth representation is achieved by the use of a layered model. Layers are extracted by mean-shift-based clustering of depth planes. For layer assignment a global cost function is defined. The quality of the disparity map is measured by warping the reference image to the second view and comparing it with the real image. Z-buffering enforces visibility and allows the explicit detection of occlusions. An efficient greedy algorithm searches for a local minimum of the cost function. Layer extraction and assignment are alternately applied. Results obtained for benchmark and self-recorded images indicate that the proposed algorithm can compete with the state-of-the-art.

Journal ArticleDOI
01 Aug 2004
TL;DR: A new route network is designed that considers direct routes only and vertically separates intersecting ones by allocating distinct flight levels, leading to a graph coloring problem; the technique features good results on real-life instances, which systematically appear to contain large cliques.
Abstract: The aim of Air Traffic Flow Management (ATFM) is to enhance the capacity of the airspace while satisfying Air Traffic Control constraints and airlines requests to optimize their operating costs. This paper presents a design of a new route network that tries to optimize these criteria. The basic idea is to consider direct routes only and vertically separate intersecting ones by allocating distinct flight levels, thus leading to a graph coloring problem. This problem is solved using constraint programming after having found large cliques with a greedy algorithm. These cliques are used to post global constraints and guide the search strategy. With an implementation using FaCiLe, our Functional Constraint Library, optimality is achieved for all instances except the largest one, while the corresponding number of flight levels could fit in the current airspace structure. This graph coloring technique has also been tested on various benchmarks, featuring good results on real-life instances, which systematically appear to contain large cliques.
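Finding large cliques greedily, as done above to seed the constraint solver, can be sketched as follows: grow a clique by repeatedly adding the candidate vertex adjacent to the most other candidates. This is illustrative only; the paper's exact greedy may differ.

```python
def greedy_clique(graph):
    """graph: dict mapping vertex -> set of adjacent vertices (no self-loops)."""
    clique, candidates = set(), set(graph)
    while candidates:
        # Greedy step: pick the candidate adjacent to the most other candidates.
        v = max(candidates, key=lambda u: len(graph[u] & candidates))
        clique.add(v)
        candidates &= graph[v]  # keep only vertices adjacent to everything chosen
    return clique
```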

Journal ArticleDOI
TL;DR: Two heuristic approaches are proposed for a generalisation of the well-known traveling salesman problem in which cities correspond to customers providing or requiring known amounts of a product, and the vehicle has a given upper limit capacity.
Abstract: This paper deals with a generalisation of the well-known traveling salesman problem (TSP) in which cities correspond to customers providing or requiring known amounts of a product, and the vehicle has a given upper limit capacity. Each customer must be visited exactly once by the vehicle serving the demands while minimising the total travel distance. It is assumed that any unit of product collected from a pickup customer can be delivered to any delivery customer. This problem is called the one-commodity pickup-and-delivery TSP (1-PDTSP). We propose two heuristic approaches for the problem: the first is based on a greedy algorithm improved with a k-optimality criterion, and the second is based on a branch-and-cut procedure for finding a locally optimal solution. The proposal can easily be used to solve the classical "TSP with pickup-and-delivery," a version studied in the literature and involving two commodities. The approaches have been applied to solve hard instances with up to 500 customers.
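A nearest-feasible-neighbor construction conveys the greedy half of the first approach: the vehicle visits the closest customer whose pickup or delivery keeps the load within capacity. This sketch is illustrative; the paper's greedy and its k-optimality improvement are more involved.

```python
def greedy_tour(depot, customers, demand, dist, capacity, start_load=0):
    """demand[c] > 0 for pickup customers, < 0 for delivery customers."""
    tour, load, here = [depot], start_load, depot
    todo = set(customers)
    while todo:
        feasible = [c for c in todo if 0 <= load + demand[c] <= capacity]
        if not feasible:
            return None  # greedy dead end; a k-opt style repair would kick in here
        nxt = min(feasible, key=lambda c: dist(here, c))  # nearest feasible customer
        tour.append(nxt)
        load += demand[nxt]
        todo.remove(nxt)
        here = nxt
    return tour + [depot]
```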

Proceedings ArticleDOI
19 Jun 2004
TL;DR: In this article, the authors propose a general framework that embeds biclustering methods as local search procedures in an evolutionary algorithm and demonstrate on one prominent example that this approach achieves significant improvements in the quality of the biclusters when compared to the application of the greedy strategy alone.
Abstract: In recent years, several biclustering methods have been suggested to identify local patterns in gene expression data. Most of these algorithms represent greedy strategies that are heuristic in nature: an approximate solution is found within reasonable time bounds. The quality of biclustering, though, is often considered more important than the computation time required to generate it. Therefore, this paper addresses the question whether additional run-time resources can be exploited in order to improve the outcome of the aforementioned greedy algorithms. To this end, we propose a general framework that embeds such biclustering methods as local search procedures in an evolutionary algorithm. We demonstrate on one prominent example that this approach achieves significant improvements in the quality of the biclusters when compared to the application of the greedy strategy alone.

Journal ArticleDOI
TL;DR: An effective heuristic algorithm is proposed to solve a scheduling problem that comes from industry, where the workshop is a hybrid flow shop with recirculation; experiments on industry-like instances show the efficiency of the genetic algorithm.

Journal Article
TL;DR: In this paper, an effective heuristic algorithm to solve a scheduling problem that comes from industry is proposed, where the workshop is a hybrid flow shop with recirculation and the problem is to perform jobs between a release date and a due date, in order to minimize the weighted number of tardy jobs.
Abstract: We propose in this paper an effective heuristic algorithm to solve a scheduling problem that comes from industry. The workshop is a hybrid flow shop with recirculation and the problem is to perform jobs between a release date and a due date, in order to minimize the weighted number of tardy jobs. Firstly, an integer linear programming formulation of the problem is proposed; then a lower bound, a greedy algorithm and a genetic algorithm are described as approximate methods. To evaluate these heuristics, experiments on industry-like instances are carried out and show the efficiency of the genetic algorithm.

Book ChapterDOI
26 Jun 2004
TL;DR: It is shown that randomized search heuristics find minimum spanning trees in expected polynomial time without employing the global technique of greedy algorithms.
Abstract: Randomized search heuristics, among them randomized local search and evolutionary algorithms, are applied to problems whose structure is not well understood, as well as to problems in combinatorial optimization. The analysis of these randomized search heuristics has been started for some well-known problems, and this approach is followed here for the minimum spanning tree problem. After motivating this line of research, it is shown that randomized search heuristics find minimum spanning trees in expected polynomial time without employing the global technique of greedy algorithms.
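The kind of randomized search heuristic analyzed here can be made concrete with a (1+1) evolutionary algorithm on edge-selection bit strings: mutation flips each bit with probability 1/m, and the fitness compares connectivity first, then surplus edges, then weight. The exact fitness and mutation scheme in the chapter may differ; this is a sketch under those assumptions.

```python
import random

def components(n, chosen):
    """Number of connected components of the graph ([n], chosen), via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    count = n
    for u, v, _ in chosen:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            count -= 1
    return count

def fitness(bits, edges, n):
    """Lexicographic, smaller is better: connect first, drop cycles, then weight."""
    chosen = [e for e, b in zip(edges, bits) if b]
    comps = components(n, chosen)
    surplus = len(chosen) - (n - comps)  # edges beyond a spanning forest
    return (comps, surplus, sum(w for _, _, w in chosen))

def one_plus_one_ea(edges, n, steps):
    """edges: list of (u, v, weight); returns the best edge-selection found."""
    m = len(edges)
    x = [random.random() < 0.5 for _ in range(m)]
    for _ in range(steps):
        y = [b ^ (random.random() < 1 / m) for b in x]  # flip each bit w.p. 1/m
        if fitness(y, edges, n) <= fitness(x, edges, n):
            x = y
    return x
```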

Proceedings ArticleDOI
Jiawei Zhang
11 Jan 2004
TL;DR: A quasi-greedy algorithm for approximating the classical uncapacitated 2-level facility location problem (2-LFLP) that selects a sub-optimal candidate at each step, and an O(ln n)-approximation algorithm for the non-metric 2-LFLP, where n is the number of clients, which is the first non-trivial approximation.
Abstract: We propose a quasi-greedy algorithm for approximating the classical uncapacitated 2-level facility location problem (2-LFLP). Our algorithm, unlike the standard greedy algorithm, selects a sub-optimal candidate at each step. It also relates the minimization 2-LFLP problem, in an interesting way, to the maximization version of the single level facility location problem. Another feature of our algorithm is that it combines the technique of randomized rounding with that of dual fitting. This new approach enables us to approximate the metric 2-LFLP in polynomial time with a ratio of 1.77, a significant improvement on the previously known approximation ratios. Moreover, our approach results in a local improvement procedure for the 2-LFLP, which is useful in improving the approximation guarantees for several other multi-level facility location problems.

Proceedings ArticleDOI
06 Dec 2004
TL;DR: This work introduces a strategy for tolerating defective crosspoints and develops a linear-time, greedy algorithm for mapping PLA logic around crosspoint defects, and notes that P-term fanin must be bounded to guarantee low overhead mapping and develops analytical guidelines for bounding fanin.
Abstract: Recent developments suggest both plausible fabrication techniques and viable architectures for building sublithographic programmable logic arrays using molecular-scale wires and switches. Designs at this scale will see much higher defect rates than in conventional lithography. However, these defects need not be an impediment to programmable logic design at this scale. We introduce a strategy for tolerating defective crosspoints and develop a linear-time, greedy algorithm for mapping PLA logic around crosspoint defects. We note that P-term fanin must be bounded to guarantee low overhead mapping and develop analytical guidelines for bounding fanin. We further quantify analytical and empirical mapping overhead rates. Including fanin bounding, our greedy mapping algorithm maps a large set of benchmark designs with 13% average overhead for random junction defect rates as high as 20%.
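The flavor of a greedy defect-avoiding mapping can be sketched as follows: assign each P-term to the first free row whose defective crosspoints do not collide with the inputs the term needs. The defect model and names here are assumptions for illustration, not the paper's exact formulation.

```python
def greedy_map(pterms, rows, defects):
    """pterms: list of sets of needed input columns; defects: row -> bad columns."""
    assignment, free_rows = {}, list(rows)
    for t, inputs in enumerate(pterms):
        # Greedy step: take the first row that avoids every needed crosspoint.
        row = next((r for r in free_rows if not (inputs & defects[r])), None)
        if row is None:
            return None  # mapping failed; bounding P-term fanin makes this rare
        assignment[t] = row
        free_rows.remove(row)
    return assignment
```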

Journal ArticleDOI
TL;DR: This work proposes a hybrid GA that takes as input the current replica distribution and computes a new one using knowledge about the network attributes and the changes that have occurred, and evaluates these algorithms with respect to the storage capacity constraint of each site as well as variations in the popularity of objects.

Proceedings ArticleDOI
07 Mar 2004
TL;DR: A faster greedy heuristic for this problem that uses an exponential metric based on the approximation algorithm is developed and shown to perform near-optimally and significantly better than other shortest-path routing approaches, particularly when nodes are heterogeneous in their energy and data availability.
Abstract: We examine the problem of maximizing data collection from an energy-limited store-and-extract wireless sensor network, which is analogous to the maximum lifetime problem of interest in continuous data-gathering sensor networks. One significant difference is that this problem requires attention to "data-awareness" in addition to "energy-awareness." We formulate the maximum data extraction problem as a linear program and present a (1 + ω) iterative approximation algorithm for it. As a practical distributed implementation we develop a faster greedy heuristic for this problem that uses an exponential metric based on the approximation algorithm. We then show through simulation results that the greedy heuristic incorporating this exponential metric performs near-optimally (within 1 to 20% of optimal, with low overhead) and significantly better than other shortest-path routing approaches, particularly when nodes are heterogeneous in their energy and data availability.
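The exponential metric idea can be sketched as a link cost that grows exponentially in the fraction of a node's energy already spent, combined with cheapest-path routing; the base `a` and the exact cost form below are illustrative assumptions, not the paper's formula.

```python
import heapq

def link_cost(energy_used, energy_capacity, a=1000.0):
    """Cost grows steeply as the transmitting node's energy is depleted."""
    return a ** (energy_used / energy_capacity)

def cheapest_route(graph, cost, src, dst):
    """Dijkstra; graph: node -> iterable of neighbors; cost: (u, v) -> float."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in graph[u]:
            nd = d + cost((u, v))
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst != src and dst not in prev:
        return None  # destination unreachable
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```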