
Showing papers in "Algorithmica in 2004"


Journal ArticleDOI
TL;DR: A generalized algorithm for the construction of VOs for Search DAGs is developed, and it is proved that the VOs thus constructed are secure, and that they are efficient to compute and verify.
Abstract: Query answers from on-line databases can easily be corrupted by hackers or malicious database publishers. Thus it is important to provide mechanisms which allow clients to trust the results from on-line queries. Authentic publication allows untrusted publishers to securely answer queries from clients on behalf of trusted off-line data owners. Publishers validate answers using hard-to-forge verification objects (VOs), which clients can check efficiently. This approach provides greater scalability, by making it easy to add more publishers, and better security, since on-line publishers do not need to be trusted. To make authentic publication attractive, it is important for the VOs to be small, efficient to compute, and efficient to verify. This has led researchers to develop independently several different schemes for efficient VO computation based on specific data structures. Our goal is to develop a unifying framework for these disparate results, leading to a generalized security result. In this paper we characterize a broad class of data structures which we call Search DAGs, and we develop a generalized algorithm for the construction of VOs for Search DAGs. We prove that the VOs thus constructed are secure, and that they are efficient to compute and verify. We demonstrate how this approach easily captures existing work on simple structures such as binary trees, multi-dimensional range trees, tries, and skip lists. Once these are shown to be Search DAGs, the requisite security and efficiency results immediately follow from our general theorems. Going further, we also use Search DAGs to produce and prove the security of authenticated versions of two complex data models for efficient multi-dimensional range searches. This allows efficient VOs to be computed (size O(log N + T)) for typical one- and two-dimensional range queries, where the query answer is of size T and the database is of size N. We also show I/O-efficient schemes to construct the VOs. For a system with disk blocks of size B, we answer one-dimensional and three-sided range queries and compute the VOs with O(log_B N + T/B) I/O operations using linear size data structures.
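The simplest member of the Search DAG family is a binary search structure authenticated with a Merkle hash tree: the VO for a lookup is the sequence of sibling hashes along the root-to-leaf search path, and the client recomputes the owner's signed root digest. The sketch below is a minimal illustration of that idea, not the paper's general construction; all names are ours and `hashlib.sha256` stands in for any collision-resistant hash.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Collision-resistant hash (stand-in for any suitable hash)."""
    return hashlib.sha256(data).digest()

def build_levels(values):
    """Merkle tree, bottom-up; level 0 holds the leaf hashes."""
    levels = [[h(v.encode()) for v in values]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[min(i + 1, len(prev) - 1)])
                       for i in range(0, len(prev), 2)])
    return levels

def make_vo(levels, i):
    """Publisher side: VO = sibling hash at every level of the path."""
    vo = []
    for level in levels[:-1]:
        sib = i ^ 1
        vo.append(level[sib] if sib < len(level) else level[i])
        i //= 2
    return vo

def verify(root, value, i, vo):
    """Client side: recompute the root from the answer plus the VO."""
    digest = h(value.encode())
    for sib in vo:
        digest = h(digest + sib) if i % 2 == 0 else h(sib + digest)
        i //= 2
    return digest == root

data = ["ant", "bee", "cat", "dog", "emu"]       # owner's sorted data
levels = build_levels(data)
root = levels[-1][0]                             # digest signed by the owner
assert verify(root, "cat", 2, make_vo(levels, 2))
```

The VO here has size O(log N), matching the general O(log N + T) bound for an answer of size T = 1.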

243 citations


Journal ArticleDOI
TL;DR: For the min sum vertex cover version of the problem, it is shown that it can be approximated within a ratio of 2, and is NP-hard to approximate within some constant ρ > 1.
Abstract: The input to the min sum set cover problem is a collection of n sets that jointly cover m elements. The output is a linear order on the sets, namely, in every time step from 1 to n exactly one set is chosen. For every element, this induces a first time step by which it is covered. The objective is to find a linear arrangement of the sets that minimizes the sum of these first time steps over all elements. We show that a greedy algorithm approximates min sum set cover within a ratio of 4. This result was implicit in work of Bar-Noy, Bellare, Halldorsson, Shachnai, and Tamir (1998) on chromatic sums, but we present a simpler proof. We also show that for every ε > 0, achieving an approximation ratio of 4 – ε is NP-hard. For the min sum vertex cover version of the problem (which comes up as a heuristic for speeding up solvers of semidefinite programs) we show that it can be approximated within a ratio of 2, and is NP-hard to approximate within some constant ρ > 1.
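The greedy rule whose ratio is analyzed is one line: at each time step, choose the set covering the most still-uncovered elements. A minimal sketch (function names ours) that also evaluates the min sum objective:

```python
def greedy_min_sum_set_cover(sets):
    """Pick the set covering the most uncovered elements each step;
    return the sum over elements of their first-cover time step.
    Per the paper, this greedy order is a 4-approximation."""
    uncovered = set().union(*sets)
    remaining = list(sets)
    total, t = 0, 0
    while uncovered:
        t += 1
        best = max(remaining, key=lambda s: len(s & uncovered))
        total += t * len(best & uncovered)  # elements first covered now
        uncovered -= best
        remaining.remove(best)
    return total

print(greedy_min_sum_set_cover([{1, 2}, {2, 3}, {4}]))  # 7
```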

178 citations


Journal ArticleDOI
TL;DR: This algorithm is “lightweight” in the sense that it uses very small space in addition to the space required by the suffix array itself, and is fast even when the input contains many repetitions: this has been shown by extensive experiments with inputs of size up to 110 Mb.
Abstract: In this paper we describe a new algorithm for building the suffix array of a string. This task is equivalent to the problem of lexicographically sorting all the suffixes of the input string. Our algorithm is based on a new approach called deep–shallow sorting: we use a “shallow” sorter for the suffixes with a short common prefix, and a “deep” sorter for the suffixes with a long common prefix. All the known algorithms for building the suffix array either require a large amount of space or are inefficient when the input string contains many repeated substrings. Our algorithm has been designed to overcome this dichotomy. Our algorithm is “lightweight” in the sense that it uses very small space in addition to the space required by the suffix array itself. At the same time our algorithm is fast even when the input contains many repetitions: this has been shown by extensive experiments with inputs of size up to 110 MB. The source code of our algorithm, as well as a C library providing a simple API, is available under the GNU GPL.
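For reference, the output the algorithm computes can be specified in two lines: the permutation that sorts all suffixes lexicographically. This naive baseline is O(n^2 log n) in the worst case and copies every suffix, which is exactly the time/space dichotomy the deep–shallow sorter is designed to avoid; it is ours, purely for illustration.

```python
def suffix_array_naive(s: str):
    """Sort suffix start positions by the suffixes themselves."""
    return sorted(range(len(s)), key=lambda i: s[i:])

print(suffix_array_naive("banana"))  # [5, 3, 1, 0, 4, 2]
```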

174 citations


Journal ArticleDOI
TL;DR: The first primal–dual algorithms for these problems are given, achieving the best known approximation guarantees; previous results were not combinatorial—they were obtained by solving an exponential size linear programming relaxation.
Abstract: We consider the Connected Facility Location problem. We are given a graph $G = (V,E)$ with costs $\{c_e\}$ on the edges, a set of facilities $\mathcal{F} \subseteq V$, and a set of clients $\mathcal{D} \subseteq V$. Facility $i$ has a facility opening cost $f_i$ and client $j$ has $d_j$ units of demand. We are also given a parameter $M \geq 1$. A solution opens some facilities, say $F$, assigns each client $j$ to an open facility $i(j)$, and connects the open facilities by a Steiner tree $T$. The total cost incurred is $\sum_{i\in F} f_i + \sum_{j\in\mathcal{D}} d_j c_{i(j)j} + M\sum_{e\in T} c_e$. We want a solution of minimum cost. A special case of this problem is when all opening costs are 0 and facilities may be opened anywhere, i.e., $\mathcal{F}=V$. If we know a facility $v$ that is open, then the problem becomes a special case of the single-sink buy-at-bulk problem with two cable types, also known as the rent-or-buy problem. We give the first primal–dual algorithms for these problems and achieve the best known approximation guarantees. We give an 8.55-approximation algorithm for the connected facility location problem and a 4.55-approximation algorithm for the rent-or-buy problem. Previously the best approximation factors for these problems were 10.66 and 9.001, respectively. Further, these results were not combinatorial—they were obtained by solving an exponential size linear programming relaxation. Our algorithm integrates the primal–dual approaches for the facility location problem and the Steiner tree problem. We also consider the connected $k$-median problem and give a constant-factor approximation by using our primal–dual algorithm for connected facility location. We generalize our results to an edge capacitated variant of these problems and give a constant-factor approximation for these variants.

167 citations


Journal ArticleDOI
TL;DR: A framework for the automated generation of exact search tree algorithms for NP-hard problems is presented, which may lead to a much simpler process of developing and analyzing such algorithms and to improved upper bounds on search tree sizes.
Abstract: We present a framework for an automated generation of exact search tree algorithms for NP-hard problems. The purpose of our approach is twofold—rapid development and improved upper bounds. Many search tree algorithms for various problems in the literature are based on complicated case distinctions. Our approach may lead to a much simpler process of developing and analyzing these algorithms. Moreover, using the sheer computing power of machines it may also lead to improved upper bounds on search tree sizes (i.e., faster exact solving algorithms) in comparison with previously developed “hand-made” search trees. Among others, such an example is given with the NP-complete Cluster Editing problem (also known as Correlation Clustering on complete unweighted graphs), which asks for the minimum number of edge additions and deletions to create a graph which is a disjoint union of cliques. The hand-made search tree for Cluster Editing had worst-case size O(2.27^k), which now is improved to O(1.92^k) due to our new method. (Herein, k denotes the number of edge modifications allowed.)
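The kind of algorithm being generated can be seen in the straightforward hand-made search tree for Cluster Editing: find a conflict triple (an induced path u-v-w, i.e., edges uv and vw but no edge uw) and branch on the three edits that destroy it, giving an O(3^k) tree. The sketch below (our representation: a 0/1 adjacency matrix) shows that baseline; the paper's automatically generated case distinctions refine this branching down to O(1.92^k).

```python
def cluster_editing(adj, k):
    """Can the graph be turned into a disjoint union of cliques
    with at most k edge insertions/deletions? Naive O(3^k) branching."""
    n = len(adj)
    for v in range(n):
        for u in range(n):
            for w in range(u + 1, n):
                if u != v != w and adj[u][v] and adj[v][w] and not adj[u][w]:
                    if k == 0:          # conflict triple left, no budget
                        return False
                    # branch: delete uv, delete vw, or add uw
                    for a, b, val in ((u, v, 0), (v, w, 0), (u, w, 1)):
                        adj[a][b] = adj[b][a] = val
                        if cluster_editing(adj, k - 1):
                            return True
                        adj[a][b] = adj[b][a] = 1 - val   # undo the edit
                    return False
    return True  # no conflict triple: already a union of cliques

adj = [[0, 1, 1, 0],    # triangle 0-1-2 with pendant edge 2-3:
       [1, 0, 1, 0],    # one deletion suffices
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
print(cluster_editing(adj, 1))  # True
```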

118 citations


Journal ArticleDOI
TL;DR: The new algorithm MCS-M combines the extension of LEX M with the simplification of MCS, achieving all the results of LEX M in the same time complexity.
Abstract: We present a new algorithm, called MCS-M, for computing minimal triangulations of graphs. Lex-BFS, a seminal algorithm for recognizing chordal graphs, was the genesis for two other classical algorithms: LEX M and MCS. LEX M extends the fundamental concept used in Lex-BFS, resulting in an algorithm that not only recognizes chordality, but also computes a minimal triangulation of an arbitrary graph. MCS simplifies the fundamental concept used in Lex-BFS, resulting in a simpler algorithm for recognizing chordal graphs. The new algorithm MCS-M combines the extension of LEX M with the simplification of MCS, achieving all the results of LEX M in the same time complexity.
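For context, MCS itself takes only a few lines: repeatedly visit the unvisited vertex with the largest number of visited neighbors. A small sketch (graph as an adjacency dict; names ours). MCS-M generalizes the update rule to vertices reachable through paths of lower-weight vertices, recording the corresponding fill edges of a minimal triangulation.

```python
def mcs(graph):
    """Maximum Cardinality Search ordering. The reverse of this order
    is a perfect elimination ordering iff the graph is chordal."""
    weight = {v: 0 for v in graph}
    order = []
    while weight:
        v = max(weight, key=weight.get)   # ties broken arbitrarily
        order.append(v)
        del weight[v]
        for u in graph[v]:
            if u in weight:
                weight[u] += 1
    return order

g = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
print(mcs(g))  # e.g. ['a', 'b', 'c', 'd']
```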

108 citations


Journal ArticleDOI
TL;DR: A polynomial approximation scheme is given for the problem of scheduling on uniformly related parallel machines, for a large class of objective functions that depend only on the machine completion times, including minimizing the l_p norm of the vector of completion times.
Abstract: We give a polynomial approximation scheme for the problem of scheduling on uniformly related parallel machines for a large class of objective functions that depend only on the machine completion times, including minimizing the l_p norm of the vector of completion times. This generalizes and simplifies many previous results in this area.
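To make the objective concrete, here is a simple greedy heuristic, not the paper's scheme: jobs in longest-first order, each placed on the machine (speeds s_i, so a job of size p takes p/s_i time) that minimizes the resulting l_p norm of the machine completion times. All names and the greedy rule itself are our own illustration of the objective class.

```python
def greedy_lp_schedule(jobs, speeds, p=2):
    """Assign each job to the machine minimizing the resulting
    l_p norm of completion times; returns (loads, l_p norm)."""
    loads = [0.0] * len(speeds)
    for job in sorted(jobs, reverse=True):          # LPT order
        def norm_p_if(i):
            # l_p^p of the load vector if `job` goes on machine i
            return sum((loads[j] + (job / speeds[j]) * (j == i)) ** p
                       for j in range(len(speeds)))
        best = min(range(len(speeds)), key=norm_p_if)
        loads[best] += job / speeds[best]
    return loads, sum(l ** p for l in loads) ** (1 / p)

print(greedy_lp_schedule([4, 3, 3, 2], [1.0, 2.0], p=2))
```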

88 citations


Journal ArticleDOI
Klaus Jansen
TL;DR: In this paper, the authors studied the problem of scheduling a set of n independent malleable tasks on an arbitrary number m of parallel processors and proposed an asymptotic fully polynomial time approximation scheme.
Abstract: A malleable parallel task is one whose execution time is a function of the number of (identical) processors allotted to it. We study the problem of scheduling a set of n independent malleable tasks on an arbitrary number m of parallel processors and propose an asymptotic fully polynomial time approximation scheme. For any fixed ε > 0, the algorithm computes a non-preemptive schedule of length at most (1 + ε) times the optimum (plus an additive term) and has running time polynomial in n, m, and 1/ε.

76 citations


Journal ArticleDOI
TL;DR: This short paper gives a simple proof of Eppstein's characterization of the minor-closed graph families, including planar graphs, for which treewidth is bounded by a function of the diameter.
Abstract: Eppstein [5] characterized the minor-closed graph families for which the treewidth is bounded by a function of the diameter, which includes, e.g., planar graphs. This characterization has been used as the basis for several (approximation) algorithms on such graphs (e.g., [2] and [5]–[8]). The proof of Eppstein is complicated. In this short paper we obtain the same characterization with a simple proof. In addition, the relation we obtain between treewidth and diameter is slightly better and explicit.

65 citations


Journal ArticleDOI
TL;DR: A new combinatorial approach is presented which generates minimum cycle bases in time O(max{|E|^3, |E||V|^2 log |V|}) with a space requirement of Θ(|E|^2).
Abstract: The minimum cycle basis problem in a graph G = (V,E) is the task to construct a minimum length basis of its cycle vector space. A well-known algorithm by Horton of 1987 needs running time O(|V||E|^2.376). We present a new combinatorial approach which generates minimum cycle bases in time O(max{|E|^3, |E||V|^2 log |V|}) with a space requirement of Θ(|E|^2). This method is especially suitable for large sparse graphs of electric engineering applications since there, typically, |E| is close to linear in |V|.
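The object being minimized is easy to exhibit in its non-minimum form: fixing any spanning tree and closing each non-tree edge with its tree path yields a basis of |E| − |V| + 1 fundamental cycles. The sketch below (our representation: vertices 0..n−1, connected graph) constructs such a basis; the paper's contribution is finding the basis of minimum total length.

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Cycle basis from a BFS spanning tree rooted at 0:
    one cycle (as a vertex list) per non-tree edge."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, tree = {0: None}, set()
    q = deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree.add(frozenset((u, v)))
                q.append(v)

    def path_to_root(v):
        path = [v]
        while parent[v] is not None:
            v = parent[v]
            path.append(v)
        return path

    basis = []
    for u, v in edges:
        if frozenset((u, v)) not in tree:
            pu, pv = path_to_root(u), path_to_root(v)
            while len(pu) > 1 and len(pv) > 1 and pu[-2] == pv[-2]:
                pu.pop()                     # trim down to the LCA
                pv.pop()
            basis.append(pu + pv[-2::-1])    # u .. lca .. v
    return basis

# square plus one diagonal: |E| - |V| + 1 = 2 basis cycles
print(fundamental_cycle_basis(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```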

64 citations


Journal ArticleDOI
TL;DR: An algorithm for computing the quartet distance between two unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n), which is better than the previous best algorithm for the problem.
Abstract: Evolutionary trees describing the relationship for a set of species are central in evolutionary biology, and quantifying differences between evolutionary trees is therefore an important task. The quartet distance is a distance measure between trees previously proposed by Estabrook, McMorris, and Meacham. The quartet distance between two unrooted evolutionary trees is the number of quartet topology differences between the two trees, where a quartet topology is the topological subtree induced by four species. In this paper we present an algorithm for computing the quartet distance between two unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n). The previous best algorithm for the problem uses time O(n^2).

Journal ArticleDOI
TL;DR: The first non-trivial approximation algorithm is given for a fault tolerant version of the metric facility location problem, having an approximation guarantee of 3 · H_k, where k is the maximum requirement and H_k is the kth harmonic number.
Abstract: We consider a fault tolerant version of the metric facility location problem in which every city, j, is required to be connected to r_j facilities. We give the first non-trivial approximation algorithm for this problem, having an approximation guarantee of 3 · H_k, where k is the maximum requirement and H_k is the kth harmonic number. Our algorithm is along the lines of [2] for the generalized Steiner network problem. It runs in phases, and each phase, using a generalization of the primal–dual algorithm of [5] for the metric facility location problem, reduces the maximum residual requirement by one.

Journal ArticleDOI
TL;DR: A convex, or nonlinear, separable minimization problem with constraints that are dual to the minimum cost network flow problem is considered and it is shown how to reduce this problem to a polynomial number of minimum s,t-cut problems.
Abstract: We consider a convex, or nonlinear, separable minimization problem with constraints that are dual to the minimum cost network flow problem. We show how to reduce this problem to a polynomial number of minimum s,t-cut problems. The solution of the reduced problem utilizes the technique for solving integer programs on monotone inequalities in three variables, and a so-called proximity-scaling technique that reduces a convex problem to its linear objective counterpart. The problem is solved in this case in a logarithmic number of calls, O(log U), to a minimum cut procedure, where U is the range of the variables. For a convex problem on n variables the minimum cut is solved on a graph with O(n^2) nodes. Among the consequences of this result is a new cut-based scaling algorithm for the minimum cost network flow problem. When the objective function is an arbitrary nonlinear function we demonstrate that this constrained problem is solved in pseudopolynomial time by applying a minimum cut procedure to a graph on O(nU) nodes.

Journal ArticleDOI
Tetsuo Shibuya
TL;DR: This paper proposes a new data structure, a generalization of the parameterized suffix tree (p-suffix tree for short) introduced by Baker, together with the first on-line algorithm for constructing it; the algorithm achieves linear time when used to analyze RNA and DNA sequences.
Abstract: In molecular biology, it is said that two biological sequences tend to have similar properties if they have similar three-dimensional structures. Hence, it is very important to find not only similar sequences in the string sense, but also structurally similar sequences from databases. In this paper we propose a new data structure that is a generalization of a parameterized suffix tree (p-suffix tree for short) introduced by Baker. We call it the structural suffix tree or s-suffix tree for short. The s-suffix tree can be used for finding structurally related patterns of RNA or single-stranded DNA. Furthermore, we propose an O(n(log|Σ| + log|Π|)) on-line algorithm for constructing it, where n is the sequence length, |Σ| is the size of the normal alphabet, and |Π| is that of the alphabet called “parameter,” which is related to the structure of the sequence. Our algorithm achieves linear time when it is used to analyze RNA and DNA sequences. Furthermore, as an algorithm for constructing the p-suffix tree, it is the first on-line algorithm, though the computing bound of our algorithm is the same as that of Kosaraju’s best-known algorithm. The results of computational experiments using actual RNA and DNA sequences are also given to demonstrate our algorithm’s practicality.

Journal ArticleDOI
TL;DR: Hardness results are proved for approximating set splitting problems and also instances of satisfiability problems which have no “mixed” clauses, i.e., every clause has either all its literals unnegated or all of them negated.
Abstract: We prove hardness results for approximating set splitting problems and also instances of satisfiability problems which have no “mixed” clauses, i.e., every clause has either all its literals unnegated or all of them negated. Results of Hastad imply tight hardness results for set splitting when all sets have size exactly $k \ge 4$ elements and also for non-mixed satisfiability problems with exactly $k$ literals in each clause for $k \ge 4$. We consider the case $k=3$. For the MAX E3-SET SPLITTING, problem in which all sets have size exactly 3, we prove an NP-hardness result for approximating within any factor better than ${\frac{19}{20}}$. This result holds even for satisfiable instances of MAX E3-SET SPLITTING, and is based on a PCP construction due to Hastad. For “non-mixed MAX 3SAT,” we give a PCP construction which is a slight variant of the one given by Hastad for MAX 3SAT, and use it to prove the NP-hardness of approximating within a factor better than ${\frac{11}{12}}$.

Journal ArticleDOI
TL;DR: An approximation algorithm with performance ratio (ρ_ST + 2) is presented, where ρ_ST is the performance ratio of any approximation algorithm for the minimum Steiner tree problem; the ratio improves to 2 when all nodes in the graph are sources.
Abstract: We study a capacitated network design problem with applications in local access network design. Given a network, the problem is to route flow from several sources to a sink and to install capacity on the edges to support the flow at minimum cost. Capacity can be purchased only in multiples of a fixed quantity. All the flow from a source must be routed in a single path to the sink. This NP-hard problem generalizes the Steiner tree problem and also more effectively models the applications traditionally formulated as capacitated tree problems. We present an approximation algorithm with performance ratio (ρ_ST + 2) where ρ_ST is the performance ratio of any approximation algorithm for the minimum Steiner tree problem. When all sources have unit demand, the ratio improves to (ρ_ST + 1) and, in particular, to 2 when all nodes in the graph are sources.

Journal ArticleDOI
Evanthia Papadopoulou
TL;DR: It is shown that the size of the Hausdorff Voronoi diagram is Θ(n + m), where n is the number of points on the convex hulls of the given clusters, and m is the number of crucial supporting segments between pairs of crossing clusters.
Abstract: We study the Hausdorff Voronoi diagram of point clusters in the plane, a generalization of Voronoi diagrams based on the Hausdorff distance function. We derive a tight combinatorial bound on the structural complexity of this diagram and present a plane sweep algorithm for its construction. In particular, we show that the size of the Hausdorff Voronoi diagram is Θ(n + m), where n is the number of points on the convex hulls of the given clusters, and m is the number of crucial supporting segments between pairs of crossing clusters. The plane sweep algorithm generalizes the standard plane sweep paradigm for the construction of Voronoi diagrams with the ability to handle disconnected Hausdorff Voronoi regions. The Hausdorff Voronoi diagram finds direct application in the problem of computing the critical area of a VLSI layout, a measure reflecting the sensitivity of the VLSI design to spot defects during manufacturing.
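The distance function underlying the diagram is simple to state for finite clusters; for a single query point it reduces to the distance to the farthest point of a cluster, which is the predicate a Hausdorff Voronoi region encodes. A tiny sketch with made-up data:

```python
def hausdorff(P, Q):
    """Hausdorff distance between finite point sets P and Q."""
    def d(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return max(max(min(d(p, q) for q in Q) for p in P),
               max(min(d(p, q) for p in P) for q in Q))

# which cluster owns the query point, Hausdorff-wise?
clusters = {'A': [(0, 0), (1, 0)], 'B': [(5, 5)]}
query = [(0.5, 1.0)]
print(min(clusters, key=lambda c: hausdorff(query, clusters[c])))  # A
```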

Journal ArticleDOI
TL;DR: This paper gives a 4-speed, 1-approximation algorithm, improving the previous 6-speed, 1-approximation algorithm, and a 1/α-speed, polynomial time algorithm with an approximation ratio of 1/(1 – α).
Abstract: In this paper we study the following problem. There are n pages which clients can request at any time. The arrival times of requests for pages are known in advance. Several requests for the same page may arrive at different times. There is a server that needs to compute a good broadcast schedule. Outputting a page satisfies all outstanding requests for the page. The goal is to minimize the average waiting time of a client. This problem has recently been shown to be NP-hard. For any fixed α, 0 < α ≤ 1/2, we give a 1/α-speed, polynomial time algorithm with an approximation ratio of 1/(1 – α). For example, setting α = 1/2 gives a 2-speed, 2-approximation algorithm. In addition, we give a 4-speed, 1-approximation algorithm, improving the previous 6-speed, 1-approximation algorithm.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a facility location problem where the objective is to select a given number k of locations from a discrete set of n candidates, such that the average distance between selected locations is maximized.
Abstract: We consider a facility location problem, where the objective is to “disperse” a number of facilities, i.e., select a given number k of locations from a discrete set of n candidates, such that the average distance between selected locations is maximized. In particular, we present algorithmic results for the case where vertices are represented by points in d-dimensional space, and edge weights correspond to rectilinear distances. Problems of this type have been considered before, with the best result being an approximation algorithm with performance ratio 2. For the case where k is fixed, we establish a linear-time algorithm that finds an optimal solution. For the case where k is part of the input, we present a polynomial-time approximation scheme.
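For small inputs the problem statement is directly executable: enumerate all k-element subsets and maximize the total (equivalently, average) pairwise rectilinear distance. This O(n^k) brute force is our illustration of the objective only; the paper's point is a linear-time algorithm for fixed k and a PTAS otherwise.

```python
from itertools import combinations

def max_dispersion(points, k):
    """k-subset of points maximizing total pairwise L1 distance."""
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return max(combinations(points, k),
               key=lambda S: sum(l1(a, b) for a, b in combinations(S, 2)))

print(max_dispersion([(0, 0), (1, 1), (4, 0), (0, 4)], 3))
```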

Journal ArticleDOI
TL;DR: The priority algorithm framework introduced by Borodin, Nielsen, and Rackoff is extended to define “greedy-like” algorithms for the (uncapacitated) facility location and set cover problems; the resulting lower bounds are orthogonal to complexity considerations and hence apply to algorithms that are not necessarily polynomial time.
Abstract: We apply and extend the priority algorithm framework introduced by Borodin, Nielsen, and Rackoff to define “greedy-like” algorithms for the (uncapacitated) facility location problems and set cover problems. These problems have been the focus of extensive research from the point of view of approximation algorithms, and for both problems greedy-like algorithms have been proposed and analyzed. The priority algorithm definitions are general enough to capture a broad class of algorithms that can be characterized as “greedy-like”, while it is still possible to derive non-trivial lower bounds on the approximability of the problems by algorithms in such a class. Our results are orthogonal to complexity considerations, and hence apply to algorithms that are not necessarily polynomial time.

Journal ArticleDOI
TL;DR: It is proved that sorting 13, 14 and 22 elements requires 34, 38 and 71 comparisons, respectively, which solves a long-standing problem posed by Knuth in his famous book The Art of Computer Programming, Volume 3, Sorting and Searching.
Abstract: We prove that sorting 13, 14 and 22 elements requires 34, 38 and 71 comparisons, respectively. This solves a long-standing problem posed by Knuth in his famous book The Art of Computer Programming, Volume 3, Sorting and Searching. The results are due to an efficient implementation of an algorithm for counting linear extensions of a given partial order. We also present some useful heuristics which allow us to decrease the running time of the implementation.
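The counting primitive at the heart of the method fits in a few lines: the number of linear extensions of a partial order, computed by dynamic programming over downsets encoded as bitmasks. This takes O(2^n · n) time and is our plain rendering of the primitive; the paper's contribution is an implementation and heuristics efficient enough to settle n = 13, 14, 22.

```python
from functools import lru_cache

def count_linear_extensions(n, precedes):
    """Linear extensions of a partial order on 0..n-1;
    (a, b) in `precedes` means a must come before b."""
    preds = [0] * n
    for a, b in precedes:
        preds[b] |= 1 << a

    @lru_cache(maxsize=None)
    def count(placed):
        if placed == (1 << n) - 1:
            return 1
        total = 0
        for v in range(n):
            # v may come next if unplaced and all its predecessors placed
            if not (placed >> v & 1) and (preds[v] & ~placed) == 0:
                total += count(placed | 1 << v)
        return total

    return count(0)

print(count_linear_extensions(3, {(0, 1), (0, 2)}))  # 2
```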

Journal ArticleDOI
TL;DR: In this article, the authors gave an O(φk · n2) fixed parameter tractable algorithm for the 1-Sided Crossing Minimization problem, where k is the parameter of the problem: the number of allowed edge crossings.
Abstract: We give an O(φ^k · n^2) fixed parameter tractable algorithm for the 1-Sided Crossing Minimization problem. The constant φ in the running time is the golden ratio φ = (1+√5)/2 ≈ 1.618. The parameter k is the number of allowed edge crossings.
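The quantity the parameter k bounds has a compact definition: with the top layer's order fixed, two edges cross exactly when the relative orders of their endpoints on the two layers disagree. A small sketch (ours) of the crossing count for a candidate bottom-layer order:

```python
def crossings(order, edges):
    """Crossings in a 2-layer drawing: top vertices are integers drawn
    in numeric order, `order` is the bottom layer's left-to-right order,
    edges are (top, bottom) pairs."""
    pos = {v: i for i, v in enumerate(order)}
    count = 0
    for i, (t1, b1) in enumerate(edges):
        for t2, b2 in edges[i + 1:]:
            if (t1 - t2) * (pos[b1] - pos[b2]) < 0:  # orders disagree
                count += 1
    return count

edges = [(0, 'b'), (1, 'a'), (2, 'a')]
print(crossings(['a', 'b'], edges), crossings(['b', 'a'], edges))  # 2 0
```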

Journal ArticleDOI
John H. Reif, Zheng Sun
TL;DR: The first known computational complexity hardness result for the 3D version of this problem is provided; the problem is PSPACE-hard; and the first known efficient approximation algorithms with bounded error are given.
Abstract: This paper investigates the problem of time-optimum movement planning in two and three dimensions for a point robot which has bounded control velocity through a set of n polygonal regions of given translational flow velocities. This intriguing geometric problem has immediate applications to macro-scale motion planning for ships, submarines, and airplanes in the presence of significant flows of water or air. Also, it is a central motion planning problem for many of the meso-scale and micro-scale robots that have been constructed recently, that have environments with significant flows that affect their movement. In spite of these applications, there is very little literature on this problem, and prior work provided neither an upper bound on its computational complexity nor even a decision algorithm. It can easily be seen that an optimum path for the 2D version of this problem can consist of at least an exponential number of distinct segments through flow regions. We provide the first known computational complexity hardness result for the 3D version of this problem; we show the problem is PSPACE-hard. We give the first known decision algorithm for the 2D flow path problem, but this decision algorithm has very high computational complexity. We also give the first known efficient approximation algorithms with bounded error.

Journal ArticleDOI
TL;DR: These results improve the bounds known so far for d = 2 and d = 3, and are the first results with bounds that are not exponential in the dimension.
Abstract: We consider the d-dimensional cube packing problem (d-CPP): given a list L of d-dimensional cubes and (an unlimited quantity of) d-dimensional unit-capacity cubes, called bins, find a packing of L into the minimum number of bins. We present two approximation algorithms for d-CPP, for fixed d. The first algorithm has an asymptotic performance bound that can be made arbitrarily close to 2 – (1/2)^d. The second algorithm is an improvement of the first and has an asymptotic performance bound that can be made arbitrarily close to 2 – (2/3)^d. To our knowledge, these results improve the bounds known so far for d = 2 and d = 3, and are the first results with bounds that are not exponential in the dimension.

Journal ArticleDOI
TL;DR: The first algorithm for weighted sliding labels is presented, and it is shown how an O(n log n)-time factor-2 approximation algorithm and a PTAS for fixed-position models can be extended to handle the weighted case.
Abstract: Annotating maps, graphs, and diagrams with pieces of text is an important step in information visualization that is usually referred to as label placement. We define nine label-placement models for labeling points with axis-parallel rectangles given a weight for each point. There are two groups: fixed-position models and slider models. We aim to maximize the weight sum of those points that receive a label. We first compare our models by giving bounds for the ratios between the weights of maximum-weight labelings in different models. Then we present algorithms for labeling n points with unit-height rectangles. We show how an O(n log n)-time factor-2 approximation algorithm and a PTAS for fixed-position models can be extended to handle the weighted case. Our main contribution is the first algorithm for weighted sliding labels. Its approximation factor is (2 + ε), it runs in O(n^2/ε) time and uses O(n/ε) space. We show that, unlike for fixed-position models, even the projection to one dimension remains NP-hard. For slider models we also investigate some special cases, namely (a) the number of different point weights is bounded, (b) all labels are unit squares, and (c) the ratio between maximum and minimum label height is bounded.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of scheduling dynamically arriving jobs in a non-clairvoyant setting, that is, when the size of a job remains unknown until the job finishes execution.
Abstract: We consider the problem of scheduling dynamically arriving jobs in a non-clairvoyant setting, that is, when the size of a job remains unknown until the job finishes execution. Our focus is on minimizing the mean slowdown, where the slowdown (also known as stretch) of a job is defined as the ratio of the flow time to the size of the job. We use resource augmentation in terms of allowing a faster processor to the online algorithm to make up for its lack of knowledge of job sizes. Our main result is that the Shortest Elapsed Time First (SETF) algorithm, a close variant of which is used in the Windows NT and Unix operating system scheduling policies, is a $(1+\epsilon)$-speed, $O((1/\epsilon)^5 \log^2 B)$-competitive algorithm for minimizing mean slowdown non-clairvoyantly, when $B$ is the ratio between the largest and smallest job sizes. In a sense, this provides a theoretical justification of the effectiveness of an algorithm widely used in practice. On the other hand, we also show that any $O(1)$-speed algorithm, deterministic or randomized, is $\Omega(\min(n,\log B))$-competitive. The motivation for resource augmentation is supported by an $\Omega(\min(n,B))$ lower bound on the competitive ratio without any speedup. For the static case, i.e., when all jobs arrive at time 0, we show that SETF is $O(\log{B})$ competitive without any resource augmentation and also give a matching $\Omega(\log{B})$ lower bound on the competitiveness.
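SETF itself is easy to state: at every instant, run the released, unfinished job that has received the least processing so far, never inspecting job sizes. The following discrete-time simulation (step size, tie-breaking, and names are our choices) reports the mean slowdown of a job set:

```python
def setf_mean_slowdown(jobs, speed=1.0, dt=0.001):
    """Simulate SETF on one processor of the given speed.
    `jobs` is a list of (release_time, size) pairs."""
    n = len(jobs)
    elapsed = [0.0] * n          # processing received so far
    finish = [None] * n
    t = 0.0
    while any(f is None for f in finish):
        active = [i for i in range(n)
                  if jobs[i][0] <= t and finish[i] is None]
        if active:
            i = min(active, key=lambda j: elapsed[j])  # least elapsed first
            elapsed[i] += speed * dt
            if elapsed[i] >= jobs[i][1]:
                finish[i] = t + dt
        t += dt
    return sum((finish[i] - jobs[i][0]) / jobs[i][1]
               for i in range(n)) / n

# a long job joined shortly after by a short one: SETF favors the short job
print(setf_mean_slowdown([(0.0, 1.0), (0.1, 0.1)]))
```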

Journal ArticleDOI
TL;DR: An O(n log n)-time algorithm is obtained when B is unbounded, with even better algorithms for m = 2 and for k = 1; these improve most of the corresponding previous algorithms for the respective special cases and lead to improved approximation schemes for the general problem.
Abstract: We study the scheduling of a set of n jobs, each characterized by a release (arrival) time and a processing time, for a batch processing machine capable of running at most B jobs at a time. We obtain an O(n log n)-time algorithm when B is unbounded. When there are only m distinct release times and the inputs are integers, we obtain an O(n(BR_max)^(m-1)(2/m)^(m-3))-time algorithm where R_max is the difference between the maximum and minimum release times. When there are k distinct processing times and m release times, we obtain an O(n log m + k^(k+2) B^(k+1) m^2 log m)-time algorithm. We obtain even better algorithms for m = 2 and for k = 1. These algorithms improve most of the corresponding previous algorithms for the respective special cases and lead to improved approximation schemes for the general problem.

Journal ArticleDOI
TL;DR: In this paper, mathematical evidence is given that for the problem under study it pays to invest in information systems, by assuming that at the release of a ride, only information about the source is given.
Abstract: In on-line dial-a-ride problems servers are traveling in some metric space to serve requests for rides which are presented over time. Each ride is characterized by two points in the metric space, a source, the starting point of the ride, and a destination, the endpoint of the ride. Usually it is assumed that at the release of a request, complete information about the ride is known. We diverge from this by assuming that at the release of a ride, only information about the source is given. At visiting the source, the information about the destination will be made available to the servers. For many practical problems, our model is closer to reality. However, we feel that the lack of information is often a choice, rather than inherent to the problem: additional information can be obtained, but this requires investments in information systems. In this paper we give mathematical evidence that for the problem under study it pays to invest in information systems.

Journal ArticleDOI
TL;DR: A tree (tour) cover of an edge-weighted graph is a set of edges which forms a tree (closed walk) and covers every other edge in the graph; approximation algorithms with worst-case ratio 3 are presented for both problems, improving the previous ratios of 3.55 (tree cover) and 5.5 (tour cover).
Abstract: A tree (tour) cover of an edge-weighted graph is a set of edges which forms a tree (closed walk) and covers every other edge in the graph. Arkin et al. [1] give approximation algorithms with ratios 3.55 (tree cover) and 5.5 (tour cover). We present algorithms with a worst-case ratio of 3 for both problems.

Journal ArticleDOI
TL;DR: A quasi-polynomial time approximation scheme for the Euclidean version of the Degree-Restricted MST Problem is developed by adapting techniques used previously by Arora for approximating TSP; the same techniques are extended to give a PTAS for the Euclidean version of the Red–Blue Separation Problem.
Abstract: We develop a quasi-polynomial time approximation scheme for the Euclidean version of the Degree-Restricted MST Problem by adapting techniques used previously by Arora for approximating TSP. Given n points in the plane, d = 3 or 4, and ε > 0, the scheme finds an approximation with cost within 1 + ε of the lowest cost spanning tree with the property that all nodes have degree at most d. We also develop a polynomial time approximation scheme for the Euclidean version of the Red–Blue Separation Problem, again extending Arora's techniques. Given ε > 0, the scheme finds an approximation with cost within 1 + ε of the cost of the optimum separating polygon of the input nodes, in nearly linear time.