
Showing papers in "ACM Transactions on Algorithms in 2007"


Journal ArticleDOI
TL;DR: This work proposes a new algorithm for the multiple constant multiplication problem that produces solutions requiring up to 20% fewer additions and subtractions than the best previously known algorithm, and that can handle problem sizes as large as 100 32-bit constants in a time acceptable for most applications.
Abstract: A variable can be multiplied by a given set of fixed-point constants using a multiplier block that consists exclusively of additions, subtractions, and shifts. The generation of a multiplier block from the set of constants is known as the multiple constant multiplication (MCM) problem. Finding the optimal solution, namely, the one with the fewest additions and subtractions, is known to be NP-complete. We propose a new algorithm for the MCM problem, which produces solutions that require up to 20% fewer additions and subtractions than the best previously known algorithm. At the same time our algorithm, in contrast to the closest competing algorithm, is not limited by the constant bitwidths. We present our algorithm using a unifying formal framework for the best, graph-based MCM algorithms and provide a detailed runtime analysis and experimental evaluation. We show that our algorithm can handle problem sizes as large as 100 32-bit constants in a time acceptable for most applications. The implementation of the new algorithm is available at www.spiral.net.
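As an editorial illustration of the add/shift idea only (a minimal sketch, not the paper's graph-based MCM algorithm; the function name and the constant 23 are ours): multiplication by a fixed constant decomposes along the constant's binary expansion, one addition per set bit. MCM algorithms go further by sharing intermediate terms such as 3x or 5x across several constants.

```python
def shift_add_multiply(x: int, constant: int) -> int:
    """Compute x * constant using only shifts and additions."""
    result = 0
    shift = 0
    while constant:
        if constant & 1:
            result += x << shift   # add a shifted copy of x
        constant >>= 1
        shift += 1
    return result

# Each set bit of 23 = 0b10111 costs one addition here; graph-based
# MCM algorithms would instead reuse shared subexpressions.
assert shift_add_multiply(7, 23) == 7 * 23
```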

421 citations


Journal ArticleDOI
TL;DR: In the cell probe model, the O(lg lg m) additive term can be removed from the space bound, answering a question raised by Fich and Miltersen [1995] and Pagh [2001].
Abstract: We consider the indexable dictionary problem, which consists of storing a set S ⊆ {0,…,m − 1} for some integer m while supporting the operations of rank(x), which returns the number of elements in S that are less than x if x ∈ S, and −1 otherwise; and select(i), which returns the ith smallest element in S. We give a data structure that supports both operations in O(1) time on the RAM model and requires B(n, m) + o(n) + O(lg lg m) bits to store a set of size n, where B(n, m) = ⌈lg (m choose n)⌉ is the minimum number of bits required to store any n-element subset from a universe of size m. Previous dictionaries taking this space only supported (yes/no) membership queries in O(1) time. In the cell probe model we can remove the O(lg lg m) additive term in the space bound, answering a question raised by Fich and Miltersen [1995] and Pagh [2001]. We present extensions and applications of our indexable dictionary data structure, including: —an information-theoretically optimal representation of a k-ary cardinal tree that supports standard operations in constant time; —a representation of a multiset of size n from {0,…,m − 1} in B(n, m + n) + o(n) bits that supports (appropriate generalizations of) rank and select operations in constant time; and —a representation of a sequence of n nonnegative integers summing up to m in B(n, m + n) + o(n) bits that supports prefix sum queries in constant time.
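A toy, non-succinct sketch of the rank/select interface defined above (a plain bit array plus prefix counts, which costs Θ(m) machine words rather than B(n, m) + o(n) + O(lg lg m) bits; class and variable names are ours):

```python
class ToyIndexableDictionary:
    def __init__(self, elements, m):
        self.bits = [0] * m
        for e in elements:
            self.bits[e] = 1
        # prefix[i] = number of stored elements smaller than i
        self.prefix = [0] * (m + 1)
        for i in range(m):
            self.prefix[i + 1] = self.prefix[i] + self.bits[i]
        self.members = sorted(elements)

    def rank(self, x):
        # number of elements < x if x is in S, otherwise -1
        return self.prefix[x] if self.bits[x] else -1

    def select(self, i):
        # the i-th smallest element (0-based)
        return self.members[i]

d = ToyIndexableDictionary({3, 5, 8}, m=10)
assert d.rank(5) == 1 and d.rank(4) == -1 and d.select(2) == 8
```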

415 citations


Journal ArticleDOI
TL;DR: This work combines a compressed representation of integer sequences with a compression boosting technique to design compressed full-text indexes that scale well with the size of the input alphabet Σ; the resulting FM-index variant is the first to remove the alphabet-size dependence from all query times.
Abstract: Given a sequence S = s1s2…sn of integers smaller than r = O(polylog(n)), we show how S can be represented using nH0(S) + o(n) bits, so that we can know any sq, as well as answer rank and select queries on S, in constant time. H0(S) is the zero-order empirical entropy of S, and nH0(S) provides an information-theoretic lower bound on the bit storage of any sequence S via a fixed encoding of its symbols. This extends previous results on binary sequences, and improves previous results on general sequences where those queries are answered in O(log r) time. For larger r, we can still represent S in nH0(S) + o(n log r) bits and answer queries in O(log r/log log n) time. Another contribution of this article is to show how to combine our compressed representation of integer sequences with a compression boosting technique to design compressed full-text indexes that scale well with the size of the input alphabet Σ. Specifically, we design a variant of the FM-index that indexes a string T[1, n] within nHk(T) + o(n) bits of storage, where Hk(T) is the kth-order empirical entropy of T. This space bound holds simultaneously for all k ≤ α log_|Σ| n and constant 0 < α < 1.
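A simple stand-in for the access/rank/select interface above (not the paper's entropy-compressed structure): per-symbol prefix counts and position lists give constant-time queries at the cost of O(n·r) words; all names are illustrative.

```python
class ToySequenceIndex:
    def __init__(self, seq, sigma):
        self.seq = list(seq)
        # counts[c][i] = occurrences of symbol c in seq[:i]
        self.counts = [[0] * (len(seq) + 1) for _ in range(sigma)]
        self.positions = [[] for _ in range(sigma)]
        for i, s in enumerate(seq):
            for c in range(sigma):
                self.counts[c][i + 1] = self.counts[c][i] + (s == c)
            self.positions[s].append(i)

    def access(self, q):            # s_q
        return self.seq[q]

    def rank(self, c, q):           # occurrences of c in seq[:q]
        return self.counts[c][q]

    def select(self, c, i):         # position of the i-th c (0-based)
        return self.positions[c][i]

t = ToySequenceIndex([0, 2, 1, 2, 2], sigma=3)
assert t.rank(2, 4) == 2 and t.select(2, 2) == 4 and t.access(1) == 2
```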

399 citations


Journal ArticleDOI
TL;DR: Skip graphs, introduced in this paper, are a distributed data structure based on skip lists that provides the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time.
Abstract: Skip graphs are a novel distributed data structure, based on skip lists, that provides the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, simple and straightforward algorithms can be used to construct a skip graph, insert new nodes into it, search it, and detect and repair errors introduced by node failures.

324 citations


Journal ArticleDOI
TL;DR: This work relaxes the problem to allow an additional operation, substring moves, and approximates this string edit distance up to a factor of O(log n log* n); the result is the first known significantly subquadratic algorithm for a string edit distance problem in which the distance involves nontrivial alignments.
Abstract: The edit distance between two strings S and R is defined to be the minimum number of character inserts, deletes, and changes needed to convert R to S. Given a text string t of length n, and a pattern string p of length m, informally, the string edit distance matching problem is to compute the smallest edit distance between p and substrings of t. We relax the problem so that: (a) we allow an additional operation, namely, substring moves; and (b) we allow approximation of this string edit distance. Our result is a near-linear time deterministic algorithm to produce a factor of O(log n log* n) approximation to the string edit distance with moves. This is the first known significantly subquadratic algorithm for a string edit distance problem in which the distance involves nontrivial alignments. Our results are obtained by embedding strings into L1 vector space using a simplified parsing technique, which we call edit-sensitive parsing (ESP).
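To illustrate the embedding idea only (fixed-length k-grams, not edit-sensitive parsing; ESP's hierarchical, consistent parsing is what yields the proven approximation guarantee), strings can be mapped to substring-count vectors and compared under the L1 norm:

```python
from collections import Counter

def kgram_vector(s: str, k: int) -> Counter:
    """Map a string to its vector of k-gram counts."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def l1_distance(u: Counter, v: Counter) -> int:
    keys = set(u) | set(v)
    return sum(abs(u[key] - v[key]) for key in keys)

# One substring move between the two strings perturbs only the
# k-grams straddling the moved block's boundaries.
a, b = "the quick brown fox", "quick the brown fox"
print(l1_distance(kgram_vector(a, 3), kgram_vector(b, 3)))
```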

258 citations


Journal ArticleDOI
TL;DR: This paper examines two different mechanisms for saving power in battery-operated embedded systems and gives an offline algorithm that is within a constant factor of the optimal algorithm, as well as an online algorithm with a constant competitive ratio.
Abstract: This article examines two different mechanisms for saving power in battery-operated embedded systems. The first strategy is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing, and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of 2 of the optimal algorithm. We also give an online algorithm with a constant competitive ratio.
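A worked micro-example of the two mechanisms under assumed parameters (P(s) = s^3, idle power 1, and wake-up cost 5 are illustrative, not from the paper): by convexity of P, an isolated job is cheapest at the slowest feasible constant speed, and an idle gap is bridged by whichever of staying idle or sleeping-and-waking costs less.

```python
ALPHA = 3          # P(s) = s ** ALPHA: convex, nondecreasing power curve
IDLE_POWER = 1.0   # power drawn while active but idle
WAKE_COST = 5.0    # fixed energy to return from the sleep state

def job_energy(work: float, release: float, deadline: float) -> float:
    """Energy to finish `work` at the slowest feasible constant speed.

    By convexity of P, the constant speed work/(deadline - release)
    minimizes energy for an isolated job.
    """
    speed = work / (deadline - release)
    return (speed ** ALPHA) * (deadline - release)

def gap_energy(gap: float) -> float:
    """Cheaper of idling through a gap or sleeping and waking up."""
    return min(IDLE_POWER * gap, WAKE_COST)

print(job_energy(work=4.0, release=0.0, deadline=2.0))  # speed 2 -> 16.0
print(gap_energy(3.0), gap_energy(10.0))                # 3.0, 5.0
```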

238 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a new algorithm to enumerate the k shortest simple (loopless) paths in a directed graph and report on its implementation, which is based on a replacement paths algorithm proposed by Hershberger and Suri.
Abstract: We describe a new algorithm to enumerate the k shortest simple (loopless) paths in a directed graph and report on its implementation. Our algorithm is based on a replacement paths algorithm proposed by Hershberger and Suri [2001], and can yield a factor Θ(n) improvement for this problem. But there is a caveat: The fast replacement paths subroutine is known to fail for some directed graphs. However, the failure is easily detected, and so our k shortest paths algorithm optimistically uses the fast subroutine, then switches to a slower but correct algorithm if a failure is detected. Thus, the algorithm achieves its Θ(n) speed advantage only when the optimism is justified. Our empirical results show that the replacement paths failure is a rare phenomenon, and the new algorithm outperforms the current best algorithms; the improvement can be substantial in large graphs. For instance, on GIS map data with about 5,000 nodes and 12,000 edges, our algorithm is 4--8 times faster. In synthetic graphs modeling wireless ad hoc networks, our algorithm is about 20 times faster.
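For reference, a compact version of the classic baseline (Yen's algorithm) that the paper accelerates; the paper's contribution is to replace the many single-source searches below with a fast, optimistic replacement-paths subroutine. The graph encoding and all names are ours.

```python
import heapq

def dijkstra(graph, src, dst, banned_edges=frozenset(), banned_nodes=frozenset()):
    """Shortest path in an adjacency-dict graph, avoiding banned edges/nodes."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == dst:
            return d, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, {}).items():
            if v in seen or v in banned_nodes or (u, v) in banned_edges:
                continue
            heapq.heappush(heap, (d + w, v, path + [v]))
    return None

def yen_k_shortest(graph, src, dst, k):
    first = dijkstra(graph, src, dst)
    if first is None:
        return []
    paths, candidates = [first], []
    while len(paths) < k:
        _, prev = paths[-1]
        for i in range(len(prev) - 1):          # each spur node of the last path
            root = prev[:i + 1]
            banned_edges = {(p[i], p[i + 1]) for _, p in paths
                            if len(p) > i + 1 and p[:i + 1] == root}
            banned_nodes = set(root[:-1])       # keep spur paths simple
            spur = dijkstra(graph, root[-1], dst, banned_edges, banned_nodes)
            if spur:
                root_cost = sum(graph[root[j]][root[j + 1]] for j in range(i))
                cand = (root_cost + spur[0], root + spur[1][1:])
                if cand not in candidates and cand not in paths:
                    heapq.heappush(candidates, cand)
        if not candidates:
            break
        paths.append(heapq.heappop(candidates))
    return paths

g = {"s": {"a": 1, "b": 4}, "a": {"b": 1, "t": 5}, "b": {"t": 1}, "t": {}}
for cost, path in yen_k_shortest(g, "s", "t", 3):
    print(cost, path)   # (3, s-a-b-t), (5, s-b-t), (6, s-a-t)
```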

217 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of selecting a low-cost s - t path in a graph where the edge costs are a secret, known only to the various economic agents who own them.
Abstract: We consider the problem of selecting a low-cost s - t path in a graph where the edge costs are a secret, known only to the various economic agents who own them. To solve this problem, Nisan and Ronen applied the celebrated Vickrey-Clarke-Groves (VCG) mechanism, which pays a premium to induce the edges so as to reveal their costs truthfully. We observe that this premium can be unacceptably high. There are simple instances where the mechanism pays Θ(n) times the actual cost of the path, even if there is an alternate path available that costs only (1 + ε) times as much. This inspires the frugal path problem, which is to design a mechanism that selects a path and induces truthful cost revelation, without paying such a high premium. This article contributes negative results on the frugal path problem. On two large classes of graphs, including those having three node-disjoint s - t paths, we prove that no reasonable mechanism can always avoid paying a high premium to induce truthtelling. In particular, we introduce a general class of min function mechanisms, and show that all min function mechanisms can be forced to overpay just as badly as VCG. Meanwhile, we prove that every truthful mechanism satisfying some reasonable properties is a min function mechanism. Our results generalize to the problem of hiring a team to complete a task, where the analog of a path in the graph is a subset of the agents constituting a team capable of completing the task.
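A small sketch of the VCG overpayment phenomenon described above (Dijkstra-based threshold payments; the graph and numbers are illustrative): the winning path of cost 2 is paid 2.2 in total even though an alternative of cost 2.1 exists, and with longer winning paths the total premium grows roughly linearly in the path length.

```python
import heapq

def dijkstra_cost(edges, s, t, banned=None):
    """Cost of the shortest s-t path, optionally excluding one edge."""
    graph = {}
    for (u, v), w in edges.items():
        if (u, v) != banned:
            graph.setdefault(u, []).append((v, w))
    heap, done = [(0, s)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done:
                heapq.heappush(heap, (d + w, v))
    return float("inf")

def vcg_payments(edges, path, s, t):
    """Pay each winning edge its threshold value (VCG)."""
    total = sum(edges[e] for e in path)
    return {e: dijkstra_cost(edges, s, t, banned=e) - (total - edges[e])
            for e in path}

# Two node-disjoint two-edge paths: lower costs 2, upper costs 2.1.
edges = {("s", "a"): 1, ("a", "t"): 1, ("s", "b"): 1.05, ("b", "t"): 1.05}
path = [("s", "a"), ("a", "t")]
print(vcg_payments(edges, path, "s", "t"))   # each edge paid 1.1, total 2.2
```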

188 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a deterministic constant competitive online algorithm to schedule a sequence of jobs on a variable-speed processor so as to minimize the total cost consisting of the energy consumption and the total flow time of all jobs.
Abstract: We study scheduling problems in battery-operated computing devices, aiming at schedules with low total energy consumption. While most of the previous work has focused on finding feasible schedules in deadline-based settings, in this article we are interested in schedules that guarantee good response times. More specifically, our goal is to schedule a sequence of jobs on a variable-speed processor so as to minimize the total cost consisting of the energy consumption and the total flow time of all jobs. We first show that when the amount of work, for any job, may take an arbitrary value, then no online algorithm can achieve a constant competitive ratio. Therefore, most of the article is concerned with unit-size jobs. We devise a deterministic constant competitive online algorithm and show that the offline problem can be solved in polynomial time.

167 citations


Journal ArticleDOI
TL;DR: This work introduces nearest-neighbor-preserving embeddings: randomized embeddings between two metric spaces that preserve approximate nearest neighbors, which, combined with known data structures, yield improved results for Euclidean metrics with low “intrinsic” dimension.
Abstract: In this article we introduce the notion of nearest-neighbor-preserving embeddings. These are randomized embeddings between two metric spaces which preserve the (approximate) nearest-neighbors. We give two examples of such embeddings for Euclidean metrics with low “intrinsic” dimension. Combining the embeddings with known data structures yields the best-known approximate nearest-neighbor data structures for such metrics.
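A hedged illustration using a classic Johnson-Lindenstrauss-style Gaussian projection (not the paper's construction, which needs only the weaker nearest-neighbor-preserving guarantee, letting the target dimension depend on the intrinsic dimension); the planted near-duplicate keeps the nearest neighbor stable under projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 1000, 40
points = rng.normal(size=(n, d))
query = points[17] + 0.1 * rng.normal(size=d)   # planted near neighbor

proj = rng.normal(size=(d, k)) / np.sqrt(k)     # random Gaussian projection
low_points, low_query = points @ proj, query @ proj

true_nn = np.argmin(np.linalg.norm(points - query, axis=1))
embedded_nn = np.argmin(np.linalg.norm(low_points - low_query, axis=1))
print(true_nn, embedded_nn)   # both 17 with high probability
```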

156 citations


Journal ArticleDOI
TL;DR: The worst-case coordination ratio on m parallel links is shown to be Θ(log m/log log log m), which entirely resolves an open problem posed by Koutsoupias and Papadimitriou [1999].
Abstract: We study the problem of traffic routing in noncooperative networks. In such networks, users may follow selfish strategies to optimize their own performance measure, and therefore their behavior does not have to lead to optimal performance of the entire network. In this article we investigate the worst-case coordination ratio, which is a game-theoretic measure aiming to reflect the price of selfish routing. Following a line of previous work, we focus on the most basic networks consisting of parallel links with linear latency functions. Our main result is that the worst-case coordination ratio on m parallel links of possibly different speeds is Θ(log m/log log log m). In fact, we are able to give an exact description of the worst-case coordination ratio, depending on the number of links and the ratio of the speed of the fastest link over the speed of the slowest link. For example, for the special case in which all m parallel links have the same speed, we can prove that the worst-case coordination ratio is Γ^(−1)(m) + Θ(1), with Γ denoting the Gamma (factorial) function. Our bounds entirely resolve an open problem posed recently by Koutsoupias and Papadimitriou [1999].

Journal ArticleDOI
TL;DR: The number of steps required to reach a pure Nash equilibrium in a load balancing scenario where each job behaves selfishly and attempts to migrate to a machine which will minimize its cost is studied.
Abstract: We study the number of steps required to reach a pure Nash equilibrium in a load balancing scenario where each job behaves selfishly and attempts to migrate to a machine which will minimize its cost. We consider a variety of load balancing models, including identical, restricted, related, and unrelated machines. Our results have a crucial dependence on the weights assigned to jobs. We consider arbitrary weights, integer weights, k distinct weights, and identical (unit) weights. We look both at an arbitrary schedule (where the only restriction is that a job migrates to a machine which lowers its cost) and specific efficient schedulers (e.g., allowing the largest weight job to move first). A by-product of our results is establishing a connection between various scheduling models and the game-theoretic notion of potential games. We show that load balancing in unrelated machines is a generalized ordinal potential game, load balancing in related machines is a weighted potential game, and load balancing in related machines and unit weight jobs is an exact potential game.
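A minimal simulation of the migration process on identical machines (the scheduler, weights, and tie-breaking below are our own choices): a job's cost is the load of its machine, and a job moves only if the move strictly lowers that cost; the paper's results bound how many such steps can occur under the various weight classes and schedulers.

```python
import random

def best_response_steps(weights, m, seed=0):
    """Count improving migrations until a pure Nash equilibrium is reached."""
    rng = random.Random(seed)
    assign = [rng.randrange(m) for _ in weights]
    load = [0.0] * m
    for j, w in enumerate(weights):
        load[assign[j]] += w
    steps, improved = 0, True
    while improved:
        improved = False
        for j, w in enumerate(weights):          # arbitrary fixed job order
            here = assign[j]
            target = min(range(m), key=lambda i: load[i])
            if load[target] + w < load[here]:    # strictly improving move
                load[here] -= w
                load[target] += w
                assign[j] = target
                steps += 1
                improved = True
    return steps

print(best_response_steps([3, 3, 2, 2, 1, 1, 1], m=3))
```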

Journal ArticleDOI
TL;DR: This work shows that the integrality gap of the natural linear programming relaxation is at most 4 for unit demands with arbitrary profits; for arbitrary demands (with dmax ≤ umin), the gap is at most 11.542 times that of the unit-demand case.
Abstract: We consider requests for capacity in a given tree network T = (V, E) where each edge e of the tree has some integer capacity ue. Each request f is a node pair with an integer demand df and a profit wf which is obtained if the request is satisfied. The objective is to find a set of demands that can be feasibly routed in the tree and which provides a maximum profit. This generalizes well-known problems, including the knapsack and b-matching problems. When all demands are 1, we have the integer multicommodity flow problem. Garg et al. [1997] showed that this problem is NP-hard and gave a 2-approximation algorithm for the cardinality case (all profits are 1) via a primal-dual algorithm. Our main result establishes that the integrality gap of the natural linear programming relaxation is at most 4 for the case of arbitrary profits. Our proof is based on coloring paths on trees and this has other applications for wavelength assignment in optical network routing. We then consider the problem with arbitrary demands. When the maximum demand dmax is at most the minimum edge capacity umin, we show that the integrality gap of the LP is at most 48. This result is obtained by showing that the integrality gap for the demand version of such a problem is at most 11.542 times that for the unit-demand case. We use techniques of Kolliopoulos and Stein [2004, 2001] to obtain this. We also obtain, via this method, improved algorithms for line and ring networks. Applications and connections to other combinatorial problems are discussed.

Journal ArticleDOI
TL;DR: The solution to the dictionary matching problem is based on a new compressed representation of a suffix tree: an O(n)-bit structure for a dynamic collection of texts that supports insertion and deletion of a text T in O(|T| log^2 n) time, as well as all suffix tree traversal operations, including forward and backward suffix links.
Abstract: Let T be a string with n characters over an alphabet of constant size. A recent breakthrough on compressed indexing allows us to build an index for T in optimal space (i.e., O(n) bits), while supporting very efficient pattern matching [Ferragina and Manzini 2000; Grossi and Vitter 2000]. Yet the compressed nature of such indexes also makes them difficult to update dynamically. This article extends the work on optimal-space indexing to a dynamic collection of texts. Our first result is a compressed solution to the library management problem, where we show an index of O(n) bits for a text collection L of total length n, which can be updated in O(|T| log n) time when a text T is inserted or deleted from L; also, the index supports searching the occurrences of any pattern P in all texts in L in O(|P| log n + occ log^2 n) time, where occ is the number of occurrences. Our second result is a compressed solution to the dictionary matching problem, where we show an index of O(d) bits for a pattern collection D of total length d, which can be updated in O(|P| log^2 d) time when a pattern P is inserted or deleted from D; also, the index supports searching the occurrences of all patterns of D in any text T in O((|T| + occ) log^2 d) time. When compared with the O(d log d)-bit suffix-tree-based solution of Amir et al. [1995], the compact solution increases the query time by roughly a factor of log d only. The solution to the dictionary matching problem is based on a new compressed representation of a suffix tree. Precisely, we give an O(n)-bit representation of a suffix tree for a dynamic collection of texts whose total length is n, which supports insertion and deletion of a text T in O(|T| log^2 n) time, as well as all suffix tree traversal operations, including forward and backward suffix links. This work can be regarded as a generalization of the compressed representation of static texts. In the study of the aforementioned result, we also derive the first O(n)-bit representation for maintaining n pairs of balanced parentheses in O(log n/log log n) time per operation, matching the time complexity of the previous O(n log n)-bit solution.

Journal ArticleDOI
TL;DR: It is shown that, using this approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters; the authors therefore expect that more distributed data structures can be designed and implemented in a dynamic environment.
Abstract: We propose a new approach for constructing P2P networks based on a dynamic decomposition of a continuous space into cells corresponding to servers. We demonstrate the power of this approach by suggesting two new P2P architectures and various algorithms for them. The first serves as a DHT (distributed hash table) and the other is a dynamic expander network. The DHT network, which we call Distance Halving, allows logarithmic routing and load while preserving constant degrees. It offers an optimal tradeoff between degree and path length in the sense that degree d guarantees a path length of O(log_d n). Another advantage over previous constructions is its relative simplicity. A major new contribution of this construction is a dynamic caching technique that maintains low load and storage, even under the occurrence of hot spots. Our second construction builds a network that is guaranteed to be an expander. The resulting topologies are simple to maintain and implement. Their simplicity makes it easy to modify and add protocols. A small variation yields a DHT which is robust against random Byzantine faults. Finally we show that, using our approach, it is possible to construct any family of constant degree graphs in a dynamic environment, though with worse parameters. Therefore, we expect that more distributed data structures could be designed and implemented in a dynamic environment.

Journal ArticleDOI
TL;DR: A new measure for the quality of online algorithms, the relative worst order ratio, is used to compare online algorithms directly by taking the ratio of their performances on their respective worst permutations of a worst-case sequence.
Abstract: We define a new measure for the quality of online algorithms, the relative worst order ratio, using ideas from the max/max ratio [Ben-David and Borodin 1994] and from the random order ratio [Kenyon 1996]. The new ratio is used to compare online algorithms directly by taking the ratio of their performances on their respective worst permutations of a worst-case sequence.Two variants of the bin packing problem are considered: the classical bin packing problem, where the goal is to fit all items in as few bins as possible, and the dual bin packing problem, which is the problem of maximizing the number of items packed in a fixed number of bins. Several known algorithms are compared using this new measure, and a new, simple variant of first-fit is proposed for dual bin packing.Many of our results are consistent with those previously obtained with the competitive ratio or the competitive ratio on accommodating sequences, but new separations and easier proofs are found.
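A brute-force illustration of the measure on a tiny instance (capacity-1 bins; the algorithms are the standard first-fit and worst-fit rules, but the item sequence is ours): each algorithm is evaluated on its own worst permutation of the same sequence, and the ratio of those two worst-case costs is taken.

```python
from itertools import permutations

def pack(items, choose):
    """Online packing: `choose` picks a bin index for each item, or None."""
    bins = []
    for x in items:
        i = choose(bins, x)
        if i is None:
            bins.append(x)      # open a new bin
        else:
            bins[i] += x
    return len(bins)

def first_fit(bins, x):
    return next((i for i, b in enumerate(bins) if b + x <= 1), None)

def worst_fit(bins, x):
    fits = [i for i, b in enumerate(bins) if b + x <= 1]
    return min(fits, key=lambda i: bins[i]) if fits else None

items = (0.5, 0.6, 0.4, 0.3, 0.2, 0.7)
worst_ff = max(pack(p, first_fit) for p in permutations(items))
worst_wf = max(pack(p, worst_fit) for p in permutations(items))
print(worst_ff, worst_wf, worst_wf / worst_ff)
```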

Journal ArticleDOI
TL;DR: The first three papers in this issue of TALG are the first of a set of papers that were selected for a special issue on SODA 2002, and all of the selected papers are from the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA).
Abstract: The first three papers in this issue of TALG are the first of a set of papers that were selected for a special issue on SODA 2002. The remaining selected papers will appear in subsequent issues of TALG. All of the selected papers are from the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), held January 6–8, 2002, in San Francisco, California. The papers were selected by the editor with the assistance of nominations from the program committee. The authors were invited to submit journal versions of the SODA paper, which were reviewed under the standard process for this journal. We thank both the authors and the reviewers for their hard work.

Journal ArticleDOI
TL;DR: An algorithm is presented for the version of the problem in which a text update changes the symbol value of one text location; the complexity is not proportional to the number of pattern occurrences, since all new occurrences can be reported in succinct form.
Abstract: In this article, we address a new version of dynamic pattern matching. The dynamic text and static pattern matching problem is the problem of finding a static pattern in a text that is continuously being updated. The goal is to report all new occurrences of the pattern in the text after each text update. We present an algorithm for solving the problem where the text update operation is changing the symbol value of a text location. Given a text of length n and a pattern of length m, our algorithm preprocesses the text in time O(n log log m), and the pattern in time O(m log m). The extra space used is O(n + m log m). Following each text update, the algorithm deletes all prior occurrences of the pattern that no longer match, and reports all new occurrences of the pattern in the text in O(log log m) time. We note that the complexity is not proportional to the number of pattern occurrences, since all new occurrences can be reported in a succinct form.

Journal ArticleDOI
TL;DR: This work considers the k-traveling repairmen problem, a generalization of the minimum latency problem to multiple repairmen, and gives a polynomial-time 8.497α-approximation algorithm, where α denotes the best achievable approximation factor for the problem of finding the least-cost rooted tree spanning i vertices of a metric.
Abstract: We consider the k-traveling repairmen problem, a generalization of the minimum latency problem to multiple repairmen. We give a polynomial-time 8.497α-approximation algorithm for this generalization, where α denotes the best achievable approximation factor for the problem of finding the least-cost rooted tree spanning i vertices of a metric. For the latter problem, a (2 + ε)-approximation is known. Our results can be compared with the best-known approximation algorithm using similar techniques for the case k = 1, which is 3.59α. Moreover, recent work of Chaudhuri et al. [2003] shows how to remove the factor of α, thus improving all of these results by that factor. We are aware of no previous work on the approximability of the present problem. In addition, we give a simple proof of the 3.59α-approximation result that can be more easily extended to the case of multiple repairmen, and may be of independent interest.

Journal ArticleDOI
TL;DR: The first nontrivial approximation of factor less than two for the stable marriage problem is given: the algorithm achieves an approximation ratio of 2/(1 + L^(−2)) for instances in which only men have ties, of length at most L.
Abstract: The stable marriage problem has recently been studied in its general setting, where both ties and incomplete lists are allowed. It is NP-hard to find a stable matching of maximum size, while any stable matching is a maximal matching and thus trivially we can obtain a 2-approximation algorithm. In this article, we give the first nontrivial result for approximation of factor less than two. Our algorithm achieves an approximation ratio of 2/(1 + L^(−2)) for instances in which only men have ties of length at most L. When both men and women are allowed to have ties but the lengths are limited to two, we show a ratio of 13/7 (< 1.858).
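A sketch of the trivial 2-approximation mentioned above: break all ties arbitrarily and run Gale-Shapley; any stable matching is maximal, hence at least half the maximum size. The dictionary-based input format below is an assumption of ours (preference lists with ties already broken, possibly incomplete).

```python
def gale_shapley(men_prefs, women_prefs):
    """men_prefs/women_prefs: dict person -> strict preference list."""
    rank = {w: {m: r for r, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)
    next_prop = {m: 0 for m in men_prefs}
    engaged = {}                            # woman -> man
    while free:
        m = free.pop()
        if next_prop[m] >= len(men_prefs[m]):
            continue                        # m exhausted his (incomplete) list
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if m not in rank[w]:                # w finds m unacceptable
            free.append(m)
        elif w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])         # w trades up
            engaged[w] = m
        else:
            free.append(m)                  # rejected; m proposes again later
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": ["m1", "m2"], "w2": ["m1"]}
print(gale_shapley(men, women))             # {'m1': 'w1'}
```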

Journal ArticleDOI
TL;DR: A lower bound of Ω(log n/log log n) is proved for the integrality gap of edge-disjoint cycle packing, together with hardness-of-approximation results for νe(G) in directed graphs, improving upon the previously known APX-hardness result for this problem.
Abstract: The cycle packing number νe(G) of a graph G is the maximum number of pairwise edge-disjoint cycles in G. Computing νe(G) is an NP-hard problem. We present approximation algorithms for computing νe(G) in both undirected and directed graphs. In the undirected case we analyze a variant of the modified greedy algorithm suggested by Caprara et al. [2003] and show that it has approximation ratio Θ(√log n), where n = |V(G)|. This improves upon the previous O(log n) upper bound for the approximation ratio of this algorithm. In the directed case we present a √n-approximation algorithm. Finally, we give an O(n^(2/3))-approximation algorithm for the problem of finding a maximum number of edge-disjoint cycles that intersect a specified subset S of vertices. We also study generalizations of these problems. Our approximation ratios are the currently best-known ones and, in addition, provide upper bounds on the integrality gap of standard LP-relaxations of these problems. In addition, we give lower bounds for the integrality gap and approximability of νe(G) in directed graphs. Specifically, we prove a lower bound of Ω(log n/log log n) for the integrality gap of edge-disjoint cycle packing. We also show that it is quasi-NP-hard to approximate νe(G) within a factor of O(log^(1−ε) n) for any constant ε > 0. This improves upon the previously known APX-hardness result for this problem.

Journal ArticleDOI
TL;DR: This work considers the problem in which an error threshold k is given, and the goal is to find all locations in t for which there exists a bijection π that maps p into the appropriate |p|-length substring of t with at most k mismatched mapped elements.
Abstract: Two equal length strings s and s′, over alphabets Σs and Σs′, parameterize match if there exists a bijection π : Σs → Σs′ such that π(s) = s′, where π(s) is the renaming of each character of s via π. Parameterized matching is the problem of finding all parameterized matches of a pattern string p in a text t, and approximate parameterized matching is the problem of finding at each location a bijection π that maximizes the number of characters that are mapped from p to the appropriate |p|-length substring of t. Parameterized matching was introduced as a model for software duplication detection in software maintenance systems and also has applications in image processing and computational biology. For example, approximate parameterized matching models image searching with variable color maps in the presence of errors. We consider the problem for which an error threshold, k, is given, and the goal is to find all locations in t for which there exists a bijection π which maps p into the appropriate |p|-length substring of t with at most k mismatched mapped elements. Our main result is an algorithm for this problem with O(nk^1.5 + mk log m) time complexity, where m = |p| and n = |t|. We also show that when |p| = |t| = m, the problem is equivalent to the maximum matching problem on graphs, yielding an O(m + k^1.5) solution.
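A direct check of the exact (k = 0) case from the definition above, enforcing that π is one-to-one in both directions; function names are ours.

```python
def parameterized_match(p: str, window: str) -> bool:
    """True iff some bijection pi renames p into window."""
    if len(p) != len(window):
        return False
    fwd, bwd = {}, {}
    for a, b in zip(p, window):
        # the mapping must be consistent forwards and backwards
        if fwd.setdefault(a, b) != b or bwd.setdefault(b, a) != a:
            return False
    return True

def all_matches(p: str, t: str):
    m = len(p)
    return [i for i in range(len(t) - m + 1)
            if parameterized_match(p, t[i:i + m])]

# "aab" parameterize-matches any window of the form XXY with X != Y.
print(all_matches("aab", "xxyzzq"))   # [0, 3]
```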

Journal ArticleDOI
TL;DR: An online 64/33 ≈ 1.939-competitive algorithm is given, the first deterministic algorithm for this problem with competitive ratio below 2; for the 2-uniform case, an algorithm with ratio ≈ 1.377 and a matching lower bound are given.
Abstract: We consider the following buffer management problem arising in QoS networks: Packets with specified weights and deadlines arrive at a network switch and need to be forwarded so that the total weight of forwarded packets is maximized. Packets not forwarded before their deadlines are lost. The main result of the article is an online 64/33 ≈ 1.939-competitive algorithm, the first deterministic algorithm for this problem with competitive ratio below 2. For the 2-uniform case we give an algorithm with ratio ≈ 1.377 and a matching lower bound.
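A baseline greedy for this model, forwarding the heaviest unexpired packet each step, is known to be 2-competitive; the paper's algorithm is a more careful rule that beats 2. A minimal simulation with our own input encoding (arrival, deadline, weight), deadlines inclusive:

```python
import heapq

def greedy_throughput(packets):
    """Total weight forwarded by the max-weight-first greedy."""
    events = sorted(packets)                  # by arrival time
    horizon = max(d for _, d, _ in packets)
    pending, total, i = [], 0.0, 0
    for t in range(horizon + 1):              # one packet sent per step
        while i < len(events) and events[i][0] <= t:
            a, d, w = events[i]
            heapq.heappush(pending, (-w, d))  # max-weight at the top
            i += 1
        while pending:
            w, d = heapq.heappop(pending)
            if d >= t:                        # still alive: forward it
                total += -w
                break                         # expired packets are discarded

    return total

pkts = [(0, 1, 3.0), (0, 0, 2.0), (1, 1, 1.0)]
print(greedy_throughput(pkts))   # greedy sends 3.0 then 1.0 = 4.0; OPT = 5.0
```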

Journal ArticleDOI
TL;DR: This article presents an offline algorithm for the unit-fractions bin packing problem that uses at most H + 1 bins, an optimal online algorithm for instances in which the window sizes form a divisible sequence, and a new NP-hardness proof for the windows scheduling problem.
Abstract: Given is a sequence of n positive integers w1,w2,…,wn that are associated with the items 1,2,…,n, respectively. In the windows scheduling problem, the goal is to schedule all the items (equal-length information pages) on broadcasting channels such that the gap between two consecutive appearances of page i on any of the channels is at most wi slots (a slot is the transmission time of one page). In the unit-fractions bin packing problem, the goal is to pack all the items in bins of unit size where the size (width) of item i is 1/wi. The optimization objective is to minimize the number of channels or bins. In the offline setting, the sequence is known in advance, whereas in the online setting, the items arrive in order and assignment decisions are irrevocable. Since a page requires at least 1/wi of a channel's bandwidth, it follows that windows scheduling without migration (i.e., all broadcasts of a page must be from the same channel) is a restricted version of unit-fractions bin packing. Let H = ⌈Σ_{i=1}^n (1/wi)⌉ be the bandwidth lower bound on the required number of bins (channels). The best-known offline algorithm for the windows scheduling problem used H + O(ln H) channels. This article presents an offline algorithm for the unit-fractions bin packing problem with at most H + 1 bins. In the online setting, this article presents algorithms for both problems with H + O(√H) channels or bins, where the one for the unit-fractions bin packing problem is simpler. On the other hand, this article shows that already for the unit-fractions bin packing problem, any online algorithm must use at least H + Ω(ln H) bins. For instances in which the window sizes form a divisible sequence, an optimal online algorithm is presented. Finally, this article includes a new NP-hardness proof for the windows scheduling problem.
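A small sketch of the quantities above (the instance is ours): H = ⌈Σ 1/wi⌉ computed from the widths 1/wi, plus a first-fit pass over the unit-fractions items as a feasible, though not optimal, packing.

```python
import math

def bandwidth_lower_bound(windows):
    """H = ceil(sum of widths 1/w_i): a lower bound on bins/channels."""
    return math.ceil(sum(1.0 / w for w in windows))

def first_fit_unit_fractions(windows):
    bins = []                           # each bin holds a load <= 1
    for w in sorted(windows):           # wider items (small w) first
        width = 1.0 / w
        for b in range(len(bins)):
            if bins[b] + width <= 1.0 + 1e-12:
                bins[b] += width
                break
        else:
            bins.append(width)          # open a new bin
    return len(bins)

windows = [2, 3, 4, 4, 6, 12]
print(bandwidth_lower_bound(windows))     # ceil(19/12) = 2
print(first_fit_unit_fractions(windows))  # 2 bins suffice here
```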

Journal ArticleDOI
TL;DR: In this paper, a superlinear lower bound of Ω(m√n) is shown for the replacement paths problem in directed graphs whenever m = O(n√n).
Abstract: We prove superlinear lower bounds for some shortest path problems in directed graphs, where no such bounds were previously known. The central problem in our study is the replacement paths problem: Given a directed graph G with non-negative edge weights, and a shortest path P = ⟨e1, e2, …, ep⟩ between two nodes s and t, compute the shortest path distances from s to t in each of the p graphs obtained from G by deleting one of the edges ei. We show that the replacement paths problem requires Ω(m√n) time in the worst case whenever m = O(n√n). Our construction also implies a similar lower bound on the k shortest simple paths problem for a broad class of algorithms that includes all known algorithms for the problem. To put our lower bound in perspective, we note that both these problems (replacement paths and k shortest simple paths) can be solved in near-linear time for undirected graphs.

Journal ArticleDOI
TL;DR: A new structural theorem on the presence of two-pairs in weakly chordal graphs is used to develop improved algorithms, reducing the time and space complexity of recognition and improving the complexity of the clique, coloring, independent set, and clique cover problems.
Abstract: We use a new structural theorem on the presence of two-pairs in weakly chordal graphs to develop improved algorithms. For the recognition problem, we reduce the time complexity from O(mn^2) to O(m^2) and the space complexity from O(n^3) to O(m + n), and also produce a hole or antihole if the input graph is not weakly chordal. For the optimization problems, the complexity of the clique and coloring problems is reduced from O(mn^2) to O(n^3) and the complexity of the independent set and clique cover problems is improved from O(n^4) to O(mn). The space complexity of our optimization algorithms is O(m + n).

Journal ArticleDOI
TL;DR: It is shown that a wide class of linear cost measures in random d-dimensional point quadtrees undergo a change in limit laws: if the dimension d = 1, …, 8, then the limit law is normal; if d ≥ 9 then there is no convergence to a fixed limit law.
Abstract: We show that a wide class of linear cost measures (such as the number of leaves) in random d-dimensional point quadtrees undergo a change in limit laws: If the dimension d = 1, …, 8, then the limit law is normal; if d ≥ 9 then there is no convergence to a fixed limit law. Stronger approximation results such as convergence rates and local limit theorems are also derived for the number of leaves, additional phase changes being unveiled. Our approach is new and very general, and also applicable to other classes of search trees. A brief discussion of Devroye's grid trees (covering m-ary search trees and quadtrees as special cases) is given. We also propose an efficient numerical procedure for computing the constants involved to high precision.
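A quick simulation of the statistic in question (leaf counts of random d-dimensional point quadtrees; the sizes and seeds are arbitrary), useful for observing the d-dependence that the paper analyzes:

```python
import random

def insert(tree, point):
    """tree node = [point, children-dict keyed by orthant index]."""
    node = tree
    while True:
        q, children = node
        orthant = sum((point[i] > q[i]) << i for i in range(len(point)))
        if orthant in children:
            node = children[orthant]
        else:
            children[orthant] = [point, {}]
            return

def num_leaves(n, d, rng):
    pts = [tuple(rng.random() for _ in range(d)) for _ in range(n)]
    tree = [pts[0], {}]
    for p in pts[1:]:
        insert(tree, p)
    stack, leaves = [tree], 0
    while stack:                      # count nodes with no children
        _, children = stack.pop()
        if not children:
            leaves += 1
        stack.extend(children.values())
    return leaves

rng = random.Random(1)
for d in (1, 2, 3):
    samples = [num_leaves(2000, d, rng) for _ in range(20)]
    print(d, sum(samples) / len(samples))
```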

Journal ArticleDOI
Matthew Andrews, Lisa Zhang
TL;DR: In this paper, the authors study routing and scheduling in multihop wireless networks, where the adversary is limited by an admissibility condition which forbids the adversary from overloading any wireless node a priori.
Abstract: We study routing and scheduling in multihop wireless networks. When data is transmitted from its source node to its destination node it may go through other wireless nodes as intermediate hops. The data transmission is node constrained, that is, every node can transmit data to at most one neighboring node per time step. The transmission rates are time varying as a result of changing wireless channel conditions.In this article, we assume that data arrivals and transmission rates are governed by an adversary. The power of the adversary is limited by an admissibility condition which forbids the adversary from overloading any wireless node a priori. The node-constrained transmission and time-varying nature of the transmission rates make our model different from and harder than the standard adversarial queueing model which relates to wireline networks.For the case in which the adversary specifies the paths that the data must follow, we design scheduling algorithms that ensure network stability. These algorithms try to give priority to the data that is closest to its source node. However, at each time step only a subset of the data queued at a node is eligible for scheduling. One of our algorithms is fully distributed.For the case in which the adversary does not dictate the data paths, we show how to route data so that the admissibility condition is satisfied. We can then schedule data along the chosen paths using our stable scheduling algorithms.

Journal ArticleDOI
TL;DR: A new data structure, called the permutation tree, is presented to improve the running time of sorting permutations by transpositions and sorting permutations by block interchanges.
Abstract: In this article, we present a new data structure, called the permutation tree, to improve the running time of sorting permutations by transpositions and sorting permutations by block interchanges. The existing 1.5-approximation algorithm for sorting permutations by transpositions has time complexity O(n^(3/2) √log n). By means of the permutation tree, we can improve this algorithm to achieve time complexity O(n log n). We can also improve the algorithm for sorting permutations by block interchanges to take its time complexity from O(n^2) down to O(n log n).

Journal ArticleDOI
TL;DR: A randomized O(log n)-approximation algorithm for packing element-disjoint Steiner trees is presented, matching the hardness lower bound; moreover, a tight upper bound of O(log n) on the integrality ratio of a natural linear programming relaxation is shown.
Abstract: Given an undirected graph G = (V, E) with terminal set T ⊆ V, the problem of packing element-disjoint Steiner trees is to find the maximum number of Steiner trees that are disjoint on the nonterminal nodes and on the edges. The problem is known to be NP-hard to approximate within a factor of Ω(log n), where n denotes |V|. We present a randomized O(log n)-approximation algorithm for this problem, thus matching the hardness lower bound. Moreover, we show a tight upper bound of O(log n) on the integrality ratio of a natural linear programming relaxation.