
Showing papers in "ACM Transactions on Algorithms in 2013"


Journal ArticleDOI
TL;DR: This article considers a very general setting of the classic secretary problem, in which the goal is to select k secretaries so as to maximize the expectation of a submodular function which defines efficiency of the selected secretarial group based on their overlapping skills, and presents the first constant-competitive algorithm for this case.
Abstract: Online auction is the essence of many modern markets, particularly networked markets, in which information about goods, agents, and outcomes is revealed over a period of time, and the agents must make irrevocable decisions without knowing future information. Optimal stopping theory, especially the classic secretary problem, is a powerful tool for analyzing such online scenarios, which generally require optimizing an objective function over the input. The secretary problem and its generalization, the multiple-choice secretary problem, have been studied thoroughly in the literature. In this article, we consider a very general setting of the latter problem called the submodular secretary problem, in which the goal is to select k secretaries so as to maximize the expectation of a (not necessarily monotone) submodular function which defines the efficiency of the selected secretarial group based on their overlapping skills. We present the first constant-competitive algorithm for this case. In a more general setting, in which the selected secretaries should also form an independent (feasible) set in each of l given matroids, we obtain an O(l log² r)-competitive algorithm generalizing several previous results, where r is the maximum rank of the matroids. Another generalization is to consider l knapsack constraints (i.e., a knapsack constraint assigns a nonnegative cost to each secretary and requires that the total cost of all the secretaries employed be no more than a budget value) instead of the matroid constraints, for which we present an O(l)-competitive algorithm. In sharp contrast, we show that for the more general subadditive secretary problem there is no o(√n)-competitive algorithm, and thus submodular functions are the most general functions to consider for constant competitiveness in our setting. We complement this result by giving a matching O(√n)-competitive algorithm for the subadditive case. Finally, we consider some special cases of our general setting as well.
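
For intuition about the optimal-stopping flavor of these algorithms, here is a minimal sketch of the classic single-secretary stopping rule (observe roughly n/e candidates, then hire the first one who beats them all). The function name and setup are illustrative; this is not the paper's submodular algorithm.

    import math

    def classic_secretary(values):
        """Classic single-choice secretary rule: observe the first ~n/e
        candidates without hiring, then hire the first candidate who beats
        everyone seen so far (or the last one if nobody does)."""
        n = len(values)
        sample = max(1, math.ceil(n / math.e))
        if sample >= n:
            return n - 1
        threshold = max(values[:sample])
        for i in range(sample, n):
            if values[i] > threshold:
                return i
        return n - 1

This rule hires the best candidate with probability approaching 1/e as n grows.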

155 citations


Journal ArticleDOI
TL;DR: A method is presented for reducing the treewidth of a graph while preserving all of its minimal separators up to a certain fixed size; this technique also turns out to be relevant for (H, C, K)- and (H, C, ≤K)-coloring problems, which are cardinality-constrained variants of the classical H-coloring problem.
Abstract: We present a method for reducing the treewidth of a graph while preserving all of its minimal s-t separators up to a certain fixed size k. This technique allows us to solve s-t Cut and Multicut problems with various additional restrictions (e.g., the vertices being removed from the graph form an independent set or induce a connected graph) in linear time for every fixed number k of removed vertices. Our results have applications for problems that are not directly defined by separators, but whose known solution methods depend on some variant of separation. For example, we can solve similarly restricted generalizations of Bipartization (delete at most k vertices from G to make it bipartite) in almost linear time for every fixed number k of removed vertices. These results answer a number of open questions in the area of parameterized complexity. Furthermore, our technique turns out to be relevant for (H, C, K)- and (H, C, ≤K)-coloring problems as well, which are cardinality-constrained variants of the classical H-coloring problem. We make progress in the classification of the parameterized complexity of these problems by identifying new cases that can be solved in almost linear time for every fixed cardinality bound.
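
To ground the terminology: an s-t separator is a vertex set whose removal disconnects s from t. A quick way to compute a minimum one (using networkx as an assumed dependency, and not reproducing the paper's treewidth-reduction technique):

    import networkx as nx

    # 0 and 3 are connected only through 1 and 2, so {1, 2} is the unique
    # minimum (hence minimal) s-t separator for s = 0, t = 3.
    G = nx.Graph([(0, 1), (0, 2), (1, 3), (2, 3)])
    print(nx.minimum_node_cut(G, 0, 3))  # {1, 2}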

129 citations


Journal ArticleDOI
TL;DR: This article initiates a theoretical investigation into online scheduling problems with speed scaling where the allowable speeds may be discrete, and the power function may be arbitrary, and develops algorithmic analysis techniques for this setting.
Abstract: This article initiates a theoretical investigation into online scheduling problems with speed scaling where the allowable speeds may be discrete, and the power function may be arbitrary, and develops algorithmic analysis techniques for this setting. We show that a natural algorithm, which uses Shortest Remaining Processing Time for scheduling and sets the power to be one more than the number of unfinished jobs, is 3-competitive for the objective of total flow time plus energy. We also show that another natural algorithm, which uses Highest Density First for scheduling and sets the power to be the fractional weight of the unfinished jobs, is a 2-competitive algorithm for the objective of fractional weighted flow time plus energy.
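
The scheduling rule itself is simple to state; below is a toy discrete-time simulation of it. The function name, the caller-supplied speed_for_power inverse of the power function, and the time step are all assumptions of this sketch, and the paper's competitive analysis is of course not captured here.

    def srpt_speed_scaling(jobs, speed_for_power, horizon, dt=0.01):
        """Toy simulation of the rule from the abstract: run the job with the
        Shortest Remaining Processing Time at power = (#unfinished jobs) + 1.
        `jobs` maps job id -> (release_time, size); `speed_for_power` maps a
        power level to the corresponding speed."""
        remaining, done = {}, set()
        t, flow_time, energy = 0.0, 0.0, 0.0
        while t < horizon:
            for j, (release, size) in jobs.items():
                if release <= t and j not in remaining and j not in done:
                    remaining[j] = size
            if remaining:
                flow_time += len(remaining) * dt       # integral of #unfinished jobs
                power = len(remaining) + 1             # one more than #unfinished jobs
                energy += power * dt
                j = min(remaining, key=remaining.get)  # SRPT: least remaining work
                remaining[j] -= speed_for_power(power) * dt
                if remaining[j] <= 1e-9:
                    del remaining[j]
                    done.add(j)
            t += dt
        return flow_time, energy

For example, srpt_speed_scaling({1: (0.0, 2.0), 2: (0.5, 1.0)}, lambda p: p ** (1 / 3), horizon=10.0) models the common cubic power function, power = speed³.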

69 citations


Journal ArticleDOI
TL;DR: This paper presents randomized (Monte Carlo) algorithms for constructing a distance sensitivity oracle of size O(n^(3−α)), and the first subcubic-time algorithm for the replacement paths problem when the edge lengths are small integers.
Abstract: A distance sensitivity oracle of an n-vertex graph G = (V, E) is a data structure that can report shortest paths when edges of the graph fail. A query (u ∈ V, v ∈ V, S ⊆ E) to this oracle returns a shortest u-to-v path in the graph G′ = (V, E ∖ S). We present randomized (Monte Carlo) algorithms for constructing a distance sensitivity oracle of size O(n^(3−α)) for |S| = O(lg n/lg lg n) and any choice of 0
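
The query interface is easy to mimic naively: rerun a shortest-path computation on the graph with the failed edges deleted. The sketch below (hand-rolled Dijkstra, hypothetical function name) shows exactly the per-query work that the precomputed oracle is designed to avoid.

    import heapq

    def naive_query(n, edges, u, v, failed):
        """Answer a query (u, v, S) by Dijkstra on G' = (V, E \ S).
        `edges` is a list of (a, b, w); `failed` is a set of endpoint pairs."""
        adj = [[] for _ in range(n)]
        for a, b, w in edges:
            if (a, b) not in failed and (b, a) not in failed:
                adj[a].append((b, w))
                adj[b].append((a, w))
        dist = [float("inf")] * n
        dist[u] = 0
        heap = [(0, u)]
        while heap:
            d, x = heapq.heappop(heap)
            if d > dist[x]:
                continue
            if x == v:
                return d
            for y, w in adj[x]:
                if d + w < dist[y]:
                    dist[y] = d + w
                    heapq.heappush(heap, (d + w, y))
        return float("inf")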

62 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the minimum memory required for rendezvous with simultaneous start depends essentially on the number e of leaves of the tree, and is exponentially less impacted by the number n of nodes.
Abstract: The aim of rendezvous in a graph is the meeting of two mobile agents at some node of an unknown anonymous connected graph. In this article, we focus on rendezvous in trees and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits them to solve the rendezvous problem deterministically. We assume that the agents are identical and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number e of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log e + log log n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most e leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in the minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Ω(log e + log log n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3.

61 citations


Journal ArticleDOI
TL;DR: This is the first algorithm to provide planarity-preserving morphs with well-behaved complexity for a significant class of graph drawings.
Abstract: We give an algorithm to morph between two planar orthogonal drawings of a graph, preserving planarity and orthogonality. The morph uses a quadratic number of steps, where each step is a linear morph (a linear interpolation between two drawings). This is the first algorithm to provide planarity-preserving morphs with well-behaved complexity for a significant class of graph drawings. Our method is to morph until each edge is represented by a sequence of segments, with corresponding segments parallel in the two drawings. Then, in a result of independent interest, we morph such parallel planar orthogonal drawings, preserving edge directions and planarity.
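
Each step of the morph is a linear morph, i.e. a coordinate-wise linear interpolation between two drawings on the same vertex set. A minimal sketch follows (illustrative names); verifying that planarity and orthogonality are preserved along the way is precisely what the paper's construction guarantees.

    def linear_morph(drawing_a, drawing_b, t):
        """One linear morph step: interpolate two drawings, each given as a
        dict vertex -> (x, y). t = 0 yields drawing_a, t = 1 yields drawing_b."""
        morphed = {}
        for v, (ax, ay) in drawing_a.items():
            bx, by = drawing_b[v]
            morphed[v] = ((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
        return morphed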

44 citations


Journal ArticleDOI
TL;DR: This work analyzes the Matrix Berlekamp/Massey algorithm and gives new proofs of correctness and complexity for the algorithm, which is based on self-contained loop invariants and includes an explicit termination criterion for a given determinantal degree bound of the minimal matrix generator.
Abstract: We analyze the Matrix Berlekamp/Massey algorithm, which generalizes the Berlekamp/Massey algorithm [Massey 1969] for computing linear generators of scalar sequences. The Matrix Berlekamp/Massey algorithm computes a minimal matrix generator of a linearly generated matrix sequence and was first introduced by Rissanen [1972a], Dickinson et al. [1974], and Coppersmith [1994]. Our version of the algorithm makes no restrictions on the rank and dimensions of the matrix sequence. We also give new proofs of correctness and complexity for the algorithm, which are based on self-contained loop invariants and include an explicit termination criterion for a given determinantal degree bound of the minimal matrix generator.
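
The scalar algorithm that the matrix version generalizes fits in a few lines. The sketch below is the standard Berlekamp/Massey over a prime field GF(p) (illustrative code, not the matrix algorithm analyzed in the article), returning the connection polynomial of a minimal linear recurrence.

    def berlekamp_massey(seq, p):
        """Scalar Berlekamp/Massey over GF(p). Returns [1, c1, ..., cL] such
        that s[n] + c1*s[n-1] + ... + cL*s[n-L] == 0 (mod p) for all n >= L."""
        C, B = [1], [1]      # current and previous connection polynomials
        L, m, b = 0, 1, 1    # recurrence length, shift since last update, last discrepancy
        for n, s_n in enumerate(seq):
            d = s_n
            for i in range(1, L + 1):
                d = (d + C[i] * seq[n - i]) % p   # discrepancy at position n
            if d == 0:
                m += 1
                continue
            T = C[:]
            coef = d * pow(b, p - 2, p) % p       # d / b in GF(p)
            C = C + [0] * (len(B) + m - len(C))
            for i, Bi in enumerate(B):
                C[i + m] = (C[i + m] - coef * Bi) % p
            if 2 * L <= n:
                L, B, b, m = n + 1 - L, T, d, 1
            else:
                m += 1
        return C[:L + 1]

For example, berlekamp_massey([1, 1, 2, 3, 5, 8], 101) returns [1, 100, 100], i.e. the Fibonacci recurrence s[n] = s[n-1] + s[n-2] mod 101.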

24 citations


Journal ArticleDOI
TL;DR: An O(log n)-approximation algorithm is given for the maximum edge-disjoint paths problem when an input graph is either 4-edge-connected planar or Eulerian planar.
Abstract: In this article, we study an approximation algorithm for the maximum edge-disjoint paths problem. In this problem, we are given a graph and a collection of pairs of vertices, and the objective is to find the maximum number of pairs that can be connected by edge-disjoint paths. We give an O(log n)-approximation algorithm for the maximum edge-disjoint paths problem when an input graph is either 4-edge-connected planar or Eulerian planar. This improves an O(log² n)-approximation algorithm given by Kleinberg [2005] for Eulerian planar graphs. Our result also generalizes the result by Chekuri et al. [2004, 2005], who gave an O(log n)-approximation algorithm for the maximum edge-disjoint paths problem with congestion two when an input graph is planar.
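
To make the problem concrete, the obvious greedy baseline routes each pair along a shortest path in whatever edges remain and then deletes those edges. The sketch below (hypothetical names) is that simple heuristic, not the paper's O(log n)-approximation.

    from collections import deque

    def greedy_edge_disjoint(adj, pairs):
        """Greedy baseline for maximum edge-disjoint paths: route each pair on
        a BFS shortest path in the remaining graph, then delete its edges.
        `adj` is a dict vertex -> set of neighbours (modified in place)."""
        routed = 0
        for s, t in pairs:
            parent = {s: None}
            queue = deque([s])
            while queue and t not in parent:
                x = queue.popleft()
                for y in adj[x]:
                    if y not in parent:
                        parent[y] = x
                        queue.append(y)
            if t not in parent:
                continue
            v = t
            while parent[v] is not None:   # remove the path's edges
                u = parent[v]
                adj[u].discard(v)
                adj[v].discard(u)
                v = u
            routed += 1
        return routed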

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of computing a minimum weight cycle in weighted undirected graphs and gave an O(n2 log n(log n + log M) time algorithm for nonnegative real edge weights.
Abstract: This article considers the problem of computing a minimum weight cycle in weighted undirected graphs. Given a weighted undirected graph G = (V, E, w), let C be a minimum weight cycle of G, let w(C) be the weight of C, and let wmax(C) be the weight of the maximum edge of C. We obtain three new approximation algorithms for the minimum weight cycle problem: (1) for integral weights from the range [1, M], an algorithm that reports a cycle of weight at most (4/3)w(C) in O(n² log n(log n + log M)) time; (2) for integral weights from the range [1, M], an algorithm that reports a cycle of weight at most w(C) + wmax(C) in O(n² log n(log n + log M)) time; (3) for nonnegative real edge weights, an algorithm that for any ε > 0 reports a cycle of weight at most (4/3 + ε)w(C) in O((1/ε)n² log n(log log n)) time. In a recent breakthrough, Williams and Williams [2010] showed that a subcubic algorithm that computes the exact minimum weight cycle in undirected graphs with integral weights from the range [1, M] implies a subcubic algorithm for computing all-pairs shortest paths in directed graphs with integral weights from the range [−M, M]. This implies that in order to get a subcubic algorithm for computing a minimum weight cycle, we have to relax the problem and consider approximate solutions. Lingas and Lundell [2009] were the first to consider approximation in the context of minimum weight cycle in weighted graphs. They presented a 2-approximation algorithm for integral weights with O(n² log n(log n + log M)) running time. They also posed, as an open problem, the question whether it is possible to obtain a subcubic algorithm with a c-approximation, where c
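
For contrast with these subcubic approximations, a simple exact baseline is far from subcubic: every cycle contains some edge (u, v), and the rest of the cycle is a shortest u-v path avoiding that edge, so it suffices to try each edge in turn. The sketch below uses networkx (an assumed dependency; the function name is illustrative).

    import networkx as nx

    def min_weight_cycle(G):
        """Exact but slow baseline for the minimum weight cycle of an
        undirected weighted graph: for each edge (u, v), combine its weight
        with the shortest u-v path that avoids it."""
        best = float("inf")
        for u, v, w in list(G.edges(data="weight", default=1)):
            G.remove_edge(u, v)
            try:
                best = min(best, w + nx.shortest_path_length(G, u, v, weight="weight"))
            except nx.NetworkXNoPath:
                pass
            G.add_edge(u, v, weight=w)
        return best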

8 citations


Journal ArticleDOI
TL;DR: This article characterizes all projective list update algorithms and shows that their competitive ratio is never smaller than 1.6 in the partial cost model, and concludes that COMB is a best possible projective algorithm in this model.
Abstract: The list update problem is a classical online problem, with an optimal competitive ratio that is still open, known to be somewhere between 1.5 and 1.6. An algorithm with competitive ratio 1.6, the smallest known to date, is COMB, a randomized combination of BIT and the TIMESTAMP algorithm TS. This and almost all other list update algorithms, like MTF, are projective in the sense that they can be defined by looking only at any pair of list items at a time. Projectivity (also known as “list factoring”) simplifies both the description of the algorithm and its analysis, and so far seems to be the only way to define a good online algorithm for lists of arbitrary length. In this article, we characterize all projective list update algorithms and show that their competitive ratio is never smaller than 1.6 in the partial cost model. Therefore, COMB is a best possible projective algorithm in this model.
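
For reference, MTF (mentioned above as a projective algorithm) is trivial to implement; the sketch below computes its total cost in the partial cost model, where accessing the item in position i costs i − 1. COMB itself, a randomized mix of BIT and TIMESTAMP, is not reproduced here.

    def mtf_cost(initial_list, requests):
        """Serve a request sequence with Move-To-Front and return the total
        cost in the partial cost model: one unit per item standing in front
        of the requested item at access time."""
        lst = list(initial_list)
        total = 0
        for x in requests:
            i = lst.index(x)
            total += i                 # items in front of x
            lst.insert(0, lst.pop(i))  # move x to the front
        return total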

6 citations


Journal ArticleDOI
TL;DR: A 5-approximation algorithm is devised for MinOPSM, based on a formulation of the problem as a quadratic, nonseparable set cover problem; an alternative formulation combined with a primal-dual algorithm improves the approximation factor to 3.
Abstract: Finding a largest Order-Preserving SubMatrix, OPSM, is an important problem arising in the discovery of patterns in gene expression. Ben-Dor et al. formulated the problem in Ben-Dor et al. [2003]. They further showed that the problem is NP-complete and provided a greedy heuristic for the problem. The complement of the OPSM problem, called MinOPSM, is to delete the least number of entries in the matrix so that the remaining submatrix is order preserving. We devise a 5-approximation algorithm for the MinOPSM based on a formulation of the problem as a quadratic, nonseparable set cover problem. An alternative formulation combined with a primal-dual algorithm improves the approximation factor to 3. The complexity of both algorithms for a matrix of size m × n is O(m²n). We further comment on the related biclustering problem.
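
For concreteness, a submatrix is order preserving if there is an ordering of its columns under which every selected row is strictly increasing. A tiny checker for a candidate row set and column ordering (illustrative only, not part of the approximation algorithms above):

    def is_order_preserving(matrix, rows, cols):
        """Return True if, in every row of `rows`, the entries read along the
        column ordering `cols` are strictly increasing."""
        return all(
            all(matrix[r][cols[j]] < matrix[r][cols[j + 1]]
                for j in range(len(cols) - 1))
            for r in rows
        )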

Journal ArticleDOI
TL;DR: This document details how an unfounded assumption can be amended in an algorithm for the data migration and non-deterministic open shop scheduling problems in the minimum sum version that was claimed to achieve a 5.06-approximation.
Abstract: In Gandhi et al. [2006], we gave an algorithm for the data migration and non-deterministic open shop scheduling problems in the minimum sum version that was claimed to achieve a 5.06-approximation. Unfortunately, it was pointed out to us by Maxim Sviridenko that the argument contained an unfounded assumption that has eluded all of its readers until now. We detail in this document how this error can be amended. A side effect is an improved approximation ratio of 4.96.