
Showing papers in "Algorithmica in 2011"


Journal ArticleDOI
TL;DR: Reductions that show that the incremental and decremental single-source shortest-paths problems, for weighted directed or undirected graphs, are, in a strong sense, at least as hard as the static all-pairs shortest-paths problem.
Abstract: We obtain the following results related to dynamic versions of the shortest-paths problem: Reductions that show that the incremental and decremental single-source shortest-paths problems, for weighted directed or undirected graphs, are, in a strong sense, at least as hard as the static all-pairs shortest-paths problem. We also obtain slightly weaker results for the corresponding unweighted problems. A randomized fully-dynamic algorithm for the all-pairs shortest-paths problem in directed unweighted graphs with an amortized update time of $\tilde {O}(m\sqrt{n})$ (we use $\tilde {O}$ to hide small poly-logarithmic factors) and a worst-case query time of O(n^{3/4}). A deterministic O(n^2 log n) time algorithm for constructing an O(log n)-spanner with O(n) edges for any weighted undirected graph on n vertices. The algorithm uses a simple algorithm for incrementally maintaining a single-source shortest-paths tree up to a given distance.

200 citations


Journal ArticleDOI
TL;DR: The present paper picks up Hajek's line of thought to prove a drift theorem that is very easy to use in evolutionary computation and shows how previous analyses involving the complicated theorem can be redone in a much simpler and clearer way.
Abstract: Drift analysis is a powerful tool used to bound the optimization time of evolutionary algorithms (EAs). Various previous works apply a drift theorem going back to Hajek in order to show exponential lower bounds on the optimization time of EAs. However, this drift theorem is tedious to read and to apply since it requires two bounds on the moment-generating (exponential) function of the drift. A recent work identifies a specialization of this drift theorem that is much easier to apply. Nevertheless, it is not as simple and not as general as possible. The present paper picks up Hajek’s line of thought to prove a drift theorem that is very easy to use in evolutionary computation. Only two conditions have to be verified, one of which holds for virtually all EAs with standard mutation. The other condition is a bound on what is really relevant, the drift. Applications show how previous analyses involving the complicated theorem can be redone in a much simpler and clearer way. In some cases even improved results may be achieved. Therefore, the simplified theorem is also a didactical contribution to the runtime analysis of EAs.

162 citations


Journal ArticleDOI
TL;DR: The impact of selfishness and greediness in load balancing is characterized almost completely by presenting new and improved, tight or almost tight bounds on the price of anarchy of selfish load balancing as well as on the competitiveness of the greedy algorithm for online load balancing when the objective is to minimize the total latency of all clients on servers with linear latency functions.
Abstract: We study the load balancing problem in the context of a set of clients each wishing to run a job on a server selected among a subset of permissible servers for the particular client. We consider two different scenarios. In selfish load balancing, each client is selfish in the sense that it chooses, among its permissible servers, to run its job on the server having the smallest latency given the assignments of the jobs of other clients to servers. In online load balancing, clients appear online and, when a client appears, it has to make an irrevocable decision and assign its job to one of its permissible servers. Here, we assume that the clients aim to optimize some global criterion but in an online fashion. A natural local optimization criterion that can be used by each client when making its decision is to assign its job to that server that gives the minimum increase of the global objective. This gives rise to greedy online solutions. The aim of this paper is to determine how much the quality of load balancing is affected by selfishness and greediness. We characterize almost completely the impact of selfishness and greediness in load balancing by presenting new and improved, tight or almost tight bounds on the price of anarchy of selfish load balancing as well as on the competitiveness of the greedy algorithm for online load balancing when the objective is to minimize the total latency of all clients on servers with linear latency functions. In addition, we prove a tight upper bound on the price of stability of linear congestion games.

128 citations
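To make the greedy rule in the abstract above concrete, here is a minimal Python sketch under simplifying assumptions that are not taken from the paper: unit-weight clients, a latency coefficient a_e per server so that a server holding x jobs contributes x·(a_e·x) to the total latency, and the greedy client always picks the permissible server with the smallest marginal increase. All server names and coefficients are illustrative.

```python
# Toy sketch of greedy online load balancing with linear latencies (illustrative model).

def greedy_assign(clients, slope):
    """clients: list of permissible-server lists; slope[e]: latency coefficient of server e."""
    load = {e: 0 for e in slope}
    assignment = []
    for permissible in clients:
        # marginal increase of total latency when adding one unit-weight job to server e:
        # (x+1)*a*(x+1) - x*a*x = a*(2x+1)
        best = min(permissible, key=lambda e: slope[e] * (2 * load[e] + 1))
        load[best] += 1
        assignment.append(best)
    return assignment, load

if __name__ == "__main__":
    slope = {"s1": 1.0, "s2": 2.0, "s3": 1.0}          # hypothetical servers
    clients = [["s1", "s2"], ["s2", "s3"], ["s1", "s3"], ["s1", "s2", "s3"]]
    assignment, load = greedy_assign(clients, slope)
    total = sum(slope[e] * load[e] ** 2 for e in slope)
    print(assignment, load, "total latency:", total)
```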


Journal ArticleDOI
TL;DR: This work investigates parameter-independent data reduction methods and finds that effective preprocessing is possible if the number of edge modifications k is smaller than some multiple of $\lvert V\rvert$ , where V is the vertex set of the input graph.
Abstract: The Cluster Editing problem is defined as follows: Given an undirected, loopless graph, we want to find a set of edge modifications (insertions and deletions) of minimum cardinality, such that the modified graph consists of disjoint cliques. We present empirical results for this problem using exact methods from fixed-parameter algorithmics and linear programming. We investigate parameter-independent data reduction methods and find that effective preprocessing is possible if the number of edge modifications k is smaller than some multiple of $\lvert V\rvert$, where V is the vertex set of the input graph. In particular, combining parameter-dependent data reduction with lower and upper bounds we can effectively reduce graphs satisfying $k\leq25\lvert V\rvert$. In addition to the fastest known fixed-parameter branching strategy for the problem, we investigate an integer linear program (ILP) formulation of the problem using a cutting plane approach. Our results indicate that both approaches are capable of solving large graphs with 1000 vertices and several thousand edge modifications. For the first time, complex and very large graphs such as biological instances allow for an exact solution, using a combination of the above techniques. (A preliminary version of this paper appeared under the title “Exact algorithms for cluster editing: Evaluation and experiments” in the Proceedings of the 7th Workshop on Experimental Algorithms, WEA 2008, in: LNCS, vol. 5038, Springer, pp. 289–302.)

97 citations


Journal ArticleDOI
TL;DR: In this article, a polynomial-time data reduction procedure that reduces a problem instance to an equivalent algebraically represented problem with O(9^r k^2) variables is presented.
Abstract: We present an exact algorithm that decides, for every fixed r≥2, in time $O(m)+2^{O(k^{2})}$, whether a given multiset of m clauses of size r admits a truth assignment that satisfies at least ((2^r−1)m+k)/2^r clauses. Thus Max-r-Sat is fixed-parameter tractable when parameterized by the number of satisfied clauses above the tight lower bound (1−2^{−r})m. This solves an open problem of Mahajan et al. (J. Comput. Syst. Sci. 75(2):137–153, 2009). Our algorithm is based on a polynomial-time data reduction procedure that reduces a problem instance to an equivalent algebraically represented problem with O(9^r k^2) variables. This is done by representing the instance as an appropriate polynomial, and by applying a probabilistic argument combined with some simple tools from harmonic analysis to show that if the polynomial cannot be reduced to one of size O(9^r k^2), then there is a truth assignment satisfying the required number of clauses. We introduce a new notion of bikernelization from a parameterized problem to another one and apply it to prove that the above-mentioned parameterized Max-r-Sat admits a polynomial-size kernel. Combining another probabilistic argument with tools from graph matching theory and signed graphs, we show that if an instance of Max-2-Sat with m clauses has at least 3k variables after application of a certain polynomial-time reduction rule to it, then there is a truth assignment that satisfies at least (3m+k)/4 clauses. We also outline how the fixed-parameter tractability and polynomial-size kernel results on Max-r-Sat can be extended to more general families of Boolean Constraint Satisfaction Problems.

94 citations
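The tight lower bound (1−2^{−r})m is simply the expected number of clauses satisfied by a uniformly random assignment: a clause on r distinct variables is falsified with probability 2^{−r}. A small Python sketch that checks this empirically on a randomly generated instance; the instance generator and all parameters are illustrative, not part of the paper.

```python
import random

def random_instance(n, m, r):
    """m clauses of size r over variables 1..n; a literal is +v or -v."""
    clauses = []
    for _ in range(m):
        vars_ = random.sample(range(1, n + 1), r)
        clauses.append([v if random.random() < 0.5 else -v for v in vars_])
    return clauses

def satisfied(clauses, assignment):
    """assignment: dict variable -> bool; returns the number of satisfied clauses."""
    return sum(any((lit > 0) == assignment[abs(lit)] for lit in c) for c in clauses)

if __name__ == "__main__":
    n, m, r, trials = 30, 200, 3, 2000
    clauses = random_instance(n, m, r)
    avg = sum(
        satisfied(clauses, {v: random.random() < 0.5 for v in range(1, n + 1)})
        for _ in range(trials)
    ) / trials
    print("empirical average:", avg, " lower bound (1-2^-r)m:", (1 - 2 ** -r) * m)
```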


Journal ArticleDOI
TL;DR: A unified framework for approximating problems that can be formulated or interpreted as special cases of generalized partial cover is presented, and the applicability of the method on a diverse collection of covering problems is demonstrated, for some of which the first non-trivial approximability results are obtained.
Abstract: An instance of the generalized partial cover problem consists of a ground set U and a family of subsets ${\mathcal{S}}\subseteq 2^{U}$. Each element e∈U is associated with a profit p(e), whereas each subset $S\in \mathcal{S}$ has a cost c(S). The objective is to find a minimum cost subcollection $\mathcal{S}'\subseteq \mathcal{S}$ such that the combined profit of the elements covered by $\mathcal{S}'$ is at least P, a specified profit bound. In the prize-collecting version of this problem, there is no strict requirement to cover any element; however, if the subsets we pick leave an element e∈U uncovered, we incur a penalty of π(e). The goal is to identify a subcollection $\mathcal{S}'\subseteq \mathcal{S}$ that minimizes the cost of $\mathcal{S}'$ plus the penalties of uncovered elements. Although problem-specific connections between the partial cover and the prize-collecting variants of a given covering problem have been explored and exploited, a more general connection remained open. The main contribution of this paper is to establish a formal relationship between these two variants. As a result, we present a unified framework for approximating problems that can be formulated or interpreted as special cases of generalized partial cover. We demonstrate the applicability of our method on a diverse collection of covering problems, for some of which we obtain the first non-trivial approximability results.

70 citations


Journal ArticleDOI
TL;DR: The Strong Nash equilibria of the bin packing game are studied, and it is shown that a packing is a Strong Nash equilibrium iff it is produced by the Subset Sum algorithm for bin packing.
Abstract: Following recent interest in the study of computer science problems in a game theoretic setting, we consider the well known bin packing problem where the items are controlled by selfish agents. Each agent is charged with a cost according to the fraction of the used bin space its item requires. That is, the cost of the bin is split among the agents, proportionally to their sizes. Thus, the selfish agents prefer their items to be packed in a bin that is as full as possible. The social goal is to minimize the number of the bins used. The social cost in this case is therefore the number of bins used in the packing. A pure Nash equilibrium is a packing where no agent can obtain a smaller cost by unilaterally moving his item to a different bin, while other items remain in their original positions. A Strong Nash equilibrium is a packing where there exists no subset of agents, all agents in which can profit from jointly moving their items to different bins. We say that all agents in a subset profit from moving their items to different bins if all of them have a strictly smaller cost as a result of moving, while the other items remain in their positions. We measure the quality of the equilibria using the standard measures PoA and PoS that are defined as the worst-case/best-case asymptotic ratio between the social cost of a (pure) Nash equilibrium and the cost of an optimal packing, respectively. We also consider the recently introduced measures SPoA and SPoS, that are defined similarly to the PoA and the PoS, but consider only Strong Nash equilibria. We give nearly tight lower and upper bounds of 1.6416 and 1.6428, respectively, on the PoA of the bin packing game, improving upon a previous result by Bilò. We study the Strong Nash equilibria of the bin packing game, and show that a packing is a Strong Nash equilibrium iff it is produced by the Subset Sum algorithm for bin packing. This characterization implies that the SPoA of the bin packing game equals the approximation ratio of the Subset Sum algorithm, for which an almost tight bound is known. Moreover, the fact that any lower bound instance for the Subset Sum algorithm can be converted by a small modification of the item sizes to a lower bound instance on the SPoS, implies that in the bin packing game SPoA=SPoS. Finally, we address the issue of complexity of computing a Strong Nash packing and show that no polynomial time algorithm exists for finding Strong Nash equilibria, unless P=NP.

54 citations
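The Subset Sum algorithm referenced above repeatedly opens a bin and packs into it a subset of the remaining items of maximum total size not exceeding the capacity. A brute-force Python sketch of that rule for tiny instances (item sizes are made up, and every item is assumed to fit in a bin); in the game, an agent's cost share is its size divided by the total size packed in its bin, which is why agents prefer fuller bins.

```python
from itertools import combinations

def subset_sum_packing(sizes, capacity=1.0):
    """Repeatedly pack a maximum-total-size feasible subset of the remaining items.
    Brute force over subsets; only intended for small illustrative instances."""
    remaining = list(range(len(sizes)))
    bins = []
    while remaining:
        best, best_size = (), 0.0
        for k in range(1, len(remaining) + 1):
            for subset in combinations(remaining, k):
                s = sum(sizes[i] for i in subset)
                if s <= capacity + 1e-12 and s > best_size:
                    best, best_size = subset, s
        bins.append(list(best))
        remaining = [i for i in remaining if i not in best]
    return bins

if __name__ == "__main__":
    sizes = [0.6, 0.55, 0.45, 0.4, 0.3, 0.25]   # hypothetical item sizes
    for b in subset_sum_packing(sizes):
        print("bin:", b, "filled:", sum(sizes[i] for i in b))
```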


Journal ArticleDOI
TL;DR: The path hash accumulator is introduced, a new primitive based on cryptographic hashing for efficiently authenticating various properties of structured data represented as paths, including any decomposable query over sequences of elements.
Abstract: Following in the spirit of data structure and algorithm correctness checking, authenticated data structures provide cryptographic proofs that their answers are as accurate as the author intended, even if the data structure is being controlled by a remote untrusted host. In this paper we present efficient techniques for authenticating data structures that represent graphs and collections of geometric objects. We use a data-querying model where a data structure maintained by a trusted source is mirrored at distributed untrusted servers, called responders, with the responders answering queries made by users: when a user queries a responder, along with the answer to the issued query, he receives a cryptographic proof that allows the verification of the answer trusting only a short statement (digest) signed by the source. We introduce the path hash accumulator, a new primitive based on cryptographic hashing for efficiently authenticating various properties of structured data represented as paths, including any decomposable query over sequences of elements. We show how to employ our primitive to authenticate queries about properties of paths in graphs and search queries on multi-catalogs. This allows the design of new, efficient authenticated data structures for fundamental problems on networks, such as path and connectivity queries over graphs, and complex queries on two-dimensional geometric objects, such as intersection and containment queries. By building on our new primitive we achieve efficiency and modularity: our schemes can be easily analyzed in terms of complexity and security and are simple to implement. Our work has applications to the authentication of network management systems and geographic information systems.

54 citations


Journal ArticleDOI
TL;DR: In the course of studying the parameterized complexity of the problem of deleting k vertices to obtain a König-Egerváry graph, a number of interesting structural results on matchings and vertex covers are shown which could be useful in other contexts.
Abstract: A graph is König-Egerváry if the size of a minimum vertex cover equals that of a maximum matching in the graph. These graphs have been studied extensively from a graph-theoretic point of view. In this paper, we introduce and study the algorithmic complexity of finding König-Egerváry subgraphs of a given graph. In particular, given a graph G and a nonnegative integer k, we are interested in the following questions: does there exist a set of k vertices (edges) whose deletion makes the graph König-Egerváry? does there exist a set of k vertices (edges) that induce a König-Egerváry subgraph? We show that these problems are NP-complete and study their complexity from the points of view of approximation and parameterized complexity. Towards this end, we first study the algorithmic complexity of Above Guarantee Vertex Cover, where one is interested in minimizing the additional number of vertices needed beyond the maximum matching size for the vertex cover. Further, while studying the parameterized complexity of the problem of deleting k vertices to obtain a König-Egerváry graph, we show a number of interesting structural results on matchings and vertex covers which could be useful in other contexts.

53 citations
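For intuition, here is a brute-force Python check of the defining property (minimum vertex cover size equals maximum matching size) on small graphs; it is purely illustrative and not related to the paper's parameterized algorithms.

```python
from itertools import combinations

def max_matching_size(edges):
    """Largest set of pairwise vertex-disjoint edges, by brute force."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):
                return k
    return 0

def min_vertex_cover_size(vertices, edges):
    """Smallest set of vertices touching every edge, by brute force."""
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return k
    return len(vertices)

def is_koenig_egervary(vertices, edges):
    return max_matching_size(edges) == min_vertex_cover_size(vertices, edges)

if __name__ == "__main__":
    # a 4-cycle (bipartite, hence König-Egerváry) versus a triangle (not)
    print(is_koenig_egervary([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
    print(is_koenig_egervary([0, 1, 2], [(0, 1), (1, 2), (2, 0)]))             # False
```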


Journal ArticleDOI
TL;DR: It is proved that there exist instances of the minimum s-t-cut problem that cannot be solved by standard single-objective evolutionary algorithms in reasonable time and a bi-criteria approach is developed based on the famous maximum-flow minimum-cut theorem that enables evolutionary algorithms to find an optimal solution in expected polynomial time.
Abstract: We study the minimum s-t-cut problem in graphs with costs on the edges in the context of evolutionary algorithms. Minimum cut problems belong to the class of basic network optimization problems that occur as crucial subproblems in many real-world optimization problems and have a variety of applications in several different areas. We prove that there exist instances of the minimum s-t-cut problem that cannot be solved by standard single-objective evolutionary algorithms in reasonable time. On the other hand, we develop a bi-criteria approach based on the famous maximum-flow minimum-cut theorem that enables evolutionary algorithms to find an optimal solution in expected polynomial time.

44 citations


Journal ArticleDOI
TL;DR: An invariance property of this distribution is proved, which is then used to obtain a significantly improved bound on the drift, namely the expected change of a potential function, here the number of bits set correctly.
Abstract: In their seminal article Droste, Jansen, and Wegener (Theor. Comput. Sci. 276:51–82, 2002) consider a basic direct-search heuristic with a global search operator, namely the so-called (1+1) Evolutionary Algorithm ((1+1) EA). They present the first theoretical analysis of the (1+1) EA’s expected runtime for the class of linear functions over the search space {0,1}^n. In a rather long and involved proof they show that, for any linear function, the expected runtime is O(n log n), i.e., that there are two constants c and n′ such that, for n≥n′, the expected number of iterations until a global optimum is generated is bounded above by c·n log₂ n. However, neither c nor n′ are specified—they would be pretty large. Here we reconsider this optimization scenario to demonstrate the potential of an analytical method that makes use of the distribution of the evolving candidate solution over the search space {0,1}^n. Actually, an invariance property of this distribution is proved, which is then used to obtain a significantly improved bound on the drift, namely the expected change of a potential function, here the number of bits set correctly. Finally, this better estimate of the drift enables an upper bound on the expected number of iterations of 3.8 n log₂ n + 7.6 log₂ n for n≥2.
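The (1+1) EA analyzed here is short enough to state in code: flip each bit independently with probability 1/n and keep the offspring if its fitness is not worse. A Python sketch on a linear function with illustrative positive weights (so the all-ones string is optimal); the observed iteration counts should scale roughly like n log n.

```python
import random

def one_plus_one_ea(weights, max_iters=10**6):
    """(1+1) EA maximizing the linear function f(x) = sum w_i * x_i over {0,1}^n."""
    n = len(weights)
    x = [random.randint(0, 1) for _ in range(n)]
    fx = sum(w * b for w, b in zip(weights, x))
    optimum = sum(weights)                         # assumes all weights are positive
    for t in range(1, max_iters + 1):
        y = [b ^ (random.random() < 1.0 / n) for b in x]   # flip each bit w.p. 1/n
        fy = sum(w * b for w, b in zip(weights, y))
        if fy >= fx:                               # accept the offspring if not worse
            x, fx = y, fy
        if fx == optimum:
            return t
    return None

if __name__ == "__main__":
    n = 100
    weights = [i + 1 for i in range(n)]            # hypothetical linear function
    print("iterations until optimum:", [one_plus_one_ea(weights) for _ in range(5)])
```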

Journal ArticleDOI
TL;DR: This work establishes a trade-off between the size of advice and the best competitive ratio of a broadcasting algorithm for n-node trees, with an approximation factor of O(n^ε), for an arbitrarily small positive constant ε.
Abstract: We study the problem of the amount of information required to perform fast broadcasting in tree networks. The source located at the root of a tree has to disseminate a message to all nodes. In each round each informed node can transmit to one child. Nodes do not know the topology of the tree but an oracle knowing it can give a string of bits of advice to the source which can then pass it down the tree with the source message. The quality of a broadcasting algorithm with advice is measured by its competitive ratio: the worst case ratio, taken over n-node trees, between the time of this algorithm and the optimal broadcasting time in the given tree. Our goal is to find a trade-off between the size of advice and the best competitive ratio of a broadcasting algorithm for n-node trees. We establish such a trade-off with an approximation factor of O(n^ε), for an arbitrarily small positive constant ε. This is the first communication problem for which a trade-off between the size of advice and the efficiency of the solution is shown for arbitrary size of advice.

Journal ArticleDOI
TL;DR: In this article, the authors give a non-clairvoyant algorithm LAPS, and show that for every power function of the form P(s)=s^α, LAPS is O(1)-competitive; more precisely, the competitive ratio is 8 for α=2, 13 for α=3, and $\frac{2\alpha^{2}}{\ln\alpha}$ for α>3.
Abstract: We give three results related to online nonclairvoyant speed scaling to minimize total flow time plus energy. We give a nonclairvoyant algorithm LAPS, and show that for every power function of the form P(s)=s^α, LAPS is O(1)-competitive; more precisely, the competitive ratio is 8 for α=2, 13 for α=3, and $\frac{2\alpha^{2}}{\ln\alpha}$ for α>3. We then show that there is no constant c, and no deterministic nonclairvoyant algorithm A, such that A is c-competitive for every power function of the form P(s)=s^α. So necessarily the achievable competitive ratio increases as the steepness of the power function increases. Finally we show that there is a fixed, very steep, power function for which no nonclairvoyant algorithm can be O(1)-competitive.

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of constructing a phylogenetic network consistent with an input set T, where T contains at least one phylogenetic tree on three leaves (a triplet) for each combination of three taxa.
Abstract: A phylogenetic network is a directed acyclic graph that visualizes an evolutionary history containing so-called reticulations such as recombinations, hybridizations or lateral gene transfers. Here we consider the construction of a simplest possible phylogenetic network consistent with an input set T, where T contains at least one phylogenetic tree on three leaves (a triplet) for each combination of three taxa. To quantify the complexity of a network we consider both the total number of reticulations and the number of reticulations per biconnected component, called the level of the network. We give polynomial-time algorithms for constructing a level-1 and a level-2 network, respectively, that contains a minimum number of reticulations and is consistent with T (if such a network exists). In addition, we show that if T is precisely equal to the set of triplets consistent with some network, then we can construct such a network with smallest possible level in time O(|T|^{k+1}), if k is a fixed upper bound on the level of the network.

Journal ArticleDOI
TL;DR: The longest path problem can be solved in polynomial time on interval graphs: the proposed dynamic programming algorithm runs in O(n^4) time, where n is the number of vertices of the input graph.
Abstract: The longest path problem is the problem of finding a path of maximum length in a graph. Polynomial solutions for this problem are known only for small classes of graphs, while it is NP-hard on general graphs, as it is a generalization of the Hamiltonian path problem. Motivated by the work of Uehara and Uno (Proc. of the 15th Annual International Symp. on Algorithms and Computation (ISAAC), LNCS, vol. 3341, pp. 871–883, 2004), where they left the longest path problem open for the class of interval graphs, in this paper we show that the problem can be solved in polynomial time on interval graphs. The proposed algorithm uses a dynamic programming approach and runs in O(n^4) time, where n is the number of vertices of the input graph.

Journal ArticleDOI
TL;DR: It is shown that dynamic programming can be used to establish an O(3.8730^n) algorithm to compute an optimal L(2,1)-labeling, and the improvement is best seen in the first NP-complete case of k=4, where the running time of the algorithm is O(1.3006^n).
Abstract: The notion of distance constrained graph labelings, motivated by the Frequency Assignment Problem, reads as follows: A mapping from the vertex set of a graph G=(V,E) into an interval of integers {0,…,k} is an L(2,1)-labeling of G of span k if any two adjacent vertices are mapped onto integers that are at least 2 apart, and every two vertices with a common neighbor are mapped onto distinct integers. It is known that for any fixed k≥4, deciding the existence of such a labeling is an NP-complete problem. We present exact exponential time algorithms that are faster than the naive O*((k+1)^n) algorithm that would try all possible mappings. The improvement is best seen in the first NP-complete case of k=4, where the running time of our algorithm is O(1.3006^n). Furthermore we show that dynamic programming can be used to establish an O(3.8730^n) algorithm to compute an optimal L(2,1)-labeling.
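The naive O*((k+1)^n) baseline mentioned above simply tries every mapping of V into {0,…,k}. A compact Python sketch of that baseline, only to make the labeling constraints concrete (the paper's contribution is the much faster exponential algorithms); the example graph is illustrative.

```python
from itertools import product

def is_l21_labeling(adj, labeling):
    """adj: dict vertex -> set of neighbours; labeling: dict vertex -> int."""
    for u in adj:
        for v in adj[u]:
            if abs(labeling[u] - labeling[v]) < 2:            # adjacent: differ by >= 2
                return False
            for w in adj[v]:
                if w != u and labeling[u] == labeling[w]:     # common neighbour: distinct
                    return False
    return True

def has_l21_labeling(adj, k):
    vertices = list(adj)
    return any(
        is_l21_labeling(adj, dict(zip(vertices, labels)))
        for labels in product(range(k + 1), repeat=len(vertices))
    )

if __name__ == "__main__":
    c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # 4-cycle
    print("smallest span:", min(k for k in range(10) if has_l21_labeling(c4, k)))
```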

Journal ArticleDOI
TL;DR: This paper provides a mathematical analysis of the convergence of a (1+1)-ES on unimodal spherical objective functions in the presence of noise and proves for a multiplicative noise model that for a positive expected value of the noisy objective function, convergence or divergence happens depending on the infimum of the support of the noise.
Abstract: Noise is present in many real-world continuous optimization problems. Stochastic search algorithms such as Evolution Strategies (ESs) have been proposed as effective search methods in such contexts. In this paper, we provide a mathematical analysis of the convergence of a (1+1)-ES on unimodal spherical objective functions in the presence of noise. We prove for a multiplicative noise model that for a positive expected value of the noisy objective function, convergence or divergence happens depending on the infimum of the support of the noise. Moreover, we investigate convergence rates and show that log-linear convergence is preserved in the presence of noise. This result is a strong theoretical foundation of the robustness of ESs with respect to noise.
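A minimal Python sketch of the setting: a (1+1)-ES on the sphere function where every evaluation is perturbed by multiplicative noise. The step-size rule and all constants are illustrative assumptions, not the exact strategy analyzed in the paper.

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def noisy(value, level):
    # multiplicative noise model (illustrative): observed = true * (1 + U[-level, level])
    return value * (1.0 + random.uniform(-level, level))

def one_plus_one_es(dim=10, sigma=1.0, level=0.3, iters=3000):
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]
    for t in range(1, iters + 1):
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        # plus-selection based on (re-evaluated) noisy fitness values
        if noisy(sphere(y), level) <= noisy(sphere(x), level):
            x = y
            sigma *= math.exp(0.2)    # 1/5-th-rule-style step-size adaptation (constants illustrative)
        else:
            sigma *= math.exp(-0.05)
        if t % 1000 == 0:
            print(t, "true f(x):", sphere(x), "sigma:", sigma)

if __name__ == "__main__":
    random.seed(1)
    one_plus_one_es()
```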

Journal ArticleDOI
TL;DR: This work shows how to leverage the knowledge of ℛ for faster Delaunay computation and optimally handles a wide variety of inputs, e.g., overlapping disks of different sizes and fat regions.
Abstract: Suppose we want to compute the Delaunay triangulation of a set P whose points are restricted to a collection ℛ of input regions known in advance. Building on recent work by Löffler and Snoeyink, we show how to leverage our knowledge of ℛ for faster Delaunay computation. Our approach needs no fancy machinery and optimally handles a wide variety of inputs, e.g., overlapping disks of different sizes and fat regions.

Journal ArticleDOI
TL;DR: It is shown that the competitive ratio of AVR is at least ((2−δ)α)^α/2, where δ is a function of α that approaches zero as α approaches infinity, and that this analysis is significantly simpler and more elementary than the original analysis in Yao et al.
Abstract: Speed scaling is a power management technique that involves dynamically changing the speed of a processor. This gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. Yao, Demers, and Shenker (Proc. IEEE Symp. Foundations of Computer Science, pp. 374–382, 1995) considered the problem where the QoS constraint is deadline feasibility and the objective is to minimize the energy used. They proposed an online speed scaling algorithm Average Rate (AVR) that runs each job at a constant speed between its release and its deadline. They showed that the competitive ratio of AVR is at most (2α)^α/2 if a processor running at speed s uses power s^α. We show the competitive ratio of AVR is at least ((2−δ)α)^α/2, where δ is a function of α that approaches zero as α approaches infinity. This shows that the competitive analysis of AVR by Yao, Demers, and Shenker is essentially tight, at least for large α. We also give an alternative proof that the competitive ratio of AVR is at most (2α)^α/2 using a potential function argument. We believe that this analysis is significantly simpler and more elementary than the original analysis of AVR in Yao et al. (Proc. IEEE Symp. Foundations of Computer Science, pp. 374–382, 1995).
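AVR is easy to restate: a job with work w, release r and deadline d is run at its density w/(d−r) throughout [r,d], and the processor speed at time t is the sum of the densities of the jobs alive at t. A short Python sketch computing the resulting energy ∫ s(t)^α dt on a toy instance (job data illustrative); since the speed is piecewise constant between release/deadline breakpoints, the integral is exact.

```python
def avr_energy(jobs, alpha=3.0):
    """jobs: list of (release, deadline, work). AVR runs each job at density
    work/(deadline-release) on [release, deadline]; energy = integral of speed^alpha."""
    times = sorted({t for r, d, _ in jobs for t in (r, d)})
    energy = 0.0
    for t0, t1 in zip(times, times[1:]):
        # speed on [t0, t1] = sum of densities of jobs whose interval covers it
        speed = sum(w / (d - r) for r, d, w in jobs if r <= t0 and t1 <= d)
        energy += (speed ** alpha) * (t1 - t0)
    return energy

if __name__ == "__main__":
    jobs = [(0, 2, 2.0), (1, 3, 1.0), (0, 4, 2.0)]   # hypothetical (release, deadline, work)
    print("AVR energy with alpha=3:", avr_energy(jobs, alpha=3.0))
```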

Journal ArticleDOI
TL;DR: In this paper, it is shown that if Δ≥4 and mad(G) < 14/5, then χ_i(G) ≤ Δ+2, and that if Δ=3 and mad(G) < 36/13, then χ_i(G) ≤ 5.
Abstract: Let mad(G) denote the maximum average degree (over all subgraphs) of G and let χ_i(G) denote the injective chromatic number of G. We prove that if Δ≥4 and $\mathrm{mad}(G)<\frac{14}{5}$, then χ_i(G)≤Δ+2. When Δ=3, we show that $\mathrm{mad}(G)<\frac{36}{13}$ implies χ_i(G)≤5. In contrast, we give a graph G with Δ=3, $\mathrm{mad}(G)=\frac{36}{13}$, and χ_i(G)=6.
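An injective coloring assigns distinct colors to any two vertices that share a common neighbour (adjacent vertices may share a color). A brute-force Python computation of χ_i for small graphs, just to make the definition concrete; the example graph is illustrative.

```python
from itertools import product

def injective_chromatic_number(adj):
    """adj: dict vertex -> set of neighbours. Brute force over colorings."""
    vertices = list(adj)
    # two vertices conflict iff they have a common neighbour
    conflicts = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
                 if adj[u] & adj[v]]
    for k in range(1, len(vertices) + 1):
        for coloring in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, coloring))
            if all(col[u] != col[v] for u, v in conflicts):
                return k
    return len(vertices)

if __name__ == "__main__":
    c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}   # the 5-cycle
    print("injective chromatic number of C5:", injective_chromatic_number(c5))
```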

Journal ArticleDOI
TL;DR: Lower bounds on the convergence rate of comparison based or selection based algorithms are derived by considering the VC-dimension of the level sets of the fitness functions, and improved lower bounds are obtained by an argument based on the number of sign patterns.
Abstract: We derive lower bounds on the convergence rate of comparison based or selection based algorithms, improving existing results in the continuous setting, and extending them to non-trivial results in the discrete case. This is achieved by considering the VC-dimension of the level sets of the fitness functions; results are then obtained through the use of the shatter function lemma. In the special case of optimization of the sphere function, improved lower bounds are obtained by an argument based on the number of sign patterns.

Journal ArticleDOI
TL;DR: An algorithm is presented that finds out-trees and out-branchings with at least k leaves in directed and undirected graphs, improving over the previously fastest algorithms for these problems, whose run times are 2^{O(k log k)} poly(n) and O(poly(n)+6.75^k poly(k)), respectively.
Abstract: We present an algorithm that finds out-trees and out-branchings with at least k leaves in directed graphs. These problems are known as Directed Maximum Leaf Out-Tree and Directed Maximum Leaf Out-Branching, respectively, and—in the case of undirected graphs—as Maximum Leaf Spanning Tree. The run time of our algorithm is O(4^k nm) on directed graphs and O(poly(n)+4^k k^2) on undirected graphs. This improves over the previously fastest algorithms for these problems with run times of 2^{O(k log k)} poly(n) and O(poly(n)+6.75^k poly(k)) respectively.

Journal ArticleDOI
TL;DR: These min-max results are the first of their kind in the study of crossing numbers and improve the approximation factor for the approximation algorithm given by Hliněný and Salazar (Graph Drawing GD’06).
Abstract: A nonplanar graph G is near-planar if it contains an edge e such that G−e is planar. The problem of determining the crossing number of a near-planar graph is exhibited from different combinatorial viewpoints. On the one hand, we develop min-max formulas involving efficiently computable lower and upper bounds. These min-max results are the first of their kind in the study of crossing numbers and improve the approximation factor for the approximation algorithm given by Hliněný and Salazar (Graph Drawing GD’06). On the other hand, we show that it is NP-hard to compute a weighted version of the crossing number for near-planar graphs.

Journal ArticleDOI
TL;DR: This work considers congestion games with linear latency functions in which each player is aware only of a subset of all the other players, modeled by means of a social knowledge graph G in which nodes represent players and there is an edge from i to j if i knows j.
Abstract: We consider congestion games with linear latency functions in which each player is aware only of a subset of all the other players. This is modeled by means of a social knowledge graph G in which nodes represent players and there is an edge from i to j if i knows j. Under the assumption that the payoff of each player is affected only by the strategies of the adjacent ones, we first give a complete characterization of the games possessing pure Nash equilibria. Namely, if the social graph G is undirected, the game is an exact potential game and thus isomorphic to a classical congestion game. As a consequence, it always converges and possesses Nash equilibria. On the other hand, if G is directed an equilibrium is not guaranteed to exist, but the game is always convergent and an equilibrium can be found in polynomial time if G is acyclic, even if finding the best equilibrium remains an intractable problem. We then investigate the impact of the limited knowledge of the players on the performance of the game. More precisely, given a bound on the maximum degree of G, for the convergent cases we provide tight lower and upper bounds on the price of stability and asymptotically tight bounds on the price of anarchy. Such results are determined for four natural social cost functions: total and maximum presumed latencies, that is the ones the players believe to pay due to the fact that they are only aware of the existence of their neighbors, and total and maximum perceived latencies, i.e. actually experienced due to all (and not only the known) players using the same facilities. All the results are then extended to singleton congestion games.

Journal ArticleDOI
TL;DR: The NP-hardness of reoptimization variants of the shortest common superstring problem (SCS) where the local modifications consist of adding or removing a single string is shown and some lower bounds on the approximation ratio are given.
Abstract: A reoptimization problem describes the following scenario: given an instance of an optimization problem together with an optimal solution for it, we want to find a good solution for a locally modified instance. In this paper, we deal with reoptimization variants of the shortest common superstring problem (SCS) where the local modifications consist of adding or removing a single string. We show the NP-hardness of these reoptimization problems and design several approximation algorithms for them. First, we use a technique of iteratively using any SCS algorithm to design an approximation algorithm for the reoptimization variant of adding a string whose approximation ratio is arbitrarily close to 8/5 and another algorithm for deleting a string with a ratio tending to 13/7. Both algorithms significantly improve over the best currently known SCS approximation ratio of 2.5. Additionally, this iteration technique can be used to design an improved SCS approximation algorithm (without reoptimization) if the input instance contains a long string, which might be of independent interest. However, these iterative algorithms are relatively slow. Thus, we present another, faster approximation algorithm for inserting a string which is based on cutting the given optimal solution and achieves an approximation ratio of 11/6. Moreover, we give some lower bounds on the approximation ratio which can be achieved by algorithms that use such cutting strategies.
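As background for the reoptimization results, here is a brute-force Python solver for the underlying shortest common superstring problem on tiny instances (try all orders and merge with maximum overlaps, assuming no string is a substring of another); this is only the baseline problem, not the paper's iterative or cutting-based algorithms.

```python
from itertools import permutations

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def shortest_superstring(strings):
    """Brute force over all orders; assumes no string is a substring of another."""
    best = None
    for order in permutations(strings):
        s = order[0]
        for t in order[1:]:
            s += t[overlap(s, t):]        # append t, reusing the maximum overlap
        if best is None or len(s) < len(best):
            best = s
    return best

if __name__ == "__main__":
    print(shortest_superstring(["cat", "atg", "tga", "gac"]))
```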

Journal ArticleDOI
TL;DR: This work shows that many of the well-known crossing number notions are NP-complete even if restricted to cubic graphs, and obtains a new and simpler proof of Hliněný’s result that computing the crossing number of a cubic graph is NP-complete.
Abstract: We show that computing the crossing number and the odd crossing number of a graph with a given rotation system is NP-complete. As a consequence we can show that many of the well-known crossing number notions are NP-complete even if restricted to cubic graphs (with or without rotation system). In particular, we can show that Tutte’s independent odd crossing number is NP-complete, and we obtain a new and simpler proof of Hliněný’s result that computing the crossing number of a cubic graph is NP-complete. We also consider the special case of multigraphs with rotation systems on a fixed number k of vertices. For k=1 we give an O(m log m) algorithm, where m is the number of edges, and for loopless multigraphs on 2 vertices we present a linear time 2-approximation algorithm. In both cases there are interesting connections to edit-distance problems on (cyclic) strings. For larger k we show how to approximate the crossing number to within a factor of ${k+4\choose4}/5$ in time O(m^k log m) on a graph with m edges.

Journal ArticleDOI
TL;DR: Different characterizations for Helly circular-arc graphs are described, including a characterization by forbidden induced subgraphs for the class, which leads to a linear-time algorithm for recognizing graphs of this class.
Abstract: A circular-arc model ℳ is a circle C together with a collection $\mathcal{A}$ of arcs of C. If $\mathcal{A}$ satisfies the Helly Property then ℳ is a Helly circular-arc model. A (Helly) circular-arc graph is the intersection graph of a (Helly) circular-arc model. Circular-arc graphs and their subclasses have been the object of a great deal of attention in the literature. Linear-time recognition algorithms have been described both for the general class and for some of its subclasses. However, for Helly circular-arc graphs, the best recognition algorithm is that by Gavril, whose complexity is O(n^3). In this article, we describe different characterizations for Helly circular-arc graphs, including a characterization by forbidden induced subgraphs for the class. The characterizations lead to a linear-time algorithm for recognizing graphs of this class. The algorithm also produces certificates for a negative answer, by exhibiting a forbidden subgraph of the input graph, within the same time bound.
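The Helly property used above says that every subfamily of pairwise intersecting arcs has a point common to all of them. A brute-force Python check on a discretized circle, purely to illustrate the definition (the paper's recognition algorithm is linear time); the arc endpoints are made up.

```python
from itertools import combinations

def arc_points(start, end, m):
    """Discrete arc on a circle with m positions, from start to end clockwise (inclusive)."""
    points, p = set(), start
    while True:
        points.add(p)
        if p == end:
            return points
        p = (p + 1) % m

def is_helly(arcs, m):
    """arcs: list of (start, end) pairs; checks the Helly property by brute force."""
    sets = [arc_points(s, e, m) for s, e in arcs]
    for k in range(2, len(sets) + 1):
        for family in combinations(sets, k):
            pairwise = all(a & b for a, b in combinations(family, 2))
            if pairwise and not set.intersection(*family):
                return False
    return True

if __name__ == "__main__":
    m = 12
    # three arcs that pairwise intersect but have no common point -> not Helly
    print(is_helly([(0, 5), (4, 9), (8, 1)], m))   # False
    # overlapping arcs sharing a point -> Helly
    print(is_helly([(0, 4), (2, 6), (3, 5)], m))   # True
```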

Journal ArticleDOI
TL;DR: In this paper, the Stackelberg Minimum Spanning Tree game (StackMST) is considered: the first player chooses an assignment of prices to the blue edges, and the second player then buys the cheapest possible minimum spanning tree, using any combination of red and blue edges.
Abstract: We consider a one-round two-player network pricing game, the Stackelberg Minimum Spanning Tree game or StackMST. The game is played on a graph (representing a network), whose edges are colored either red or blue, and where the red edges have a given fixed cost (representing the competitor’s prices). The first player chooses an assignment of prices to the blue edges, and the second player then buys the cheapest possible minimum spanning tree, using any combination of red and blue edges. The goal of the first player is to maximize the total price of purchased blue edges. This game is the minimum spanning tree analog of the well-studied Stackelberg shortest-path game. We analyze the complexity and approximability of the first player’s best strategy in StackMST. In particular, we prove that the problem is APX-hard even if there are only two different red costs, and give an approximation algorithm whose approximation ratio is at most min {k,1+ln b,1+ln W}, where k is the number of distinct red costs, b is the number of blue edges, and W is the maximum ratio between red costs. We also give a natural integer linear programming formulation of the problem, and show that the integrality gap of the fractional relaxation asymptotically matches the approximation guarantee of our algorithm.
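The second player's response in StackMST is an ordinary MST computation over the red edges (fixed costs) and the priced blue edges, and the first player's revenue is the total price of the blue edges in that tree. A Python sketch using Kruskal's algorithm on a made-up instance; ties are broken in favor of blue edges here, a common convention in Stackelberg pricing (an assumption, not taken from the abstract).

```python
def follower_mst_revenue(n, red_edges, blue_edges, prices):
    """Kruskal's MST over red edges (fixed costs) and blue edges (chosen prices).
    Returns the leader's revenue: total price of the blue edges bought.
    Ties are broken in favor of blue edges (illustrative convention)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = [(c, 1, u, v, 0.0) for u, v, c in red_edges]                      # red: tiebreak 1
    edges += [(prices[i], 0, u, v, prices[i]) for i, (u, v) in enumerate(blue_edges)]
    revenue = 0.0
    for cost, _, u, v, price in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            revenue += price
    return revenue

if __name__ == "__main__":
    # hypothetical instance: 4 nodes, a path of red edges, two blue edges to price
    red = [(0, 1, 3.0), (1, 2, 3.0), (2, 3, 3.0)]
    blue = [(0, 2), (1, 3)]
    for p in [1.0, 2.0, 3.0, 4.0]:
        print("price", p, "-> revenue", follower_mst_revenue(4, red, blue, [p, p]))
```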

Journal ArticleDOI
TL;DR: A hybrid of a simple evolutionary algorithm, the (1+1) EA, with a powerful local search operator known as variable-depth search (VDS) or Kernighan-Lin is considered, which demonstrates the usefulness of hybrid evolutionary algorithms with VDS from a rigorous theoretical perspective.
Abstract: Hybridizing evolutionary algorithms with local search has become a popular trend in recent years. There is empirical evidence for various combinatorial problems where hybrid evolutionary algorithms perform better than plain evolutionary algorithms. Due to the rapid development of a highly active field of research, theory lags far behind and a solid theoretical foundation of hybrid metaheuristics is sorely needed. We are aiming at a theoretical understanding of why and when hybrid evolutionary algorithms are successful in combinatorial optimization. To this end, we consider a hybrid of a simple evolutionary algorithm, the (1+1) EA, with a powerful local search operator known as variable-depth search (VDS) or Kernighan-Lin. Three combinatorial problems are investigated: Mincut, Knapsack, and Maxsat. More precisely, we focus on simply structured problem instances that contain local optima which are very hard to overcome for many common metaheuristics. The plain (1+1) EA, iterated local search, and simulated annealing need exponential time for optimization, with high probability. In sharp contrast, the hybrid algorithm using VDS finds a global optimum in expected polynomial time. These results demonstrate the usefulness of hybrid evolutionary algorithms with VDS from a rigorous theoretical perspective.

Journal ArticleDOI
TL;DR: A 4-approximation algorithm for the problem of placing the fewest guards on a 1.5D terrain so that every point of the terrain is seen by at least one guard improves on the previous best approximation factor of 5.
Abstract: We present a 4-approximation algorithm for the problem of placing the fewest guards on a 1.5D terrain so that every point of the terrain is seen by at least one guard. This improves on the previous best approximation factor of 5 (see King in Proceedings of the 13th Latin American Symposium on Theoretical Informatics, pp. 629–640, 2006). Unlike most of the previous techniques, our method is based on rounding the linear programming relaxation of the corresponding covering problem. Besides the simplicity of the analysis, which mainly relies on decomposing the constraint matrix of the LP into totally balanced matrices, our algorithm, unlike previous work, generalizes to the weighted and partial versions of the basic problem.