
Showing papers in "Algorithmica in 2016"


Journal ArticleDOI
TL;DR: In this article, the authors construct quantum walks that not only detect but also find marked vertices in a graph, by interpolating between the random walk and the absorbing walk whose marked states are absorbing.
Abstract: We solve an open problem by constructing quantum walks that not only detect but also find marked vertices in a graph. In the case when the marked set $M$ consists of a single vertex, the number of steps of the quantum walk is quadratically smaller than the classical hitting time $\mathrm{HT}(P,M)$ of any reversible random walk $P$ on the graph. In the case of multiple marked elements, the number of steps is given in terms of a related quantity $\mathrm{HT}^{+}(P,M)$ which we call extended hitting time. Our approach is new, simpler and more general than previous ones. We introduce a notion of interpolation between the random walk $P$ and the absorbing walk $P'$, whose marked states are absorbing. Then our quantum walk is simply the quantum analogue of this interpolation. Contrary to previous approaches, our results remain valid when the random walk $P$ is not state-transitive. We also provide algorithms in the cases when only approximations or bounds on the parameters $p_M$ (the probability of picking a marked vertex from the stationary distribution) and $\mathrm{HT}^{+}(P,M)$ are known.
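
The classical half of this construction is easy to make concrete: given the transition matrix $P$ and the absorbing walk $P'$ (obtained from $P$ by turning marked rows into self-loops), the interpolated walk is $P(s) = (1-s)P + sP'$ for $s \in [0,1]$. Below is a minimal numpy sketch of this classical interpolation only (the quantum analogue is not reproduced); the cycle graph and the marked set are illustrative choices, not taken from the paper.

```python
import numpy as np

def interpolated_walk(P, marked, s):
    """P(s) = (1 - s) * P + s * P', where P' makes marked states absorbing."""
    P_abs = P.copy()
    for m in marked:
        P_abs[m, :] = 0.0
        P_abs[m, m] = 1.0          # marked rows become self-loops
    return (1.0 - s) * P + s * P_abs

# Illustrative example: simple random walk on a cycle of n vertices.
n, marked = 8, [0]
P = np.zeros((n, n))
for v in range(n):
    P[v, (v - 1) % n] = P[v, (v + 1) % n] = 0.5

for s in (0.0, 0.5, 1.0):
    Ps = interpolated_walk(P, marked, s)
    assert np.allclose(Ps.sum(axis=1), 1.0)   # still row-stochastic
    print(f"s={s}: row of marked vertex = {Ps[0]}")
```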

79 citations


Journal ArticleDOI
TL;DR: It is proved that non-elitist EAs under a set of specific conditions can optimise benchmark functions in expected polynomial time, even when vanishingly little information about the fitness values of individual solutions or populations is available; this is the first runtime analysis of randomised search heuristics under partial information.
Abstract: Although widely applied in optimisation, relatively little has been proven rigorously about the role and behaviour of populations in randomised search processes. This paper presents a new method to prove upper bounds on the expected optimisation time of population-based randomised search heuristics that use non-elitist selection mechanisms and unary variation operators. Our results follow from a detailed drift analysis of the population dynamics in these heuristics. This analysis shows that the optimisation time depends on the relationship between the strength of the selective pressure and the degree of variation introduced by the variation operator. Given limited variation, a surprisingly weak selective pressure suffices to optimise many functions in expected polynomial time. We derive upper bounds on the expected optimisation time of non-elitist evolutionary algorithms (EA) using various selection mechanisms, including fitness proportionate selection. We show that EAs using fitness proportionate selection can optimise standard benchmark functions in expected polynomial time given a sufficiently low mutation rate. As a second contribution, we consider an optimisation scenario with partial information, where fitness values of solutions are only partially available. We prove that non-elitist EAs under a set of specific conditions can optimise benchmark functions in expected polynomial time, even when vanishingly little information about the fitness values of individual solutions or populations is available. To our knowledge, this is the first runtime analysis of randomised search heuristics under partial information.
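
As a concrete illustration of the scheme analysed here (a non-elitist population, fitness-proportionate selection, and a unary mutation operator), the sketch below runs such an EA on OneMax. The population size lam, the mutation-rate parameter chi, and the budget are illustrative choices, not the constants from the paper's theorems.

```python
import random

def onemax(x):
    return sum(x)

def fitness_proportionate(pop, fits):
    """Select one parent with probability proportional to fitness."""
    total = sum(fits)
    if total == 0:
        return random.choice(pop)
    r = random.uniform(0, total)
    acc = 0.0
    for x, f in zip(pop, fits):
        acc += f
        if r <= acc:
            return x
    return pop[-1]

def non_elitist_ea(n=20, lam=50, chi=0.2, max_gens=5_000):
    """Non-elitist EA: every generation is replaced entirely by offspring
    created via selection + bitwise mutation with rate chi/n."""
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(lam)]
    for gen in range(max_gens):
        fits = [onemax(x) for x in pop]
        if max(fits) == n:
            return gen
        pop = [[b ^ (random.random() < chi / n)
                for b in fitness_proportionate(pop, fits)]
               for _ in range(lam)]     # no elitism: best may be lost
    return None                          # budget exhausted

print("generations to optimum:", non_elitist_ea())
```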

75 citations


Journal ArticleDOI
TL;DR: This paper considers an extension of the Hospitals/Residents problem in which each hospital specifies not only an upper bound but also a lower bound on its number of positions, and gives an exponential-time exact algorithm for this problem.
Abstract: The Hospitals/Residents problem is a many-to-one extension of the stable marriage problem. In an instance, each hospital specifies a quota, i.e., an upper bound on the number of positions it provides. It is well-known that in any instance, there exists at least one stable matching, and finding one can be done in polynomial time. In this paper, we consider an extension in which each hospital specifies not only an upper bound but also a lower bound on its number of positions. In this setting, there can be instances that admit no stable matching, but the problem of asking if there is a stable matching is solvable in polynomial time. In case there is no stable matching, we consider the problem of finding a matching that is "as stable as possible", namely, a matching with a minimum number of blocking pairs. We show that this problem is hard to approximate within the ratio of $(|H|+|R|)^{1-\epsilon}$ for any positive constant $\epsilon$, where $H$ and $R$ are the sets of hospitals and residents, respectively. We then tackle this hardness from two different angles. First, we give an exponential-time exact algorithm whose running time is $O((|H||R|)^{t+1})$, where $t$ is the number of blocking pairs in an optimal solution. Second, we consider another measure for optimization criteria, i.e., the number of residents who are involved in blocking pairs. We show that this problem is still NP-hard but has a polynomial-time $\sqrt{|R|}$-approximation algorithm.
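
The optimization objective is the number of blocking pairs: a resident-hospital pair (r, h) blocks a matching if r prefers h to their current assignment and h either has a free position or prefers r to one of its assigned residents (lower quotas restrict which matchings are feasible, not this definition). A small sketch of a blocking-pair counter under these standard definitions, assuming complete preference lists; the toy instance is hypothetical.

```python
def blocking_pairs(matching, r_pref, h_pref, upper):
    """Count blocking pairs of a many-to-one matching.

    matching: dict resident -> hospital (or None if unmatched)
    r_pref:   dict resident -> list of hospitals, most preferred first
    h_pref:   dict hospital -> list of residents, most preferred first
    upper:    dict hospital -> upper quota
    """
    assigned = {h: [r for r, m in matching.items() if m == h] for h in h_pref}
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in h_pref.items()}
    count = 0
    for r, prefs in r_pref.items():
        cur = matching.get(r)
        for h in prefs:
            if h == cur:
                break   # r does not prefer hospitals ranked below cur
            worst = max((rank[h][x] for x in assigned[h]), default=-1)
            if len(assigned[h]) < upper[h] or rank[h][r] < worst:
                count += 1   # (r, h) is a blocking pair
    return count

# Toy instance (illustrative, not from the paper):
r_pref = {"r1": ["h1", "h2"], "r2": ["h1", "h2"]}
h_pref = {"h1": ["r1", "r2"], "h2": ["r2", "r1"]}
upper = {"h1": 1, "h2": 1}
print(blocking_pairs({"r1": "h1", "r2": "h2"}, r_pref, h_pref, upper))  # 0: stable
```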

61 citations


Journal ArticleDOI
TL;DR: Stochastic versions of OneMax and LeadingOnes are considered and the performance of evolutionary algorithms with and without populations on these problems is analyzed, finding that even small populations can make evolutionary algorithms perform well for high noise levels, well outside the abilities of the (1+1) EA.
Abstract: We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend these results to LeadingOnes and to many different noise models, showing how the application of drift theory can significantly simplify and generalize previous analyses. Most surprisingly, even small populations (of size $\Theta(\log n)$) can make evolutionary algorithms perform well for high noise levels, well outside the abilities of the (1+1) EA. Larger population sizes are even more beneficial; we consider both parent and offspring populations. In this sense, populations are robust in these stochastic settings.

60 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm takes a graph as input and returns a positive or a negative witness for o1p; if the graph is o1p, the algorithm computes an embedding and can augment G to a maximal o1p graph.
Abstract: A graph is outer 1-planar (o1p) if it can be drawn in the plane such that all vertices are in the outer face and each edge is crossed at most once. o1p graphs generalize outerplanar graphs, which can be recognized in linear time, and specialize 1-planar graphs, whose recognition is NP-hard. We explore o1p graphs. Our first main result is a linear-time algorithm that takes a graph as input and returns a positive or a negative witness for o1p. If a graph $G$ is o1p, then the algorithm computes an embedding and can augment $G$ to a maximal o1p graph. Otherwise, $G$ includes one of six minors, which is detected by the recognition algorithm. Secondly, we establish structural properties of o1p graphs. o1p graphs are planar and are subgraphs of planar graphs with a Hamiltonian cycle. They are neither closed under edge contraction nor under subdivision. Several important graph parameters, such as treewidth, colorability, stack number, and queue number, increase by one from outerplanar to o1p graphs. Every o1p graph of size $n$ has at most $\frac{5}{2}n - 4$ edges, there are maximal o1p graphs with $\frac{11}{5}n - \frac{18}{5}$ edges, and these bounds are tight. Finally, every o1p graph has a straight-line grid drawing in $O(n^2)$ area with all vertices in the outer face, a planar visibility representation in $O(n \log n)$ area, and a 3D straight-line drawing in linear volume, and these drawings can be constructed in linear time.

56 citations


Journal ArticleDOI
TL;DR: In this paper, a new algorithm is presented for estimating the number of triangles in dynamic graph streams where edges can be both inserted and deleted; the paper also discusses lower bounds on the space complexity of triangle counting algorithms that make no assumptions on the structure of the graph.
Abstract: Estimating the number of triangles in graph streams using a limited amount of memory has become a popular topic in the last decade. Different variations of the problem have been studied, depending on whether the graph edges are provided in an arbitrary order or as incidence lists. However, with a few exceptions, the algorithms have considered insert-only streams. We present a new algorithm estimating the number of triangles in dynamic graph streams where edges can be both inserted and deleted. We show that our algorithm achieves better time and space complexity than previous solutions for various graph classes, for example sparse graphs with a relatively small number of triangles. Also, for graphs with constant transitivity coefficient, a common situation in real graphs, this is the first algorithm achieving constant processing time per edge. The result is achieved by a novel approach combining sampling of vertex triples and sparsification of the input graph. In the course of the analysis of the algorithm we present a lower bound on the number of pairwise independent 2-paths in general graphs which might be of independent interest. At the end of the paper we discuss lower bounds on the space complexity of triangle counting algorithms that make no assumptions on the structure of the graph.
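
The triple-sampling half of the approach can be illustrated in isolation: sample uniform vertex triples, test each for being a triangle, and rescale by the total number of triples. The sketch below is this naive Monte Carlo estimator only, without the paper's sparsification and streaming machinery; the graph size and sampling budget are illustrative.

```python
import random
from itertools import combinations

def estimate_triangles(adj, samples=100_000):
    """Estimate the triangle count by sampling uniform vertex triples.

    Each triple is a triangle with probability T / C(n, 3), so the
    unbiased estimate is hits / samples * C(n, 3).
    """
    nodes = list(adj)
    n = len(nodes)
    total_triples = n * (n - 1) * (n - 2) // 6
    hits = 0
    for _ in range(samples):
        u, v, w = random.sample(nodes, 3)
        if v in adj[u] and w in adj[u] and w in adj[v]:
            hits += 1
    return hits / samples * total_triples

# Illustrative dense-ish random graph, where triple sampling works well.
n, p = 60, 0.3
adj = {v: set() for v in range(n)}
for u, v in combinations(range(n), 2):
    if random.random() < p:
        adj[u].add(v); adj[v].add(u)

exact = sum(1 for a, b, c in combinations(range(n), 3)
            if b in adj[a] and c in adj[a] and c in adj[b])
print("exact:", exact, "estimate:", round(estimate_triangles(adj)))
```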

54 citations


Journal ArticleDOI
TL;DR: A new algebraic sieving technique to detect constrained multilinear monomials in multivariate polynomial generating functions given by an evaluation oracle is introduced and used to obtain an $O^*(2^k)$-time polynomial space algorithm for the k-sized Graph Motif problem.
Abstract: We introduce a new algebraic sieving technique to detect constrained multilinear monomials in multivariate polynomial generating functions given by an evaluation oracle. The polynomials are assumed to have coefficients from a field of characteristic two. As applications of the technique, we show an $O^*(2^k)$-time polynomial space algorithm for the $k$-sized Graph Motif problem. We also introduce a new optimization variant of the problem, called Closest Graph Motif, and solve it within the same time bound. The Closest Graph Motif problem encompasses several previously studied optimization variants, like Maximum Graph Motif, Min-Substitute Graph Motif, and Min-Add Graph Motif. Finally, we provide a piece of evidence that our result might be essentially tight: the existence of an $O^*((2-\epsilon)^k)$-time algorithm for the Graph Motif problem implies an $O((2-\epsilon')^n)$-time algorithm for Set Cover.

52 citations


Journal ArticleDOI
TL;DR: A polynomial time algorithm is presented that gives a 3-approximate solution to the (multi-)knapsack center problem such that one knapsack constraint is satisfied and the others may be violated by at most a factor of $1+\epsilon$.
Abstract: In the classic k-center problem, we are given a metric graph, and the objective is to select k nodes as centers such that the maximum distance from any vertex to its closest center is minimized. In this paper, we consider two important generalizations of k-center, the matroid center problem and the knapsack center problem. Both problems are motivated by recent content distribution network applications. Our contributions can be summarized as follows: (1) We consider the matroid center problem in which the centers are required to form an independent set of a given matroid. We show this problem is NP-hard even on a line. We present a 3-approximation algorithm for the problem on general metrics. We also consider the outlier version of the problem where a given number of vertices can be excluded as outliers from the solution. We present a 7-approximation for the outlier version. (2) We consider the (multi-)knapsack center problem in which the centers are required to satisfy one (or more) knapsack constraint(s). It is known that the knapsack center problem with a single knapsack constraint admits a 3-approximation. However, when there are at least two knapsack constraints, we show this problem is not approximable at all. To complement the hardness result, we present a polynomial time algorithm that gives a 3-approximate solution such that one knapsack constraint is satisfied and the others may be violated by at most a factor of $1+\epsilon$. We also obtain a 3-approximation for the outlier version that may violate the knapsack constraint by $1+\epsilon$.
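
For orientation on the base problem, the sketch below implements the classic greedy farthest-point 2-approximation for vanilla k-center (Gonzalez's algorithm), which the matroid and knapsack variants generalize. It is not the paper's 3-approximation, and the line-metric instance is illustrative.

```python
def greedy_k_center(dist, k):
    """Gonzalez's farthest-point greedy: a 2-approximation for vanilla k-center.

    dist: symmetric matrix (list of lists) of pairwise distances.
    Returns (centers, radius) where radius = max distance to nearest center.
    """
    n = len(dist)
    centers = [0]                       # arbitrary first center
    d = dist[0][:]                      # d[v] = distance to nearest chosen center
    while len(centers) < k:
        far = max(range(n), key=lambda v: d[v])  # farthest point becomes a center
        centers.append(far)
        d = [min(d[v], dist[far][v]) for v in range(n)]
    return centers, max(d)

# Points on a line (metric = absolute difference), illustrative only.
pts = [0, 1, 2, 10, 11, 12, 20, 21]
dist = [[abs(a - b) for b in pts] for a in pts]
print(greedy_k_center(dist, k=3))      # one center per cluster, radius 2
```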

52 citations


Journal ArticleDOI
TL;DR: This work studies a very general graph modification problem that allows all three types of operations and develops an algorithm for chordal editing in time $2^{O(k\log k)} \cdot (n+m)$, where $k$ is the total number of allowed operations and $n$ is the number of vertices of the graph.
Abstract: Graph modification problems typically ask for a small set of operations that transforms a given graph to have a certain property. The most commonly considered operations include vertex deletion, edge deletion, and edge addition; for the same property, one can define significantly different versions by allowing different operations. We study a very general graph modification problem that allows all three types of operations: given a graph $G$ and integers $k_1$, $k_2$, and $k_3$, the chordal editing problem asks whether $G$ can be transformed into a chordal graph by at most $k_1$ vertex deletions, $k_2$ edge deletions, and $k_3$ edge additions. Clearly, this problem generalizes both chordal deletion and chordal completion (also known as minimum fill-in). Our main result is an algorithm for chordal editing in time $2^{O(k\log k)} \cdot (n+m)$, where $k := k_1 + k_2 + k_3$ and $n$ and $m$ are the numbers of vertices and edges of $G$. Therefore, the problem is fixed-parameter tractable parameterized by the total number of allowed operations. Our algorithm is both more efficient and conceptually simpler than the previously known algorithm for the special case chordal deletion.
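
Any search for a small editing set needs a chordality test as a subroutine; chordality can be checked by computing a maximum cardinality search order and verifying that its reversal is a perfect elimination ordering. A simple (not linear-time-optimized) sketch of that standard test, independent of the paper's algorithm:

```python
def is_chordal(adj):
    """Chordality test: maximum cardinality search (MCS), then verify the
    reversed visit order is a perfect elimination ordering."""
    n = len(adj)
    weight = {v: 0 for v in adj}
    order, numbered = [], set()
    for _ in range(n):
        v = max((u for u in adj if u not in numbered), key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1
    order.reverse()                     # candidate perfect elimination order
    pos = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):
        later = [w for w in adj[v] if pos[w] > i]
        if later:
            u = min(later, key=lambda w: pos[w])   # earliest later neighbor
            if any(w != u and u not in adj[w] for w in later):
                return False   # later neighbors must all be adjacent to u
    return True

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # C4: not chordal
print(is_chordal(cycle4))                                # False
cycle4[0].add(2); cycle4[2].add(0)                       # one edge addition
print(is_chordal(cycle4))                                # True
```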

50 citations


Journal ArticleDOI
TL;DR: This paper investigates the setting of additive drift under the assumption of strong concentration of the “step size” of the process, and shows that under sufficiently strong drift towards the goal the hitting time is strongly concentrated, while under sufficiently strong negative drift it is superpolynomial with high probability; the latter corresponds to the well-known Negative Drift Theorem.
Abstract: Recent advances in drift analysis have given us better and better tools for understanding random processes, including the run time of randomized search heuristics. In the setting of multiplicative drift we do not only have excellent bounds on the expected run time, but also more general results showing the strong concentration of the run time. In this paper we investigate the setting of additive drift under the assumption of strong concentration of the "step size" of the process. Under sufficiently strong drift towards the goal we show a strong concentration of the hitting time. In contrast to this, we show that in the presence of small drift a Gambler's-Ruin-like behavior of the process overrides the influence of the drift, leading to a maximal movement of about $\sqrt{t}$ steps within t iterations. Finally, in the presence of sufficiently strong negative drift the hitting time is superpolynomial with high probability; this corresponds to the well-known Negative Drift Theorem.
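
The two regimes are easy to observe in simulation: with constant additive drift $\delta$ toward the target, the hitting time from $X_0$ concentrates around $X_0/\delta$, while with zero drift the process typically moves only on the order of $\sqrt{t}$ in $t$ steps. A small illustrative experiment; the parameters are arbitrary choices, not from the paper.

```python
import random
import statistics

def walk(x0, drift, t_max):
    """+-1 random walk with additive drift toward 0.
    Returns (hitting time of 0 or None, final position)."""
    x = x0
    for t in range(1, t_max + 1):
        x += random.choices((-1, 1), weights=(0.5 + drift / 2, 0.5 - drift / 2))[0]
        if x <= 0:
            return t, 0
    return None, x

x0, runs = 100, 100

# Strong drift (delta = 0.2): hitting times concentrate near x0 / delta = 500.
times = [walk(x0, 0.2, 10_000)[0] for _ in range(runs)]
print("drift 0.2: mean hitting time", statistics.mean(times))

# No drift: after t steps the walk has typically moved only ~ sqrt(t).
t = 5_000
start = 10**9                            # far from 0, so the walk never hits it
finals = [walk(start, 0.0, t)[1] - start for _ in range(runs)]
print("no drift: mean |displacement| after", t, "steps:",
      statistics.mean(abs(d) for d in finals), "vs sqrt(t) =", round(t ** 0.5, 1))
```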

50 citations


Journal ArticleDOI
TL;DR: This work shows, for the two search heuristics Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm on the two well-studied problems OneMax and LeadingOnes, that their expected runtimes indeed deviate by at most a small additive constant from the expected runtime when started in a solution of average fitness.
Abstract: Analyzing the runtime of a Randomized Search Heuristic (RSH) by theoretical means often turns out to be rather tricky even for simple optimization problems. The main reason lies in the randomized nature of these algorithms. Both the optimization routines and the initialization of RSHs are typically based on random samples. It has often been observed, though, that the expected runtime of RSHs does not deviate much from the expected runtime when starting in an initial solution of average fitness. Having this information a priori could greatly simplify runtime analyses, as it reduces the necessity of analyzing the influence of the random initialization. Most runtime bounds in the literature, however, do not profit from this observation and are either too pessimistic or require a rather complex proof. In this work we show for the two search heuristics Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm on the two well-studied problems OneMax and LeadingOnes that their expected runtimes indeed deviate by at most a small additive constant from the expected runtime when started in a solution of average fitness. For RLS on OneMax, this additive discrepancy is $-1/2 \pm o(1)$, leading to the first runtime statement for this problem that is precise apart from additive o(1) terms: The expected number of iterations until an optimal search point is found is $n H_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the (n / 2)th harmonic number when n is even, and $H_{n/2} := (H_{\lfloor n/2 \rfloor} + H_{\lceil n/2 \rceil})/2$ when n is odd. For this analysis and the one of the (1+1) EA optimizing OneMax we use a coupling technique, which we believe to be interesting also for the study of more complex optimization problems. For the analyses of the LeadingOnes test function, we show that one can express the discrepancy solely via the waiting times for leaving fitness levels 0 and 1, which then easily yields the discrepancy, and in particular, without having to deal with so-called free-riders.
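
The RLS result is easy to check empirically: RLS flips one uniformly chosen bit per iteration and accepts the offspring if its fitness is not worse, so on OneMax only 0-to-1 flips are ever accepted. The sketch below compares the empirical average runtime with the formula $n H_{n/2} - 1/2$ for even $n$; the instance size and number of runs are illustrative.

```python
import random

def rls_onemax_runtime(n):
    """Randomized Local Search on OneMax: flip one uniform bit, keep if no worse."""
    x = [random.randint(0, 1) for _ in range(n)]
    fit, steps = sum(x), 0
    while fit < n:
        i = random.randrange(n)
        if x[i] == 0:        # flipping a 0 improves fitness: accepted
            x[i], fit = 1, fit + 1
        steps += 1           # flipping a 1 would be rejected (fitness drops)
    return steps

n, runs = 100, 2000
avg = sum(rls_onemax_runtime(n) for _ in range(runs)) / runs
H = sum(1 / i for i in range(1, n // 2 + 1))   # harmonic number H_{n/2}
print(f"empirical mean: {avg:.1f}   formula n*H(n/2) - 1/2: {n * H - 0.5:.1f}")
```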

Journal ArticleDOI
TL;DR: In this paper, it is shown that the Weighted Vertex Integrity problem is NP-complete on co-bipartite graphs, even if each vertex has weight 1; however, the unweighted version can be solved in polynomial time on co-comparability graphs of bounded dimension, provided that an intersection model is given as part of the input.
Abstract: The Weighted Vertex Integrity (wVI) problem takes as input an n-vertex graph G, a weight function $w:V(G)\rightarrow \mathbb{N}$, and an integer p. The task is to decide if there exists a set $X\subseteq V(G)$ such that the weight of X plus the weight of a heaviest component of $G-X$ is at most p. Among other results, we prove that: (1) wVI is NP-complete on co-bipartite graphs, even if each vertex has weight 1; (2) wVI can be solved in $O(p^{p+1}n)$ time; (3) wVI admits a kernel with at most $p^3$ vertices. Result (1) refutes a conjecture by Ray and Deogun (J Comb Math Comb Comput 16:65-73, 1994) and answers an open question by Ray et al. (Ars Comb 79:77-95, 2006). It also complements a result by Kratsch et al. (Discret Appl Math 77(3):259-270, 1997), stating that the unweighted version of the problem can be solved in polynomial time on co-comparability graphs of bounded dimension, provided that an intersection model of the input graph is given as part of the input. An instance of the Weighted Component Order Connectivity (wCOC) problem consists of an n-vertex graph G, a weight function $w:V(G)\rightarrow \mathbb{N}$, and two integers k and $\ell$, and the task is to decide if there exists a set $X\subseteq V(G)$ such that the weight of X is at most k and the weight of a heaviest component of $G-X$ is at most $\ell$. In some sense, the wCOC problem can be seen as a refined version of the wVI problem. We obtain several classical and parameterized complexity results on the wCOC problem, uncovering interesting similarities and differences between wCOC and wVI. We prove, among other results, that: (4) wCOC can be solved in $O(\min\{k,\ell\}\cdot n^3)$ time on interval graphs, while the unweighted version can be solved in $O(n^2)$ time on this graph class; (5) wCOC is W[1]-hard on split graphs when parameterized by k or by $\ell$; (6) wCOC can be solved in $2^{O(k\log \ell)} n$ time; (7) wCOC admits a kernel with at most $k\ell(k+\ell)+k$ vertices. We also show that result (6) is essentially tight by proving that wCOC cannot be solved in $2^{o(k \log \ell)}n^{O(1)}$ time, even when restricted to split graphs, unless the Exponential Time Hypothesis fails.

Journal ArticleDOI
TL;DR: A recent technique of recompression, which is applicable to general word equations, is shown to be suitable also for word equations with one variable, and the running time is lowered to $\mathcal{O}(n)$ in the RAM model.
Abstract: In this paper we consider word equations with one variable (and arbitrarily many occurrences of it). A recent technique of recompression, which is applicable to general word equations, is shown to be suitable also in this case. While in the general case recompression is nondeterministic, in the case of one variable it becomes deterministic and its running time is $\mathcal{O}(n + \#_X \log n)$, where $\#_X$ is the number of occurrences of the variable in the equation. This matches the previously best algorithm, due to Dąbrowski and Plandowski. Then, using a couple of heuristics as well as a more detailed time analysis, the running time is lowered to $\mathcal{O}(n)$ in the RAM model. Unfortunately, no new properties of solutions are shown.

Journal ArticleDOI
TL;DR: A randomized mechanism, called Equal Cost, is presented, which is group strategyproof and achieves a bounded approximation ratio for all k and n, for any given concave cost function, and is the first mechanism with a bounded approximation ratio for instances with k facilities and any number of agents.
Abstract: We consider k-Facility Location games, where n strategic agents report their locations on the real line and a mechanism maps them to k facilities. Each agent seeks to minimize his connection cost, given by a nonnegative increasing function of his distance to the nearest facility. Departing from previous work, that mostly considers the identity cost function, we are interested in mechanisms without payments that are (group) strategyproof for any given cost function, and achieve a good approximation ratio for the social cost and/or the maximum cost of the agents. We present a randomized mechanism, called Equal Cost, which is group strategyproof and achieves a bounded approximation ratio for all k and n, for any given concave cost function. The approximation ratio is at most 2 for Max Cost and at most n for Social Cost. To the best of our knowledge, this is the first mechanism with a bounded approximation ratio for instances with $k \ge 3$ facilities and any number of agents. Our result implies an interesting separation between deterministic mechanisms, whose approximation ratio for Max Cost jumps from 2 to unbounded when k increases from 2 to 3, and randomized mechanisms, whose approximation ratio remains at most 2 for all k. On the negative side, we exclude the possibility of a mechanism with the properties of Equal Cost for strictly convex cost functions. We also present a randomized mechanism, called Pick the Loser, which applies to instances with k facilities and only $n = k+1$ agents. For any given concave cost function, Pick the Loser is strongly group strategyproof and achieves an approximation ratio of 2 for Social Cost.

Journal ArticleDOI
TL;DR: This work addresses the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring the integrity of the remotely stored data across its entire update history.
Abstract: Suppose a client stores $n$ elements in a hash table that is outsourced to an untrusted server. We address the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring the integrity of the remotely stored data across its entire update history. Solutions to this authentication problem allow the client to gain trust in the operations performed by a faulty or even malicious server that lies outside the administrative control of the client. We present two novel schemes that implement an authenticated hash table. An authenticated hash table exports the basic hash-table functionality for maintaining a dynamic set of elements, coupled with the ability to provide short cryptographic proofs that a given element is a member or not of the current set. By employing efficient algorithmic constructs and cryptographic accumulators as the core security primitive, our schemes provide constant proof size, constant verification time and sublinear query or update time, strictly improving upon previous approaches. Specifically, in our first scheme, which is based on the RSA accumulator, the server is able to construct a (non-)membership proof in constant time and perform updates in $O(n^{\epsilon}\log n)$ time for any fixed constant $0<\epsilon<1$. A variation of this scheme achieves a different trade-off, offering constant update time and $O(n^{\epsilon})$ query time. Our second scheme uses an accumulator based on bilinear pairings to achieve $O(n^{\epsilon})$ update time at the server while keeping all other complexities constant. A variation of this scheme achieves $O(n^{\epsilon}\log n)$ time for queries and constant update time. An experimental evaluation of both solutions shows their practicality.
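
The core primitive can be sketched in a few lines: in an RSA accumulator, the set is digested as $g$ raised to the product of (prime) elements, and the membership witness for $x$ is the accumulation of all other elements. The toy below uses an insecurely small modulus purely to show the shape of proof and verification; a real scheme needs a proper RSA modulus, hashing to prime representatives, and the paper's precomputation machinery.

```python
# Toy RSA accumulator: acc = g^(product of elements) mod N.
# The membership witness for x is the accumulation of all *other* elements;
# verification checks witness^x == acc (mod N).
N = 3233            # 61 * 53: insecurely small, for illustration only
g = 2
elements = [3, 5, 7, 11]   # elements must be (mapped to) primes

def accumulate(vals):
    acc = g
    for v in vals:
        acc = pow(acc, v, N)
    return acc

acc = accumulate(elements)

def witness(x):
    return accumulate([v for v in elements if v != x])

def verify(x, wit):
    return pow(wit, x, N) == acc

print(verify(7, witness(7)))    # True: 7 is in the accumulated set
print(verify(13, witness(13)))  # False: 13 was never accumulated
```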

Journal ArticleDOI
TL;DR: It is shown that the number of satisfying assignments can be computed in polynomial time for CNF formulas whose incidence graphs have bounded modular treewidth; the algorithm is the first one to harness module contraction for #SAT.
Abstract: We define the modular treewidth of a graph as its treewidth after contraction of modules. This parameter properly generalizes treewidth and is itself properly generalized by clique-width. We show that the number of satisfying assignments can be computed in polynomial time for CNF formulas whose incidence graphs have bounded modular treewidth. Our result generalizes known results for the treewidth of incidence graphs and is incomparable with known results for clique-width (or rank-width) of signed incidence graphs. The contraction of modules is an effective data reduction procedure. Our algorithm is the first one to harness this technique for #SAT. The order of the polynomial bounding the runtime of our algorithm depends on the modular treewidth of the input formula. We show that it is unlikely that this dependency can be avoided by proving that SAT is W[1]-hard when parameterized by the modular incidence treewidth of the given CNF formula.

Journal ArticleDOI
TL;DR: In this article, the authors give a complete solution of the gathering problem, and hence of the leader election problem, for anonymous agents in arbitrary networks, where agents are identical, execute the same deterministic algorithm and move in synchronous rounds along links of the network.
Abstract: A team consisting of an unknown number of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node. Agents are anonymous (identical), execute the same deterministic algorithm and move in synchronous rounds along links of the network. An initial configuration of agents is called gatherable if there exists a deterministic algorithm (even dedicated to this particular configuration) that achieves meeting of all agents in one node. Which configurations are gatherable and how to gather all of them deterministically by the same algorithm? We give a complete solution of this gathering problem in arbitrary networks. We characterize all gatherable configurations and give two universal deterministic gathering algorithms, i.e., algorithms that gather all gatherable configurations. The first algorithm works under the assumption that a common upper bound $N$ on the size of the network is known to all agents. In this case our algorithm guarantees gathering with detection, i.e., the existence of a round for any gatherable configuration, such that all agents are at the same node and all declare that gathering is accomplished. If no upper bound on the size of the network is known, we show that a universal algorithm for gathering with detection does not exist. Hence, for this harder scenario, we construct a second universal gathering algorithm, which guarantees that, for any gatherable configuration, all agents eventually get to one node and stop, although they cannot tell if gathering is over. The time of the first algorithm is polynomial in the upper bound $N$ on the size of the network, and the time of the second algorithm is polynomial in the (unknown) size itself. Our results have an important consequence for the leader election problem for anonymous agents in arbitrary graphs. Leader election is a fundamental symmetry breaking problem in distributed computing. Its goal is to assign, in some common round, value 1 (leader) to one of the entities and value 0 (non-leader) to all others. For anonymous agents in graphs, leader election turns out to be equivalent to gathering with detection. Hence, as a by-product, we obtain a complete solution of the leader election problem for anonymous agents in arbitrary graphs.

Journal ArticleDOI
TL;DR: This paper affirmatively answers the question of whether Heath's result that every 3-planar graph is subhamiltonian can be extended to a class of graphs of degree greater than three, and proposes a quadratic-time algorithm based on the book embedding viewpoint of the problem.
Abstract: Back in the eighties, Heath [Algorithms for embedding graphs in books. PhD thesis, University of North Carolina, Chapel Hill, 1985] showed that every 3-planar graph is subhamiltonian and asked whether this result can be extended to a class of graphs of degree greater than three. In this paper we affirmatively answer this question for the class of 4-planar graphs. Our contribution consists of two algorithms: The first one is limited to triconnected graphs, but runs in linear time and uses existing methods for computing hamiltonian cycles in planar graphs. The second one, which solves the general case of the problem, is a quadratic-time algorithm based on the book embedding viewpoint of the problem.

Journal ArticleDOI
TL;DR: It is proved that a (2+1) EA with genotype diversity is able to find the global optimum of the Maze function, previously considered by Kötzing and Molter [9], in polynomial time.
Abstract: We study the behavior of a population-based EA and the Max-Min Ant System (MMAS) on a family of deterministically-changing fitness functions, where, in order to find the global optimum, the algorithms have to find specific local optima within each of a series of phases. In particular, we prove that a (2+1) EA with genotype diversity is able to find the global optimum of the Maze function, previously considered by Kötzing and Molter [9], in polynomial time. This is then generalized to a hierarchy result stating that for every $\mu$, a ($\mu$+1) EA with genotype diversity is able to track a Maze function extended over a finite alphabet of $\mu$ symbols, whereas population size $\mu-1$ is not sufficient. Furthermore, we show that MMAS does not require additional modifications to track the optimum of the finite-alphabet Maze functions, and, using a novel drift statement to simplify the analysis, reduce the required phase length of the Maze function.

Journal ArticleDOI
TL;DR: It is proved that the answer is yes, at least for protocols that use a bounded number of rounds, and if a Reverse Newman’s Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.
Abstract: Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a small increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that only uses public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct-sum theorems through the compression of interactive communication in the bounded-round setting. To obtain this application, we prove a new one-shot variant of the Slepian-Wolf coding theorem, interesting in its own right. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.

Journal ArticleDOI
TL;DR: In this paper, the authors revisit the matrix problems sparse null space and matrix sparsification and show that they are equivalent; they prove hardness of approximation for both and give a powerful tool to extend algorithms and heuristics from sparse approximation theory to these problems.
Abstract: We revisit the matrix problems sparse null space and matrix sparsification, and show that they are equivalent. We then proceed to seek algorithms for these problems: we prove the hardness of their approximation, and also give a powerful tool to extend algorithms and heuristics from sparse approximation theory to these problems.
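
Concretely, sparse null space asks for a sparsest basis of $\{x : Ax = 0\}$, and matrix sparsification asks for a sparsest matrix with the same row space. The numpy sketch below only exhibits the object in question, computing some (generally dense) null space basis via SVD; finding the sparsest such basis is exactly the problem shown hard here. The example matrix is illustrative.

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Orthonormal basis of the null space of A via SVD (dense in general)."""
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:].T        # columns span {x : A @ x = 0}

A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])
B = null_space_basis(A)
print(np.allclose(A @ B, 0))          # True: columns are null vectors
print(B.round(3))
# SVD returns *some* basis; the sparse null space problem asks for the basis
# with the fewest nonzeros overall, here e.g. (1,-1,0,0) and (0,0,1,-1),
# which the paper proves is hard to approximate.
```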

Journal ArticleDOI
TL;DR: It is shown that the k-colouring reconfiguration problem is polynomial-time solvable for k=3, settling an open problem of Cereceda, van den Heuvel and Johnson.
Abstract: The $k$-colouring reconfiguration problem asks whether, for a given graph $G$, two proper $k$-colourings $\alpha$ and $\beta$ of $G$, and a positive integer $\ell$, there exists a sequence of at most $\ell+1$ proper $k$-colourings of $G$ which starts with $\alpha$ and ends with $\beta$ and where successive colourings in the sequence differ on exactly one vertex of $G$. We give a complete picture of the parameterized complexity of the $k$-colouring reconfiguration problem for each fixed $k$ when parameterized by $\ell$. First we show that the $k$-colouring reconfiguration problem is polynomial-time solvable for $k=3$, settling an open problem of Cereceda, van den Heuvel and Johnson. Then, for all $k \ge 4$, we show that the $k$-colouring reconfiguration problem, when parameterized by $\ell$, is fixed-parameter tractable (addressing a question of Mouawad, Nishimura, Raman, Simjour and Suzuki) but that it has no polynomial kernel unless the polynomial hierarchy collapses.
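
For small instances the reconfiguration question can be settled by brute force: run BFS over the reconfiguration graph whose nodes are proper k-colourings and whose edges connect colourings differing on one vertex. The sketch below does exactly that (exponential in general, unlike the algorithms in the paper); the path instance is illustrative.

```python
from itertools import product
from collections import deque

def recolouring_distance(edges, n, k, alpha, beta):
    """BFS over proper k-colourings; one step recolours a single vertex.
    Returns the minimum number of recolouring steps, or None."""
    def proper(c):
        return all(c[u] != c[v] for u, v in edges)

    assert proper(alpha) and proper(beta)
    dist = {alpha: 0}
    queue = deque([alpha])
    while queue:
        c = queue.popleft()
        if c == beta:
            return dist[c]
        for v, col in product(range(n), range(k)):
            nxt = c[:v] + (col,) + c[v + 1:]
            if nxt != c and nxt not in dist and proper(nxt):
                dist[nxt] = dist[c] + 1
                queue.append(nxt)
    return None

# Path on 3 vertices, k = 3: all three vertices must change, so 3 steps.
edges = [(0, 1), (1, 2)]
print(recolouring_distance(edges, 3, 3, (0, 1, 0), (1, 2, 1)))  # 3
```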

Journal ArticleDOI
TL;DR: The results prove that Hypergraph 2-Colorability can be solved in polynomial time for hypergraphs whose vertex-hyperedge incidence graph is $P_7$-free.
Abstract: Let $G$ be a connected $P_k$-free graph, $k \ge 4$. We show that $G$ admits a connected dominating set that induces either a $P_{k-2}$-free graph or a graph isomorphic to $P_{k-2}$. In fact, every minimum connected dominating set of $G$ has this property. This yields a new characterization for $P_k$-free graphs: a graph $G$ is $P_k$-free if and only if each connected induced subgraph of $G$ has a connected dominating set that induces either a $P_{k-2}$-free graph or a graph isomorphic to $C_k$. We present an efficient algorithm that, given a connected graph $G$, computes a connected dominating set $X$ of $G$ with the following property: for the minimum $k$ such that $G$ is $P_k$-free, the subgraph induced by $X$ is $P_{k-2}$-free or isomorphic to $P_{k-2}$. As an application of our results, we prove that Hypergraph 2-Colorability can be solved in polynomial time for hypergraphs whose vertex-hyperedge incidence graph is $P_7$-free.

Journal ArticleDOI
TL;DR: It is shown that—in contrast to plain randomized communication complexity—every boolean function admits an AM communication protocol where on each yes-input, the distribution of Merlin’s proof leaks no information about the input and moreover, this proof is unique for each outcome of Arthur's randomness.
Abstract: We study whether information complexity can be used to attack the long-standing open problem of proving lower bounds against Arthur-Merlin ($\textsf{AM}$) communication protocols. Our starting point is to show that, in contrast to plain randomized communication complexity, every boolean function admits an $\textsf{AM}$ communication protocol where on each yes-input, the distribution of Merlin's proof leaks no information about the input and moreover, this proof is unique for each outcome of Arthur's randomness. We posit that these two properties of zero information leakage and unambiguity on yes-inputs are interesting in their own right and worthy of investigation as new avenues toward $\textsf{AM}$. Zero-information protocols ($\textsf{ZAM}$): Our basic $\textsf{ZAM}$ protocol uses exponential communication for some functions, and this raises the question of whether more efficient protocols exist. We prove that all functions in the classical space-bounded complexity classes $\textsf{NL}$ and $\oplus\textsf{L}$ have polynomial-communication $\textsf{ZAM}$ protocols. We also prove that $\textsf{ZAM}$ complexity is lower bounded by co-nondeterministic communication complexity. Unambiguous protocols ($\textsf{UAM}$): Our most technically substantial result is an $\Omega(n)$ lower bound on the $\textsf{UAM}$ complexity of the $\textsf{NP}$-complete set-intersection function; the proof uses information complexity arguments in a new, indirect way and overcomes the "zero-information barrier" described above. We also prove that in general, $\textsf{UAM}$ complexity is lower bounded by the classic discrepancy bound, and we give evidence that it is not generally lower bounded by the classic corruption bound.

Journal ArticleDOI
TL;DR: It is established that graph canonisation, and thus graph isomorphism, is $\mathsf{FPT}$ when parameterized by elimination distance to bounded degree, extending results of Bouland et al.
Abstract: A commonly studied means of parameterizing graph problems is the deletion distance from triviality (Guo et al., Parameterized and exact computation, Springer, Berlin, pp. 162-173, 2004), which counts vertices that need to be deleted from a graph to place it in some class for which efficient algorithms are known. In the context of graph isomorphism, we define triviality to mean a graph with maximum degree bounded by a constant, as such graph classes admit polynomial-time isomorphism tests. We generalise deletion distance to a measure we call elimination distance to triviality, based on elimination trees or tree-depth decompositions. We establish that graph canonisation, and thus graph isomorphism, is $\mathsf{FPT}$ when parameterized by elimination distance to bounded degree, extending results of Bouland et al. (Parameterized and exact computation, Springer, Berlin, pp. 218-230, 2012).

Journal ArticleDOI
TL;DR: This work considers d-dimensional lattice path models restricted to the first orthant whose defining step sets exhibit reflective symmetry across every axis and provides explicit asymptotic enumerative formulas for the number of walks of a fixed length.
Abstract: We consider d-dimensional lattice path models restricted to the first orthant whose defining step sets exhibit reflective symmetry across every axis. Given such a model, we provide explicit asymptotic enumerative formulas for the number of walks of a fixed length: the exponential growth is given by the number of distinct steps a model can take, while the sub-exponential growth depends only on the dimension of the underlying lattice and the number of steps moving forward in each coordinate. The generating function of each model is first expressed as the diagonal of a multivariate rational function, then asymptotic expressions are derived by analyzing the singular variety of this rational function. Additionally, we show how to compute subdominant growth, reflect on the difference between rational diagonals and differential equations as data structures for D-finite functions, and show how to determine first order asymptotics for the subset of walks that start and end at the origin.
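
The counting problem itself is straightforward to explore by dynamic programming for small lengths. The sketch below enumerates walks in the first orthant for d = 2 with the four diagonal steps, a step set symmetric across both axes as the paper requires, and checks that the exponential growth approaches the number of distinct steps; the particular model is an illustrative choice.

```python
from collections import defaultdict

# Step set symmetric across both axes (here: the four diagonal steps).
STEPS = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def count_walks(t_max):
    """Count first-orthant walks from the origin, for each length up to t_max."""
    state = defaultdict(int)
    state[(0, 0)] = 1
    totals = []
    for _ in range(t_max):
        nxt = defaultdict(int)
        for (x, y), ways in state.items():
            for dx, dy in STEPS:
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:      # stay in the first orthant
                    nxt[(nx, ny)] += ways
        state = nxt
        totals.append(sum(state.values()))
    return totals

walks = count_walks(12)
print(walks)
# The exponential growth rate tends to |STEPS| = 4, as the paper predicts for
# symmetric models, with polynomial (sub-exponential) corrections.
print([round(b / a, 3) for a, b in zip(walks, walks[1:])])
```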

Journal ArticleDOI
TL;DR: The first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property was presented in this paper.
Abstract: We present the first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property. Our approach precomputes $(1+\varepsilon)$-approximate distance summaries from selected landmark vertices to all other vertices in the network. Our oracle uses subquadratic space and time preprocessing, and provides two sublinear-time query algorithms that deliver constant and $(1+\sigma)$-approximate shortest-travel-times, respectively, for arbitrary origin-destination pairs in the network, for any constant $\sigma > \varepsilon$. Our oracle is based only on the sparsity of the network, along with two quite natural assumptions about travel-time functions which allow the smooth transition towards asymmetric and time-dependent distance metrics.

Journal ArticleDOI
TL;DR: Algorithms for estimating frequency moments, support size, entropy, and heavy hitters of the original stream, through a single pass over the sampled stream are presented.
Abstract: In many stream monitoring situations, the data arrival rate is so high that it is not even possible to observe each element of the stream. The most common solution is to sub-sample the data stream and use the sample to infer properties and estimate aggregates of the original stream. However, in many cases, the estimation of aggregates on the original stream cannot be accomplished through simply estimating them on the sampled stream, followed by a normalization. We present algorithms for estimating frequency moments, support size, entropy, and heavy hitters of the original stream, through a single pass over the sampled stream.
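
The caveat about normalization is easy to demonstrate for support size: if every item appears about ten times, a p-sample misses each item with probability $(1-p)^{10}$, so neither the raw distinct count of the sample nor a simple 1/p rescaling recovers the true support. An illustrative simulation of this failure (not the paper's estimator):

```python
import random

def sample_stream(stream, p):
    """Keep each stream element independently with probability p."""
    return [x for x in stream if random.random() < p]

# 1000 distinct items, each appearing 10 times: support size = 1000.
stream = [i for i in range(1000) for _ in range(10)]
random.shuffle(stream)

p = 0.1
sampled = sample_stream(stream, p)
seen = len(set(sampled))
print("true support:        1000")
print("distinct in sample: ", seen)        # ~ (1 - 0.9**10) * 1000, i.e. ~651
print("scaled by 1/p:      ", seen / p)    # ~ 6510: wildly off
# Neither the raw sample statistic nor a simple 1/p normalization recovers the
# true support size, which is why dedicated single-pass estimators are needed.
```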

Journal ArticleDOI
TL;DR: This work reduces Boolean matrix multiplication to several instances of graph collision, and provides an algorithm that takes advantage of the fact that the underlying graph in all of the authors' instances is very dense to find all graph collisions efficiently.
Abstract: The quantum query complexity of Boolean matrix multiplication is typically studied as a function of the matrix dimension, $n$, as well as the number of $1$s in the output, $\ell$. We prove an upper bound of $\widetilde{O}(n\sqrt{\ell+1})$ for all values of $\ell$. This is an improvement over previous algorithms for all values of $\ell$. On the other hand, we show that for any $\varepsilon < 1$ and any $\ell \le \varepsilon n^2$, there is an $\Omega(n\sqrt{\ell})$ lower bound for this problem, showing that our algorithm is essentially tight. We first reduce Boolean matrix multiplication to several instances of graph collision. We then provide an algorithm that takes advantage of the fact that the underlying graph in all of our instances is very dense to find all graph collisions efficiently. Using similar ideas, we also show that the time complexity of Boolean matrix multiplication is $\tilde{O}(n\sqrt{\ell+1}+\ell\sqrt{n})$.