
Showing papers in "SIAM Journal on Computing in 2014"


Journal ArticleDOI
TL;DR: It is shown that “somewhat homomorphic” encryption can be based on $\mathsf{LWE}$, using a new relinearization technique, and the security of the scheme is based on the worst-case hardness of “short vector problems” on arbitrary lattices.
Abstract: A fully homomorphic encryption (FHE) scheme allows anyone to transform an encryption of a message, $m$, into an encryption of any (efficient) function of that message, $f(m)$, without knowing the secret key. We present a leveled FHE scheme that is based solely on the (standard) learning with errors ($\mathsf{LWE}$) assumption. (Leveled FHE schemes are initialized with a bound on the maximal evaluation depth. However, this restriction can be removed by assuming “weak circular security.'') Applying known results on $\mathsf{LWE}$, the security of our scheme is based on the worst-case hardness of “short vector problems” on arbitrary lattices. Our construction improves on previous works in two aspects: 1. We show that “somewhat homomorphic” encryption can be based on $\mathsf{LWE}$, using a new relinearization technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. 2. We deviate from the “squashing paradigm” used in all previous works. We introduce a n...

298 citations
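As background for the LWE assumption above, here is a toy sketch of basic symmetric-key LWE encryption of a single bit: a ciphertext is $(\mathbf{a}, \langle\mathbf{a},\mathbf{s}\rangle + e + m\lfloor q/2\rfloor)$. This is not the paper's leveled FHE scheme, and the parameters below are illustrative and far too small to be secure.

```python
import random

# Toy symmetric-key LWE encryption of one bit (illustrative parameters only).
q = 257          # modulus (assumed small prime for the demo)
n = 16           # secret dimension
ERR = 2          # noise bound; must stay well below q/4 for correct decryption

def keygen():
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-ERR, ERR)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return (a, b)

def decrypt(s, ct):
    a, b = ct
    # Remove <a, s>; what remains is e + bit*(q//2) (mod q).
    v = (b - sum(ai * si for ai, si in zip(a, s))) % q
    # Decide whether v is closer to 0 or to q//2 (mod q).
    return 1 if q // 4 < v < 3 * q // 4 else 0

s = keygen()
assert all(decrypt(s, encrypt(s, bit)) == bit for bit in (0, 1) for _ in range(50))
```

Homomorphic evaluation (and the relinearization step the TL;DR mentions) needs much more machinery; this only shows why small noise keeps decryption correct.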


Journal ArticleDOI
TL;DR: In this paper, the problem of maximizing a nonnegative submodular set function over a ground set subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections was studied.
Abstract: We consider the problem of maximizing a nonnegative submodular set function $f:2^N \rightarrow {\mathbb R}_+$ over a ground set $N$ subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular, when $f$ may be a nonmonotone function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over a downward-closed polytope $P$ described by an efficient separation oracle. Previously this was known only for monotone functions. For nonmonotone functions, a constant factor was ...

237 citations
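For readers new to submodular maximization, a minimal baseline sketch: the classic greedy algorithm for the special case of a monotone coverage function under a cardinality constraint (a $(1-1/e)$-approximation). This is not the paper's multilinear-extension rounding framework; the instance and helper names are illustrative.

```python
# f(S) = size of the union of the chosen sets: a canonical monotone
# submodular function. Greedy repeatedly adds the set with the largest
# marginal coverage gain.
def coverage(sets, chosen):
    return len(set().union(*(sets[i] for i in chosen))) if chosen else 0

def greedy_max_coverage(sets, k):
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(sets, chosen + [i]))
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
picked = greedy_max_coverage(sets, 2)
assert coverage(sets, picked) == 7   # {4,5,6,7} then {1,2,3} covers everything
```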


Book ChapterDOI
TL;DR: This chapter shows that randomized encoding preserves the security of many cryptographic primitives and constructs an (information-theoretic) encoding in \(\mathbf{NC}^{0}_{4}\) for any function in \(\mathbf{NC}^{1}\) or even ⊕L/poly.
Abstract: In this chapter we show that randomized encoding preserves the security of many cryptographic primitives. We also construct an (information-theoretic) encoding in \(\mathbf{NC}^{0}_{4}\) for any function in \(\mathbf{NC}^{1}\) or even ⊕L/poly. The combination of these results gives a compiler that takes as input the code of an \(\mathbf{NC}^{1}\) implementation of some cryptographic primitive and generates an \(\mathbf{NC}^{0}_{4}\) implementation of the same primitive.

174 citations


Journal ArticleDOI
TL;DR: This work considers low-rank reconstruction of a matrix using a subset of its columns and presents asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction.
Abstract: We consider low-rank reconstruction of a matrix using a subset of its columns and present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools ...

172 citations


Journal ArticleDOI
TL;DR: A Monte Carlo algorithm for Hamiltonicity detection in an $n$-vertex undirected graph running in O(1.657^{n}) time is presented, the first superpolynomial improvement on the worst case runtime for the problem since the O*(2^n) bound was established over 50 years ago.
Abstract: We present a Monte Carlo algorithm for Hamiltonicity detection in an $n$-vertex undirected graph running in $O(1.657^{n})$ time. To the best of our knowledge, this is the first superpolynomial improvement on the worst case runtime for the problem since the $O^*(2^n)$ bound established for the traveling salesman problem (TSP) over 50 years ago [R. Bellman, J. Assoc. Comput. Mach., 9 (1962), pp. 61--63], [M. Held and R. M. Karp, J. Soc. Indust. Appl. Math., 10 (1962), pp. 196--210]. ($O^*(f(n))$ suppresses polylogarithmic functions in $f(n)$). It answers in part the first open problem in Woeginger's 2003 survey on exact algorithms for NP-hard problems. For bipartite graphs, we improve the bound to $O^*(\sqrt{2}^n)\subset O(1.415^{n})$ time. Both the bipartite and the general algorithm can be implemented to use space polynomial in $n$. We combine several recently resurrected ideas to get the results. Our main technical contribution is a new algebraic characterization of Hamiltonian graphs. We introduce an ex...

121 citations
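The $O^*(2^n)$ Bellman/Held–Karp baseline that the paper improves on can be sketched as a bitmask dynamic program. This illustrative version detects a Hamiltonian cycle; variable names are ours.

```python
# dp[mask][v] = True iff some path starting at vertex 0 visits exactly the
# vertices in `mask` and ends at v. A Hamiltonian cycle exists iff some
# full path ends at a neighbor of 0.
def has_hamiltonian_cycle(n, edges):
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    full = (1 << n) - 1
    dp = [[False] * n for _ in range(1 << n)]
    dp[1][0] = True                      # fix vertex 0 as the start
    for mask in range(1 << n):
        for v in range(n):
            if not dp[mask][v]:
                continue
            for w in range(n):
                if adj[v][w] and not (mask >> w) & 1:
                    dp[mask | (1 << w)][w] = True
    return any(dp[full][v] and adj[v][0] for v in range(1, n))

# The 4-cycle is Hamiltonian; the star K_{1,3} is not.
assert has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
assert not has_hamiltonian_cycle(4, [(0, 1), (0, 2), (0, 3)])
```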


Journal ArticleDOI
TL;DR: In this article, the authors present a general framework for reducing the mechanism design problem for multiple agents to single agent subproblems in the context of Bayesian combinatorial auctions, which can be applied to any setting which roughly satisfies the following assumptions: (i) agents' types are distributed independently (not necessarily identically), (ii) objective function is additively separable over the agents, and (iii) there are no interagent constraints except for the supply constraints.
Abstract: We present a general framework for approximately reducing the mechanism design problem for multiple agents to single agent subproblems in the context of Bayesian combinatorial auctions. Our framework can be applied to any setting which roughly satisfies the following assumptions: (i) agents' types are distributed independently (not necessarily identically), (ii) objective function is additively separable over the agents, and (iii) there are no interagent constraints except for the supply constraints (i.e., that the total allocation of each item should not exceed the supply). Our framework is general in the sense that it makes no direct assumption about agents' valuations, type distributions, or single agent constraints (e.g., budget, incentive compatibility, etc.). We present two generic multiagent mechanisms which use single agent mechanisms as black boxes. If an $\alpha$-approximate single agent mechanism is available for each agent, and assuming no agent ever demands more than $\frac{1}{k}$ of all unit...

118 citations


Journal ArticleDOI
TL;DR: The Frechet distance is the minimum length of a leash required to connect a dog and its owner as they walk, without backtracking, along their respective curves from one endpoint to the other; the discrete variant studied in this paper replaces the dog and its owner by a pair of frogs hopping along sequences of stones.
Abstract: The Frechet distance measures similarity between two curves $f$ and $g$ that takes into account the ordering of the points along the two curves: Informally, it is the minimum length of a leash required to connect a dog, walking along $f$, and its owner, walking along $g$, as they walk without backtracking along their respective curves from one endpoint to the other. The discrete Frechet distance replaces the dog and its owner by a pair of frogs that can only reside on $m$ and $n$ specific stones, respectively. The stones are in fact sequences of points, typically sampled from the respective curves $f$ and $g$. These frogs hop from one stone to the next without backtracking, and the discrete Frechet distance is the minimum length of a “leash” that connects the frogs and allows them to execute such a sequence of hops from the starting points to the terminal points of their sequences. The discrete Frechet distance can be computed in $O(mn)$ time by a straightforward dynamic programming algorithm. We present ...

118 citations
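The "straightforward dynamic programming algorithm" mentioned in the abstract can be sketched as follows, for point sequences in the plane (a minimal version; names are ours).

```python
import math

# ca[i][j] = discrete Frechet distance between P[0..i] and Q[0..j]:
# the leash must cover the current pair (P[i], Q[j]) and some valid
# predecessor coupling, giving the classic O(mn) recurrence.
def discrete_frechet(P, Q):
    m, n = len(P), len(Q)
    INF = float("inf")
    ca = [[INF] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                prev = 0.0
            elif i == 0:
                prev = ca[0][j - 1]
            elif j == 0:
                prev = ca[i - 1][0]
            else:
                prev = min(ca[i - 1][j], ca[i][j - 1], ca[i - 1][j - 1])
            ca[i][j] = max(d, prev)
    return ca[m - 1][n - 1]

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
assert discrete_frechet(P, Q) == 1.0   # frogs hop in lockstep, leash length 1
```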


Journal ArticleDOI
TL;DR: This work presents an optimal, combinatorial $1-1/e$ approximation algorithm for monotone submodular optimization over a matroid constraint, and generalizes to the case where the monotone submodular function has restricted curvature.
Abstract: We present an optimal, combinatorial $1-1/e$ approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm [G. Calinescu et al., IPCO, Springer, Berlin, 2007, pp. 182--196] our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by a local search. Both phases are run not on the actual objective function, but on a related auxiliary potential function, which is also monotone and submodular. In our previous work on maximum coverage [Y. Filmus and J. Ward, FOCS, IEEE, Piscataway, NJ, 2012, pp. 659--668], the potential function gives more weight to elements covered multiple times. We generalize this approach from coverage functions to arbitrary monotone submodular functions. When the objective function is a coverage function, both definitions of the potential function coincide. Our approach generalizes to the case where the monotone submodular function has restricted curvature. For any curvatu...

91 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of securely verifying the position of a device in the presence of an adversary, and show that secure positioning is impossible in the vanilla model, even if the adversary is computationally bounded.
Abstract: In this paper, we initiate the theoretical study of cryptographic protocols where the identity, or other credentials and inputs, of a party are derived from its geographic location. We start by considering the central task in this setting, i.e., securely verifying the position of a device. Despite much work in this area, we show that in the vanilla (or standard) model, the above task (i.e., of secure positioning) is impossible to achieve, even if we assume that the adversary is computationally bounded. In light of the above impossibility result, we then turn to Dziembowski's bounded retrieval model (a variant of Maurer's bounded storage model) and formalize and construct information theoretically secure protocols for two fundamental tasks: secure positioning and position-based key exchange. We then show that these tasks are in fact universal in this setting---we show how we can use them to realize secure multiparty computation. Our main contribution in this paper is threefold: to place the problem of secu...

89 citations


Journal ArticleDOI
TL;DR: Prior to this work, the fastest algorithm for the distributed $(\Delta + 1)$-coloring problem ran in $O(\Delta \log \Delta + \log^* n)$ time, due to Kuhn and Wattenhofer; this paper presents an improved algorithm.
Abstract: The distributed $(\Delta + 1)$-coloring problem is one of the most fundamental and well-studied problems in distributed algorithms. Starting with the work of Cole and Vishkin in 1986, a long line of gradually improving algorithms has been published. The state-of-the-art running time, prior to our work, is $O(\Delta \log \Delta + \log^* n)$, due to Kuhn and Wattenhofer [Proceedings of the 25th Annual ACM Symposium on Principles of Distributed Computing, Denver, CO, 2006, pp. 7--15]. Linial [Proceedings of the 28th Annual IEEE Symposium on Foundations of Computer Science, Los Angeles, CA, 1987, pp. 331--335] proved a lower bound of $\frac{1}{2} \log^* n$ for the problem, and Szegedy and Vishwanathan [Proceedings of the 25th Annual ACM Symposium on Theory of Computing, San Diego, CA, 1993, pp. 201--207] provided a heuristic argument that shows that algorithms from a wide family of locally iterative algorithms are unlikely to achieve a running time smaller than $\Theta(\Delta \log \Delta)$. We present a de...

88 citations
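For context, the trivial *centralized* baseline: a greedy sequential scan always produces a proper coloring with at most $\Delta + 1$ colors, because a vertex of degree at most $\Delta$ always has a free color. The paper's challenge is achieving this distributedly in few rounds; this sketch only illustrates why $\Delta + 1$ colors suffice.

```python
# Greedy sequential (Delta + 1)-coloring: give each vertex the smallest
# color not used by its already-colored neighbors.
def greedy_coloring(adj):
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        # deg(v) neighbors can block at most deg(v) colors, so one of the
        # deg(v) + 1 candidate colors is always free.
        color[v] = next(c for c in range(len(adj[v]) + 1) if c not in used)
    return color

# 5-cycle: maximum degree 2, so at most 3 colors {0, 1, 2} are used.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
col = greedy_coloring(adj)
assert all(col[u] != col[v] for u in adj for v in adj[u])
assert max(col.values()) <= 2
```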


Journal ArticleDOI
TL;DR: The solver is based on repeated applications of the incremental sparsifier that produces a chain of graphs which is then used as input to the recursive preconditioned Chebyshev iteration.
Abstract: We present an algorithm that on input of an $n$-vertex $m$-edge weighted graph $G$ and a value $k$ produces an incremental sparsifier $\hat{G}$ with $n-1 + m/k$ edges, such that the relative condition number of $G$ with $\hat{G}$ is bounded above by $\tilde{O}(k\log^2 n)$, with probability $1-p$ (we use the $\tilde{O}()$ notation to hide a factor of at most $(\log\log n)^4$). The algorithm runs in time $\tilde{O}((m \log{n} + n\log^2{n})\log(1/p)).$ As a result, we obtain an algorithm that on input of an $n\times n$ symmetric diagonally dominant matrix $A$ with $m$ nonzero entries and a vector $b$ computes a vector ${x}$ satisfying $||{x}-A^{+}b||_A<\epsilon ||A^{+}b||_A $, in expected time $\tilde{O}(m\log^2{n}\log(1/\epsilon)).$ The solver is based on repeated applications of the incremental sparsifier that produces a chain of graphs which is then used as input to the recursive preconditioned Chebyshev iteration.

Journal ArticleDOI
TL;DR: Given an undirected graph, a collection of $k$ pairs of vertices, and an integer $p$, the Edge Multicut problem asks if there is a set $S$ of at most $p$ edges whose removal separates every given pair.
Abstract: Given an undirected graph $G$, a collection $\{(s_1,t_1), \dots, (s_{k},t_{k})\}$ of pairs of vertices, and an integer $p$, the Edge Multicut problem asks if there is a set $S$ of at most $p$...

Journal ArticleDOI
TL;DR: It is shown that a 2-approximate distance oracle requires space $\widetilde{\Omega}(n^2 / \sqrt{\alpha})$.
Abstract: We give the first improvement to the space/approximation trade-off of distance oracles since the seminal result of Thorup and Zwick. For unweighted undirected graphs, our distance oracle has size $O(n^{5/3})$ and, when queried about vertices at distance $d$, returns a path of length at most 2d+1. For weighted undirected graphs with $m=n^2/\alpha$ edges, our distance oracle has size $O(n^2 / \sqrt[3]{\alpha})$ and returns a factor 2 approximation. Based on a plausible conjecture about the hardness of set intersection queries, we show that a 2-approximate distance oracle requires space $\widetilde{\Omega}(n^2 / \sqrt{\alpha})$. For unweighted graphs, this implies a $\widetilde{\Omega}(n^{1.5})$ space lower bound to achieve approximation 2d+1.

Journal ArticleDOI
TL;DR: The garbled circuit construction is a central tool for constant-round secure computation and has several other applications as discussed by the authors, and it has been shown that it is possible to construct garbled circuits over integers from a bounded but possibly exponential range.
Abstract: Yao's garbled circuit construction transforms a boolean circuit $C:\{0,1\}^n\to\{0,1\}^m$ into a “garbled circuit” $\hat{C}$ along with $n$ pairs of $k$-bit keys, one for each input bit, such that $\hat{C}$ together with the $n$ keys corresponding to an input $x$ reveal $C(x)$ and no additional information about $x$. The garbled circuit construction is a central tool for constant-round secure computation and has several other applications. Motivated by these applications, we suggest an efficient arithmetic variant of Yao's original construction. Our construction transforms an arithmetic circuit $C : \mathbb{Z}^n\to\mathbb{Z}^m$ over integers from a bounded (but possibly exponential) range into a garbled circuit $\hat{C}$ along with $n$ affine functions $L_i : \mathbb{Z}\to \mathbb{Z}^k$ such that $\hat{C}$ together with the $n$ integer vectors $L_i(x_i)$ reveal $C(x)$ and no additional information about $x$. The security of our construction relies on the intractability of the learning with errors problem.
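To fix ideas about the boolean construction that the abstract starts from, here is a toy garbling of a single AND gate, using SHA-256 as an assumed key-derivation function. It omits everything needed for real security (point-and-permute, circuit chaining, oblivious transfer), and all names are ours; it is emphatically not the paper's arithmetic variant.

```python
import hashlib, os

K = 16  # key length in bytes

def H(ka, kb):
    # Assumed KDF: hash of the two input-wire keys, truncated to K bytes.
    return hashlib.sha256(ka + kb).digest()[:K]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    # Two random keys per wire (for bit 0 and bit 1) on wires a, b, c.
    w = {name: (os.urandom(K), os.urandom(K)) for name in "abc"}
    # Each of the 4 rows encrypts the output key under the matching input keys.
    table = [xor(H(w["a"][x], w["b"][y]), w["c"][x & y])
             for x in (0, 1) for y in (0, 1)]
    return w, table

def evaluate(table, ka, kb):
    # Without point-and-permute the evaluator cannot tell which row is the
    # right one, so this sketch just returns all four candidates.
    return [xor(row, H(ka, kb)) for row in table]

w, table = garble_and()
# Holding the keys for a=1, b=1, exactly the key for output bit 1 appears.
cands = evaluate(table, w["a"][1], w["b"][1])
assert w["c"][1] in cands and w["c"][0] not in cands
```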

Journal ArticleDOI
TL;DR: This work studies position-based cryptography in the quantum setting, where the aim is to use the geographical position of a party as its only credential, and shows that if adversaries are allowed to share an arbitrarily large entangled quantum state, the task of secure position-verification is impossible.
Abstract: In this work, we study position-based cryptography in the quantum setting. The aim is to use the geographical position of a party as its only credential. On the negative side, we show that if adversaries are allowed to share an arbitrarily large entangled quantum state, the task of secure position-verification is impossible. To this end, we prove the following very general result. Assume that Alice and Bob hold respectively subsystems $A$ and $B$ of a (possibly) unknown quantum state $|\psi\rangle \in {\cal H}_A \otimes {\cal H}_B$. Their goal is to calculate and share a new state $|\varphi\rangle = U|\psi\rangle$, where $U$ is a fixed unitary operation. The question that we ask is how many rounds of mutual communication are needed. It is easy to achieve such a task using two rounds of classical communication, whereas, in general, it is impossible with no communication at all. Surprisingly, in case Alice and Bob share enough entanglement to start with and we allow an arbitrarily small failure probability,...

Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of finding a preemptive schedule of minimum aggregate cost for a set of jobs, each with an arbitrary release time, a size, and a monotone function specifying the cost incurred when the job is completed at a particular time.
Abstract: We consider the following general scheduling problem. The input consists of $n$ jobs, each with an arbitrary release time, size, and monotone function specifying the cost incurred when the job is completed at a particular time. The objective is to find a preemptive schedule of minimum aggregate cost. This problem formulation is general enough to include many natural scheduling objectives, such as total weighted flow time, total weighted tardiness, and sum of flow time squared. We give an $O(\log \log P )$ approximation for this problem, where $P$ is the ratio of the maximum to minimum job size. We also give an $O(1)$ approximation in the special case of identical release times. These results are obtained by reducing the scheduling problem to a geometric capacitated set cover problem in two dimensions.

Journal ArticleDOI
TL;DR: Asymptotically tight algorithmic bounds are obtained for the Max-Cut and Edge Dominating Set problems on graphs of bounded clique-width $t$: both problems can be solved in time $n^{O(t)}$, but cannot be solved in time $f(t)n^{o(t)}$ for any function $f$ unless the exponential time hypothesis fails.
Abstract: We obtain asymptotically tight algorithmic bounds for Max-Cut and Edge Dominating Set problems on graphs of bounded clique-width. We show that on an $n$-vertex graph of clique-width $t$ both problems (1) cannot be solved in time $f(t)n^{o(t)}$ for any function $f$ of $t$ unless exponential time hypothesis fails, and (2) can be solved in time $n^{O(t)}$.
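For contrast with the structured $n^{O(t)}$ algorithms the abstract describes, the naive $2^n$ brute force for Max-Cut, which ignores clique-width entirely, can be sketched in a few lines (an illustrative baseline only).

```python
from itertools import product

# Try every 2-partition of the vertices and count the crossing edges.
def max_cut(n, edges):
    best = 0
    for side in product((0, 1), repeat=n):
        best = max(best, sum(1 for u, v in edges if side[u] != side[v]))
    return best

# Triangle: every 2-coloring cuts exactly 2 of the 3 edges.
assert max_cut(3, [(0, 1), (1, 2), (0, 2)]) == 2
```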

Journal ArticleDOI
TL;DR: In this paper, the authors describe a data structure that supports access, rank, and select queries, as well as symbol insertions and deletions, on a string $S[1,n]$ over alphabet $[1..\sigma]$ in time $O(\log n/\log\log n)$, which is optimal even on binary sequences and in the amortized sense.
Abstract: We describe a data structure that supports access, rank, and select queries, as well as symbol insertions and deletions, on a string S[1,n] over alphabet $[1..\sigma]$ in time $O(\log n/\log\log n)$, which is optimal even on binary sequences and in the amortized sense. Our time is worst case for the queries and amortized for the updates. This complexity is better than the best previous ones by a $\Theta(1+\log\sigma/\log\log n)$ factor. We also design a variant where times are worst case, yet rank and updates take $O(\log n)$ time. Our structure uses $nH_0(S)+o(n\log\sigma) + O(\sigma\log n)$ bits, where $H_0(S)$ is the zero-order entropy of $S$. Finally, we pursue various extensions and applications of the result.
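To fix the vocabulary used above: rank(i) counts the 1s in a prefix of the sequence, and select(j) finds the position of the j-th 1. A static (non-dynamic) sketch over a bit string is easy; the paper's contribution is supporting these queries *plus insertions and deletions* in $O(\log n/\log\log n)$ time, which this toy version does not attempt.

```python
# Static rank/select over a bit list, with rank answered from prefix sums.
class BitVector:
    def __init__(self, bits):
        self.bits = bits
        self.pref = [0]
        for b in bits:
            self.pref.append(self.pref[-1] + b)

    def rank(self, i):
        # Number of 1s in bits[0..i) (strictly before position i).
        return self.pref[i]

    def select(self, j):
        # Position of the j-th 1 (1-based), or -1 if there are fewer than j.
        for i, b in enumerate(self.bits):
            if b:
                j -= 1
                if j == 0:
                    return i
        return -1

bv = BitVector([1, 0, 1, 1, 0, 1])
assert bv.rank(4) == 3      # ones at positions 0, 2, 3
assert bv.select(3) == 3    # the third 1 sits at index 3
```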

Journal ArticleDOI
TL;DR: The first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth are obtained and it is proved that the sources in (1) and (2) are (close to) a convex combination of high-entropy "bit-block"sources.
Abstract: We obtain the first deterministic extractors for sources generated (or sampled) by small circuits of bounded depth. Our main results are (1) we extract $k (k/nd)^{O(1)}$ bits with exponentially small error from $n$-bit sources of min-entropy $k$ that are generated by functions $f : \{0, 1\}^\ell \to \{0, 1\}^n$, where each output bit depends on $\le d$ input bits. In particular, we extract from $\mathrm{NC}^0$ sources, corresponding to $d = O(1)$; (2) we extract $k (k/n^{1+\gamma})^{O(1)}$ bits with superpolynomially small error from $n$-bit sources of min-entropy $k$ that are generated by $\mathrm{poly}(n)$-size $\mathrm{AC}^0$ circuits, for any $\gamma > 0$. As our starting point, we revisit the connection by Trevisan and Vadhan [IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Los Alamitos, CA, 2000, pp. 32--42] between circuit lower bounds and extractors for sources generated by circuits. We note that such extractors (with very weak parameters) are equivalent to lower bounds f...

Journal ArticleDOI
TL;DR: A constant-factor approximation algorithm for the unsplittable flow problem on a path which improves on the previous best known approximation factor of O(log n), and introduces several novel algorithmic techniques, which might be of independent interest.
Abstract: In the unsplittable flow problem on a path, we are given a capacitated path $P$ and $n$ tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum profit set of tasks such that, for each edge $e$ of $P$, the total demand of selected tasks that use $e$ does not exceed the capacity of $e$. This is a well-studied problem that has been described under alternative names, such as resource allocation, bandwidth allocation, resource constrained scheduling, temporal knapsack, and interval packing. We present a polynomial time constant-factor approximation algorithm for this problem. This improves on the previous best known approximation ratio of $O(\log n)$. The approximation ratio of our algorithm is $7+\epsilon$ for any $\epsilon>0$. We introduce several novel algorithmic techniques, which might be of independent interest: a framework which reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program which solv...
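A minimal sketch of the problem setup: tasks on a capacitated path, with feasibility meaning no edge's capacity is exceeded. The greedy-by-profit-density rule below is only an illustrative heuristic baseline (names and the instance are ours), not the paper's $(7+\epsilon)$-approximation.

```python
# Tasks are (start_edge, end_edge, demand, profit), using half-open edge
# intervals [start, end) on the path. Greedily admit by profit density.
def greedy_ufp(capacity, tasks):
    load = [0] * len(capacity)
    chosen, profit = [], 0
    for idx, (s, e, d, p) in sorted(enumerate(tasks),
                                    key=lambda t: -t[1][3] / t[1][2]):
        if all(load[i] + d <= capacity[i] for i in range(s, e)):
            for i in range(s, e):
                load[i] += d
            chosen.append(idx)
            profit += p
    return profit, sorted(chosen)

cap = [2, 2, 2]                        # three edges on the path
tasks = [(0, 3, 2, 3),                 # spans everything, demand 2
         (0, 1, 1, 2), (2, 3, 1, 2)]   # two light tasks at the ends
profit, chosen = greedy_ufp(cap, tasks)
assert profit == 4 and chosen == [1, 2]   # the two light tasks win
```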

Journal ArticleDOI
TL;DR: This paper gives the first linear time algorithm for computing a maximum size popular matching in a bipartite graph $G$ where each vertex ranks its neighbors in a strict order of preference.
Abstract: Given a bipartite graph $G = (\mathcal{A}\cup\mathcal{B}, E)$ where each vertex ranks its neighbors in a strict order of preference, the problem of computing a stable matching is classical and well studied. A stable matching has size at least $\frac{1}{2}|M_{\max}|$, where $M_{\max}$ is a maximum size matching in $G$, and there are simple examples where this bound is tight. It is known that a stable matching is a minimum size popular matching. A matching $M$ is said to be popular if there is no matching where more vertices are better off than in $M$. In this paper we show the first linear time algorithm for computing a maximum size popular matching in $G$. A maximum size popular matching is guaranteed to have size at least $\frac{2}{3}|M_{\max}|$, and this bound is tight. We then consider the following problem: is there a maximum size matching $M^*$ that is popular within the set of maximum size matchings in $G$, that is, $|M^*| = |M_{\max}|$ and there is no maximum size matching that is more popular than...
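For context on the stable-matching baseline the abstract starts from, here is the classic Gale–Shapley proposal algorithm, which computes a stable matching in a bipartite preference instance. (The paper's object, a maximum size *popular* matching, can be larger; this sketch covers only the stable case, and the instance is illustrative.)

```python
# a_prefs[a] is a's ranked list of partners on the B side; b_prefs[b] likewise.
def gale_shapley(a_prefs, b_prefs):
    rank = {b: {a: i for i, a in enumerate(p)} for b, p in b_prefs.items()}
    free = list(a_prefs)
    nxt = {a: 0 for a in a_prefs}     # next proposal index for each a
    match_b = {}                      # b -> current partner
    while free:
        a = free.pop()
        b = a_prefs[a][nxt[a]]
        nxt[a] += 1
        if b not in match_b:
            match_b[b] = a
        elif rank[b][a] < rank[b][match_b[b]]:
            free.append(match_b[b])   # b trades up; old partner is free again
            match_b[b] = a
        else:
            free.append(a)            # b rejects a
    return {a: b for b, a in match_b.items()}

a_prefs = {"x": ["u", "v"], "y": ["u", "v"]}
b_prefs = {"u": ["y", "x"], "v": ["x", "y"]}
assert gale_shapley(a_prefs, b_prefs) == {"x": "v", "y": "u"}
```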

Journal ArticleDOI
TL;DR: This paper presents an unconditional construction of a non-malleable extractor with short seeds and obtains the first 2-round privacy amplification protocol for min-entropy rate 1/2 + δ with asymptotically optimal entropy loss and poly-logarithmic communication complexity.
Abstract: Motivated by the classical problem of privacy amplification, Dodis and Wichs [in Proceedings of the 41st Annual ACM Symposium on Theory of Computing, 2009, pp. 601--610] introduced the notion of a nonmalleable extractor, significantly strengthening the notion of a strong extractor. A nonmalleable extractor is a function $\mathsf{nmExt}:\{0,1\}^n\times\{0,1\}^d\to\{0,1\}^m$ that takes two inputs---a weak source $W$ and a uniform (independent) seed $S$---and outputs a string $\mathsf{nmExt}(W,S)$ that is nearly uniform given the seed $S$ as well as the value $\mathsf{nmExt}(W,S')$ for any seed $S'\neq S$ that may be determined as an arbitrary function of $S$. The first explicit construction of a nonmalleable extractor was recently provided by Dodis et al. [Privacy Amplification and Non-malleable Extractors via Character Sums, preprint, arXiv:1102.5415 [cs.CR], 2011]. Their extractor works for any weak source with min-entropy rate $1/2+\delta$, where $\delta>0$ is an arbitrary constant and outputs up to a li...

Journal ArticleDOI
TL;DR: A polynomial-time solution of the extension problem, where the input consists of finite simplicial complexes X, Y, plus a subspace A and a (simplicial) map f:A to Y, and the question is the extendability of f to all of X.
Abstract: For several computational problems in homotopy theory, we obtain algorithms with running time polynomial in the input size. In particular, for every fixed k >= 2, there is a polynomial-time algorithm that, for a 1-connected topological space X given as a finite simplicial complex, or more generally, as a simplicial set with polynomial-time homology, computes the kth homotopy group pi(k)(X), as well as the first k stages of a Postnikov system of X. Combined with results of an earlier paper, this yields a polynomial-time computation of [X, Y], i.e., all homotopy classes of continuous mappings X -> Y, under the assumption that Y is (k - 1)-connected and dim X <= 2k - 2. We also obtain a polynomial-time solution of the extension problem, where the input consists of finite simplicial complexes X and Y, with Y (k - 1)-connected and dim X <= 2k - 2, plus a subspace A of X and a (simplicial) map f : A -> Y, and the question is the extendability of f to all of X. The algorithms are based on the notion of a simplicial set with polynomial-time homology, which is an enhancement of the notion of a simplicial set with effective homology developed earlier by Sergeraert and his coworkers. Our polynomial-time algorithms are obtained by showing that simplicial sets with polynomial-time homology are closed under various operations, most notably Cartesian products, twisted Cartesian products, and classifying space. One of the key components is also polynomial-time homology for the Eilenberg-MacLane space K(Z, 1), provided in another recent paper by Krcal, Matousek, and Sergeraert.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the correlation of a depth-$d$ unbounded fanin circuit of size $S$ with the parity of $n$ variables is at most $2^{-\Omega(n/(\log S)^{d-1})}$.
Abstract: We prove that the correlation of a depth-$d$ unbounded fanin circuit of size $S$ with parity of $n$ variables is at most $2^{-\Omega(n/(\log S)^{d-1})}$.

Journal ArticleDOI
TL;DR: This work gives the first black-box reduction from arbitrary approximation algorithms to truthful approximation mechanisms for a non-trivial class of multi-parameter problems, and makes novel use of smoothed analysis, by employing small perturbations as a tool in algorithmic mechanism design.
Abstract: We give the first black-box reduction from approximation algorithms to truthful approximation mechanisms for a non-trivial class of multi-parameter problems. Specifically, we prove that every welfare-maximization problem that admits a fully polynomial-time approximation scheme (FPTAS) and can be encoded as a packing problem also admits a truthful-in-expectation randomized mechanism that is an FPTAS. Our reduction makes novel use of smoothed analysis by employing small perturbations as a tool in algorithmic mechanism design. We develop a “duality” between linear perturbations of the objective function of an optimization problem and of its feasible set, and we use the “primal” and “dual” viewpoints to prove the running time bound and the truthfulness guarantee, respectively, for our mechanism.

Journal ArticleDOI
TL;DR: A common generalization of graph partitioning problems from a min-max perspective is considered, and an O(1) approximation algorithm is given for graphs that exclude any fixed minor and a new procedure for solving the small-set expansion problem is used.
Abstract: We study graph partitioning problems from a min-max perspective, in which an input graph on $n$ vertices should be partitioned into $k$ parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are where the $k$ parts need to be of equal size, and where they must separate a set of $k$ given terminals. We consider a common generalization of these two problems, and design for it an $O(\sqrt{\log n\log k})$ approximation algorithm. This improves over an $O(\log^2 n)$ approximation for the second version due to Svitkina and Tardos [Min-max multiway cut, in APPROX-RANDOM, 2004, Springer, Berlin, 2004], and roughly $O(k\log n)$ approximation for the first version that follows from other previous work. We also give an $O(1)$ approximation algorithm for graphs that exclude any fixed minor. Our algorithm uses a new procedure for solving the small-set expansion problem. In this problem, we are given a graph $G$ and the goal is to find a nonempty set...

Journal ArticleDOI
TL;DR: An extensive study of the power and limits of online reordering for minimum makespan scheduling with reordering buffers is presented and a scheduling algorithm is given that achieves a competitive ratio of 2 with a reordering buffer of size m.
Abstract: In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to $m$ parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited storage capacity can be used to reorder the input sequence in a restricted fashion so as to schedule the jobs with a smaller makespan. This is a natural extension of lookahead. We present an extensive study of the power and limits of online reordering for minimum makespan scheduling. As a main result, we give, for $m$ identical machines, tight and, in comparison to the problem without reordering, much improved bounds on the competitive ratio for minimum makespan scheduling with reordering buffers. Depending on $m$,...
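The classic no-buffer baseline for online makespan scheduling is greedy list scheduling (Graham): assign each arriving job to the currently least-loaded of the $m$ machines, which is $(2 - 1/m)$-competitive. The paper studies how much a reordering buffer improves on such online guarantees; this sketch shows only the baseline.

```python
import heapq

# Keep machine loads in a min-heap; each job goes to the lightest machine.
def list_scheduling(jobs, m):
    loads = [0] * m
    heap = [(0, i) for i in range(m)]
    for p in jobs:
        load, i = heapq.heappop(heap)
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return max(loads)   # the makespan of the greedy schedule

assert list_scheduling([5, 4, 3, 2], 2) == 7   # machines end at loads 7 and 7
```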

Journal ArticleDOI
TL;DR: It is shown that no monotone circuit of size $O(n^{k/4})$ solves the $k$-clique problem with high probability on Erdős--Rényi random graphs $G(n,p)$ for two sufficiently far-apart threshold functions $p(n)$ around the critical density $n^{-2/(k-1)}$.
Abstract: We present lower and upper bounds showing that the average-case complexity of the $k$-Clique problem on monotone circuits is $n^{k/4 + O(1)}$. Similar bounds for $\mathsf{AC}^0$ circuits were shown in Rossman [Proceedings of the 40th Annual ACM Symposium on Theory of Computing, 2008, pp. 721--730] and Amano [Comput. Complexity, 19 (2010), pp. 183--210].

Journal ArticleDOI
TL;DR: The PPSZ algorithm by Paturi, Pudlak, Saks, and Zane is the fastest known algorithm for Unique $k$-SAT, where the input formula has at most one satisfying assignment; this paper shows, using a slightly modified PPSZ algorithm, that the same bounds hold for general $k$-SAT with $k=3,4$.
Abstract: The PPSZ algorithm by Paturi, Pudlak, Saks, and Zane [J. ACM, 52 (2005), pp. 337--364] is the fastest known algorithm for Unique $k$-SAT, where the input formula does not have more than one satisfying assignment. For $k\geq 5$ the same bounds hold for general $k$-SAT. We show that this is also the case for k=3,4, using a slightly modified PPSZ algorithm. We do the analysis by defining a cost for satisfiable conjunctive normal form formulas, which we prove to decrease in each PPSZ step by a certain amount. This improves our previous best bounds with Moser and Scheder [Proceedings of STACS, 2011, pp. 237--248] for 3-SAT to $O(1.308^n)$ and for 4-SAT to $O(1.469^n)$. Furthermore, our analysis is much simpler than the existing analysis of PPSZ for general $k$-SAT.
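A heavily simplified, assumption-laden sketch of the PPSZ idea: process variables in a random order, take a forced value whenever some clause has become unit under the partial assignment, and guess otherwise. Real PPSZ uses bounded-width resolution (not just unit clauses) and a delicate success-probability analysis; all names below are ours.

```python
import random

def ppsz_trial(n, clauses, rng):
    # Clauses are lists of nonzero literals: +v means v=True, -v means v=False.
    assign = {}
    for v in rng.sample(range(1, n + 1), n):   # random variable order
        forced = None
        for cl in clauses:
            unassigned = [l for l in cl if abs(l) not in assign]
            satisfied = any(assign.get(abs(l)) == (l > 0) for l in cl)
            if not satisfied and unassigned == [v]:
                forced = True                  # unit clause forces v = True
            elif not satisfied and unassigned == [-v]:
                forced = False                 # unit clause forces v = False
        assign[v] = forced if forced is not None else rng.random() < 0.5
    ok = all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses)
    return assign if ok else None

def ppsz(n, clauses, tries=200, seed=1):
    rng = random.Random(seed)
    for _ in range(tries):
        a = ppsz_trial(n, clauses, rng)
        if a is not None:
            return a
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) is satisfiable.
sol = ppsz(3, [[1, 2], [-1, 3], [-2, -3]])
assert sol is not None
```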

Journal ArticleDOI
TL;DR: It is shown that, for any $\gamma > 0$, the combinatorial complexity of the union of $n$ locally $\gamma$-fat objects of constant complexity in the plane is $\frac{n}{\gamma^4} 2^{O(\log^*n)}$.
Abstract: We show that, for any $\gamma > 0$, the combinatorial complexity of the union of $n$ locally $\gamma$-fat objects of constant complexity in the plane is $\frac{n}{\gamma^4} 2^{O(\log^*n)}$. For the special case of $\gamma$-fat triangles, the bound improves to $O(n \log^*{n} + \frac{n}{\gamma}\log^2{\frac{1}{\gamma}})$.