
Showing papers in "Electronic Colloquium on Computational Complexity in 2015"


Journal Article
TL;DR: This work surveys the development of distribution testing, one of the most recent and prolific offshoots of property testing, and describes the state of the art for a variety of testing problems.
Abstract: The field of property testing originated in work on program checking, and has evolved into an established and very active research area. In this work, we survey the developments of one of its most recent and prolific offspring, distribution testing. This subfield, at the junction of property testing and Statistics, is concerned with studying properties of probability distributions. We cover the current status of distribution testing in several settings, starting with the traditional sampling model where the algorithm obtains independent samples from the distribution. We then discuss different recent models, which either grant the testing algorithms more powerful types of queries, or evaluate their performance against that of an information-theoretically optimal “adversary” (for a given number of samples). In each setting, we describe the state of the art for a variety of testing problems. We hope this survey will serve as a self-contained introduction for those considering research in this field.
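The "traditional sampling model" mentioned above can be illustrated with the classical collision-based uniformity tester, one of the basic tools covered in this line of work. The sketch below is a standard textbook formulation rather than code from the survey, and its sample size and acceptance threshold are only indicative.

    import random
    from itertools import combinations

    def collision_statistic(samples):
        """Fraction of sample pairs that collide -- an unbiased estimator of sum_i p_i^2."""
        m = len(samples)
        collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
        return collisions / (m * (m - 1) / 2)

    def test_uniformity(draw_sample, n, eps, m):
        """Accept (True) if the collision rate looks like that of the uniform distribution.

        draw_sample: callable returning one independent sample from the unknown
        distribution over {0, ..., n-1}.  The threshold (1 + 2*eps**2)/n sits between
        1/n (the uniform collision probability) and (1 + 4*eps**2)/n (a lower bound on
        the collision probability of any distribution eps-far from uniform in total
        variation distance).
        """
        samples = [draw_sample() for _ in range(m)]
        return collision_statistic(samples) <= (1 + 2 * eps**2) / n

    # Demo: the tester accepts a genuinely uniform distribution with high probability.
    n = 1000
    print(test_uniformity(lambda: random.randrange(n), n, eps=0.25, m=2000))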

103 citations


Journal Article
TL;DR: In this paper, it was shown that deterministic communication complexity can be superlogarithmic in the partition number of the associated communication matrix, and near-optimal deterministic lower bounds were obtained.
Abstract: We show that deterministic communication complexity can be superlogarithmic in the partition number of the associated communication matrix. We also obtain near-optimal deterministic lower bounds fo...

84 citations


Journal Article
TL;DR: An explicit extractor for two independent sources on n bits, each with polylogarithmic min-entropy, is constructed; it outputs one bit with polynomially small error, and it improves the Ramsey-graph bounds of Barak et al.
Abstract: We explicitly construct an extractor for two independent sources on n bits, each with polylogarithmic min-entropy. Our extractor outputs one bit and has polynomially small error. The best previous extractor, by Bourgain, required each source to have min-entropy .499n. A key ingredient in our construction is an explicit construction of a monotone, almost-balanced Boolean function that is resilient to coalitions. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q quadratically smaller than our result. Our explicit two-source extractor directly implies improved constructions of a K-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.
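For reference, the object being constructed is, in its standard formulation (restated here rather than quoted from the paper), a function $\mathrm{Ext}:\{0,1\}^n \times \{0,1\}^n \to \{0,1\}$ such that for all independent sources $X, Y$ over $\{0,1\}^n$ with min-entropy $H_\infty(X), H_\infty(Y) \ge k$,

    $|\Pr[\mathrm{Ext}(X,Y)=1] - 1/2| \le \varepsilon.$

The result above achieves $k = \mathrm{polylog}(n)$ and error $\varepsilon = 1/\mathrm{poly}(n)$, whereas Bourgain's extractor needed $k \ge .499n$.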

79 citations


Journal Article
TL;DR: It is proved that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and one-way functions with sub-exponential hardness, by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD.
Abstract: We prove that finding a Nash equilibrium of a game is hard, assuming the existence of indistinguishability obfuscation and one-way functions with sub-exponential hardness. We do so by showing how these cryptographic primitives give rise to a hard computational problem that lies in the complexity class PPAD, for which finding Nash equilibrium is complete. Previous proposals for basing PPAD-hardness on program obfuscation considered a strong "virtual black-box" notion that is subject to severe limitations and is unlikely to be realizable for the programs in question. In contrast, for indistinguishability obfuscation no such limitations are known, and recently, several candidate constructions of indistinguishability obfuscation were suggested based on different hardness assumptions on multilinear maps. Our result provides further evidence of the intractability of finding a Nash equilibrium, one that is extrinsic to the evidence presented so far.

75 citations


Journal Article
TL;DR: Unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
Abstract: We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences. In particular we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e. the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence we conclude that unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
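For orientation, a standard way to state the two hypotheses (paraphrased, not quoted from the paper): SETH asserts that for every $\varepsilon > 0$ there is a $k$ such that $k$-SAT on $n$ variables is not solvable in time $O(2^{(1-\varepsilon)n})$; NSETH asserts the nondeterministic analogue, namely that for every $\varepsilon > 0$ there is a $k$ such that $k$-TAUT (the set of $k$-DNF tautologies) is not in $\mathsf{NTIME}[2^{(1-\varepsilon)n}]$, i.e. unsatisfiability of $k$-CNFs admits no nondeterministic proofs verifiable in time $2^{(1-\varepsilon)n}$.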

74 citations


Journal Article
TL;DR: A formal framework for studying the relationship between the fundamental resources of memory or communication and the sample complexity of the learning task is introduced, and strong lower bounds on learning parity functions with bounded communication are shown.
Abstract: If a concept class can be represented with a certain amount of memory, can it be efficiently learned with the same amount of memory? What concepts can be efficiently learned by algorithms that extract only a few bits of information from each example? We introduce a formal framework for studying these questions, and investigate the relationship between the fundamental resources of memory or communication and the sample complexity of the learning task. We relate our memory-bounded and communication-bounded learning models to the well-studied statistical query model. This connection can be leveraged to obtain both upper and lower bounds: we show strong lower bounds on learning parity functions with bounded communication, as well as the first upper bounds on solving generic sparse linear regression problems with limited memory.
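As a point of contrast for the question posed above (and explicitly not the paper's bounded-memory setting), a learner that is free to store all of its examples can recover a parity function over n bits from O(n) random examples by Gaussian elimination over GF(2); a minimal sketch:

    import random

    def learn_parity(examples, n):
        """Recover the hidden index set S of a parity f(x) = XOR of x_i over i in S,
        from labelled examples (x, f(x)), via Gaussian elimination over GF(2).
        Stores the whole sample -- precisely the memory budget the paper asks about."""
        # Encode each example as one integer: coefficient bits 0..n-1, label in bit n.
        rows = [sum(bit << i for i, bit in enumerate(x)) | (y << n) for x, y in examples]
        basis = {}  # pivot column -> row whose highest coefficient bit is that column
        for r in rows:
            for col in reversed(range(n)):
                if not (r >> col) & 1:
                    continue
                if col in basis:
                    r ^= basis[col]  # eliminate this column, keep scanning lower ones
                else:
                    basis[col] = r
                    break
        if len(basis) < n:
            return None  # the examples seen so far do not determine S
        # Back-substitution: reduce each pivot row to a single variable, then read off S.
        for col in sorted(basis):
            for lower in range(col):
                if (basis[col] >> lower) & 1:
                    basis[col] ^= basis[lower]
        return {col for col in range(n) if (basis[col] >> n) & 1}

    # Demo: plant a parity on 20 bits and recover it from 40 uniformly random examples.
    n, S = 20, {1, 4, 7, 13}
    examples = []
    for _ in range(40):
        x = [random.randrange(2) for _ in range(n)]
        examples.append((x, sum(x[i] for i in S) % 2))
    print(learn_parity(examples, n))  # almost surely prints {1, 4, 7, 13}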

63 citations



Journal Article
TL;DR: In this paper, the authors provide a nearly complete picture of the relationships among classical communication complexity classes between P and PSPACE, short of proving lower bounds against classes for which no explicit lower bounds were already known.
Abstract: We prove several results which, together with prior work, provide a nearly-complete picture of the relationships among classical communication complexity classes between $\mathsf{P}$ and $\mathsf{PSPACE}$, short of proving lower bounds against classes for which no explicit lower bounds were already known. Our article also serves as an up-to-date survey on the state of structural communication complexity. Among our new results we show that $\mathsf{MA} \not\subseteq \mathsf{ZPP}^{\mathsf{NP}[1]}$, that is, Merlin–Arthur proof systems cannot be simulated by zero-sided error randomized protocols with one $\mathsf{NP}$ query. Here the class $\mathsf{ZPP}^{\mathsf{NP}[1]}$ has the property that generalizing it in the slightest ways would make it contain $\mathsf{AM} \cap \mathsf{coAM}$, for which it is notoriously open to prove any explicit lower bounds. We also prove that $\mathsf{US} \not\subseteq \mathsf{ZPP}^{\mathsf{NP}[1]}$, where $\mathsf{US}$ is the class whose canonically complete problem is the variant of set-disjointness where yes-instances are uniquely intersecting. We also prove that $\mathsf{US} \not\subseteq \mathsf{coDP}$, where $\mathsf{DP}$ is the class of differences of two $\mathsf{NP}$ sets. Finally, we explore an intriguing open issue: Are rank-1 matrices inherently more powerful than rectangles in communication complexity? We prove a new separation concerning $\mathsf{PP}$ that sheds light on this issue and strengthens some previously known separations.

51 citations


Journal Article
TL;DR: A (3+1/86)n-o(n) lower bound is proved on the size of Boolean circuits over the full binary basis for an explicitly defined predicate, namely an affine disperser for sublinear dimension.
Abstract: We consider Boolean circuits over the full binary basis. We prove a (3+1/86)n-o(n) lower bound on the size of such a circuit for an explicitly defined predicate, namely an affine disperser for sublinear dimension. This improves the 3n-o(n) bound of Norbert Blum (1984). The proof is based on the gate elimination technique extended with the following three ideas. We generalize the computational model by allowing circuits to contain cycles; this in turn allows us to perform affine substitutions. We use a carefully chosen circuit complexity measure to track the progress of the gate elimination process. Finally, we use quadratic substitutions that may be viewed as delayed affine substitutions.

48 citations


Journal Article
TL;DR: It is shown that randomized communication complexity can be superlogarithmic in the partition number of the associated communication matrix, and near-optimal randomized lower bounds for the Clique vs. Independent Set problem are obtained.
Abstract: We show that randomized communication complexity can be superlogarithmic in the partition number of the associated communication matrix, and we obtain near-optimal randomized lower bounds for the Clique vs. Independent Set problem. These results strengthen the deterministic lower bounds obtained in prior work (Goos, Pitassi, and Watson, FOCS 2015). One of our main technical contributions states that information complexity when the cost is measured with respect to only 1-inputs (or only 0-inputs) is essentially equivalent to information complexity with respect to all inputs.

46 citations


Journal Article
TL;DR: In this article, it is shown that pseudorandomness is achieved if the predicate is (a) $k=\Omega(s)$-resilient, i.e., uncorrelated with any $k$-subset of its inputs, a
Abstract: Suppose that you have $n$ truly random bits $x=(x_1,\ldots,x_n)$ and you wish to use them to generate $m\gg n$ pseudorandom bits $y=(y_1,\ldots, y_m)$ using a local mapping, i.e., each $y_i$ should depend on at most $d=O(1)$ bits of $x$. In the polynomial regime of $m=n^s$, $s>1$, the only known solution, originating from [Goldreich, Electronic Colloquium on Computational Complexity (ECCC), 2000], is based on random local functions: Compute $y_i$ by applying some fixed (public) $d$-ary predicate $P$ to a random (public) tuple of distinct inputs $(x_{i_1},\ldots,x_{i_d})$. Our goal in this paper is to understand, for any value of $s$, how the pseudorandomness of the resulting sequence depends on the choice of the underlying predicate. We derive the following results: (1) We show that pseudorandomness against $\mathbb{F}_2$-linear adversaries (i.e., the distribution $y$ has small bias) is achieved if the predicate is (a) $k=\Omega(s)$-resilient, i.e., uncorrelated with any $k$-subset of its inputs, a
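A minimal sketch of the random local function template described above; the specific predicate used here and the sampling of index tuples are placeholders for illustration (the paper's point is precisely that the choice of predicate matters) and are not a construction taken from the paper.

    import random

    def sample_index_tuples(n, m, d, rng):
        """Public randomness: for each output bit, a random tuple of d distinct input indices."""
        return [rng.sample(range(n), d) for _ in range(m)]

    def placeholder_predicate(bits):
        """A fixed 5-ary predicate used only for illustration; in the construction P is any
        fixed public d-ary predicate, chosen with the abstract's criteria (resilience etc.) in mind."""
        return (bits[0] ^ bits[1] ^ bits[2]) ^ (bits[3] & bits[4])

    def local_prg(x, tuples, predicate):
        """Goldreich-style random local function: y_i = P(x restricted to the i-th tuple)."""
        return [predicate([x[j] for j in t]) for t in tuples]

    # Stretch n truly random bits to m = n^1.5 output bits with locality d = 5.
    rng = random.Random(0)
    n, d = 1024, 5
    m = int(n ** 1.5)
    tuples = sample_index_tuples(n, m, d, rng)
    x = [random.randrange(2) for _ in range(n)]
    y = local_prg(x, tuples, placeholder_predicate)
    print(y[:16])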

Journal Article
TL;DR: In this article, an explicit extractor for independent general weak random sources with min-entropy as small as Ω(log n+O(1) ) was presented.
Abstract: We continue the study of constructing explicit extractors for independent general weak random sources. The ultimate goal is to give a construction that matches what is given by the probabilistic method --- an extractor for two independent $n$-bit weak random sources with min-entropy as small as $\log n+O(1)$. Previously, the best known result in the two-source case is an extractor by Bourgain \cite{Bourgain05}, which works for min-entropy $0.49n$; and the best known result in the general case is an earlier work of the author \cite{Li13b}, which gives an extractor for a constant number of independent sources with min-entropy $\mathsf{polylog(n)}$. However, the constant in the construction of \cite{Li13b} depends on the hidden constant in the best known seeded extractor, and can be large; moreover the error in that construction is only $1/\mathsf{poly(n)}$. In this paper, we make two important improvements over the result in \cite{Li13b}. First, we construct an explicit extractor for \emph{three} independent sources on $n$ bits with min-entropy $k \geq \mathsf{polylog(n)}$. In fact, our extractor works for one independent source with poly-logarithmic min-entropy and another independent block source with two blocks each having poly-logarithmic min-entropy. Thus, our result is nearly optimal, and the next step would be to break the $0.49n$ barrier in two-source extractors. Second, we improve the error of the extractor from $1/\mathsf{poly(n)}$ to $2^{-k^{\Omega(1)}}$, which is almost optimal and crucial for cryptographic applications. Some of the techniques developed here may be of independent interests.

Journal Article
TL;DR: An ω(log n) lower bound on the co-nondeterministic communication complexity of the Clique vs. Independent Set problem introduced by Yannakakis is proved, which implies superpolynomial lower bounds for the Alon–Saks–Seymour conjecture in graph theory.
Abstract: We prove a superlogarithmic lower bound on the co-nondeterministic communication complexity of the Clique vs. Independent Set problem introduced by Yannakakis (STOC 1988, JCSS 1991). As a corollary, this implies superpolynomial lower bounds for the Alon–Saks–Seymour conjecture in graph theory. Our approach is to first exhibit a query complexity separation for the decision tree analogue of the UP vs. coNP question -- namely, unambiguous DNF width vs. CNF width -- and then "lift" this separation over to communication complexity using a result from prior work.

Journal Article
TL;DR: The lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky, is revisited, and it is proved that for the offline case, showing a lower bound without the "balls and bins" restriction is related to the size of circuits for sorting.
Abstract: An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e. for every input the observed locations accessed are similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible. We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, including the offline case, when the simulator is given the reads and writes ahead of time, it does assume that the simulator behaves in a "balls and bins" fashion. That is, the simulator must act by shuffling data items around, and is not allowed to have sophisticated encoding of the data. We prove that for the offline case, showing a lower bound without the above restriction is related to the size of the circuits for sorting. Our proof is constructive, and uses a bit-slicing approach which manipulates the bit representations of data in the simulation. This implies that without obtaining yet unknown superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.

Journal Article
TL;DR: It is shown that for any odd k and any instance I of the max-kXOR constraint satisfaction problem, there is an efficient algorithm that finds an assignment satisfying at least a 1/2 + Ω(1/√D) fraction of I's constraints, where D is a bound on the number of constraints that each variable occurs in; a similar result holds for triangle-free instances of arbitrary constraint satisfaction problems.
Abstract: We show that for any odd k and any instance I of the max-kXOR constraint satisfaction problem, there is an efficient algorithm that finds an assignment satisfying at least a 1/2 + Ω(1/√D) fraction of I's constraints, where D is a bound on the number of constraints that each variable occurs in. This improves both qualitatively and quantitatively on the recent work of Farhi, Goldstone, and Gutmann (2014), which gave a quantum algorithm to find an assignment satisfying a 1/2 + Ω(D^{-3/4}) fraction of the equations. For arbitrary constraint satisfaction problems, we give a similar result for "triangle-free" instances; i.e., an efficient algorithm that finds an assignment satisfying at least a μ + Ω(1/√degree) fraction of constraints, where μ is the fraction that would be satisfied by a uniformly random assignment.
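For context, the baseline both results are measured against is the uniformly random assignment (a standard observation, not an additional claim from the paper): each XOR constraint $x_{i_1}\oplus\cdots\oplus x_{i_k}=b$ is satisfied by a uniform random $x$ with probability exactly $1/2$, so

    $\mathbb{E}_{x \sim \{0,1\}^n}\bigl[\tfrac{\#\text{satisfied constraints}}{m}\bigr] = \tfrac{1}{2},$

and the guarantee $1/2 + \Omega(1/\sqrt{D})$ is an additive advantage over this trivial bound; $\mu$ plays the same role for general constraint types.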

Journal Article
TL;DR: It is proved that for all ε ≫ √(log(n)/n), the linear-time computable Andreev’s function cannot be computed on a (1/2+ε)-fraction of n-bit inputs by depth-two circuits of o(ε^3 n^{3/2}/log^3 n) gates, nor can it be computed with o(ε^3 n^{5/2}/log^{7/2} n) wires.
Abstract: In order to formally understand the power of neural computing, we first need to crack the frontier of threshold circuits with two and three layers, a regime that has been surprisingly intractable to analyze. We prove the first super-linear gate lower bounds and the first super-quadratic wire lower bounds for depth-two linear threshold circuits with arbitrary weights, and depth-three majority circuits computing an explicit function. (1) We prove that for all ε ≫ √(log(n)/n), the linear-time computable Andreev’s function cannot be computed on a (1/2+ε)-fraction of n-bit inputs by depth-two circuits of o(ε^3 n^{3/2}/log^3 n) gates, nor can it be computed with o(ε^3 n^{5/2}/log^{7/2} n) wires. This establishes an average-case “size hierarchy” for threshold circuits, as Andreev’s function is computable by uniform depth-two circuits of o(n^3) linear threshold gates, and by uniform depth-three circuits of O(n) majority gates. (2) We present a new function in P based on small-biased sets, which we prove cannot be computed by a majority vote of depth-two threshold circuits of o(n^{3/2}/log^3 n) gates, nor with o(n^{5/2}/log^{7/2} n) wires. (3) We give tight average-case (gate and wire) complexity results for computing PARITY with depth-two threshold circuits; the answer turns out to be the same as for depth-two majority circuits. The key is a new method for analyzing random restrictions to linear threshold functions. Our main analytical tool is the Littlewood-Offord Lemma from additive combinatorics.
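For readers outside circuit complexity, the gate type in question is easy to state (a standard definition, not specific to this paper): a linear threshold gate with arbitrary weights computes $g(x) = \mathrm{sign}(w_1 x_1 + \cdots + w_n x_n - \theta)$ for real weights $w_i$ and a real threshold $\theta$; a majority gate is the unweighted special case $w_1 = \cdots = w_n = 1$ with $\theta \approx n/2$. The circuits in the theorems above consist of two or three layers of such gates.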

Journal Article
TL;DR: The 2-D Tucker search problem is shown to be PPA-hard under many-one reductions, and is therefore complete for PPA; the same holds for k-D Tucker for all k ≥ 2.
Abstract: The 2-D Tucker search problem is shown to be PPA-hard under many-one reductions; therefore it is complete for PPA. The same holds for k-D Tucker for all k ≥ 2. This corrects a claim in the literature that the Tucker search problem is in PPAD.

Journal Article
TL;DR: It is shown that all amenable graphs are compact, and it is proved that recognizing each of these graph classes is P-hard, which gives a first complexity lower bound for recognizing compact graphs.
Abstract: Color refinement is a classical technique used to show that two given graphs G and H are non-isomorphic; it is very efficient, although it does not succeed on all graphs. We call a graph G amenable to color refinement if the color refinement procedure succeeds in distinguishing G from any non-isomorphic graph H. Babai et al. (SIAM J Comput 9(3):628–635, 1980) have shown that random graphs are amenable with high probability. We determine the exact range of applicability of color refinement by showing that amenable graphs are recognizable in time O((n+m) log n), where n and m denote the number of vertices and the number of edges in the input graph.
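The color refinement procedure itself is easy to state (this is the standard formulation of the classical technique the paper studies, not code from the paper): repeatedly replace each vertex's color by the pair consisting of its old color and the multiset of its neighbours' old colors, until the partition stops refining; two graphs are declared non-isomorphic if the resulting color histograms differ.

    from collections import Counter

    def stable_coloring(adj):
        """Iterate color refinement to a fixed point.
        adj maps each vertex to an iterable of its neighbours; returns vertex -> color id."""
        color = {v: 0 for v in adj}
        while True:
            signature = {v: (color[v], tuple(sorted(color[u] for u in adj[v]))) for v in adj}
            palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
            new_color = {v: palette[signature[v]] for v in adj}
            if len(set(new_color.values())) == len(set(color.values())):
                return new_color  # the partition no longer refines
            color = new_color

    def cr_distinguishes(adj_g, adj_h):
        """Run refinement on the disjoint union of G and H and compare colour histograms;
        True means colour refinement certifies that G and H are non-isomorphic."""
        union = {('G', v): [('G', u) for u in nbrs] for v, nbrs in adj_g.items()}
        union.update({('H', v): [('H', u) for u in nbrs] for v, nbrs in adj_h.items()})
        col = stable_coloring(union)
        hist_g = Counter(c for (side, _), c in col.items() if side == 'G')
        hist_h = Counter(c for (side, _), c in col.items() if side == 'H')
        return hist_g != hist_h

    # A 6-cycle and two disjoint triangles are both 2-regular, so refinement cannot tell
    # them apart (a classical failure case); a 6-vertex path is separated immediately.
    c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
    p6 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
    print(cr_distinguishes(c6, triangles))  # False
    print(cr_distinguishes(c6, p6))         # True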

Journal Article
TL;DR: In this paper, it was shown that a factor better than 0.99 n^{1/(p+1)}/(p+1)^2 is not achievable for a p-pass semi-streaming algorithm, even allowing randomisation.
Abstract: Set cover, over a universe of size n, may be modelled as a data-streaming problem, where the m sets that comprise the instance are to be read one by one. A semi-streaming algorithm is allowed only O(n · poly{log n, log m}) space to process this stream. For each p ≥ 1, we give a very simple deterministic algorithm that makes p passes over the input stream and returns an appropriately certified (p+1) n^{1/(p+1)}-approximation to the optimum set cover. More importantly, we proceed to show that this approximation factor is essentially tight, by showing that a factor better than 0.99 n^{1/(p+1)}/(p+1)^2 is unachievable for a p-pass semi-streaming algorithm, even allowing randomisation. In particular, this implies that achieving a Θ(log n)-approximation requires Ω(log n / log log n) passes, which is tight up to the log log n factor. These results extend to a relaxation of the set cover problem where we are allowed to leave an ε fraction of the universe uncovered: the tight bounds on the best approximation factor achievable in p passes turn out to be Θ_p(min{n^{1/(p+1)}, ε^{-1/p}}). Our lower bounds are based on a construction of a family of high-rank incidence geometries, which may be thought of as vast generalisations of affine planes. This construction, based on algebraic techniques, appears flexible enough to find other applications and is therefore interesting in its own right.
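One way to picture a multi-pass semi-streaming algorithm of this flavour is a threshold-greedy scheme: keep only the roughly O(n) words recording which elements are still uncovered, and in each pass admit any arriving set that covers many of them, lowering the admission threshold from pass to pass. The sketch below is an illustrative heuristic in that spirit only; its threshold schedule and final clean-up pass are assumptions made for the example, and it is not the paper's certified (p+1) n^{1/(p+1)}-approximation algorithm.

    def threshold_greedy_set_cover(stream_passes, n, p):
        """Multi-pass threshold-greedy sketch in the semi-streaming spirit.

        stream_passes() must return a fresh iterator over (set_id, elements) pairs,
        one iterator per pass, mimicking re-reading the stream.  The algorithm keeps
        only the uncovered-element set and the chosen ids.
        """
        uncovered = set(range(n))
        chosen = []
        for j in range(1, p + 1):
            # Illustrative schedule: admit sets covering at least n^{1 - j/(p+1)} new elements.
            threshold = max(1, int(round(n ** (1 - j / (p + 1)))))
            for set_id, elements in stream_passes():
                if len(uncovered & set(elements)) >= threshold:
                    chosen.append(set_id)
                    uncovered -= set(elements)
        # Final clean-up pass: cover whatever is left with any set containing it.
        for set_id, elements in stream_passes():
            if uncovered & set(elements):
                chosen.append(set_id)
                uncovered -= set(elements)
            if not uncovered:
                break
        return chosen

    # Tiny demo instance (universe {0..9}, four sets), p = 2 passes plus the clean-up pass.
    instance = [(0, range(0, 6)), (1, range(4, 10)), (2, [0, 9]), (3, [7])]
    print(threshold_greedy_set_cover(lambda: iter(instance), n=10, p=2))  # [0, 1]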

Journal Article
TL;DR: The feasible interpolation technique is established for all resolution-based QBF systems, whether modelling CDCL or expansion-based solving; this both provides the first general lower bound method for QBF calculi and largely extends the scope of classical feasible interpolation.
Abstract: In sharp contrast to classical proof complexity, we are currently short of lower bound techniques for QBF proof systems. We establish the feasible interpolation technique for all resolution-based QBF systems, whether modelling CDCL or expansion-based solving. This both provides the first general lower bound method for QBF calculi and largely extends the scope of classical feasible interpolation. We apply our technique to obtain new exponential lower bounds for all resolution-based QBF systems for a new class of QBF formulas based on the clique problem. Finally, we show how feasible interpolation relates to the recently established lower bound method based on strategy extraction [7].

Journal Article
TL;DR: The main result is an explicit extractor for the sum of C independent sources for some large enough constant C, where each source has polylogarithmic min-entropy.
Abstract: We propose a new model of weak random sources which we call sumset sources. A sumset source X is the sum of C independent sources, with each source on n bits having min-entropy k. We show that extractors for this class of sources can be used to give extractors for most classes of weak sources that have been studied previously, including independent sources, affine sources (which generalize oblivious bit-fixing sources), small space sources, total entropy independent sources, and interleaved sources. This provides a unified approach for randomness extraction. A known extractor for this class of sources, prior to our work, is the Paley graph function introduced by Chor and Goldreich, which works for the sum of 2 independent sources, where one has min-entropy at least 0.51n and the other has logarithmic min-entropy. To the best of our knowledge, the only other known construction is from the work of Kamp, Rao, Vadhan and Zuckerman, which can extract from the sum of exponentially many independent sources. Our main result is an explicit extractor for the sum of C independent sources for some large enough constant C, where each source has polylogarithmic min-entropy. We then use this extractor to obtain improved extractors for other well studied classes of sources including small-space sources, affine sources and interleaved sources.
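A minimal formalisation of the model, reading the "sum" as addition over $\mathbb{F}_2^n$ (bitwise XOR) -- that reading is an assumption of this note rather than a quotation from the paper: X is a sumset source if $X = X_1 + \cdots + X_C$ where $X_1, \ldots, X_C$ are independent distributions over $\{0,1\}^n$ with $H_\infty(X_i) \ge k$ for each $i$, and an extractor for the class must output bits close to uniform on every such X.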

Journal Article
TL;DR: In this paper, the authors assess whether width lower bounds are effective for resolution calculi for quantified Boolean formulas (QBF) and show that both the relations between size and width as well as between space and width drastically fail in Q-resolution, even in its weaker tree-like version.
Abstract: The groundbreaking paper 'Short proofs are narrow - resolution made simple' by Ben-Sasson and Wigderson (J. ACM 2001) introduces what is today arguably the main technique to obtain resolution lower bounds: to show a lower bound for the width of proofs. Another important measure for resolution is space, and in their fundamental work, Atserias and Dalmau (J. Comput. Syst. Sci. 2008) show that space lower bounds again can be obtained via width lower bounds. Here we assess whether similar techniques are effective for resolution calculi for quantified Boolean formulas (QBF). A mixed picture emerges. Our main results show that both the relations between size and width as well as between space and width drastically fail in Q-resolution, even in its weaker tree-like version. On the other hand, we obtain positive results for the expansion-based resolution systems Forall-Exp+Res and IR-calc, however only in the weak tree-like models. Technically, our negative results rely on showing width lower bounds together with simultaneous upper bounds for size and space. For our positive results we exhibit space and width-preserving simulations between QBF resolution calculi.

Journal Article
TL;DR: This work shows an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth-three circuits, and improves upon the quasi-polynomial separation result of Raz and Yehudayoff [2009].
Abstract: We show an exponential separation between two well-studied models of algebraic computation, namely, read-once oblivious algebraic branching programs (ROABPs) and multilinear depth-three circuits. In particular, we show the following: (1) There exists an explicit n-variate polynomial computable by linear sized multilinear depth-three circuits (with only two product gates) such that every ROABP computing it requires 2^{Ω(n)} size. (2) Any multilinear depth-three circuit computing IMM_{n,d} (the iterated matrix multiplication polynomial formed by multiplying d, n × n symbolic matrices) has n^{Ω(d)} size. IMM_{n,d} can be easily computed by a poly(n,d) sized ROABP. (3) Further, the proof of (2) yields an exponential separation between multilinear depth-four and multilinear depth-three circuits: There is an explicit n-variate, degree d polynomial computable by a poly(n) sized multilinear depth-four circuit such that any multilinear depth-three circuit computing it has size n^{Ω(d)}. This improves upon the quasi-polynomial separation of Reference [36] between these two models. The hard polynomial in (1) is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure [15, 33, 34, 36], while (2) is proved via a new adaptation of the dimension of the partial derivatives measure of Reference [32]. Our lower bounds hold over any field.
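For concreteness, the iterated matrix multiplication polynomial referred to in (2) can be written out as follows (one common convention, stated here for orientation, takes an entry of the product of the d symbolic matrices):

    $\mathrm{IMM}_{n,d} \;=\; \bigl(X^{(1)} X^{(2)} \cdots X^{(d)}\bigr)_{1,1} \;=\; \sum_{i_1,\ldots,i_{d-1} \in [n]} x^{(1)}_{1,i_1}\, x^{(2)}_{i_1,i_2} \cdots x^{(d)}_{i_{d-1},1},$

where each $X^{(t)}$ is an $n \times n$ matrix of distinct variables. It is multilinear of degree d, and an ROABP that reads the matrices in order computes it in poly(n, d) size, matching the remark in the abstract.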

Journal Article
TL;DR: This work presents a pseudo-deterministic algorithm that, given a prime p, finds a primitive root modulo p in time exp(O(√(log p · log log p))).
Abstract: Pseudo-deterministic algorithms are randomized search algorithms which output unique solutions (i.e., with high probability they output the same solution on each execution). We present a pseudo-deterministic algorithm that, given a prime p, finds a primitive root modulo p in time exp(O(√(log p · log log p))). This improves upon the previous best known provable deterministic (and pseudo-deterministic) algorithm, which runs in exponential time p^{1/4+o(1)}. Our algorithm matches the problem's best known running time for Las Vegas algorithms, which may output different primitive roots in different executions. When the factorization of p−1 is known, as may be the case when generating primes with p−1 in factored form for use in certain applications, we present a pseudo-deterministic polynomial time algorithm for the case that each prime factor of p−1 is either of size at most log^c(p) or at least p^{1/c} for some constant c > 0. This is a significant improvement over a result of Gat and Goldwasser [5], which described a polynomial time pseudo-deterministic algorithm when the factorization of p−1 was of the form kq for prime q and k = poly(log p). We remark that the Generalized Riemann Hypothesis (GRH) implies that the smallest primitive root g satisfies g ≤ polylog(p). Therefore, assuming GRH, given the factorization of p−1, the smallest primitive root can be found and verified deterministically by brute force in polynomial time.
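The brute-force search-and-verify step mentioned in the closing remark is elementary: g is a primitive root mod p iff g^{(p−1)/q} ≢ 1 (mod p) for every prime factor q of p−1. A small sketch (this is the textbook verification procedure, not the paper's sub-exponential or pseudo-deterministic algorithm):

    def is_primitive_root(g, p, prime_factors_of_p_minus_1):
        """g generates (Z/pZ)* iff g^((p-1)/q) != 1 (mod p) for every prime q dividing p-1."""
        return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors_of_p_minus_1)

    def smallest_primitive_root(p, prime_factors_of_p_minus_1):
        """Deterministic brute force over g = 2, 3, ...; given the factorization of p-1,
        each candidate is verified in polynomial time.  Under GRH the loop stops after
        polynomially many (in log p) candidates, as noted in the abstract."""
        g = 2
        while not is_primitive_root(g, p, prime_factors_of_p_minus_1):
            g += 1
        return g

    # Example: p = 998244353 is prime with p - 1 = 2^23 * 7 * 17 (a known factorization).
    print(smallest_primitive_root(998244353, [2, 7, 17]))  # prints 3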

Journal Article
TL;DR: Near-optimal Ω̃(n^{0.753...}) randomised communication lower bounds are proved for the recursive AND-OR tree, together with an Ω(2.59^k) lower bound for the 3-majority tree of height k, the latter improving over the state of the art already in the context of randomised decision tree complexity.
Abstract: We describe a general method of proving degree lower bounds for conical juntas (nonnegative combinations of conjunctions) that compute recursively defined boolean functions. Such lower bounds are known to carry over to communication complexity. We give two applications: AND-OR trees. We show a near-optimal Ω̃(n^{0.753...}) randomised communication lower bound for the recursive NAND function (a.k.a. AND-OR tree). This answers an open question posed by Beame and Lawry [6, 23]. Majority trees. We show an Ω(2.59^k) randomised communication lower bound for the 3-majority tree of height k. This improves over the state-of-the-art already in the context of randomised decision tree complexity.
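A conical junta, as used above, is just a nonnegative combination of conjunctions (standard terminology, restated for convenience): $h(x) = \sum_i \lambda_i C_i(x)$ where every coefficient $\lambda_i \ge 0$ and every $C_i$ is an AND of literals; its degree is the largest number of literals in any conjunction appearing with $\lambda_i > 0$. The paper's method lower-bounds the degree such combinations need in order to compute the recursively defined functions above, and these degree bounds transfer to communication complexity.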

Journal Article
TL;DR: A deterministic algorithm is presented that counts the number of satisfying assignments of any de Morgan formula F of size at most n^{3−16ε} in time 2^{n−n^ε} · poly(n), for any small constant ε > 0.
Abstract: We present a deterministic algorithm that counts the number of satisfying assignments for any de Morgan formula F of size at most n^{3−16ε} in time 2^{n−n^ε} · poly(n), for any small constant ε > 0. We do this by derandomizing the randomized algorithm mentioned by Komargodski et al. (FOCS, 2013) and Chen et al. (CCC, 2014). Our result uses the tight “shrinkage in expectation” result for de Morgan formulas by Hastad (SICOMP, 1998) as a black box, and improves upon the result of Chen et al. (MFCS, 2014) that gave deterministic counting algorithms for de Morgan formulas of size at most n. Our algorithm generalizes to other bases of Boolean gates, giving a 2^{n−n^ε} · poly(n) time counting algorithm for formulas of size at most n^{Γ+1−O(ε)}, where Γ is the shrinkage exponent for formulas using gates from the basis.

Journal Article
TL;DR: Goldreich and Izsak, as mentioned in this paper, studied negation complexity for Boolean functions and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot.
Abstract: The study of monotonicity and negation complexity for Boolean functions has been prevalent in complexity theory as well as in computational learning theory, but little attention has been given to it in the cryptographic context. Recently, Goldreich and Izsak (2012) have initiated a study of whether cryptographic primitives can be monotone, and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot.


Journal Article
TL;DR: The existence of a coin-flipping protocol safe against any nontrivial constant bias (e.g., 0.499) implies the existence of one-way functions.
Abstract: We show that the existence of a coin-flipping protocol safe against any nontrivial constant bias (e.g., 0.499) implies the existence of one-way functions. This improves upon a result of Haitner and Omri (FOCS '11), who proved this implication for protocols with bias (√2 − 1)/2 − o(1) ≈ 0.207. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.

Journal Article
Gillat Kol
TL;DR: In this article, the authors studied the interactive compression problem for two-party communication protocols with small information cost and gave a simulation protocol whose communication complexity is bounded by a polynomial in the information cost of the original protocol.
Abstract: We study the interactive compression problem: Given a two-party communication protocol with small information cost, can it be compressed so that the total number of bits communicated is also small? We consider the case where the parties have inputs that are independent of each other, and give a simulation protocol that communicates I^2 * polylog(I) bits, where I is the information cost of the original protocol. Our protocol is the first simulation protocol whose communication complexity is bounded by a polynomial in the information cost of the original protocol.