Book

Boolean Function Complexity: Advances and Frontiers

06 Jan 2012
TL;DR: This book gives a comprehensive description of basic lower bound arguments in Boolean circuit complexity, covering many of the gems of this "complexity Waterloo" discovered over the past several decades, right up to results from the last year or two.
Abstract: Boolean circuit complexity is the combinatorics of computer science and involves many intriguing problems that are easy to state and explain, even for the layman. This book is a comprehensive description of basic lower bound arguments, covering many of the gems of this complexity Waterloo that have been discovered over the past several decades, right up to results from the last year or two. Many open problems, marked as Research Problems, are mentioned along the way. The problems are mainly of combinatorial flavor but their solutions could have great consequences in circuit complexity and computer science. The book will be of interest to graduate students and researchers in the fields of computer science and discrete mathematics.


Citations
Book ChapterDOI
14 Aug 2016
TL;DR: Under the Decisional Diffie-Hellman (DDH) assumption, a 2-out-of-2 secret sharing scheme that supports a compact evaluation of branching programs on the shares is presented, along with a public-key variant that enables homomorphic computation on inputs contributed by multiple clients.
Abstract: Under the Decisional Diffie-Hellman (DDH) assumption, we present a 2-out-of-2 secret sharing scheme that supports a compact evaluation of branching programs on the shares. More concretely, there is an evaluation algorithm $\mathsf{Eval}$ with a single bit of output, such that if an input $w \in \{0,1\}^n$ is shared into $(w^0, w^1)$, then for any deterministic branching program $P$ of size $S$ we have that $\mathsf{Eval}(P, w^0) \oplus \mathsf{Eval}(P, w^1) = P(w)$, except with at most $\delta$ failure probability. The running time of the sharing algorithm is polynomial in $n$ and the security parameter $\lambda$, and that of $\mathsf{Eval}$ is polynomial in $S$, $\lambda$, and $1/\delta$. This applies as a special case to boolean formulas of size $S$ or boolean circuits of depth $\log S$. We also present a public-key variant that enables homomorphic computation on inputs contributed by multiple clients. The above result implies the following DDH-based applications:
- A secure 2-party computation protocol for evaluating any branching program or formula of size $S$, where the communication complexity is linear in the input size and only the running time grows with $S$.
- A secure 2-party computation protocol for evaluating layered boolean circuits of size $S$ with communication complexity $O(S/\log S)$.
- A 2-party function secret sharing scheme, as defined by Boyle et al. (Eurocrypt 2015), for general branching programs with inverse polynomial error probability.
- A 1-round 2-server private information retrieval scheme supporting general searches expressed by branching programs.
Prior to our work, similar results could only be achieved using fully homomorphic encryption. We hope that our approach will lead to more practical alternatives to known fully homomorphic encryption schemes in the context of low-communication secure computation.
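A minimal sketch of the share/evaluate interface described in the abstract, with an intentionally insecure placeholder instantiation (all names are illustrative, not from the paper). The real scheme derives shares that individually hide $w$ under DDH; the sketch only demonstrates the correctness contract $\mathsf{Eval}(P, w^0) \oplus \mathsf{Eval}(P, w^1) = P(w)$.

```python
# Toy sketch of the 2-party homomorphic-secret-sharing interface:
# Share(w) -> (w0, w1), and Eval(P, w0) XOR Eval(P, w1) = P(w).
# INSECURE placeholder: share 1 is w in the clear. The actual DDH-based
# construction hides w in each share; it is not reproduced here.

def share(w):
    """Split the input bits w into two shares (placeholder: no hiding)."""
    return [0] * len(w), list(w)

def eval_share(P, share_bits, party):
    """Local evaluation on one share; returns a single output bit."""
    return 0 if party == 0 else P(share_bits)

# A small Boolean formula standing in for a branching program P.
P = lambda x: (x[0] & x[1]) ^ x[2]

w = [1, 1, 0]
w0, w1 = share(w)
assert eval_share(P, w0, 0) ^ eval_share(P, w1, 1) == P(w)  # correctness
```

The substance of the paper is achieving exactly this interface while each share on its own reveals nothing about $w$, which is impossible information-theoretically and is why a computational assumption such as DDH is needed.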

146 citations

Posted Content
TL;DR: Fully homomorphic encryption (FHE) has been dubbed the holy grail of cryptography, an elusive goal which could solve the IT world's problems of security and trust.
Abstract: Fully homomorphic encryption (FHE) has been dubbed the holy grail of cryptography, an elusive goal which could solve the IT world’s problems of security and trust. Research in the area exploded after 2009 when Craig Gentry showed that FHE can be realised in principle. Since that time considerable progress has been made in finding more practical and more efficient solutions. Whilst research quickly developed, terminology and concepts became diverse and confusing so that today it can be difficult to understand what the achievements of different works actually are. The purpose of this paper is to address three fundamental questions: What is FHE? What can FHE be used for? What is the state of FHE today? As well as surveying the field, we clarify different terminology in use and prove connections between different FHE notions.
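To make "homomorphic" concrete: a sketch of a classical *partially* homomorphic scheme, since textbook RSA is multiplicatively homomorphic; FHE extends this idea to arbitrary computations on ciphertexts. The parameters below are tiny and unpadded RSA is insecure; this is a demonstration only.

```python
# Partial homomorphism (not FHE): multiplying textbook-RSA ciphertexts
# multiplies the underlying plaintexts. Toy, insecure parameters.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 6, 7
c = (enc(m1) * enc(m2)) % n          # operate on ciphertexts only...
assert dec(c) == (m1 * m2) % n       # ...and decrypt the product
```

Gentry's 2009 breakthrough, mentioned in the abstract, was showing that both addition and multiplication (and hence arbitrary circuits) can be supported simultaneously.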

136 citations

Journal ArticleDOI
TL;DR: It is shown that positive existential and nonrecursive Datalog rewritings, which do not use extra non-logical symbols, suffer an exponential blowup in the worst case, while first-order rewritings can grow superpolynomially unless NP ⊆ P/poly.

97 citations

Proceedings ArticleDOI
03 Nov 2014
TL;DR: The "core obfuscator" of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013) is optimized: the maximum number of levels of multilinearity required is reduced from O(ℓ·s^3.64) (Barak, Garg, Kalai, Paneth, and Sahai, Eurocrypt 2014) to roughly ℓ·s, where s is the formula size and ℓ the number of input bits.
Abstract: In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs -- this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓ·s^3.64), where s is the size of the Boolean formula to be obfuscated and ℓ ≤ s is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓ·s, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = φ(z, x) where φ is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.
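The objects being obfuscated are deterministic branching programs. A minimal evaluator, with a node representation assumed purely for illustration (the paper's actual encoding is different):

```python
# Evaluate a deterministic branching program. Assumed representation:
# each internal node is (var_index, next_if_0, next_if_1); terminals are
# the booleans True/False.

def eval_bp(nodes, start, x):
    """Follow the program from `start`, branching on one input bit per node."""
    node = start
    while not isinstance(node, bool):
        var, lo, hi = nodes[node]
        node = hi if x[var] else lo
    return int(node)

# Branching program computing (x0 AND x1) OR x2, with three internal nodes:
nodes = {
    0: (0, 2, 1),        # read x0: if 0, only x2 can save us; if 1, test x1
    1: (1, 2, True),     # read x1: if 1 accept, else fall back to x2
    2: (2, False, True), # read x2
}
assert eval_bp(nodes, 0, [1, 1, 0]) == 1
assert eval_bp(nodes, 0, [0, 0, 0]) == 0
```

Barrington's theorem simulates formulas by width-5 *permutation* branching programs at a polynomial blowup; the paper's efficiency gain comes from obfuscating more general branching programs directly and avoiding that overhead.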

92 citations

Journal Article
TL;DR: In this paper, it was shown that deterministic communication complexity can be superlogarithmic in the partition number of the associated communication matrix, and near-optimal deterministic lower bounds were obtained.
Abstract: We show that deterministic communication complexity can be superlogarithmic in the partition number of the associated communication matrix. We also obtain near-optimal deterministic lower bounds fo...

84 citations

References
Book
01 Jan 1977
TL;DR: This book presents an introduction to BCH codes and finite fields, methods for combining codes, and self-dual codes and invariant theory, as well as nonlinear codes, Hadamard matrices, designs, and the Golay code.
Abstract: Linear Codes. Nonlinear Codes, Hadamard Matrices, Designs and the Golay Code. An Introduction to BCH Codes and Finite Fields. Finite Fields. Dual Codes and Their Weight Distribution. Codes, Designs and Perfect Codes. Cyclic Codes. Cyclic Codes: Idempotents and Mattson-Solomon Polynomials. BCH Codes. Reed-Solomon and Justesen Codes. MDS Codes. Alternant, Goppa and Other Generalized BCH Codes. Reed-Muller Codes. First-Order Reed-Muller Codes. Second-Order Reed-Muller, Kerdock and Preparata Codes. Quadratic-Residue Codes. Bounds on the Size of a Code. Methods for Combining Codes. Self-dual Codes and Invariant Theory. The Golay Codes. Association Schemes. Appendix A. Tables of the Best Codes Known. Appendix B. Finite Geometries. Bibliography. Index.
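As a concrete taste of the linear codes treated in the opening chapter, a sketch (assuming numpy) that enumerates the [7,4] Hamming code from a standard systematic generator matrix and verifies its minimum distance of 3; the specific matrix is one conventional choice, not taken from the book.

```python
import itertools
import numpy as np

# The [7,4,3] Hamming code: messages of 4 bits encoded into 7-bit codewords.
# G = [I | A] is a standard systematic generator matrix.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])

codewords = [tuple(np.dot(m, G) % 2)
             for m in itertools.product([0, 1], repeat=4)]
# For a linear code, minimum distance = minimum weight of a nonzero codeword.
min_dist = min(sum(c) for c in codewords if any(c))
print(len(codewords), min_dist)   # 16 codewords, minimum distance 3
```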

10,083 citations

Journal ArticleDOI
TL;DR: In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation.
Abstract: The hope that mathematical methods employed in the investigation of formal logic would lead to purely computational methods for obtaining mathematical theorems goes back to Leibniz and has been revived by Peano around the turn of the century and by Hilbert's school in the 1920's. Hilbert, noting that all of classical mathematics could be formalized within quantification theory, declared that the problem of finding an algorithm for determining whether or not a given formula of quantification theory is valid was the central problem of mathematical logic. And indeed, at one time it seemed as if investigations of this “decision” problem were on the verge of success. However, it was shown by Church and by Turing that such an algorithm can not exist. This result led to considerable pessimism regarding the possibility of using modern digital computers in deciding significant mathematical questions. However, recently there has been a revival of interest in the whole question. Specifically, it has been realized that while no decision procedure exists for quantification theory there are many proof procedures available—that is, uniform procedures which will ultimately locate a proof for any formula of quantification theory which is valid but which will usually involve seeking “forever” in the case of a formula which is not valid—and that some of these proof procedures could well turn out to be feasible for use with modern computing machinery. Hao Wang [9] and P. C. Gilmore [3] have each produced working programs which employ proof procedures in quantification theory. Gilmore's program employs a form of a basic theorem of mathematical logic due to Herbrand, and Wang's makes use of a formulation of quantification theory related to those studied by Gentzen. However, both programs encounter decisive difficulties with any but the simplest formulas of quantification theory, in connection with methods of doing propositional calculus. Wang's program, because of its use of Gentzen-like methods, involves exponentiation on the total number of truth-functional connectives, whereas Gilmore's program, using normal forms, involves exponentiation on the number of clauses present. Both methods are superior in many cases to truth table methods which involve exponentiation on the total number of variables present, and represent important initial contributions, but both run into difficulty with some fairly simple examples. In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation. The superiority of the present procedure over those previously available is indicated in part by the fact that a formula on which Gilmore's routine for the IBM 704 causes the machine to compute for 21 minutes without obtaining a result was worked successfully by hand computation using the present method in 30 minutes. Cf. §6, below. It should be mentioned that, before it can be hoped to employ proof procedures for quantification theory in obtaining proofs of theorems belonging to “genuine” mathematics, finite axiomatizations, which are “short,” must be obtained for various branches of mathematics. This last question will not be pursued further here; cf., however, Davis and Putnam [2], where one solution to this problem is given for ele
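The propositional core of this procedure survives today as the Davis-Putnam-Logemann-Loveland (DPLL) algorithm at the heart of modern SAT solvers. A compact sketch of that descendant (not the paper's original quantification-theory procedure), with clauses encoded as lists of nonzero integer literals:

```python
# Compact DPLL-style satisfiability check: unit propagation plus splitting.
# A clause is a list of nonzero ints (positive = variable, negative = its
# negation); a CNF formula is a list of clauses.

def condition(clauses, lit):
    """Simplify the CNF under the assumption that `lit` is true."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def dpll(clauses, assignment=()):
    # Unit propagation: repeatedly satisfy forced single-literal clauses.
    while True:
        if any(not c for c in clauses):
            return None                     # conflict: empty clause derived
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = condition(clauses, unit)
        assignment += (unit,)
    if not clauses:
        return assignment                   # every clause satisfied
    lit = clauses[0][0]                     # split on an unassigned literal
    for choice in (lit, -lit):
        result = dpll(condition(clauses, choice), assignment + (choice,))
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))    # a satisfying partial assignment
```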

2,743 citations

Journal ArticleDOI
TL;DR: The method yields polynomial algorithms for vertex packing in perfect graphs, for the matching and matroid intersection problems, for optimum covering of directed cuts of a digraph, and for the minimum value of a submodular set function.
Abstract: L. G. Khachiyan recently published a polynomial algorithm to check feasibility of a system of linear inequalities. The method is an adaptation of an algorithm proposed by Shor for non-linear optimization problems. In this paper we show that the method also yields interesting results in combinatorial optimization. Thus it yields polynomial algorithms for vertex packing in perfect graphs; for the matching and matroid intersection problems; for optimum covering of directed cuts of a digraph; for the minimum value of a submodular set function; and for other important combinatorial problems. On the negative side, it yields a proof that weighted fractional chromatic number is NP-hard.
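A bare-bones numerical sketch of the central-cut ellipsoid step underlying Khachiyan's feasibility test: shrink an ellipsoid around the feasible set of Ax ≤ b, using a violated inequality as the separating hyperplane. The function name, starting radius, and iteration cap are illustrative assumptions, and no combinatorial oracle is modeled.

```python
import numpy as np

# Central-cut ellipsoid method for feasibility of Ax <= b (toy version).
# The ellipsoid {z : (z - x)^T P^{-1} (z - x) <= 1} always contains the
# feasible region; each cut roughly shrinks its volume by a constant factor.

def ellipsoid_feasible(A, b, R=10.0, max_iter=1000):
    n = A.shape[1]
    x = np.zeros(n)                   # center of the current ellipsoid
    P = (R ** 2) * np.eye(n)          # start from a ball of radius R
    for _ in range(max_iter):
        violated = np.nonzero(A @ x > b)[0]
        if violated.size == 0:
            return x                  # x satisfies every inequality
        a = A[violated[0]]            # violated row = separating hyperplane
        g = a / np.sqrt(a @ P @ a)
        x = x - (P @ g) / (n + 1)     # shift center into the feasible half
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ g, g @ P))
    return None                       # gave up (or the system is infeasible)

# Feasible region: the square 1 <= x0 <= 2, 1 <= x1 <= 2, written as Ax <= b.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([2., -1., 2., -1.])
print(ellipsoid_feasible(A, b))       # a point inside the square
```

The paper's combinatorial applications replace the explicit list of inequalities with a separation oracle, which is why the method works even when Ax ≤ b has exponentially many rows.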

2,170 citations

Book
01 Jan 1996
TL;DR: This chapter surveys the theory of two-party communication complexity and presents results regarding the following models of computation: • Finite automata • Turing machines • Decision trees • Ordered binary decision diagrams • VLSI chips • Networks of threshold gates.
Abstract: In this chapter we survey the theory of two-party communication complexity. This field of theoretical computer science aims at studying the following, seemingly very simple, scenario: There are two players, Alice, who holds an n-bit string x, and Bob, who holds an n-bit string y. Their goal is to communicate in order to compute the value of some boolean function f(x, y), while exchanging a number of bits which is as small as possible. In the first part of this survey we present, mainly by giving examples, some of the results (and techniques) developed as part of this theory. We put an emphasis on proving lower bounds on the amount of communication that must be exchanged in the above scenario for certain functions f. In the second part of this survey we will exemplify the wide applicability of the results proved in the first part to other areas of computer science. While it is obvious that there are many applications of the results to problems in which communication is involved (e.g., in distributed systems), we concentrate on applications in which communication does not appear explicitly in the statement of the problems. In particular, we present results regarding the following models of computation:
- Finite automata
- Turing machines
- Decision trees
- Ordered binary decision diagrams (OBDDs)
- VLSI chips
- Networks of threshold gates
We provide references to many other issues and applications of communication complexity which are not discussed in this survey.
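One of the basic lower-bound techniques such surveys cover is the rank bound: the deterministic communication complexity satisfies D(f) ≥ log2 rank(M_f), where M_f[x, y] = f(x, y). A sketch (assuming numpy) for the EQUALITY function:

```python
import itertools
import numpy as np

# Rank lower bound on deterministic communication complexity, illustrated on
# EQUALITY over 4-bit strings: M_f[x, y] = 1 iff x == y (the identity matrix).
n = 4
inputs = list(itertools.product([0, 1], repeat=n))
M = np.array([[int(x == y) for y in inputs] for x in inputs])

rank = np.linalg.matrix_rank(M)
print(rank, np.log2(rank))   # rank 16 -> D(EQ) >= 4 bits of communication
```

Since the matrix is the 2^n x 2^n identity, the bound gives D(EQ) ≥ n, essentially matching the trivial (n+1)-bit protocol.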

2,004 citations

Journal ArticleDOI
TL;DR: It is proved that the Shannon zero-error capacity of the pentagon is $\sqrt{5}$, and a well-characterized, and in a sense easily computable, function is introduced which bounds the capacity from above and equals the capacity in a large number of cases.
Abstract: It is proved that the Shannon zero-error capacity of the pentagon is $\sqrt{5}$. The method is then generalized to obtain upper bounds on the capacity of an arbitrary graph. A well-characterized, and in a sense easily computable, function is introduced which bounds the capacity from above and equals the capacity in a large number of cases. Several results are obtained on the capacity of special graphs; for example, the Petersen graph has capacity four and a self-complementary graph with n points and with a vertex-transitive automorphism group has capacity $\sqrt{n}$.
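The function introduced here is now known as the Lovász theta function ϑ(G). It is the optimum of a semidefinite program, so a few lines of numerical code (a sketch assuming the cvxpy package is installed) recover the headline result ϑ(C5) = √5:

```python
import cvxpy as cp
import numpy as np

# Lovász theta of the pentagon C5 via its standard SDP formulation:
#   maximize <J, X>  s.t.  X is PSD, trace(X) = 1, X[i, j] = 0 on edges.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # the 5-cycle

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
constraints += [X[i, j] == 0 for i, j in edges]
theta = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
theta.solve()

print(theta.value, np.sqrt(5))   # both approximately 2.2360
```

Being an SDP optimum, ϑ(G) is computable to arbitrary precision in polynomial time, which is the sense in which the abstract calls it "easily computable" while the capacity itself remains hard to determine.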

1,733 citations