Author

Shachar Lovett

Bio: Shachar Lovett is an academic researcher from the University of California, San Diego. The author has contributed to research in topics: Polynomial & Conjecture. The author has an h-index of 28 and has co-authored 249 publications receiving 3,319 citations. Previous affiliations of Shachar Lovett include Princeton University and the Weizmann Institute of Science.


Papers
Proceedings ArticleDOI
31 May 2009
TL;DR: CPA/CCA-secure symmetric encryption schemes that remain secure with exponentially hard-to-invert auxiliary input are constructed, based on a new cryptographic assumption, Learning Subspace-with-Noise (LSN), which is related to the well-known Learning Parity-with-Noise (LPN) assumption.
Abstract: We study the question of designing cryptographic schemes which are secure even if an arbitrary function f(sk) of the secret key is leaked, as long as the secret key sk is still (exponentially) hard to compute from this auxiliary input. This setting of auxiliary input is more general than the traditional setting, which assumes that some information about the secret key sk may be leaked, but sk still has high min-entropy left. In particular, we deal with situations where f(sk) information-theoretically determines the entire secret key sk. As our main result, we construct CPA/CCA-secure symmetric encryption schemes that remain secure with exponentially hard-to-invert auxiliary input. We give several applications of such schemes. * We construct an average-case obfuscator for the class of point functions, which remains secure with exponentially hard-to-invert auxiliary input, and is reusable. * We construct a reusable and robust extractor that remains secure with exponentially hard-to-invert auxiliary input. Our results rely on a new cryptographic assumption, Learning Subspace-with-Noise (LSN), which is related to the well-known Learning Parity-with-Noise (LPN) assumption.
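To make the Learning Parity-with-Noise assumption referenced above concrete, here is a minimal Python sketch of how LPN samples are generated; the function names, noise rate, and parameters are illustrative choices of ours, not the paper's.

```python
import secrets

def lpn_sample(s, noise_num=1, noise_den=8):
    """One LPN sample (a, <a, s> + e mod 2) for a secret s in {0,1}^n."""
    n = len(s)
    a = [secrets.randbelow(2) for _ in range(n)]               # uniform public vector
    e = 1 if secrets.randbelow(noise_den) < noise_num else 0   # Bernoulli(1/8) noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2         # noisy parity bit
    return a, b

secret = [secrets.randbelow(2) for _ in range(128)]
samples = [lpn_sample(secret) for _ in range(256)]
# LPN posits that recovering `secret` from such noisy parities is hard; the
# paper's LSN assumption is a related subspace variant.
```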

252 citations

Proceedings ArticleDOI
31 May 2014
TL;DR: The first efficient, multi-bit, information-theoretically secure non-malleable code in the split-state model is constructed in this paper, via a new property of the inner-product function: the joint distribution of the inner products of tampered shares is close to a convex combination of affine distributions.
Abstract: Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of "tampering functions" F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so-called split-state model. Here the message m is encoded into two shares L and R, and the attacker is allowed to arbitrarily tamper with L and R individually. Split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model. Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but prior constructions either (1) worked in the random oracle model [16], (2) relied on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [26], or (3) could only encode 1-bit messages [14]. As our main result, we build the first efficient, multi-bit, information-theoretically secure non-malleable code in the split-state model. The heart of our construction uses the following new property of the inner-product function $\langle L, R \rangle$ over the vector space $\mathbb{F}_p^n$ (for a prime $p$ and large enough dimension $n$): if $L$ and $R$ are uniformly random over $\mathbb{F}_p^n$, and $f, g : \mathbb{F}_p^n \to \mathbb{F}_p^n$ are two arbitrary functions on $L$ and $R$, then the joint distribution $(\langle L, R \rangle, \langle f(L), g(R) \rangle)$ is "close" to a convex combination of "affine distributions" $\{(U, aU + b) : a, b \in \mathbb{F}_p\}$, where $U$ is uniformly random in $\mathbb{F}_p$. In turn, the proof of this surprising property of the inner-product function critically relies on some results from additive combinatorics, including the so-called Quasi-polynomial Freiman-Ruzsa Theorem, which was recently established by Sanders [29] as a step towards resolving the Polynomial Freiman-Ruzsa conjecture [21].
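To isolate just the inner-product layer of this construction, here is a hedged Python sketch that encodes a message m in F_p as two shares L, R in F_p^n with <L, R> = m; tampering with L and R separately then perturbs the decoded value as in the affine-distribution property above. The parameters are toy values, and the paper's actual code wraps this layer in further machinery.

```python
import random

P, N = 101, 8  # a small prime p and dimension n, for illustration only

def encode(m):
    """Sample shares (L, R) uniformly subject to <L, R> = m (mod p)."""
    while True:
        L = [random.randrange(P) for _ in range(N)]
        if any(L):                      # need a nonzero coordinate to adjust
            break
    R = [random.randrange(P) for _ in range(N)]
    i = next(j for j, x in enumerate(L) if x != 0)
    delta = (m - sum(l * r for l, r in zip(L, R))) % P
    R[i] = (R[i] + delta * pow(L[i], -1, P)) % P   # force <L, R> = m
    return L, R

def decode(L, R):
    return sum(l * r for l, r in zip(L, R)) % P

L, R = encode(42)
assert decode(L, R) == 42
```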

99 citations

Journal ArticleDOI
TL;DR: In this article, a new randomized algorithm is proposed that finds a coloring as in Spencer's result via a restricted random walk called Edge-Walk; the algorithm is truly constructive in that it does not appeal to existential arguments.
Abstract: Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer [Trans. Amer. Math. Soc., 289 (1985), pp. 679--706]: In any system of $n$ sets in a universe of size $n$, there always exists a coloring which achieves discrepancy $6\sqrt{n}$. The original proof of Spencer was existential in nature and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal [Proceedings of FOCS, 2010, pp. 3--10] gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure. In this work we give a new randomized algorithm to find a coloring as in Spencer's result based on a restricted random walk we call Edge-Walk. Our algorithm and its analysis use only basic linear algebra and are truly constructive in that they do not appeal to existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.
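For reference, the six standard deviations bound quoted in the abstract can be written in standard discrepancy notation; the formalization below is ours, for sets $S_1,\dots,S_n \subseteq \{1,\dots,n\}$.

```latex
% Spencer's theorem: some coloring x in {-1,+1}^n achieves discrepancy
% at most 6*sqrt(n) simultaneously for all n sets.
\[
  \min_{x \in \{-1,+1\}^n} \; \max_{1 \le j \le n}
  \Bigl| \sum_{i \in S_j} x_i \Bigr| \;\le\; 6\sqrt{n}.
\]
```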

98 citations

Proceedings ArticleDOI
20 Oct 2012
TL;DR: A new randomized algorithm to find a coloring as in Spencer's result, based on a restricted random walk called Edge-Walk, is given, yielding a new proof of Spencer's theorem and the partial coloring lemma.
Abstract: Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): In any system of $n$ sets in a universe of size $n$, there always exists a coloring which achieves discrepancy $6\sqrt{n}$. The original proof of Spencer was existential in nature, and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. His algorithm was based on an SDP relaxation of the discrepancy problem and a clever rounding procedure. In this work we give a new randomized algorithm to find a coloring as in Spencer's result based on a restricted random walk we call Edge-Walk. Our algorithm and its analysis use only basic linear algebra and are "truly" constructive in that they do not appeal to existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.
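The numpy sketch below conveys the flavor of such a restricted walk: take small Gaussian steps, but project each step away from every nearly tight cube face and nearly tight set constraint. The step size, thresholds, iteration count, and implicit stopping rule are illustrative choices of ours and differ from the parameters in the paper's analysis.

```python
import numpy as np

def edge_walk(A, c, steps=4000, gamma=0.01, delta=0.05, rng=None):
    """A restricted Gaussian walk in the spirit of Edge-Walk.

    A: m x n incidence matrix of the set system (rows are sets).
    c: per-set discrepancy budgets; the walk freezes directions that would
       cross |(A x)_j| = c_j or a cube face |x_i| = 1.
    Returns a fractional partial coloring x in [-1, 1]^n.
    """
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(steps):
        tight_vars = np.where(np.abs(x) >= 1 - delta)[0]
        tight_sets = np.where(np.abs(A @ x) >= c - delta)[0]
        rows = [A[j] for j in tight_sets] + [np.eye(n)[i] for i in tight_vars]
        if rows:  # project the step orthogonally to all tight constraints
            Q, _ = np.linalg.qr(np.array(rows).T)
            step = (np.eye(n) - Q @ Q.T) @ rng.standard_normal(n)
        else:
            step = rng.standard_normal(n)
        x = np.clip(x + gamma * step, -1, 1)
    return x

A = np.random.default_rng(1).integers(0, 2, size=(32, 32))
x = edge_walk(A, c=6 * np.sqrt(32) * np.ones(32))  # budgets echoing 6*sqrt(n)
```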

96 citations

Journal Article
TL;DR: In this paper, a new method is developed to prove communication lower bounds for composed functions of the form $f \circ g^n$, where $f$ is any boolean function on $n$ inputs and $g$ is a sufficiently hard two-party gadget.
Abstract: We develop a new method to prove communication lower bounds for composed functions of the form $f\circ g^n$, where $f$ is any boolean function on $n$ inputs and $g$ is a sufficiently “hard” two-party gadget. Our main structure theorem states that each rectangle in the communication matrix of $f \circ g^n$ can be simulated by a nonnegative combination of juntas. This is a new formalization for the intuition that each low-communication randomized protocol can only “query” a few inputs of $f$ as encoded by the gadget $g$. Consequently, we characterize the communication complexity of $f\circ g^n$ in all known one-sided (i.e., not closed under complement) zero-communication models by a corresponding query complexity measure of $f$. These models in turn capture important lower bound techniques such as corruption, smooth rectangle bound, relaxed partition bound, and extended discrepancy. As applications, we resolve several open problems from prior work. We show that $\mathsf{SBP}^{\sf cc}$ (a class characterized...
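To make the composed-function setup concrete, the small Python example below prints the communication matrix of $f \circ g^n$ for a toy instance: $f$ is XOR on $n = 2$ bits and $g$ is a 2-bit inner-product gadget. These particular choices of $f$ and $g$ are ours for illustration; the paper's results apply to any boolean $f$ composed with a sufficiently hard gadget.

```python
from itertools import product

def g(x, y):                 # two-party gadget: inner product of 2-bit blocks
    return (x[0] & y[0]) ^ (x[1] & y[1])

def f(bits):                 # outer function on n = 2 inputs (here: XOR)
    return bits[0] ^ bits[1]

blocks = list(product([0, 1], repeat=2))
inputs = list(product(blocks, repeat=2))   # each party holds n gadget blocks
# M[X][Y] = f(g(x_1, y_1), ..., g(x_n, y_n)); rectangles of this matrix are
# the objects the structure theorem simulates by nonnegative juntas.
M = [[f(tuple(g(x, y) for x, y in zip(X, Y))) for Y in inputs] for X in inputs]
for row in M:
    print("".join(map(str, row)))
```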

79 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveal a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Book ChapterDOI
04 Oct 2019
TL;DR: A computational complexity theory of the "knowledge" contained in a proof is developed, and zero-knowledge proofs, which convey no knowledge beyond the correctness of the proposition, are given for quadratic residuosity and quadratic nonresiduosity.
Abstract: Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the "knowledge" contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.
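As a concrete illustration, here is a toy Python simulation of one round of the quadratic-residuosity proof system in the spirit of this paper: the prover knows a square root w of the public value x modulo N and convinces the verifier without revealing it. The modulus and witness below are tiny illustrative values; real instantiations use large composites.

```python
import random

N = 3233            # toy modulus (61 * 53), for illustration only
w = 123             # prover's secret witness
x = (w * w) % N     # public claim: x is a quadratic residue mod N

def one_round():
    r = random.randrange(1, N)                    # prover's fresh randomness
    u = (r * r) % N                               # commitment: a random residue
    b = random.randrange(2)                       # verifier's challenge bit
    z = (r * pow(w, b, N)) % N                    # response: r or r*w mod N
    return (z * z) % N == (u * pow(x, b, N)) % N  # verifier's check

# k rounds bound a cheating prover's success probability by 2^-k, while each
# transcript (u, b, z) can be simulated without knowing w (zero knowledge).
print(all(one_round() for _ in range(20)))
```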

1,962 citations

Journal Article
TL;DR: In this paper, the authors consider the question of determining whether a function f has property P or is ε-far from any function with property P; in some cases, the algorithm is also allowed to query f on instances of its choice.
Abstract: In this paper, we consider the question of determining whether a function f has property P or is ε-far from any function with property P. A property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. In some cases, it is also allowed to query f on instances of its choice. We study this question for different properties and establish some connections to problems in learning theory and approximation. In particular, we focus our attention on testing graph properties. Given access to a graph G in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k-colorable, or having a ρ-clique (clique of density ρ with respect to the vertex set). Our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph.
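A hedged sketch of this style of tester, specialized to bipartiteness: sample a constant number of vertices, query all pairs through the edge oracle, and accept iff the sampled induced subgraph is bipartite. The sample size and repetition count below are illustrative, not the constants from the paper.

```python
import random

def induced_is_bipartite(adj, verts):
    """2-color the induced subgraph by DFS; fail on an odd cycle."""
    color = {}
    for s in verts:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in verts:
                if adj(u, v):
                    if v not in color:
                        color[v] = 1 - color[u]
                        stack.append(v)
                    elif color[v] == color[u]:
                        return False
    return True

def test_bipartite(adj, n, sample_size=20, repeats=10):
    return all(
        induced_is_bipartite(adj, random.sample(range(n), min(sample_size, n)))
        for _ in range(repeats)
    )

complete = lambda u, v: u != v                        # far from bipartite
complete_bipartite = lambda u, v: (u % 2) != (v % 2)  # bipartite
print(test_bipartite(complete, 100), test_bipartite(complete_bipartite, 100))
```

Note that the number of queries depends only on the sample size and not on n, mirroring the abstract's claim of query complexity independent of the graph size.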

870 citations

Book
05 Jun 2014
TL;DR: This text gives a thorough overview of Boolean functions, beginning with the most basic definitions and proceeding to advanced topics such as hypercontractivity and isoperimetry, with each chapter including a "highlight application" such as Arrow's theorem from economics.
Abstract: Boolean functions are perhaps the most basic objects of study in theoretical computer science. They also arise in other areas of mathematics, including combinatorics, statistical physics, and mathematical social choice. The field of analysis of Boolean functions seeks to understand them via their Fourier transform and other analytic methods. This text gives a thorough overview of the field, beginning with the most basic definitions and proceeding to advanced topics such as hypercontractivity and isoperimetry. Each chapter includes a "highlight application" such as Arrow's theorem from economics, the Goldreich-Levin algorithm from cryptography/learning theory, Håstad's NP-hardness of approximation results, and "sharp threshold" theorems for random graph properties. The book includes roughly 450 exercises and can be used as the basis of a one-semester graduate course. It should appeal to advanced undergraduates, graduate students, and researchers in computer science theory and related mathematical fields.
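As a small taste of the Fourier machinery the book develops, the brute-force Python sketch below computes the Walsh-Fourier coefficients $\hat{f}(S) = \mathrm{E}_x[f(x)\prod_{i \in S} x_i]$ of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$; the example function and all names are ours.

```python
import math
from itertools import combinations, product

def fourier_coefficients(f, n):
    """All 2^n coefficients fhat(S) = E_x[f(x) * prod_{i in S} x_i]."""
    points = list(product([-1, 1], repeat=n))
    return {
        S: sum(f(x) * math.prod(x[i] for i in S) for x in points) / len(points)
        for k in range(n + 1)
        for S in combinations(range(n), k)
    }

maj3 = lambda x: 1 if sum(x) > 0 else -1      # majority of 3 bits
for S, c in fourier_coefficients(maj3, 3).items():
    if c:
        print(S, c)
# Majority is odd, so only odd-size S survive: each singleton has weight 1/2
# and the full set has weight -1/2, so Parseval gives 3*(1/2)**2 + (1/2)**2 = 1.
```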

867 citations

Journal ArticleDOI
TL;DR: This review covers protocols of quantum key distribution based on discrete-variable systems, then considers aspects of device independence, satellite challenges, and high-rate protocols based on continuous-variable systems.
Abstract: Quantum cryptography is arguably the fastest growing area in quantum information science. Novel theoretical protocols are designed on a regular basis, security proofs are constantly improving, and experiments are gradually moving from proof-of-principle lab demonstrations to in-field implementations and technological prototypes. In this paper, we provide both a general introduction and a state-of-the-art description of the recent advances in the field, both theoretical and experimental. We start by reviewing protocols of quantum key distribution based on discrete-variable systems. Next we consider aspects of device independence, satellite challenges, and protocols based on continuous-variable systems. We will then discuss the ultimate limits of point-to-point private communications and how quantum repeaters and networks may overcome these restrictions. Finally, we will discuss some aspects of quantum cryptography beyond standard quantum key distribution, including quantum random number generators and quantum digital signatures.
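For a concrete sense of the discrete-variable protocols the review opens with, here is a toy classical simulation of the basis-sifting step of BB84. It models only basis choice and sifting, with no eavesdropper, error correction, or privacy amplification, and all names and parameters are ours.

```python
import secrets

def bb84_sift(n_qubits=1000):
    alice_bits  = [secrets.randbelow(2) for _ in range(n_qubits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_qubits)]   # 0=Z, 1=X
    bob_bases   = [secrets.randbelow(2) for _ in range(n_qubits)]
    # Bob reads the correct bit when bases match, a coin flip otherwise.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: bases are compared publicly; mismatched positions are discarded.
    return [(a, b)
            for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
            if ab == bb]

sifted = bb84_sift()
print(len(sifted), all(a == b for a, b in sifted))  # ~500 positions, all equal
```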

769 citations