
Showing papers presented at "Theory of Cryptography Conference in 2016"


Book ChapterDOI
31 Oct 2016
TL;DR: This work presents an alternative formulation of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs; with this reformulation, it proves sharper quantitative results, establishes lower bounds, and raises a few new questions.
Abstract: "Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy".
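The Rényi divergence at the heart of this reformulation has a simple closed form for discrete distributions, $$D_\alpha(P\Vert Q) = \frac{1}{\alpha-1}\log \sum_x P(x)^\alpha Q(x)^{1-\alpha}$$. As a hedged illustration (not from the paper), the sketch below evaluates it for randomized response, a basic ε-differentially private mechanism; the function name and parameters are ours.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(P || Q) = log(sum_x p(x)^alpha * q(x)^(1-alpha)) / (alpha - 1),
    for discrete distributions given as probability arrays, alpha > 1."""
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

# Randomized response on one bit: report the true bit w.p. e^eps / (1 + e^eps).
eps = 1.0
keep = np.exp(eps) / (1.0 + np.exp(eps))
p = np.array([keep, 1.0 - keep])  # output distribution on input 0
q = np.array([1.0 - keep, keep])  # output distribution on neighboring input 1

# For this mechanism D_alpha grows with alpha and is capped by eps,
# the maximum log-likelihood ratio between the two distributions.
divs = [renyi_divergence(p, q, a) for a in (1.5, 2.0, 10.0)]
assert all(d <= eps for d in divs)
assert divs[0] <= divs[1] <= divs[2]
```

Running an algorithm on two neighboring inputs plays the role of `p` and `q` here; the paper's definitions bound this divergence simultaneously over all orders α.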

524 citations


Book ChapterDOI
31 Oct 2016
TL;DR: In this article, the authors present a multi-party computation protocol for the dishonest-majority case with very low round complexity; it sits philosophically between Gentry's Fully-Homomorphic-Encryption-based protocol and the SPDZ-BMR protocol of Lindell et al. (CRYPTO 2015).
Abstract: We present a multi-party computation protocol in the case of dishonest majority which has very low round complexity. Our protocol sits philosophically between Gentry's Fully Homomorphic Encryption based protocol and the SPDZ-BMR protocol of Lindell et al. (CRYPTO 2015). Our protocol avoids various inefficiencies of the previous two protocols. Compared to Gentry's protocol we only require Somewhat Homomorphic Encryption (SHE). In comparison to the SPDZ-BMR protocol we require only quadratic complexity in the number of players as opposed to cubic, we have fewer rounds, and we require fewer proofs of correctness of ciphertexts. Additionally, we present a variant of our protocol which trades the depth of the garbling circuit computed using SHE for some more multiplications in the offline and online phases.

356 citations


Book ChapterDOI
31 Oct 2016
TL;DR: In this paper, the authors define an interactive oracle proof (IOP) to be an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to them and may probabilistically query them.
Abstract: We initiate the study of a proof system model that naturally combines interactive proofs (IPs) and probabilistically checkable proofs (PCPs), and generalizes interactive PCPs (which consist of a PCP followed by an IP). We define an interactive oracle proof (IOP) to be an interactive proof in which the verifier is not required to read the prover's messages in their entirety; rather, the verifier has oracle access to the prover's messages, and may probabilistically query them. IOPs retain the expressiveness of PCPs, capturing NEXP rather than only PSPACE, and also the flexibility of IPs, allowing multiple rounds of communication with the prover. IOPs have already found several applications, including unconditional zero knowledge [BCGV16], constant-rate constant-query probabilistic checking [BCG+16], and doubly-efficient constant-round IPs for polynomial-time bounded-space computations [RRR16]. We offer two main technical contributions. First, we give a compiler that maps any public-coin IOP into a non-interactive proof in the random oracle model. We prove that the soundness of the resulting proof is tightly characterized by the soundness of the IOP against state restoration attacks, a class of rewinding attacks on the IOP verifier that is reminiscent of, but incomparable to, resetting attacks. Second, we study the notion of state-restoration soundness of an IOP: we prove tight upper and lower bounds in terms of the IOP's standard soundness and round complexity; and describe a simple adversarial strategy that is optimal, in expectation, across all state restoration attacks. Our compiler can be viewed as a generalization of the Fiat-Shamir paradigm for public-coin IPs (CRYPTO '86), and of the "CS proof" constructions of Micali (FOCS '94) and Valiant (TCC '08) for PCPs.
Our analysis of the compiler gives, in particular, a unified understanding of these constructions, and also motivates the study of state restoration attacks, not only for IOPs, but also for IPs and PCPs. When applied to known IOP constructions, our compiler implies, e.g., blackbox unconditional ZK proofs in the random oracle model with quasilinear prover and polylogarithmic verifier, improving on a result of [IMSX15].
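For readers unfamiliar with the paradigm being generalized, here is a minimal sketch of classical Fiat-Shamir applied to Schnorr's public-coin protocol: the verifier's random challenge is replaced by a hash of the transcript, modeling the random oracle. The group parameters below are toy values chosen for illustration only, not for security.

```python
import hashlib
import random

# Toy Schnorr group: p = 2q + 1 with p, q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def challenge(g, y, t):
    """Fiat-Shamir: derive the challenge by hashing the transcript so far."""
    digest = hashlib.sha256(f"{g},{y},{t}".encode()).digest()
    return int.from_bytes(digest, "big") % q

x = random.randrange(1, q)   # prover's secret (a discrete log)
y = pow(g, x, p)             # public key

r = random.randrange(1, q)   # prover's randomness
t = pow(g, r, p)             # commitment (first message)
c = challenge(g, y, t)       # hash replaces the verifier's random coin
s = (r + c * x) % q          # response

# The non-interactive proof is (t, s); the verifier recomputes c and checks:
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The compiler in the paper plays the analogous game for multi-round IOPs, which is why its soundness analysis must handle the richer class of state restoration attacks.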

181 citations


Book ChapterDOI
10 Jan 2016
TL;DR: Onion ORAM is the first concrete instantiation of a constant-bandwidth-blowup ORAM under standard assumptions; the authors also propose novel techniques to achieve security against a malicious server without resorting to expensive and non-standard tools such as SNARKs.
Abstract: We present Onion ORAM, an Oblivious RAM (ORAM) with constant worst-case bandwidth blowup that leverages poly-logarithmic server computation to circumvent the logarithmic lower bound on ORAM bandwidth blowup. Our construction does not require fully homomorphic encryption, but employs an additively homomorphic encryption scheme such as the Damgård-Jurik cryptosystem, or alternatively a BGV-style somewhat homomorphic encryption scheme without bootstrapping. At the core of our construction is an ORAM scheme that has "shallow circuit depth" over the entire history of ORAM accesses. We also propose novel techniques to achieve security against a malicious server, without resorting to expensive and non-standard techniques such as SNARKs. To the best of our knowledge, Onion ORAM is the first concrete instantiation of a constant bandwidth blowup ORAM under standard assumptions (even for the semi-honest setting).
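To make the additive homomorphism concrete, the toy sketch below implements Paillier encryption (the s = 1 case of the Damgård-Jurik family) with deliberately tiny, insecure parameters of our choosing: multiplying two ciphertexts modulo n² decrypts to the sum of the plaintexts, which is the kind of server-side operation Onion ORAM leans on.

```python
import math
import random

# Toy Paillier parameters -- far too small to be secure; illustration only.
p_, q_ = 293, 433                # small primes
n = p_ * q_
n2 = n * n
lam = math.lcm(p_ - 1, q_ - 1)   # Carmichael-style private exponent
g = n + 1                        # standard generator choice

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n; with g = n + 1, L(g^lam mod n^2) = lam, so mu = lam^-1.
    mu = pow(lam, -1, n)
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(40), encrypt(2)
assert decrypt((c1 * c2) % n2) == 42   # ciphertext product = plaintext sum
```

Raising higher powers of n in the modulus (Damgård-Jurik with s > 1) extends the plaintext space, which is what allows the "onion" layers of the construction.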

153 citations


Book ChapterDOI
31 Oct 2016
TL;DR: In this paper, a hybrid encryption scheme is presented that is chosen-ciphertext secure in the quantum random oracle model. It combines an asymmetric and a symmetric encryption scheme that are secure only in a weak sense, via a slight modification of the Fujisaki-Okamoto transform, whose original security proof holds only against classical adversaries.
Abstract: In this paper, we present a hybrid encryption scheme that is chosen ciphertext secure in the quantum random oracle model. Our scheme is a combination of an asymmetric and a symmetric encryption scheme that are secure in a weak sense. It is a slight modification of the Fujisaki-Okamoto transform that is secure against classical adversaries. In addition, we modify the OAEP-cryptosystem and prove its security in the quantum random oracle model based on the existence of a partial-domain one-way injective function secure against quantum adversaries.

116 citations


Book ChapterDOI
31 Oct 2016
TL;DR: Two multi-key FHE schemes are constructed, based on LWE assumptions, which are multi-hop for keys: the output of a homomorphic computation on ciphertexts encrypted under a set of keys can be used in further homomorphic computations involving additional keys, and so on.
Abstract: Traditional fully homomorphic encryption (FHE) schemes only allow computation on data encrypted under a single key. Lopez-Alt, Tromer, and Vaikuntanathan (STOC 2012) proposed the notion of multi-key FHE, which allows homomorphic computation on ciphertexts encrypted under different keys, and also gave a construction based on a somewhat nonstandard assumption related to NTRU. More recently, Clear and McGoldrick (CRYPTO 2015), followed by Mukherjee and Wichs (EUROCRYPT 2016), proposed a multi-key FHE that builds upon the LWE-based FHE of Gentry, Sahai, and Waters (CRYPTO 2013). However, unlike the original construction of Lopez-Alt et al., these later LWE-based schemes have the somewhat undesirable property of being "single-hop for keys:" all relevant keys must be known at the start of the homomorphic computation, and the output cannot be usefully combined with ciphertexts encrypted under other keys unless an expensive "bootstrapping" step is performed. In this work we construct two multi-key FHE schemes, based on LWE assumptions, which are multi-hop for keys: the output of a homomorphic computation on ciphertexts encrypted under a set of keys can be used in further homomorphic computation involving additional keys, and so on. Moreover, incorporating ciphertexts associated with new keys is a relatively efficient "native" operation akin to homomorphic multiplication, and does not require bootstrapping, in contrast with all other LWE-based solutions. Our systems also have smaller ciphertexts than the previous LWE-based ones; in fact, ciphertexts in our second construction are simply GSW ciphertexts with no auxiliary data.

103 citations


Book ChapterDOI
31 Oct 2016
TL;DR: This work gives a new iO candidate, a small modification or generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters, and proves its security in the weak multilinear map model, yielding the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13.
Abstract: All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security available for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure only reveals whether an encoded element is 0, and nothing more. However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai, and Zhandry (CRYPTO '16) recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi (EUROCRYPT '13), and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13. In this work, we give a new iO candidate which can be seen as a small modification or generalization of the original candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS '13). We prove its security in the weak multilinear map model, thus giving the first iO candidate that is provably secure against all known polynomial-time attacks on GGH13. The proof of security relies on a new assumption about the hardness of computing annihilating polynomials, and we show that this assumption is implied by the existence of pseudorandom functions in NC$$^1$$.

99 citations


Book ChapterDOI
10 Jan 2016
TL;DR: Most prior functional encryption schemes for general circuits are proved only in the weaker selective security model, where the adversary is forced to specify its target before seeing the public parameters; full security can be obtained, but at the cost of an exponential loss in the security reduction.
Abstract: Previously known functional encryption (FE) schemes for general circuits relied on indistinguishability obfuscation, which in turn either relies on an exponential number of assumptions (basically, one per circuit), or a polynomial set of assumptions, but with an exponential loss in the security reduction. Additionally, most of these schemes are proved in the weaker selective security model, where the adversary is forced to specify its target before seeing the public parameters. For these constructions, full security can be obtained, but at the cost of an exponential loss in the security reduction.

98 citations


Book ChapterDOI
10 Jan 2016
TL;DR: In this paper, the authors show that learning the secret and distinguishing samples from random strings is at least as hard for LWR as it is for LWE for efficient algorithms, provided the number of samples is no larger than O(q/(Bp)), where q is the LWR modulus, p is the rounding modulus, and the noise is sampled from any distribution supported over the set {-B, ..., B}.
Abstract: We show the following reductions from the learning with errors problem (LWE) to the learning with rounding problem (LWR): (1) learning the secret and (2) distinguishing samples from random strings is at least as hard for LWR as it is for LWE for efficient algorithms if the number of samples is no larger than $$O(q/(Bp))$$, where q is the LWR modulus, p is the rounding modulus, and the noise is sampled from any distribution supported over the set $$\{-B,\ldots ,B\}$$. Our second result generalizes a theorem of Alwen, Krenn, Pietrzak, and Wichs (CRYPTO 2013) and provides an alternate proof of it. Unlike Alwen et al., we do not impose any number-theoretic restrictions on the modulus q. The first result also extends to variants of LWR and LWE over polynomial rings. The above reductions are sample preserving and run in time $$\mathrm{poly}(n,q,m)$$. As additional results we show that (3) distinguishing any number of LWR samples from random strings is of equivalent hardness to LWE whose noise distribution is uniform over the integers in the range $$[-q/2p,\ldots ,q/2p)$$ provided q is a multiple of p, and (4) the "noise flooding" technique for converting faulty LWE noise to a discrete Gaussian distribution can be applied whenever $$q = \varOmega (B\sqrt{m})$$.
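The deterministic rounding that defines LWR, and the intuition that a large ratio q/p lets the rounding absorb small LWE noise, can be seen numerically. The sketch below uses toy parameters of our choosing (nothing like secure sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, p, B = 16, 32, 3329, 17, 2   # toy values: q/p ~ 196 >> B

s = rng.integers(0, q, n)             # secret vector
A = rng.integers(0, q, (m, n))        # public matrix of sample vectors

# LWR sample: b = round((p/q) * <a, s>) mod p (deterministic rounding).
b_lwr = np.round((A @ s % q) * p / q).astype(int) % p

# LWE-style sample with noise e in {-B, ..., B}, rounded the same way.
e = rng.integers(-B, B + 1, m)
b_lwe = np.round(((A @ s + e) % q) * p / q).astype(int) % p

# When q/(Bp) is large, the noise rarely crosses a rounding boundary,
# so the rounded LWE samples mostly agree with the LWR samples --
# the intuition behind the LWE-to-LWR reductions above.
agreement = float(np.mean(b_lwr == b_lwe))
```

The reductions in the paper make this intuition precise, bounding the number of samples for which the two distributions remain computationally close.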

87 citations


Book ChapterDOI
10 Jan 2016
TL;DR: The PRAM model captures modern multi-core architectures and cluster computing models, where several processors execute in parallel and make accesses to shared memory, and provides the “best of both” circuit and RAM models, supporting both cheap random access and parallelism.
Abstract: We initiate the study of cryptography for parallel RAM (PRAM) programs. The PRAM model captures modern multi-core architectures and cluster computing models, where several processors execute in parallel and make accesses to shared memory, and provides the “best of both” circuit and RAM models, supporting both cheap random access and parallelism.

85 citations


Book ChapterDOI
10 Jan 2016
TL;DR: This work constructs trapdoor permutations based on sub-exponential indistinguishability obfuscation and one-way functions, thereby providing the first candidate that is not based on the hardness of factoring, and shows that even highly structured primitives can be potentially based on hardness assumptions with noisy structures.
Abstract: We construct trapdoor permutations based on sub-exponential indistinguishability obfuscation and one-way functions, thereby providing the first candidate that is not based on the hardness of factoring. Our construction shows that even highly structured primitives, such as trapdoor permutations, can potentially be based on hardness assumptions with noisy structures, such as those used in candidate constructions of indistinguishability obfuscation. It also suggests a possible way to construct trapdoor permutations that resist quantum attacks, and whose hardness may be based on problems outside the complexity class $$\text{SZK}$$ -- indeed, while factoring-based candidates do not possess such security, future constructions of indistinguishability obfuscation might. As a corollary, we eliminate the need to assume trapdoor permutations and injective one-way functions in many recent constructions based on indistinguishability obfuscation.

Book ChapterDOI
10 Jan 2016
TL;DR: In this paper, the authors propose a new relaxed but still information-theoretic security property for pair encodings, together with a generic construction of predicate encryption in prime-order groups from it; the construction yields either semi-adaptive or full security depending on the encoding, under SXDH or DLIN.
Abstract: Pair encodings and predicate encodings, recently introduced by Attrapadung [2] and Wee [36] respectively, greatly simplify the process of designing and analyzing predicate and attribute-based encryption schemes. However, they are still somewhat limited in that they are restricted to composite order groups, and the information theoretic properties are not sufficient to argue about many of the schemes. Here we focus on pair encodings, as the more general of the two. We first study the structure of these objects, then propose a new relaxed but still information theoretic security property. Next we show a generic construction for predicate encryption in prime order groups from our new property; it results in either semi-adaptive or full security depending on the encoding, and gives security under SXDH or DLIN. Finally, we demonstrate the range of our new property by using it to design the first semi-adaptively secure CP-ABE scheme with constant size ciphertexts.

Book ChapterDOI
31 Oct 2016
TL;DR: In this article, it was shown that subexponentially secure secret-key functional encryption is powerful enough to construct indistinguishability obfuscation, under the additional assumption that subexponentially secure plain public-key encryption exists.
Abstract: Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO) while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key crypto, but on the other hand, we do not know how to construct it without the heavy hammers in obfustopia. In this work, we show that subexponentially secure secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of subexponentially secure plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia. On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get "exponentially-efficient indistinguishability obfuscation" (XIO), a notion recently introduced by Lin et al. (PKC '16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme. Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS '15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption.
We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.

Book ChapterDOI
10 Jan 2016
TL;DR: This paper proposes a classification of cryptographic hardness assumptions and reviews recently suggested assumptions in this light; its governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.
Abstract: The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yardstick for the value of our proposals. Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions which utilize them. We believe that the lack of standards of what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for measures according to which we classify and compare assumptions, as to which are safe and which are not. In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows the footsteps of Naor (CRYPTO 2003). Our governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.

Book ChapterDOI
10 Jan 2016
TL;DR: This paper considers the natural setting of Oblivious Parallel RAM (OPRAM), recently introduced by Boyle, Chung, and Pass (TCC 2016A), where m clients simultaneously access in parallel the storage server.
Abstract: Oblivious RAM (ORAM) garbles read/write operations by a client (to access a remote storage server or a random-access memory) so that an adversary observing the garbled access sequence cannot infer any information about the original operations, other than their overall number. This paper considers the natural setting of Oblivious Parallel RAM (OPRAM) recently introduced by Boyle, Chung, and Pass (TCC 2016A), where m clients simultaneously access in parallel the storage server. The clients are additionally connected via point-to-point links to coordinate their accesses. However, this additional inter-client communication must also remain oblivious.

Book ChapterDOI
10 Jan 2016
TL;DR: In this paper, the authors construct an adaptively secure functional encryption for Turing machines scheme, based on indistinguishability obfuscation for circuits, which is the first secure garbling scheme for circuits in the standard model.
Abstract: In this work, we construct an adaptively secure functional encryption for Turing machines scheme, based on indistinguishability obfuscation for circuits. Our work places no restrictions on the types of Turing machines that can be associated with each secret key, in the sense that the Turing machines can accept inputs of unbounded length, and there is no limit to the description size or the space complexity of the Turing machines. Prior to our work, only special cases of this result were known, or stronger assumptions were required. More specifically, previous work implicitly achieved selectively secure FE for Turing machines with a-priori bounded input based on indistinguishability obfuscation (STOC 2015), or achieved FE for general Turing machines only based on knowledge-type assumptions such as public-coin differing-inputs obfuscation (TCC 2015). A consequence of our result is the first construction of succinct adaptively secure garbling schemes (even for circuits) in the standard model. Prior succinct garbling schemes (even for circuits) were only known to be adaptively secure in the random oracle model.

Book ChapterDOI
31 Oct 2016
TL;DR: The study of Access Control Encryption (ACE) is initiated, a novel cryptographic primitive that allows fine-grained access control by giving different rights to different users, not only in terms of which messages they are allowed to receive, but also which messages they are allowed to send.
Abstract: We initiate the study of Access Control Encryption (ACE), a novel cryptographic primitive that allows fine-grained access control, by giving different rights to different users not only in terms of which messages they are allowed to receive, but also which messages they are allowed to send. Classical examples of security policies for information flow are the well-known Bell-LaPadula [BL73] and Biba [Bib75] models: in a nutshell, the Bell-LaPadula model assigns roles to every user in the system (e.g., public, secret, and top-secret). A user's role specifies which messages the user is allowed to receive (the no read-up rule, meaning that users with public clearance should not be able to read messages marked as secret or top-secret) but also which messages the user is allowed to send (the no write-down rule, meaning that a malicious user with top-secret clearance should not be able to write messages marked as secret or public). To the best of our knowledge, no existing cryptographic primitive allows for even this simple form of access control, since no existing cryptographic primitive enforces any restriction on what kind of messages one should be able to encrypt. Our contributions are: (1) introducing and formally defining access control encryption (ACE); (2) a construction of ACE with complexity linear in the number of roles, based on classic number-theoretic assumptions (DDH, Paillier); and (3) a construction of ACE with complexity polylogarithmic in the number of roles, based on recent results on cryptographic obfuscation.

Book ChapterDOI
10 Jan 2016
TL;DR: This work shows that computing the optimal composition of differentially private algorithms is #P-complete in general; since exact computation is therefore infeasible unless FP=#P, it gives an approximation algorithm that computes the composition to arbitrary accuracy in polynomial time.
Abstract: In the study of differential privacy, composition theorems (starting with the original paper of Dwork, McSherry, Nissim, and Smith (TCC '06)) bound the degradation of privacy when composing several differentially private algorithms. Kairouz, Oh, and Viswanath (ICML '15) showed how to compute the optimal bound for composing k arbitrary $$(\epsilon ,\delta )$$-differentially private algorithms. We characterize the optimal composition for the more general case of k arbitrary $$(\epsilon _{1},\delta _{1}),\ldots ,(\epsilon _{k},\delta _{k})$$-differentially private algorithms, where the privacy parameters may differ for each algorithm in the composition. We show that computing the optimal composition in general is #P-complete. Since computing optimal composition exactly is infeasible unless FP=#P, we give an approximation algorithm that computes the composition to arbitrary accuracy in polynomial time. The algorithm is a modification of Dyer's dynamic programming approach to approximately counting solutions to knapsack problems (STOC '03).
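For context, the classical bounds that optimal composition improves upon fit in a few lines. The sketch below states basic composition and the advanced composition theorem of Dwork, Rothblum, and Vadhan; it is not the paper's (#P-hard) optimal computation, and the function names are ours.

```python
import math

def basic_composition(eps, delta, k):
    """k-fold composition of (eps, delta)-DP mechanisms is (k*eps, k*delta)-DP."""
    return k * eps, k * delta

def advanced_composition(eps, delta, k, delta_slack):
    """Advanced composition (Dwork-Rothblum-Vadhan): for any delta_slack > 0,
    the k-fold composition is (eps_adv, k*delta + delta_slack)-DP with
    eps_adv = eps * sqrt(2k * ln(1/delta_slack)) + k * eps * (e^eps - 1)."""
    eps_adv = (eps * math.sqrt(2 * k * math.log(1 / delta_slack))
               + k * eps * (math.exp(eps) - 1))
    return eps_adv, k * delta + delta_slack

eps, delta, k = 0.1, 1e-6, 100
eps_basic, _ = basic_composition(eps, delta, k)
eps_adv, _ = advanced_composition(eps, delta, k, 1e-6)
# For small eps and many mechanisms, the sqrt(k) term beats the linear k*eps.
assert eps_adv < eps_basic
```

The paper's result pins down the exact optimal trade-off curve between these bounds, at the cost of a computation that is #P-complete in general.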

Book ChapterDOI
10 Jan 2016
TL;DR: Non-malleable codes are a generalization of classical error-correcting codes where the act of corrupting a codeword is replaced by a tampering adversary.
Abstract: Non-malleable codes are a generalization of classical error-correcting codes where the act of “corrupting” a codeword is replaced by a “tampering” adversary. Non-malleable codes guarantee that the message contained in the tampered codeword is either the original message m, or a completely unrelated one. In the common split-state model, the codeword consists of multiple blocks (or states) and each block is tampered with independently.

Book ChapterDOI
10 Jan 2016
TL;DR: In this paper, the authors introduce a new security notion for public-key encryption, non-malleability under chosen-ciphertext self-destruct attacks (NM-SDA), together with a novel type of continuous non-malleable code called secret-state NMC.
Abstract: In a seminal paper, Dolev et al. [15] introduced the notion of non-malleable encryption (NM-CPA). This notion is very intriguing since it suffices for many applications of chosen-ciphertext secure encryption (IND-CCA), and, yet, can be generically built from semantically secure (IND-CPA) encryption, as was shown in the seminal works by Pass et al. [29] and by Choi et al. [9], the latter of which provided a black-box construction. In this paper we investigate three questions related to NM-CPA security: (1) Can the rate of the construction by Choi et al. of NM-CPA from IND-CPA be improved? (2) Is it possible to achieve multi-bit NM-CPA security more efficiently from a single-bit NM-CPA scheme than from IND-CPA? (3) Is there a notion stronger than NM-CPA that has natural applications and can be achieved from IND-CPA security? We answer all three questions in the positive. First, we improve the rate in the scheme of Choi et al. by a factor $$\mathcal{O}(\lambda )$$, where $$\lambda $$ is the security parameter. Still, encrypting a message of size $$\mathcal{O}(\lambda )$$ would require ciphertext and keys of size $$\mathcal{O}(\lambda ^2)$$ times that of the IND-CPA scheme, even in our improved scheme. Therefore, we show a more efficient domain extension technique for building a $$\lambda $$-bit NM-CPA scheme from a single-bit NM-CPA scheme with keys and ciphertext of size $$\mathcal{O}(\lambda )$$ times that of the NM-CPA one-bit scheme. To achieve our goal, we define and construct a novel type of continuous non-malleable code (NMC), called secret-state NMC, as we show that standard continuous NMCs are not enough for the natural "encode-then-encrypt-bit-by-bit" approach to work. Finally, we introduce a new security notion for public-key encryption that we dub non-malleability under chosen-ciphertext self-destruct attacks (NM-SDA).
After showing that NM-SDA is a strict strengthening of NM-CPA and allows for more applications, we nevertheless show that both of our results--faster construction from IND-CPA and domain extension from a one-bit scheme--also hold for our stronger NM-SDA security. In particular, the notions of IND-CPA, NM-CPA, and NM-SDA security are all equivalent, lying (plausibly, strictly) below IND-CCA security.

Book ChapterDOI
10 Jan 2016
TL;DR: This work considers randomized encodings (RE) that enable encoding a Turing machine $$\varPi $$ and input x into its "randomized encoding" in sublinear, or even polylogarithmic, time in the running time of $$\varPi (x)$$, and shows that subexponentially secure sublinearly compact FE implies iO.
Abstract: We consider randomized encodings (RE) that enable encoding a Turing machine $$\varPi $$ and input x into its "randomized encoding" $$\hat{\varPi }(x)$$ in sublinear, or even polylogarithmic, time in the running time of $$\varPi (x)$$, independent of its output length. We refer to the former as sublinear RE and the latter as compact RE. For such efficient RE, the standard simulation-based notion of security is impossible, and we thus consider a weaker distributional indistinguishability-based notion of security: roughly speaking, we require indistinguishability of $$\hat{\varPi }_0(x_0)$$ and $$\hat{\varPi }_1(x_1)$$ as long as $$(\varPi _0,x_0)$$ and $$(\varPi _1,x_1)$$ are sampled from some distributions such that $$(\varPi _0(x_0),\mathsf{Time}_{\varPi _0}(x_0))$$ and $$(\varPi _1(x_1),\mathsf{Time}_{\varPi _1}(x_1))$$ are indistinguishable. We show the following:
Impossibility in the plain model: Assuming the existence of subexponentially secure one-way functions, subexponentially secure sublinear RE does not exist. If additionally assuming subexponentially secure iO for circuits, we can also rule out polynomially secure sublinear RE. As a consequence, we also rule out puncturable iO for Turing machines (even those without inputs).
Feasibility in the CRS model and applications to iO for circuits: Subexponentially secure sublinear RE in the CRS model and one-way functions imply iO for circuits through a simple construction generalizing GGM's PRF construction. Additionally, any compact (even with sublinear compactness) functional encryption essentially directly yields a sublinear RE in the CRS model, and as such we get an alternative, modular, and simpler proof of the results of [AJ15, BV15] showing that subexponentially secure sublinearly compact FE implies iO. We further show other ways of instantiating sublinear RE in the CRS model (and thus also iO): under the subexponential LWE assumption, it suffices to have a subexponentially secure FE scheme with just sublinear ciphertext (as opposed to sublinear encryption time).
Applications to iO for unbounded-input Turing machines: Subexponentially secure compact RE for natural restricted classes of distributions over programs and inputs (which are not ruled out by our impossibility result, and for which we can give candidate constructions) implies iO for unbounded-input Turing machines. This yields the first construction of iO for unbounded-input Turing machines that does not rely on public-coin differing-input obfuscation.

Book ChapterDOI
10 Jan 2016
TL;DR: In this article, Cramer, Damgård and Schoenmakers devised an OR-composition technique for \(\varSigma \)-protocols that allows one to construct highly efficient proofs for compound statements; this technique has since found countless applications as a building block in designing efficient protocols.
Abstract: In [18] Cramer, Damgård and Schoenmakers (CDS) devise an OR-composition technique for \(\varSigma \)-protocols that allows one to construct highly efficient proofs for compound statements. Since then, this technique has found countless applications as a building block for designing efficient protocols.
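As a concrete illustration of the CDS OR idea (a minimal sketch, not the paper's construction): a prover who knows the discrete log of one of two group elements runs the real Schnorr protocol on the branch it knows and simulates the other branch by picking that branch's challenge share and response first. The toy group parameters and names below are our own; for brevity the verifier's challenge is passed in directly rather than exchanged after the announcements, as it would be interactively.

```python
import random

# Toy group: order-q subgroup of Z_p^*, with p = 2q + 1 (both prime), generator g.
p, q, g = 2039, 1019, 4

def or_prove(w, y0, y1, c):
    """Prove knowledge of dlog(y0) OR dlog(y1), here knowing w = dlog(y0).
    CDS splits the challenge c so that c0 + c1 = c (mod q)."""
    # Simulate the unknown branch: pick (c1, z1) freely, derive a1 to match.
    c1 = random.randrange(q)
    z1 = random.randrange(q)
    a1 = pow(g, z1, p) * pow(y1, (q - c1) % q, p) % p   # a1 = g^z1 * y1^(-c1)
    # Run the real Schnorr prover on the known branch.
    r = random.randrange(q)
    a0 = pow(g, r, p)
    c0 = (c - c1) % q
    z0 = (r + c0 * w) % q
    return (a0, a1, c0, c1, z0, z1)

def or_verify(y0, y1, c, proof):
    a0, a1, c0, c1, z0, z1 = proof
    return ((c0 + c1) % q == c % q
            and pow(g, z0, p) == a0 * pow(y0, c0, p) % p
            and pow(g, z1, p) == a1 * pow(y1, c1, p) % p)

w = 123
y0 = pow(g, w, p)         # statement whose witness we know
y1 = pow(g, 777, p)       # statement whose witness we do not use
c = random.randrange(q)   # verifier's challenge
proof = or_prove(w, y0, y1, c)
```

The verifier cannot tell which branch was simulated, since both transcripts satisfy the same Schnorr verification equation.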

Book ChapterDOI
31 Oct 2016
TL;DR: In this article, the authors construct PoS from stacked expander graphs, and show that Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
Abstract: Recently, proof of space (PoS) has been suggested as a more egalitarian alternative to the traditional hash-based proof of work. In PoS, a prover proves to a verifier that it has dedicated some specified amount of space. A closely related notion is that of memory-hard functions (MHF), functions that require a lot of memory/space to compute. While making promising progress, existing PoS and MHF constructions have several problems. First, there are large gaps between the desired space-hardness and what can be proven. Second, it has been pointed out that PoS and MHF should require a lot of space not just at some point, but throughout the entire computation/protocol; few proposals consider this issue. Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic factor of overhead to efficiency. In this paper, we construct PoS from stacked expander graphs. Our constructions are simpler, more efficient, and have tighter provable space-hardness than prior works. Our results also apply to a recent MHF called Balloon hash. We show that Balloon hash has tighter space-hardness than previously believed and consistent space-hardness throughout its computation.
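To make the memory-hardness idea concrete, here is a heavily simplified, Balloon-style function (an illustrative sketch, not the actual Balloon hash specification, and all parameter names are our own): it fills a buffer with a hash chain, then repeatedly mixes each block with its predecessor and a few data-independent pseudorandom neighbours, so that an evaluator with much less than the full buffer is forced into recomputation.

```python
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def balloon_like(passwd, salt, n_blocks=64, rounds=3, delta=3):
    # Expand: fill the buffer with a hash chain seeded by password and salt.
    buf = [H(passwd, salt)]
    for i in range(1, n_blocks):
        buf.append(H(buf[i - 1]))
    # Mix: each block absorbs its predecessor (wrapping at block 0) and
    # delta pseudorandomly chosen blocks; indices depend only on the salt,
    # so the access pattern is data-independent.
    for r in range(rounds):
        for i in range(n_blocks):
            buf[i] = H(buf[i - 1], buf[i])
            for j in range(delta):
                idx = int.from_bytes(
                    H(salt, b'%d,%d,%d' % (r, i, j))[:4], 'big') % n_blocks
                buf[i] = H(buf[i], buf[idx])
    # Extract: the last block is the output.
    return buf[-1]
```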

Book ChapterDOI
31 Oct 2016
TL;DR: In this article, it was shown that Yao's garbling construction is adaptively secure for NC1 circuits without requiring complexity leveraging, where in the adaptive setting the adversary can choose the input x after seeing the garbled version of the circuit C. The efficiency of the scheme and the security loss of the reduction are captured by a pebbling game over the circuit.
Abstract: A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. Yao's construction from the 80's is known to achieve selective security, where the adversary chooses the circuit C and the input x in one shot. It has remained an open problem whether the construction also achieves adaptive security, where the adversary can choose the input x after seeing the garbled version of the circuit C. A recent work of Hemenway et al. (CRYPTO'16) modifies Yao's construction and shows that the resulting scheme is adaptively secure. This is done by encrypting the garbled circuit from Yao's construction with a special type of "somewhere equivocal encryption" and giving the key together with the garbled input. The efficiency of the scheme and the security loss of the reduction are captured by a certain pebbling game over the circuit. In this work we prove that Yao's construction itself is already adaptively secure, where the security loss can be captured by the same pebbling game. For example, we show that for circuits of depth d, the security loss of our reduction is $$2^{O(d)}$$, meaning that Yao's construction is adaptively secure for NC1 circuits without requiring complexity leveraging. Our technique is inspired by the "nested hybrids" of Fuchsbauer et al. (Asiacrypt'14, CRYPTO'15) and relies on a careful sequence of hybrids where each hybrid involves some limited guessing about the adversary's adaptive choices. Although it doesn't match the parameters achieved by Hemenway et al. in their full generality, the main advantage of our work is to prove the security of Yao's construction as is, without any additional encryption layer.
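For intuition about what is being garbled, here is a toy garbled AND gate in the spirit of Yao's construction (an illustrative sketch only, not the paper's scheme or an optimized garbling): each wire gets two random labels, and each of the four gate rows encrypts the correct output label under the pair of input labels, with a zero tag so the evaluator can recognise the single row its labels decrypt.

```python
import os, random, hashlib
from itertools import product

def _pad(ka, kb):
    # Per-row key stream: hash of the two input labels, truncated to row length.
    return hashlib.sha256(ka + kb).digest()[:24]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    A = [os.urandom(16), os.urandom(16)]   # labels for input wire a = 0 / 1
    B = [os.urandom(16), os.urandom(16)]   # labels for input wire b = 0 / 1
    C = [os.urandom(16), os.urandom(16)]   # labels for the output wire
    rows = [_xor(_pad(A[x], B[y]), C[x & y] + b'\x00' * 8)
            for x, y in product((0, 1), repeat=2)]
    random.shuffle(rows)                   # hide which row is which
    return A, B, C, rows

def eval_and(ka, kb, rows):
    # The evaluator holds exactly one label per wire, so only one row
    # decrypts to the all-zero tag; its payload is the output label.
    for row in rows:
        pt = _xor(row, _pad(ka, kb))
        if pt[16:] == b'\x00' * 8:
            return pt[:16]
```

Evaluating with labels A[x], B[y] yields C[x AND y] without revealing x, y, or the other labels.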

Book ChapterDOI
10 Jan 2016
TL;DR: In this paper, a new cryptographic primitive called witness pseudorandom functions (witness PRFs) is proposed, which are related to witness encryption, but appear strictly stronger: they can be used for applications such as multi-party key exchange without trusted setup, polynomially many hardcore bits for any one-way function, and several others that were previously only possible using obfuscation.
Abstract: We propose a new cryptographic primitive called witness pseudorandom functions (witness PRFs). Witness PRFs are related to witness encryption, but appear strictly stronger: we show that witness PRFs can be used for applications such as multi-party key exchange without trusted setup, polynomially-many hardcore bits for any one-way function, and several others that were previously only possible using obfuscation. Thus we improve the minimal assumptions required for these applications. Moreover, current candidate obfuscators are far from practical and typically rely on unnatural hardness assumptions about multilinear maps. We give a construction of witness PRFs from multilinear maps that is simpler and much more efficient than current obfuscation candidates, thus bringing several applications of obfuscation closer to practice. Our construction relies on new but very natural hardness assumptions about the underlying maps that appear to be resistant to a recent line of attacks.

Book ChapterDOI
31 Oct 2016
TL;DR: A two-message protocol for delegating RAM computations to an untrusted cloud that is secure assuming super-polynomial hardness of the Learning with Errors (LWE) assumption; security holds even when the delegated computations are chosen adaptively as a function of the data and the outputs of previous computations.
Abstract: In the setting of cloud computing, a user wishes to delegate its data, as well as computations over this data, to a cloud provider. Each computation may read and modify the data, and these modifications should persist between computations. Minding the computational resources of the cloud, delegated computations are modeled as RAM programs. In particular, the delegated computations' running time may be sub-linear in, or even exponentially smaller than, the memory size. We construct a two-message protocol for delegating RAM computations to an untrusted cloud. In our protocol, the user saves a short digest of the delegated data. For every delegated computation, the cloud returns, in addition to the computation's output, the digest of the modified data, and a proof that the output and digest were computed correctly. When delegating a $$\mathsf {T}$$-time RAM computation $$M$$ with security parameter $$k$$, the cloud runs in time $$\mathrm {poly}(\mathsf {T},k)$$ and the user in time $$\mathrm {poly}(\left| M\right| , \log \mathsf {T}, k)$$. Our protocol is secure assuming super-polynomial hardness of the Learning with Errors (LWE) assumption. Security holds even when the delegated computations are chosen adaptively as a function of the data and the outputs of previous computations. We note that RAM delegation schemes are an improved variant of memory delegation schemes [Chung et al. CRYPTO 2011]. In memory delegation, computations are modeled as Turing machines, and therefore the cloud's work always grows with the size of the delegated data.
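The "short digest of the delegated data" can be pictured as a Merkle-tree root (a standard-technique sketch; the paper's actual protocol involves much more machinery than this): the user keeps only the root, and any memory block can be authenticated against it with a logarithmic-size path.

```python
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def merkle_root(blocks):
    """Short digest of a list of memory blocks."""
    level = [H(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # pad odd levels by duplicating the last node
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_prove(blocks, i):
    """Authentication path for block i: one sibling hash per tree level."""
    level = [H(b) for b in blocks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[i ^ 1], i % 2))   # (sibling, am-I-the-right-child)
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def merkle_verify(root, block, path):
    h = H(block)
    for sib, right in path:
        h = H(sib + h) if right else H(h + sib)
    return h == root
```

After a write, the server recomputes the hashes along one root-to-leaf path and reports the new root, which the verifier checks the same way.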

Book ChapterDOI
31 Oct 2016
TL;DR: In this article, the authors show that the moral gap between selective and semi-adaptive security is in general much smaller than that between semi-adaptive and full security for ABE and FE.
Abstract: Semi-adaptive security is a notion of security that lies between selective and adaptive security for Attribute-Based Encryption (ABE) and Functional Encryption (FE) systems. In the semi-adaptive model the attacker is forced to disclose the challenge messages before it makes any key queries, but is allowed to see the public parameters. We show how to generically transform any selectively secure ABE or FE scheme into one that is semi-adaptively secure, with the only additional assumption being public key encryption, which is already naturally included in almost any scheme of interest. Our technique utilizes a fairly simple application of garbled circuits: instead of encrypting directly, the encryptor creates a garbled circuit that takes as input the public parameters and outputs a ciphertext in the underlying selective scheme. Essentially, the encryption algorithm encrypts without knowing the 'real' public parameters. This allows one to delay giving out the underlying selective parameters until a private key is issued, which connects semi-adaptive security to selective security. The methods used to achieve this result suggest that the moral gap between selective and semi-adaptive security is in general much smaller than that between semi-adaptive and full security. Finally, we show how to extend the above idea to generically bundle a family of functionalities under one set of public parameters. For example, suppose we had an inner product predicate encryption scheme where the length of the vectors is specified at setup and therefore fixed by the public parameters. Using our transformation one could create a system where, for a single set of public parameters, the vector length is not a priori bounded, but instead is specified by the encryption algorithm. The resulting ciphertext would be compatible with any private key generated to work on the same input length.

Book ChapterDOI
31 Oct 2016
TL;DR: The novel notion of a Proof of Human-work (PoH) is introduced and the first distributed consensus protocol from hard Artificial Intelligence problems is presented; proofs of human-work are also used to develop a password authentication scheme which provably protects users against offline attacks.
Abstract: We introduce the novel notion of a Proof of Human-work (PoH) and present the first distributed consensus protocol from hard Artificial Intelligence problems. As the name suggests, a PoH is a proof that a human invested a moderate amount of effort to solve some challenge. A PoH puzzle should be moderately hard for a human to solve. However, a PoH puzzle must be hard for a computer to solve, including the computer that generated the puzzle, without sufficient assistance from a human. By contrast, CAPTCHAs are only difficult for other computers to solve -- not for the computer that generated the puzzle. We also require that a PoH be publicly verifiable by a computer without any human assistance and without ever interacting with the agent who generated the proof of human-work. We show how to construct PoH puzzles from indistinguishability obfuscation and from CAPTCHAs. We motivate our ideas with two applications: HumanCoin and passwords. First, we use PoH puzzles to construct HumanCoin, the first cryptocurrency system with human miners. Second, we use proofs of human-work to develop a password authentication scheme which provably protects users against offline attacks.

Book ChapterDOI
31 Oct 2016
TL;DR: In this paper, the authors present the first RAM delegation scheme that provides both soundness and privacy guarantees in the adaptive setting, where the sequence of delegated RAM programs is chosen adaptively, depending potentially on the encodings of the database and previously chosen programs.
Abstract: We consider the problem of delegating RAM computations over persistent databases. A user wishes to delegate a sequence of computations over a database to a server, where each computation may read and modify the database and the modifications persist between computations. Delegating RAM computations is important as it has the distinct feature that the running time of a computation may be sub-linear in the size of the database. We present the first RAM delegation scheme that provides both soundness and privacy guarantees in the adaptive setting, where the sequence of delegated RAM programs is chosen adaptively, depending potentially on the encodings of the database and previously chosen programs. Prior works either achieved only adaptive soundness without privacy [Kalai and Paneth, ePrint'15], or only security in the selective setting where all RAM programs are chosen statically [Chen et al. ITCS'16, Canetti and Holmgren ITCS'16]. Our scheme assumes the existence of indistinguishability obfuscation $$\mathsf {i}\mathcal {O}$$ for circuits and the decisional Diffie-Hellman (DDH) assumption. However, our techniques are quite general and, in particular, might be applicable even in settings where iO is not used. We provide a "security lifting technique" that "lifts" any proof of selective security satisfying certain special properties into a proof of adaptive security, for arbitrary cryptographic schemes. We then apply this technique to the delegation scheme of Chen et al. and its selective security proof, obtaining that their scheme is essentially already adaptively secure. Because of the general approach, we can also easily extend to delegating parallel RAM (PRAM) computations. We believe that the security lifting technique can potentially find other applications and is of independent interest.

Book ChapterDOI
31 Oct 2016
TL;DR: A highly contrived encryption scheme is constructed which is CPA and even CCA secure, yet is not IND-SOA secure and is broken in a very obvious sense by a selective opening attack.
Abstract: In a selective opening attack (SOA) on an encryption scheme, the adversary is given a collection of ciphertexts and she selectively chooses to see some subset of them "opened", meaning that the messages and the encryption randomness are revealed to her. A scheme is SOA secure if the data contained in the unopened ciphertexts remains hidden. A fundamental question is whether every CPA secure scheme is necessarily also SOA secure. The work of Bellare et al. (EUROCRYPT'12) gives a partial negative answer by showing that some CPA secure schemes do not satisfy a simulation-based definition of SOA security called SIM-SOA. However, until now, it remained possible that every CPA-secure scheme satisfies an indistinguishability-based definition of SOA security called IND-SOA. In this work, we resolve the above question in the negative and construct a highly contrived encryption scheme which is CPA and even CCA secure but is not IND-SOA secure. In fact, it is broken in a very obvious sense by a selective opening attack, as follows. A random value is secret-shared via Shamir's scheme so that any t out of n shares reveal no information about the shared value. The n shares are individually encrypted under a common public key and the n resulting ciphertexts are given to the adversary, who selectively chooses to see t of the ciphertexts opened. Counter-intuitively, by the specific properties of our encryption scheme, this suffices for the adversary to completely recover the shared value. Our contrived scheme relies on strong assumptions: public-coin differing-inputs obfuscation and a certain type of correlation-intractable hash functions. We also extend our negative result to the setting of SOA attacks with key opening (IND-SOA-K), where the adversary is given a collection of ciphertexts under different public keys and selectively chooses to see some subset of the secret keys.
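The attack setup builds on standard Shamir secret sharing, which can be sketched as follows (illustrative code; the field size and names are our choice, and note the paper's parameterization is such that even t opened shares are information-theoretically useless, whereas in the standard scheme below t shares reconstruct and t-1 reveal nothing): the secret is the constant term of a random degree-(t-1) polynomial, shares are its evaluations at distinct points, and any t shares reconstruct via Lagrange interpolation at zero.

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field the shares live in

def share(secret, t, n):
    """Split secret into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation of the degree t-1 poly
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # den^(P-2) mod P is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```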