
Showing papers presented at "Theory of Cryptography Conference" in 2013


Book ChapterDOI
03 Mar 2013
TL;DR: A generic, efficient reduction is derived that allows us to apply any differentially private algorithm for bounded-degree graphs to an arbitrary graph, based on analyzing the smooth sensitivity of the 'naive' truncation that simply discards nodes of high degree.
Abstract: We develop algorithms for the private analysis of network data that provide accurate analysis of realistic networks while satisfying stronger privacy guarantees than those of previous work. We present several techniques for designing node differentially private algorithms, that is, algorithms whose output distribution does not change significantly when a node and all its adjacent edges are added to a graph. We also develop methodology for analyzing the accuracy of such algorithms on realistic networks. The main idea behind our techniques is to 'project' (in one of several senses) the input graph onto the set of graphs with maximum degree below a certain threshold. We design projection operators, tailored to specific statistics that have low sensitivity and preserve information about the original statistic. These operators can be viewed as giving a fractional (low-degree) graph that is a solution to an optimization problem described as a maximum flow instance, linear program, or convex program. In addition, we derive a generic, efficient reduction that allows us to apply any differentially private algorithm for bounded-degree graphs to an arbitrary graph. This reduction is based on analyzing the smooth sensitivity of the 'naive' truncation that simply discards nodes of high degree.
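As a rough illustration of the 'naive' truncation that the reduction analyzes, the sketch below (hypothetical helper names, plain adjacency dicts) discards all nodes of degree above a threshold D and then releases a Laplace-noised edge count. Calibrating the noise correctly is exactly what requires the paper's smooth-sensitivity analysis, so the fixed scale used here is a simplification, not the paper's mechanism.

```python
import random

def truncate(adj, D):
    """Naive truncation: keep only nodes whose degree is at most D.
    (Illustrative only; the paper also designs flow/LP-based projections.)"""
    keep = {v for v, nbrs in adj.items() if len(nbrs) <= D}
    return {v: nbrs & keep for v, nbrs in adj.items() if v in keep}

def noisy_edge_count(adj, D, eps):
    """Release an edge count with Laplace noise.  In a graph of maximum degree D,
    adding or removing one node changes the edge count by at most D, so D/eps is
    used as the noise scale here; the paper's point is that the truncation step
    itself is unstable, which is why it analyzes the *smooth* sensitivity of this
    step rather than relying on this naive calibration."""
    g = truncate(adj, D)
    edges = sum(len(nbrs) for nbrs in g.values()) // 2
    scale = D / eps
    # Laplace(scale) noise as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return edges + noise

# Toy undirected graph given as symmetric adjacency sets.
graph = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}
print(noisy_edge_count(graph, D=2, eps=1.0))
```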

321 citations


Book ChapterDOI
03 Mar 2013
TL;DR: Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly-verifiable, proofs consist of only O(1) encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement.
Abstract: Succinct non-interactive arguments (SNARGs) enable verifying NP statements with lower complexity than required for classical NP verification. Traditionally, the focus has been on minimizing the length of such arguments; nowadays researchers have also focused on minimizing verification time, by drawing motivation from the problem of delegating computation. A common relaxation is a preprocessing SNARG, which allows the verifier to conduct an expensive offline phase that is independent of the statement to be proven later. Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly-verifiable, proofs consist of only O(1) encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement. Additionally, these constructions seem to have 'escaped the hegemony' of probabilistically-checkable proofs (PCPs) as a basic building block of succinct arguments.

226 citations


Book ChapterDOI
03 Mar 2013
TL;DR: Signatures of Correct Computation is introduced, a new model for verifying dynamic computations in cloud settings and it is shown that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works.
Abstract: We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes. We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function's public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial). We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates and without the random oracle model.

177 citations


Book ChapterDOI
03 Mar 2013
TL;DR: It is shown that the expected guarantees of synchronous computation can be achieved given functionalities exactly meant to model, respectively, bounded-delay networks and loosely synchronized clocks, and that previous similar models can all be expressed within this new framework.
Abstract: In synchronous networks, protocols can achieve security guarantees that are not possible in an asynchronous world: they can simultaneously achieve input completeness (all honest parties' inputs are included in the computation) and guaranteed termination (honest parties do not 'hang' indefinitely). In practice truly synchronous networks rarely exist, but synchrony can be emulated if channels have (known) bounded latency and parties have loosely synchronized clocks. The widely-used framework of universal composability (UC) is inherently asynchronous, but several approaches for adding synchrony to the framework have been proposed. However, we show that the existing proposals do not provide the expected guarantees. Given this, we propose a novel approach to defining synchrony in the UC framework by introducing functionalities exactly meant to model, respectively, bounded-delay networks and loosely synchronized clocks. We show that the expected guarantees of synchronous computation can be achieved given these functionalities, and that previous similar models can all be expressed within our new framework.

169 citations


Book ChapterDOI
03 Mar 2013
TL;DR: A new method for secure two-party Random Access Memory (RAM) program computation that does not require taking a program and first turning it into a circuit is presented, and the method achieves logarithmic overhead compared to an insecure program execution.
Abstract: We present a new method for secure two-party Random Access Memory (RAM) program computation that does not require taking a program and first turning it into a circuit. The method achieves logarithmic overhead compared to an insecure program execution. At the heart of our construction is a new Oblivious RAM construction where a client interacts with two non-communicating servers. Our two-server Oblivious RAM for n reads/writes requires O(n) memory for the servers, O(1) memory for the client, and O(log n) amortized read/write overhead for data access. The constants in the big-O notation are tiny, and we show that the storage and data access overhead of our solution concretely compares favorably to the state-of-the-art single-server schemes. Our protocol enjoys an important feature from a practical perspective as well. At the heart of almost all previous single-server Oblivious RAM solutions, a crucial but inefficient process known as oblivious sorting was required. In our two-server model, we describe a new technique to bypass oblivious sorting, and show how this can be carefully blended with existing techniques to attain a more practical Oblivious RAM protocol in comparison to all prior work. As alluded to above, our two-server Oblivious RAM protocol leads to a novel application in the realm of secure two-party RAM program computation. We observe that in secure two-party computation, Alice and Bob can play the roles of two non-colluding servers. We show that our Oblivious RAM construction can be composed with an extended version of the Ostrovsky-Shoup compiler to obtain a new method for secure two-party program computation with lower overhead than all existing constructions.

155 citations


Book ChapterDOI
03 Mar 2013
TL;DR: This protocol is the first to obtain these properties for Boolean circuits, and develops new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property.
Abstract: We present a protocol for securely computing a Boolean circuit C in the presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming a preprocessing functionality that is not given the inputs. For a large number of players the work for each player is the same as computing the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods only achieved constant error.
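For contrast with the quadratic-time, exponentially-small-error verifier mentioned above, here is a minimal sketch of the classical Freivalds-style check over GF(2): one random-vector trial runs in O(n^2) time but errs with probability up to 1/2, so driving the error down to 2^-k costs k trials. This is the baseline the abstract contrasts against, not the paper's new algorithm; the function name is illustrative.

```python
import random

def freivalds_gf2(A, B, C, trials=40):
    """Check whether A*B == C over GF(2) (matrices given as lists of 0/1 rows).
    Each trial multiplies by a random 0/1 vector in O(n^2) time and, whenever
    A*B != C, rejects with probability at least 1/2, so the error after
    `trials` independent repetitions is at most 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randrange(2) for _ in range(n)]
        Br = [sum(B[i][j] & r[j] for j in range(n)) % 2 for i in range(n)]
        ABr = [sum(A[i][j] & Br[j] for j in range(n)) % 2 for i in range(n)]
        Cr = [sum(C[i][j] & r[j] for j in range(n)) % 2 for i in range(n)]
        if ABr != Cr:
            return False
    return True

A = [[1, 0], [1, 1]]
B = [[1, 1], [0, 1]]
C = [[1, 1], [1, 0]]   # the correct product of A and B over GF(2)
print(freivalds_gf2(A, B, C))   # True
```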

117 citations


Book ChapterDOI
03 Mar 2013
TL;DR: A general approach for transforming perfectly secure protocols for sender-receiver functionalities in the correlated randomness model into secure protocols in the plain model which offer perfect correctness against a malicious sender is presented.
Abstract: We investigate the extent to which correlated secret randomness can help in secure computation with no honest majority. It is known that correlated randomness can be used to evaluate any circuit of size s with perfect security against semi-honest parties or statistical security against malicious parties, where the communication complexity grows linearly with s. This leaves open two natural questions: (1) Can the communication complexity be made independent of the circuit size? (2) Is it possible to obtain perfect security against malicious parties? We settle the above questions, obtaining both positive and negative results on unconditionally secure computation with correlated randomness. Concretely, we obtain the following results. Minimizing communication. Any multiparty functionality can be realized, with perfect security against semi-honest parties or statistical security against malicious parties, by a protocol in which the number of bits communicated by each party is linear in its input length. Our protocol uses an exponential number of correlated random bits. We give evidence that super-polynomial randomness complexity may be inherent. Perfect security against malicious parties. Any finite 'sender-receiver' functionality, which takes inputs from a sender and a receiver and delivers an output only to the receiver, can be perfectly realized given correlated randomness. In contrast, perfect security is generally impossible for functionalities which deliver outputs to both parties. We also show useful functionalities (such as string equality) for which there are efficient perfectly secure protocols in the correlated randomness model. Perfect correctness in the plain model. We present a general approach for transforming perfectly secure protocols for sender-receiver functionalities in the correlated randomness model into secure protocols in the plain model which offer perfect correctness against a malicious sender. This should be contrasted with the impossibility of perfectly sound zero-knowledge proofs.

114 citations


Book ChapterDOI
03 Mar 2013
TL;DR: This work builds an efficient key-policy attribute-based encryption scheme, and proves its security in the selective sense from learning-with-errors intractability in the standard model.
Abstract: We introduce a broad lattice manipulation technique for expressive cryptography, and use it to realize functional encryption for access structures from post-quantum hardness assumptions. Specifically, we build an efficient key-policy attribute-based encryption scheme, and prove its security in the selective sense from learning-with-errors intractability in the standard model.

111 citations


Book ChapterDOI
03 Mar 2013
TL;DR: In this article, the problem of proving the folklore conjecture that every semantically secure bit-encryption scheme is circular secure has been studied in the context of fully homomorphic encryption.
Abstract: Motivated by recent developments in fully homomorphic encryption, we consider the folklore conjecture that every semantically-secure bit-encryption scheme is circular secure, or in other words, that every bit-encryption scheme remains secure even when the adversary is given encryptions of the individual bits of the private-key. We show the following obstacles to proving this conjecture: (1) We construct a public-key bit-encryption scheme that is plausibly semantically secure, but is not circular secure. The circular security attack manages to fully recover the private-key. The construction is based on an extension of the Symmetric External Diffie-Hellman assumption (SXDH) from bilinear groups, to l-multilinear groups of order p where l ≥ c·log p for some c>1. While there do exist l-multilinear groups (unconditionally), for l≥3 there are no known candidates for which the SXDH problem is believed to be hard. Nevertheless, there is also no evidence that such groups do not exist. Our result shows that in order to prove the folklore conjecture, one must rule out the possibility that there exist l-multilinear groups for which SXDH is hard. (2) We show that the folklore conjecture cannot be proved using a black-box reduction. That is, there is no reduction of circular security of a bit-encryption scheme to semantic security of that very same scheme that uses both the encryption scheme and the adversary as black-boxes. Both of our negative results extend also to the (seemingly) weaker conjecture that every CCA secure bit-encryption scheme is circular secure. As a final contribution, we show an equivalence between three seemingly distinct notions of circular security for public-key bit-encryption schemes. In particular, we give a general search-to-decision reduction that shows that an adversary that distinguishes between encryptions of the bits of the private-key and encryptions of zeros can be used to actually recover the private-key.

77 citations


Book ChapterDOI
03 Mar 2013
TL;DR: Gennaro et al. as discussed by the authors extended the notion of non-interactive verifiable computation to the multi-client setting, where n computationally weak clients wish to outsource to an untrusted server the computation of a function f over a series of joint inputs without interacting with each other.
Abstract: Gennaro et al. (Crypto 2010) introduced the notion of non-interactive verifiable computation, which allows a computationally weak client to outsource the computation of a function f on a series of inputs x^{(1)},... to a more powerful but untrusted server. Following a pre-processing phase (that is carried out only once), the client sends some representation of its current input x^{(i)} to the server; the server returns an answer that allows the client to recover the correct result f(x^{(i)}), accompanied by a proof of correctness that ensures the client does not accept an incorrect result. The crucial property is that the work done by the client in preparing its input and verifying the server's proof is less than the time required for the client to compute f on its own. We extend this notion to the multi-client setting, where n computationally weak clients wish to outsource to an untrusted server the computation of a function f over a series of joint inputs $(x_1^{(1)},\ldots,x_n^{(1)})$,... without interacting with each other. We present a construction for this setting by combining the scheme of Gennaro et al. with a primitive called proxy oblivious transfer.

74 citations


Book ChapterDOI
03 Mar 2013
TL;DR: Algebraic (Trapdoor) One Way Functions are introduced, which captures and formalizes many of the properties of number-theoretic one-way functions and several applications where algebraic (trapdoor) OWFs turn out to be useful are shown.
Abstract: In this paper we introduce the notion of Algebraic (Trapdoor) One Way Functions, which, roughly speaking, captures and formalizes many of the properties of number-theoretic one-way functions. Informally, a (trapdoor) one way function F: X → Y is said to be algebraic if X and Y are (finite) abelian cyclic groups, the function is homomorphic, i.e., F(x)·F(y)=F(x·y), and is ring-homomorphic, meaning that it is possible to compute linear operations 'in the exponent' over some ring (which may be different from ℤ_p where p is the order of the underlying group X) without knowing the bases. Moreover, algebraic OWFs must be flexibly one-way in the sense that given y=F(x), it must be infeasible to compute (x′, d) such that F(x′)=y^d (for d≠0). Interestingly, algebraic one way functions can be constructed from a variety of standard number theoretic assumptions, such as RSA, Factoring and CDH over bilinear groups. As a second contribution of this paper, we show several applications where algebraic (trapdoor) OWFs turn out to be useful. These include publicly verifiable secure outsourcing of polynomials, linearly homomorphic signatures and batch execution of Sigma protocols.
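As a small illustration of the homomorphic property the abstract attributes to number-theoretic candidates such as RSA, the toy Python below checks that F(x)·F(y) = F(x·y) mod N for the textbook RSA map F(x) = x^e mod N with throwaway parameters. It is only the familiar identity, not the paper's formalization of algebraic OWFs.

```python
# Toy parameters only (far too small to be secure): N = p*q, public exponent e.
p, q, e = 1009, 1013, 17
N = p * q

def F(x):
    """Textbook RSA map F(x) = x^e mod N, one of the number-theoretic
    one-way function candidates mentioned in the abstract."""
    return pow(x, e, N)

x, y = 12345, 67890
# Homomorphism: F(x) * F(y) = F(x * y), all arithmetic modulo N.
assert (F(x) * F(y)) % N == F((x * y) % N)
print("F(x)*F(y) mod N ==", F((x * y) % N))
```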

Book ChapterDOI
03 Mar 2013
TL;DR: This work devises multi-party computation protocols for general secure function evaluation with the property that each party is only required to communicate with a small number of dynamically chosen parties and provides a protocol for securely computing such sublinear f that runs in polylog(n)+O(q) rounds, has each party communicating with at most q·polylog(n) other parties, and supports message sizes polylog(n)·(l+n).
Abstract: We devise multi-party computation protocols for general secure function evaluation with the property that each party is only required to communicate with a small number of dynamically chosen parties. More explicitly, starting with n parties connected via a complete and synchronous network, our protocol requires each party to send messages to (and process messages from) at most polylog(n) other parties using polylog(n) rounds. It achieves secure computation of any polynomial-time computable randomized function f under cryptographic assumptions, and tolerates up to $({1\over 3} - \epsilon) \cdot n$ statically scheduled Byzantine faults. We then focus on the particularly interesting setting in which the function to be computed is a sublinear algorithm: An evaluation of f depends on the inputs of at most q=o(n) of the parties, where the identity of these parties can be chosen randomly and possibly adaptively. Typically, q=polylog(n). While the sublinear query complexity of f makes it possible in principle to dramatically reduce the communication complexity of our general protocol, the challenge is to achieve this while maintaining security: in particular, while keeping the identities of the selected inputs completely hidden. We solve this challenge, and we provide a protocol for securely computing such sublinear f that runs in polylog(n)+O(q) rounds, has each party communicating with at most q·polylog(n) other parties, and supports message sizes polylog(n)·(l+n), where l is the parties' input size. Our optimized protocols rely on a multi-signature scheme, fully homomorphic encryption (FHE), and simulation-sound adaptive NIZK arguments. We remark that multi-signatures and FHE are used to obtain our bounds on message size and round complexity. Assuming only standard digital signatures and public-key encryption, one can still obtain the property that each party only communicates with polylog(n) other parties. We emphasize that the scheduling of faults can depend on the initial PKI setup of digital signatures and the NIZK parameters.

Book ChapterDOI
Rafael Pass
03 Mar 2013
TL;DR: In this article, the authors present barriers to provable security of two fundamental cryptographic primitives: perfect non-interactive zero knowledge (NIZK) and non-malleable commitments.
Abstract: We present barriers to provable security of two fundamental (and well-studied) cryptographic primitives, perfect non-interactive zero knowledge (NIZK) and non-malleable commitments: · Black-box reductions cannot be used to demonstrate adaptive soundness (i.e., that soundness holds even if the statement to be proven is chosen as a function of the common reference string) of any statistical (and thus also perfect) NIZK for ${\cal NP}$ based on any 'standard' intractability assumptions. · Black-box reductions cannot be used to demonstrate non-malleability of non-interactive, or even 2-message, commitment schemes based on any 'standard' intractability assumptions. We emphasize that the above separations apply even if the construction of the considered primitives makes a non-black-box use of the underlying assumption. As an independent contribution, we suggest a taxonomy of game-based intractability assumptions based on 1) the security threshold, 2) the number of communication rounds in the security game, 3) the computational complexity of the game challenger, 4) the communication complexity of the challenger, and 5) the computational complexity of the security reduction.

Book ChapterDOI
03 Mar 2013
TL;DR: A broad black-box separation result shows that the security of the Fiat-Shamir heuristic for statistically sound proofs cannot be proved under virtually any standard assumption via a black-box reduction, even though Barak, Lindell and Vadhan conjectured that a secure instantiation exists.
Abstract: The Fiat-Shamir heuristic [CRYPTO '86] is used to convert any 3-message public-coin proof or argument system into a non-interactive argument, by hashing the prover's first message to select the verifier's challenge. It is known that this heuristic is sound when the hash function is modeled as a random oracle. On the other hand, the surprising result of Goldwasser and Kalai [FOCS '03] shows that there exists a computationally sound argument on which the Fiat-Shamir heuristic is never sound, when instantiated with any actual efficient hash function. This leaves us with the following interesting possibility: perhaps we can securely instantiate the Fiat-Shamir heuristic for all 3-message public-coin statistically sound proofs, even if we must fail for some computationally sound arguments. Indeed, this has been conjectured to be the case by Barak, Lindell and Vadhan [FOCS '03], but we do not have any provably secure instantiation under any 'standard assumption'. In this work, we give a broad black-box separation result showing that the security of the Fiat-Shamir heuristic for statistically sound proofs cannot be proved under virtually any standard assumption via a black-box reduction. More precisely: –If we want to have a 'universal' instantiation of the Fiat-Shamir heuristic that works for all 3-message public-coin proofs, then we cannot prove its security via a black-box reduction from any assumption that has the format of a 'cryptographic game'. –For many concrete proof systems, if we want to have a 'specific' instantiation of the Fiat-Shamir heuristic for that proof system, then we cannot prove its security via a black-box reduction from any 'falsifiable assumption' that has the format of a cryptographic game with an efficient challenger.
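To make the heuristic concrete, here is a minimal Python sketch in which the verifier's random challenge is replaced by a hash of the statement and the prover's first message. A Schnorr-style identification protocol is used as the 3-message public-coin example; it and the toy parameters are illustrative assumptions, not taken from this paper.

```python
import hashlib
import secrets

# Toy Schnorr-style group (insecure, illustration only): p = 2q + 1, and g
# generates the order-q subgroup of Z_p^*.
p, q, g = 23, 11, 4

def fiat_shamir_prove(x):
    """Non-interactive proof of knowledge of x such that h = g^x mod p.
    The interactive verifier's random challenge is replaced by a hash of the
    statement and the prover's first message (the Fiat-Shamir heuristic)."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)      # prover's randomness
    a = pow(g, r, p)              # prover's first message (commitment)
    e = int.from_bytes(hashlib.sha256(f"{p}|{g}|{h}|{a}".encode()).digest(), "big") % q
    z = (r + e * x) % q           # prover's response
    return h, (a, z)

def fiat_shamir_verify(h, proof):
    a, z = proof
    e = int.from_bytes(hashlib.sha256(f"{p}|{g}|{h}|{a}".encode()).digest(), "big") % q
    # Verification equation: g^z == a * h^e (mod p).
    return pow(g, z, p) == (a * pow(h, e, p)) % p

h, proof = fiat_shamir_prove(x=7)
print(fiat_shamir_verify(h, proof))   # True
```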

Book ChapterDOI
03 Mar 2013
TL;DR: This work presents a public-coin concurrent ZK protocol for any NP language that assumes that all verifiers have access to a globally specified function, drawn from a collision resistant hash function family, in the Global Hash Function, or GHF model.
Abstract: Public-coin zero-knowledge and concurrent zero-knowledge (cZK) are two classes of zero knowledge protocols that guarantee some additional desirable properties. Still, to date no protocol is known that is both public-coin and cZK for a language outside BPP. Furthermore, it is known that no such protocol can be black-box ZK [Pass et al., CRYPTO '09]. We present a public-coin concurrent ZK protocol for any NP language. The protocol assumes that all verifiers have access to a globally specified function, drawn from a collision resistant hash function family. (This model, which we call the Global Hash Function, or GHF model, can be seen as a restricted case of the non-programmable reference string model.) We also show that the impossibility of black-box public-coin cZK extends also to the GHF model. Our protocol assumes CRH functions against quasi-polynomial adversaries and takes O(log^{1+ε} n) rounds for any ε > 0, where n is the security parameter. Our techniques combine those for (non-public-coin) black-box cZK with Barak's non-black-box technique for public-coin constant-round ZK. As a corollary we obtain the first simultaneously resettable zero-knowledge protocol with O(log^{1+ε} n) rounds, in the GHF model.

Book ChapterDOI
03 Mar 2013
TL;DR: It is shown that the random oracle can be replaced with a symmetric encryption which remains secure under a combined form of related-key (RK) and key-dependent message (KDM) attacks and constructed based on the LPN assumption.
Abstract: Yao's Garbled Circuit (GC) technique is a powerful cryptographic tool which allows one to 'encrypt' a circuit C by another circuit ${\hat C}$ in a way that hides all information except for the final output. Yao's original construction incurs a constant overhead in both computation and communication per gate of the circuit C (proportional to the complexity of symmetric encryption). Kolesnikov and Schneider (ICALP 2008) introduced an optimized variant that garbles XOR gates 'for free' in a way that involves no cryptographic operations and no communication. This variant has become very popular and has led to notable performance improvements. The security of the free-XOR optimization was originally proven in the random oracle model. Despite some partial progress (Choi et al., TCC 2012), the question of replacing the random oracle with a standard cryptographic assumption has remained open. We resolve this question by showing that the free-XOR approach can be realized in the standard model under the learning parity with noise (LPN) assumption. Our result is obtained in two steps: –We show that the random oracle can be replaced with a symmetric encryption which remains secure under a combined form of related-key (RK) and key-dependent message (KDM) attacks; and –We show that such a symmetric encryption can be constructed based on the LPN assumption. As an additional contribution, we prove that the combination of RK and KDM security is non-trivial: There exists an encryption scheme which achieves both RK security and KDM security but breaks completely in the presence of combined RK-KDM attacks.
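A minimal sketch of the free-XOR idea referred to above (hypothetical variable names, 128-bit labels): every wire's label pair differs by one global offset Δ, so an XOR gate needs no ciphertexts and the evaluator simply XORs the two input labels it holds.

```python
import secrets

LABEL_BYTES = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Global offset shared by every wire's label pair (kept secret by the garbler).
delta = secrets.token_bytes(LABEL_BYTES)

def new_wire():
    """Return (label_for_0, label_for_1); the pair differs by delta."""
    zero = secrets.token_bytes(LABEL_BYTES)
    return zero, xor(zero, delta)

# Garbler's view: the XOR gate's output labels are derived with no encryption.
a0, a1 = new_wire()
b0, b1 = new_wire()
c0 = xor(a0, b0)          # output label encoding 0
c1 = xor(c0, delta)       # output label encoding 1

# Evaluator's view: holding one label per input wire, XOR them to obtain the
# correct output label, with no ciphertexts and no communication.
for va, vb in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    la = (a0, a1)[va]
    lb = (b0, b1)[vb]
    assert xor(la, lb) == (c0, c1)[va ^ vb]
print("XOR gates evaluate correctly with zero ciphertexts")
```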

Book ChapterDOI
03 Mar 2013
TL;DR: In this article, the authors focus on the class of deterministic Boolean functions with finite domain, and ask for which functions in this class is it possible to information-theoretically toss an unbiased coin, given a protocol for securely computing the function with fairness.
Abstract: It is well known that it is impossible for two parties to toss a coin fairly (Cleve, STOC 1986). This result implies that it is impossible to securely compute with fairness any function that can be used to toss a fair coin. In this paper, we focus on the class of deterministic Boolean functions with finite domain, and we ask for which functions in this class is it possible to information-theoretically toss an unbiased coin, given a protocol for securely computing the function with fairness. We provide a complete characterization of the functions in this class that imply and do not imply fair coin tossing. This characterization extends our knowledge of which functions cannot be securely computed with fairness. In addition, it provides a focus as to which functions may potentially be securely computed with fairness, since a function that cannot be used to fairly toss a coin is not ruled out by the impossibility result of Cleve (which is the only known impossibility result for fairness). In addition to the above, we draw corollaries to the feasibility of achieving fairness in two possible fail-stop models.

Book ChapterDOI
03 Mar 2013
TL;DR: This paper studies the feasibility of constant parallel-time cryptography, in which complex cryptographic tasks are performed by local functions whose output bits each depend on a constant number of input bits.
Abstract: Constant parallel-time cryptography allows performing complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions, each of whose output bits depends on a constant number of input bits. The feasibility of such highly efficient cryptographic constructions was widely studied in the last decade via two main research threads.

Book ChapterDOI
03 Mar 2013
TL;DR: A computationally efficient algorithm ${\cal T}_{priv}$ is designed that verifies whether a given algorithm satisfies the differential privacy on typical datasets (DPTD) guarantee in time sublinear in the size of the domain of the datasets.
Abstract: In the past few years, the focus of research in the area of statistical data privacy has been in designing algorithms for various problems which satisfy some rigorous notions of privacy. However, not much effort has gone into designing techniques to computationally verify if a given algorithm satisfies some predefined notion of privacy. In this work, we address the following question: Can we design algorithms which test whether a given algorithm satisfies some specific rigorous notion of privacy (e.g., differential privacy)? We design algorithms to test privacy guarantees of a given algorithm $\mathcal{A}$ when run on a dataset x containing potentially sensitive information about the individuals. More formally, we design a computationally efficient algorithm ${\cal T}_{priv}$ that verifies whether $\mathcal{A}$ satisfies the differential privacy on typical datasets (DPTD) guarantee in time sublinear in the size of the domain of the datasets. DPTD, a similar notion to generalized differential privacy first proposed by [3], is a distributional relaxation of the popular notion of differential privacy [14]. To design algorithm ${\cal T}_{priv}$, we show a formal connection between the testing of privacy guarantee for an algorithm and the testing of the Lipschitz property of a related function. More specifically, we show that an efficient algorithm for testing of Lipschitz property can be used as a subroutine in ${\cal T}_{priv}$ that tests if an algorithm satisfies differential privacy on typical datasets. Apart from formalizing the connection between the testing of privacy guarantee and testing of the Lipschitz property, we generalize the work of [21] to the setting of property testing under product distributions. More precisely, we design an efficient Lipschitz tester for the case where the domain points are drawn from hypercube according to some fixed but unknown product distribution instead of the uniform distribution.
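As a toy illustration of the Lipschitz property that the testing connection relies on, the sketch below exhaustively checks every edge of the Boolean hypercube. It is exponential in the dimension and nothing like the paper's sublinear tester; names and examples are illustrative assumptions.

```python
from itertools import product

def is_lipschitz_on_hypercube(f, n, c=1):
    """Exhaustively check that |f(x) - f(y)| <= c for every pair of neighboring
    points x, y of {0,1}^n (points differing in exactly one coordinate).
    This brute-force check takes time exponential in n; the paper's tester is
    sublinear in the domain size and works under product distributions."""
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if abs(f(x) - f(tuple(y))) > c:
                return False
    return True

# Hamming weight is 1-Lipschitz; twice the Hamming weight is not.
print(is_lipschitz_on_hypercube(lambda x: sum(x), 4))        # True
print(is_lipschitz_on_hypercube(lambda x: 2 * sum(x), 4))    # False
```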

Book ChapterDOI
03 Mar 2013
TL;DR: This work considers the problem of realizing general UC-functionalities from untrusted resettable hardware-tokens, with the goal of minimizing both the amount of interaction and the number of tokens employed, and shows that even a simple functionality cannot be realized non-interactively using a single token.
Abstract: Resettable hardware tokens, usually in the form of smart cards, are used for a variety of security-critical tasks in open environments. Many of these tasks require trusted hardware tokens. With the complexity of hardware, however, it is not feasible to check if the hardware contains an internal state or gives away information over side channels. This inspires the question of the cryptographic strength of untrusted resettable hardware tokens in the universal composability framework. In this work, we consider the problem of realizing general UC-functionalities from untrusted resettable hardware-tokens, with the goal of minimizing both the amount of interaction and the number of tokens employed. Our main result consists of two protocols, realizing functionalities that are sufficient to UC-realize any resettable two-party functionality. The first protocol requires two rounds of interaction in an initialization phase and only a single hardware-token. The second protocol is fully non-interactive and requires two tokens. One of these relaxations, allowing either communication with the issuer of the token or issuing two tokens, is necessary. We show that even a simple functionality cannot be realized non-interactively using a single token.

Book ChapterDOI
03 Mar 2013
TL;DR: It is shown that the existence of any oblivious transfer extension protocol with security for static semi-honest adversaries implies one-way functions, that an oblivious transfer extension protocol with adaptive security implies oblivious transfer with static security, and that the existence of an oblivious transfer extension protocol from only O(log n) oblivious transfers implies oblivious transfer itself.
Abstract: Oblivious transfer is one of the most basic and important building blocks in cryptography. As such, understanding its cost is of prime importance. Beaver (STOC 1996) showed that it is possible to obtain poly(n) oblivious transfers given only n actual oblivious transfer calls and using one-way functions, where n is the security parameter. In addition, he showed that it is impossible to extend oblivious transfer information theoretically. The notion of extending oblivious transfer is important theoretically (to understand the complexity of computing this primitive) and practically (since oblivious transfers can be expensive and thus extending them using only one-way functions is very attractive). Despite its importance, very little is known about the feasibility of extending oblivious transfer, beyond the fact that it is impossible information theoretically. Specifically, it is not known whether or not one-way functions are actually necessary for extending oblivious transfer, whether or not it is possible to extend oblivious transfers with adaptive security, and whether or not it is possible to extend oblivious transfers when starting with O(log n) oblivious transfers. In this paper, we address these questions and provide almost complete answers to all of them. We show that the existence of any oblivious transfer extension protocol with security for static semi-honest adversaries implies one-way functions, that an oblivious transfer extension protocol with adaptive security implies oblivious transfer with static security, and that the existence of an oblivious transfer extension protocol from only O(log n) oblivious transfers implies oblivious transfer itself.

Book ChapterDOI
03 Mar 2013
TL;DR: A black-box provability barrier is presented to compilations of arbitrary public-key encryption into RDM-secure ones using just pre-processing of the randomness, and the existence of bounded-RDM secure schemes that can encrypt arbitrarily 'long' messages using 'short' randomness is demonstrated.
Abstract: Traditional definitions of the security of encryption schemes assume that the messages encrypted are chosen independently of the randomness used by the encryption scheme. Recent works, implicitly by Myers and Shelat (FOCS '09) and Bellare et al. (AsiaCrypt '09), and explicitly by Hemmenway and Ostrovsky (ECCC '10), consider randomness-dependent message (RDM) security of encryption schemes, where the message to be encrypted may be selected as a function—referred to as the RDM function—of the randomness used to encrypt this particular message, or other messages, but in a circular way. We carry out a systematic study of this notion. Our main results demonstrate the following: · Full RDM security—where the RDM function may be an arbitrary polynomial-size circuit—is not possible. · Any secure encryption scheme can be slightly modified, by just performing some pre-processing to the randomness, to satisfy bounded-RDM security, where the RDM function is restricted to be a circuit of a priori bounded polynomial size. The scheme, however, requires the randomness r needed to encrypt a message m to be slightly longer than the length of m (i.e., |r| > |m| + ω(log k), where k is the security parameter). · We present a black-box provability barrier to compilations of arbitrary public-key encryption into RDM-secure ones using just pre-processing of the randomness, whenever |m| > |r| + ω(log k). On the other hand, under the DDH assumption, we demonstrate the existence of bounded-RDM secure schemes that can encrypt arbitrarily 'long' messages using 'short' randomness. We finally note that the existence of public-key encryption schemes implies the existence of a fully RDM-secure encryption scheme in an 'ultra-weak' Random-Oracle Model—where the security reduction need not 'program' the oracle, or see the queries made by the adversary to the oracle; combined with our impossibility result, this yields the first example of a cryptographic task that has a secure implementation in such a weak Random-Oracle Model, but does not have a secure implementation without random oracles.

Book ChapterDOI
03 Mar 2013
TL;DR: The authors should be listed in alphabetical order so that Rafail Ostrovsky is second last and Omer Paneth is last author.
Abstract: Unfortunately the order of appearance of the authors on the title page is not correct. Second last and last author were switched. In fact, the authors should be listed in alphabetical order so that Rafail Ostrovsky is second last and Omer Paneth is last author.

Book ChapterDOI
03 Mar 2013
TL;DR: This work shows how to go beyond the birthday attack barrier by replacing the simple hashing approach with a variant of cuckoo hashing — a hashing paradigm typically used for resolving hash collisions in a table, by using two hash functions and two tables, and cleverly assigning each element into one of the two tables.
Abstract: A common method for increasing the usability and uplifting the security of pseudorandom function families (PRFs) is to 'hash' the inputs into a smaller domain before applying the PRF. This approach, known as 'Levin's trick', is used to achieve 'PRF domain extension' (using a short, e.g., fixed, input-length PRF to get a variable-length PRF), and more recently to transform non-adaptive PRFs to adaptive ones. Such reductions, however, are vulnerable to a 'birthday attack': after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, where $\mathcal{U}$ is the hash function's range, a collision (i.e., two distinct inputs have the same hash value) happens with high probability. As a consequence, the resulting PRF is insecure against an attacker making this number of queries. In this work we show how to go beyond the birthday attack barrier, by replacing the above simple hashing approach with a variant of cuckoo hashing — a hashing paradigm typically used for resolving hash collisions in a table, by using two hash functions and two tables, and cleverly assigning each element into one of the two tables. We use this approach to obtain: (i) A domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size and has a distinguishing probability that is exponentially small in the non-cryptographic work. (ii) A security-preserving reduction from non-adaptive to adaptive PRFs.
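As a reminder of the hashing paradigm being borrowed, here is a textbook cuckoo-hashing insertion sketch with illustrative names (it is not the paper's PRF construction): each element lives in one of two tables at a position determined by that table's hash function, and insertions evict and relocate occupants as needed, so lookups probe exactly two cells.

```python
import random

class CuckooTable:
    """Textbook cuckoo hashing with two tables and two hash functions: every
    key is stored either at h1(key) in table 0 or at h2(key) in table 1."""

    def __init__(self, size=16):
        self.size = size
        self.tables = [[None] * size, [None] * size]
        self.seeds = (random.random(), random.random())

    def _pos(self, which, key):
        return hash((self.seeds[which], key)) % self.size

    def insert(self, key, max_kicks=32):
        for _ in range(max_kicks):
            for which in (0, 1):
                pos = self._pos(which, key)
                if self.tables[which][pos] is None:
                    self.tables[which][pos] = key
                    return True
                # Evict the occupant and try to re-place it in the other table.
                self.tables[which][pos], key = key, self.tables[which][pos]
        return False   # a full implementation would rehash with new functions

    def lookup(self, key):
        return any(self.tables[w][self._pos(w, key)] == key for w in (0, 1))

t = CuckooTable()
for k in ["alice", "bob", "carol"]:
    t.insert(k)
print(t.lookup("bob"), t.lookup("dave"))   # True False
```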

Book ChapterDOI
03 Mar 2013
TL;DR: This work studies the feasibility of realizing functionalities in the framework of universal composability, with respect to both computational and information-theoretic security, and shows that existing feasibility results carry over unchanged from the classical to the quantum world.
Abstract: It is known that cryptographic feasibility results can change by moving from the classical to the quantum world. With this in mind, we study the feasibility of realizing functionalities in the framework of universal composability, with respect to both computational and information-theoretic security. With respect to computational security, we show that existing feasibility results carry over unchanged from the classical to the quantum world; a functionality is 'trivial' (i.e., can be realized without setup) in the quantum world if and only if it is trivial in the classical world. The same holds with regard to functionalities that are complete (i.e., can be used to realize arbitrary other functionalities). In the information-theoretic setting, the quantum and classical worlds differ. In the quantum world, functionalities in the class we consider are either complete, trivial, or belong to a family of simultaneous-exchange functionalities (e.g., XOR). However, other results in the information-theoretic setting remain roughly unchanged.

Book ChapterDOI
03 Mar 2013
TL;DR: Any no-private-input, semi-honest two-party functionality that can be securely implemented in the random oracle model can be securely implemented information theoretically, and this work generalizes this result to function families that satisfy a natural combinatorial property.
Abstract: In the random oracle model, parties are given oracle access to a random function (i.e., a uniformly chosen function from the set of all functions), and are assumed to have unbounded computational power (though they can only make a bounded number of oracle queries). This model provides powerful properties that allow proving the security of many protocols, even ones that cannot be proved secure in the standard model (under any hardness assumptions). The random oracle model is also used for showing that a given cryptographic primitive cannot be used in a black-box way to construct another primitive; in their seminal work, Impagliazzo and Rudich [STOC '89] showed that no key-agreement protocol exists in the random oracle model, yielding that key-agreement cannot be black-box reduced to one-way functions. Their work has a long line of follow-up works (Simon [EC '98], Gertner et al. [STOC '00] and Gennaro et al. [SICOMP '05], to name a few), showing that oracle access to a certain type of function family (e.g., the family that 'implements' public-key encryption) is not sufficient for building a given cryptographic primitive (e.g., oblivious transfer). Yet, the following question remained open: What is the exact power of the random oracle model? We make progress towards answering this question, showing that essentially, any no-private-input, semi-honest two-party functionality that can be securely implemented in the random oracle model, can be securely implemented information theoretically (where parties are assumed to be all powerful, and no oracle is given). We further generalize the above result to function families that satisfy some natural combinatorial property. Our result immediately yields that essentially the only no-input functionalities that can be securely realized in the random oracle model (in the sense of secure function evaluation), are the trivial ones (ones that can be securely realized information theoretically). In addition, we use the recent information theoretic impossibility result of McGregor et al. [FOCS '10], to show the existence of functionalities (e.g., inner product) that cannot be computed both accurately and in a differentially private manner in the random oracle model; yielding that protocols for computing these functionalities cannot be black-box reduced to one-way functions.

Book ChapterDOI
03 Mar 2013
TL;DR: In this article, it was shown that if a scheme can homomorphically evaluate the majority function, then its decryption cannot be weakly-learnable (in particular, linear), even if the probability of decryption error is high.
Abstract: We show that an encryption scheme cannot have a simple decryption function and be homomorphic at the same time, even with added noise. Specifically, if a scheme can homomorphically evaluate the majority function, then its decryption cannot be weakly-learnable (in particular, linear), even if the probability of decryption error is high. (In contrast, without homomorphism, such schemes do exist and are presumed secure, e.g., based on LPN.) An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of low Hamming-weight noise cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public key encryption. Using these techniques, we show that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint '11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One that uses homomorphism, and another that directly attacks a component of the scheme.

Book ChapterDOI
03 Mar 2013
TL;DR: In this paper, the bounded player model for secure computation was proposed, in which the number of players that will ever be involved in secure computations is bounded, but not a priori bounded.
Abstract: In this paper we put forward the Bounded Player Model for secure computation. In this new model, the number of players that will ever be involved in secure computations is bounded, but the number of computations is not a priori bounded. Indeed, while the number of devices and people on this planet can be realistically estimated and bounded, the number of computations these devices will run cannot be realistically bounded. Further, we note that in the bounded player model, in addition to no a priori bound on the number of sessions, there is no synchronization barrier, no trusted party, and simulation must be performed in polynomial time. In this setting, we achieve concurrent Zero Knowledge (cZK) with sub-logarithmic round complexity. Our security proof is (necessarily) non-black-box, our simulator is 'straight-line' and works as long as the number of rounds is ω(1). We further show that unlike previously studied relaxations of the standard model (e.g., bounded number of sessions, timing assumptions, super-polynomial simulation), concurrent-secure computation is still impossible to achieve in the Bounded Player model. This gives evidence that our model is 'closer' to the standard model than previously studied models, and study of this model might shed light on constructing round-efficient concurrent zero-knowledge in the standard model as well.

Book ChapterDOI
03 Mar 2013
TL;DR: This paper provides a generic construction of controlled-malleable proofs using succinct non-interactive arguments of knowledge, or SNARGs for short, which can support very general classes of transformations, as they no longer rely on the transformations that Groth-Sahai proofs can support.
Abstract: Depending on the application, malleability in cryptography can be viewed as either a flaw or — especially if sufficiently understood and restricted — a feature. In this vein, Chase, Kohlweiss, Lysyanskaya, and Meiklejohn recently defined malleable zero-knowledge proofs, and showed how to control the set of allowable transformations on proofs. As an application, they construct the first compact verifiable shuffle, in which one such controlled-malleable proof suffices to prove the correctness of an entire multi-step shuffle. Despite these initial steps, a number of natural problems remained: (1) their construction of controlled-malleable proofs relies on the inherent malleability of Groth-Sahai proofs and is thus not based on generic primitives; (2) the classes of allowable transformations they can support are somewhat restrictive. In this paper, we address these issues by providing a generic construction of controlled-malleable proofs using succinct non-interactive arguments of knowledge, or SNARGs for short. Our construction can support very general classes of transformations, as we no longer rely on the transformations that Groth-Sahai proofs can support.

Book ChapterDOI
03 Mar 2013
TL;DR: Assuming one-way permutations exist, no meaningful chain rule holds for conditional HILL entropy, and it is shown that more sophisticated cryptographic objects such as lossy functions can be used to sample a distribution constituting a counterexample to the chain rule while making only a single invocation to the underlying object.
Abstract: A chain rule for an entropy notion H(·) states that the entropy H(X) of a variable X decreases by at most l if conditioned on an l-bit string A, i.e., H(X|A) ≥ H(X) − l. More generally, it satisfies a chain rule for conditional entropy if H(X|Y,A) ≥ H(X|Y) − l. All natural information theoretic entropy notions we are aware of (like Shannon or min-entropy) satisfy some kind of chain rule for conditional entropy. Moreover, many computational entropy notions (like Yao entropy, unpredictability entropy and several variants of HILL entropy) satisfy the chain rule for conditional entropy, though here not only the quantity decreases by l, but also the quality of the entropy decreases exponentially in l. However, for the standard notion of conditional HILL entropy (the computational equivalent of min-entropy) the existence of such a rule was unknown so far. In this paper, we prove that for conditional HILL entropy no meaningful chain rule exists, assuming the existence of one-way permutations: there exist distributions X, Y, A, where A is a distribution over a single bit, but H^{HILL}(X|Y) ≫ H^{HILL}(X|Y,A), even if we simultaneously allow for a massive degradation in the quality of the entropy. The idea underlying our construction is based on a surprising connection between the chain rule for HILL entropy and deniable encryption.
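For contrast with the negative result above, the information-theoretic chain rule that does hold can be stated as follows for average min-entropy (a standard fact, not taken from this paper): conditioning on an l-bit string A costs at most l bits, because A takes at most 2^l values.

```latex
% Standard chain rule for (average) min-entropy, for contrast with the
% HILL-entropy counterexample above; here A ranges over {0,1}^l.
\[
  \tilde{H}_\infty(X \mid A)
    \;\ge\; H_\infty(X, A) - l
    \;\ge\; H_\infty(X) - l ,
\]
% since guessing X given A is at most a factor 2^l easier than guessing the
% pair (X, A), and H_\infty(X, A) \ge H_\infty(X).
```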