
Papers presented at the Theory of Cryptography Conference (TCC), 2005


Book ChapterDOI
10 Feb 2005
TL;DR: A homomorphic public key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x1,...,xn and can evaluate quadratic multi-variate polynomials on ciphertexts provided the resulting value falls within a small set.
Abstract: Let ψ be a 2-DNF formula on boolean variables x1,...,xn ∈ {0,1}. We present a homomorphic public key encryption scheme that allows the public evaluation of ψ given an encryption of the variables x1,...,xn. In other words, given the encryption of the bits x1,...,xn, anyone can create the encryption of ψ(x1,...,xn). More generally, we can evaluate quadratic multi-variate polynomials on ciphertexts provided the resulting value falls within a small set. We present a number of applications of the system: In a database of size n, the total communication in the basic step of the Kushilevitz-Ostrovsky PIR protocol is reduced from $\sqrt{n}$ to $\sqrt[3]{n}$. An efficient election system based on homomorphic encryption where voters do not need to include non-interactive zero knowledge proofs that their ballots are valid. The election system is proved secure without random oracles but still efficient. A protocol for universally verifiable computation.
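
A minimal plaintext-level sketch (our own toy Python, not the cryptosystem itself) of why a 2-DNF formula fits the requirement stated above: each clause is a product of at most two literals, so the formula value is a quadratic multi-variate polynomial whose value falls in the small set {0, ..., number of clauses}.

```python
# Plaintext-level illustration only: the scheme in the paper evaluates this
# kind of quadratic polynomial on ciphertexts; no encryption is done here.

def literal(assignment, var, negated):
    """Value of a literal x_i or (1 - x_i), for boolean x_i in {0, 1}."""
    v = assignment[var]
    return 1 - v if negated else v

def eval_2dnf_as_quadratic(clauses, assignment):
    """Each clause is a list of at most two (var, negated) literals.
    The clause value is the product of its literals (degree <= 2), and the
    formula value is the sum of clause values: a quadratic polynomial
    bounded by the number of clauses (a 'small set')."""
    total = 0
    for clause in clauses:
        term = 1
        for var, negated in clause:
            term *= literal(assignment, var, negated)
        total += term
    return total  # psi(x) is true iff total >= 1

# psi = (x1 AND x2) OR (NOT x3), evaluated on x = (1, 0, 0)
clauses = [[(0, False), (1, False)], [(2, True)]]
value = eval_2dnf_as_quadratic(clauses, [1, 0, 0])
print(value >= 1)   # True: the second clause is satisfied
print(value)        # 1, which lies in the small set {0, 1, 2}
```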

1,754 citations


Book ChapterDOI
10 Feb 2005
TL;DR: This work investigates the problem of privacy-preserving access to a database, where records in the database are accessed according to their associated keywords and gives efficient solutions for various settings of KS.
Abstract: We study the problem of privacy-preserving access to a database. Particularly, we consider the problem of privacy-preserving keyword search (KS), where records in the database are accessed according to their associated keywords and where we care for the privacy of both the client and the server. We provide efficient solutions for various settings of KS, based either on specific assumptions or on general primitives (mainly oblivious transfer). Our general solutions rely on a new connection between KS and the oblivious evaluation of pseudorandom functions (OPRFs). We therefore study both the definition and construction of OPRFs and, as a corollary, give improved constructions of OPRFs that may be of independent interest.
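
To illustrate the OPRF interface the abstract refers to, here is a toy blind-evaluation flow in the Diffie-Hellman style. This is not the paper's construction (the abstract says its general solutions are built mainly from oblivious transfer), and the tiny prime and hashing below are purely illustrative assumptions.

```python
import hashlib
import secrets

# Toy OPRF interface sketch: the client learns F_k(keyword) while the server
# learns nothing about the keyword. Generic "blind, exponentiate, unblind"
# flow for illustration only; the parameters are NOT secure.

p = 1019                 # toy safe prime: p = 2q + 1
q = (p - 1) // 2         # order of the subgroup of quadratic residues

def hash_to_group(keyword: bytes) -> int:
    """Map a keyword into the quadratic-residue subgroup (toy version)."""
    digest = int.from_bytes(hashlib.sha256(keyword).digest(), "big")
    x = 2 + digest % (p - 3)          # avoid 0, 1 and p-1
    return pow(x, 2, p)               # squaring lands in the QR subgroup

# Server's long-term PRF key.
k = secrets.randbelow(q - 1) + 1

# Client blinds its keyword.
keyword = b"tcc2005"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(keyword), r, p)      # the server sees only this

# Server raises the blinded element to its key.
evaluated = pow(blinded, k, p)

# Client removes the blinding factor.
r_inv = pow(r, -1, q)
oprf_output = pow(evaluated, r_inv, p)

assert oprf_output == pow(hash_to_group(keyword), k, p)
print("client obtained F_k(keyword) =", oprf_output)
```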

440 citations


Book ChapterDOI
10 Feb 2005
TL;DR: In this paper, it was shown that even if an adversary holds quantum information about the initial string Z, the key obtained by two-universal hashing is secure, according to a universally composable security definition, and an asymptotically optimal lower bound on the length of the extractable key S in terms of the adversary's knowledge about Z was given.
Abstract: Privacy amplification is the art of shrinking a partially secret string Z to a highly secret key S. We show that, even if an adversary holds quantum information about the initial string Z, the key S obtained by two-universal hashing is secure, according to a universally composable security definition. Additionally, we give an asymptotically optimal lower bound on the length of the extractable key S in terms of the adversary's (quantum) knowledge about Z. Our result has applications in quantum cryptography. In particular, it implies that many of the known quantum key distribution protocols are universally composable.
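
A minimal classical sketch of privacy amplification by two-universal hashing, using a random GF(2) matrix as the hash family; the secure output length is roughly the adversary's uncertainty about Z minus 2·log(1/ε), which is the flavor of bound the paper establishes (with the appropriate quantum entropy measure). Parameters below are our own choices.

```python
import secrets

# Alice and Bob share a partially secret n-bit string Z and a public random
# seed selecting a hash function from a two-universal family; the hash output
# S is the key. A random binary matrix over GF(2) is such a family.

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def two_universal_hash(seed_rows, z: int) -> int:
    """h_A(z) = A*z over GF(2); seed_rows are the rows of A as n-bit ints."""
    out = 0
    for row in seed_rows:
        out = (out << 1) | parity(row & z)
    return out

n, ell = 256, 128                                   # input length, key length
z = secrets.randbits(n)                             # the partially secret string Z
seed = [secrets.randbits(n) for _ in range(ell)]    # public seed (the matrix A)

key = two_universal_hash(seed, z)
print(f"extracted {ell}-bit key: {key:0{ell // 4}x}")
```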

406 citations


Book ChapterDOI
10 Feb 2005
TL;DR: This work provides a formal security definition for securely outsourcing computations from a computationally limited device to an untrusted helper, and provides a framework for quantifying the efficiency and checkability of an outsourcing implementation.
Abstract: We address the problem of using untrusted (potentially malicious) cryptographic helpers. We provide a formal security definition for securely outsourcing computations from a computationally limited device to an untrusted helper. In our model, the adversarial environment writes the software for the helper, but then does not have direct communication with it once the device starts relying on it. In addition to security, we also provide a framework for quantifying the efficiency and checkability of an outsourcing implementation. We present two practical outsource-secure schemes. Specifically, we show how to securely outsource modular exponentiation, which presents the computational bottleneck in most public-key cryptography on computationally limited devices. Without outsourcing, a device would need O(n) modular multiplications to carry out modular exponentiation for n-bit exponents. The load reduces to O(log^2 n) for any exponentiation-based scheme where the honest device may use two untrusted exponentiation programs; we highlight the Cramer-Shoup cryptosystem [13] and Schnorr signatures [28] as examples. With a relaxed notion of security, we achieve the same load reduction for a new CCA2-secure encryption scheme using only one untrusted Cramer-Shoup encryption program.
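
To make the O(n) baseline in the abstract concrete, the snippet below counts the modular multiplications in plain square-and-multiply for an n-bit exponent (our own instrumentation; the paper's outsourcing protocols, which cut the device's load to O(log^2 n), are not reproduced here).

```python
# Square-and-multiply performs on the order of n modular multiplications for
# an n-bit exponent; the counter makes that cost explicit.

def modexp_with_count(base: int, exponent: int, modulus: int):
    result, mults = 1, 0
    b = base % modulus
    for bit in bin(exponent)[2:]:          # scan exponent bits, MSB first
        result = (result * result) % modulus
        mults += 1                         # one squaring per bit
        if bit == "1":
            result = (result * b) % modulus
            mults += 1                     # plus one multiply per set bit
    return result, mults

val, mults = modexp_with_count(7, 2**1024 - 3, 2**1024 - 105)
print(mults)                               # roughly 2 * 1024 multiplications
assert val == pow(7, 2**1024 - 3, 2**1024 - 105)
```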

357 citations


Book ChapterDOI
10 Feb 2005
TL;DR: An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques.
Abstract: We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely” should not be learned at all. An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering.

299 citations


Book ChapterDOI
10 Feb 2005
TL;DR: This work presents a method for converting shares of a secret into shares of the same secret in a different secret-sharing scheme using only local computation and no communication between players, and shows how this can be combined with any pseudorandom function to create any number of Shamir secret-sharings of (pseudo)random values without communication.
Abstract: We present a method for converting shares of a secret into shares of the same secret in a different secret-sharing scheme using only local computation and no communication between players. In particular, shares in a replicated scheme based on a CNF representation of the access structure can be converted into shares from any linear scheme for the same structure. We show how this can be combined with any pseudorandom function to create, from initially distributed randomness, any number of Shamir secret-sharings of (pseudo)random values without communication. We apply this technique to obtain efficient non-interactive protocols for secure computation of low-degree polynomials, which in turn give rise to other applications in secure computation and threshold cryptography. For instance, we can make the Cramer-Shoup threshold cryptosystem by Canetti and Goldwasser fully non-interactive, or construct non-interactive threshold signature schemes secure without random oracles. The latter solutions are practical only for a relatively small number of players. However, in our main applications the number of players is typically small, and furthermore it can be argued that no solution that makes a black-box use of a pseudorandom function can be more efficient.
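
A minimal sketch, with our own toy parameters, of the local conversion described above for the special case of a threshold structure with Shamir as the target linear scheme.

```python
import secrets
from itertools import combinations

# The dealer hands out a replicated (CNF-style) sharing: one sub-secret r_A
# per maximal unqualified set A, given to every party outside A. Each party
# then locally turns its sub-secrets into a Shamir share of the same secret,
# with no communication, because the degree-t polynomial f_A below vanishes
# exactly at the parties that do not know r_A. Notation is ours; the paper
# handles conversion into arbitrary linear schemes.

P = 2**61 - 1                      # prime modulus for the toy example
n, t = 5, 2                        # 5 parties, any t+1 = 3 can reconstruct
parties = list(range(1, n + 1))

def f_A(A, x):
    """Degree-t polynomial with f_A(0) = 1 and f_A(j) = 0 for every j in A."""
    val = 1
    for j in A:
        val = val * (x - j) % P * pow(-j % P, -1, P) % P
    return val

# Dealer: replicated sharing of the secret as a sum of sub-secrets r_A.
secret = 123456789
unqualified = list(combinations(parties, t))
r = {A: secrets.randbelow(P) for A in unqualified}
r[unqualified[-1]] = (secret - sum(r[A] for A in unqualified[:-1])) % P
holdings = {i: {A: r[A] for A in unqualified if i not in A} for i in parties}

# Local conversion: party i's Shamir share is F(i) for F(x) = sum_A r_A*f_A(x),
# which has degree t and F(0) = secret; the terms with i in A vanish at x = i.
shamir_share = {i: sum(v * f_A(A, i) for A, v in holdings[i].items()) % P
                for i in parties}

def reconstruct(points):
    """Lagrange interpolation at x = 0 from t+1 points (x_i, y_i)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num % P * pow(den, -1, P)) % P
    return total

assert reconstruct([(i, shamir_share[i]) for i in parties[:t + 1]]) == secret
print("local conversion to Shamir shares succeeded")
```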

272 citations


Book ChapterDOI
10 Feb 2005
TL;DR: A composable security definition for QKD is derived and the usual security definition is shown to imply it, closing a potential gap left by existing definitions, which do not cover joint attacks over QKD and the subsequent use of the resulting key.
Abstract: The existing unconditional security definitions of quantum key distribution (QKD) do not apply to joint attacks over QKD and the subsequent use of the resulting key. In this paper, we close this potential security gap by using a universal composability theorem for the quantum setting. We first derive a composable security definition for QKD. We then prove that the usual security definition of QKD still implies the composable security definition. Thus, a key produced in any QKD protocol that is unconditionally secure in the usual definition can indeed be safely used, a property of QKD that is hitherto unproven. We propose two other useful sufficient conditions for composability. As a simple application of our result, we show that keys generated by repeated runs of QKD degrade slowly.

212 citations


Book ChapterDOI
10 Feb 2005
TL;DR: This work formalizes the problem of chosen-ciphertext security for multiple encryption, and gives simple, efficient, and generic constructions of multiple encryption schemes secure against chosen-ciphertext attacks (based on any component scheme secure against such attacks) in the standard model.
Abstract: Encryption of data using multiple, independent encryption schemes (“multiple encryption”) has been suggested in a variety of contexts, and can be used, for example, to protect against partial key exposure or cryptanalysis, or to enforce threshold access to data. Most prior work on this subject has focused on the security of multiple encryption against chosen-plaintext attacks, and has shown constructions secure in this sense based on the chosen-plaintext security of the component schemes. Subsequent work has sometimes assumed that these solutions are also secure against chosen-ciphertext attacks when component schemes with stronger security properties are used. Unfortunately, this intuition is false for all existing multiple encryption schemes. Here, in addition to formalizing the problem of chosen-ciphertext security for multiple encryption, we give simple, efficient, and generic constructions of multiple encryption schemes secure against chosen-ciphertext attacks (based on any component schemes secure against such attacks) in the standard model. We also give a more efficient construction from any (hierarchical) identity-based encryption scheme secure against selective-identity chosen plaintext attacks. Finally, we discuss a wide range of applications for our proposed schemes.

167 citations


Book ChapterDOI
10 Feb 2005
TL;DR: This work shows that entropic security is equivalent to indistinguishability on pairs of input distributions of sufficiently high entropy, which is in turn related to randomness extraction from non-uniform distributions.
Abstract: We study entropic security, an information-theoretic notion of security introduced by Russell and Wang [24] in the context of encryption and by Canetti et al. [5,6] in the context of hash functions. Informally, a probabilistic map $Y = \mathcal{E}(X)$ (e.g., an encryption scheme or a hash function) is entropically secure if knowledge of Y does not help predicting any predicate of X, whenever X has high min-entropy from the adversary's point of view. On one hand, we strengthen the formulation of [5,6,24] and show that entropic security in fact implies that Y does not help predicting any function of X (as opposed to a predicate), bringing this notion closer to the conventional notion of semantic security [10]. On the other hand, we also show that entropic security is equivalent to indistinguishability on pairs of input distributions of sufficiently high entropy, which is in turn related to randomness extraction from non-uniform distributions [21]. We then use the equivalence above, and the connection to randomness extraction, to prove several new results on entropically-secure encryption. First, we give two general frameworks for constructing entropically secure encryption schemes: one based on expander graphs and the other on XOR-universal hash functions. These schemes generalize the schemes of Russell and Wang, yielding simpler constructions and proofs, as well as improved parameters. To encrypt an n-bit message of min-entropy t while allowing at most ε advantage to the adversary, our best schemes use a shared secret key of length $k = n - t + 2{\rm log} (\frac {1}{\epsilon})$. Second, we obtain lower bounds on the key length k for entropic security and indistinguishability. In particular, we show near tightness of our constructions: k > n – t. For a large class of schemes — including all the schemes we study — the bound can be strengthened to $k \geq n - t+{\rm log} (\frac {1}{\epsilon})-O(1)$.
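
A worked instance of the key-length formula quoted above, simply plugging in example parameters of our own choosing:

```python
import math

# Key-length bound from the abstract: to encrypt an n-bit message of
# min-entropy t while allowing the adversary at most epsilon advantage,
# the best schemes use a key of length k = n - t + 2*log(1/epsilon).
n, t, eps = 1024, 768, 2.0 ** -40
k = n - t + 2 * math.log2(1 / eps)
print(k)   # 1024 - 768 + 80 = 336 bits, versus n = 1024 for a one-time pad
```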

132 citations


Book ChapterDOI
10 Feb 2005
TL;DR: The construction works in the public-key model (requiring no shared secret key or joint local state) and allows the channel to know everything that the receiver knows, and shows that unique decoding under high error rates is impossible in several natural relaxations of the model.
Abstract: For computationally bounded adversarial models of error, we construct appealingly simple, efficient, cryptographic encoding and unique decoding schemes whose error-correction capability is much greater than classically possible. In particular: For binary alphabets, we construct positive-rate coding schemes which are uniquely decodable from a 1/2 – γ error rate for any constant γ > 0. For large alphabets, we construct coding schemes which are uniquely decodable from a $1 - \sqrt{R}$ error rate for any information rate R > 0. Our results are qualitatively stronger than related work: the construction works in the public-key model (requiring no shared secret key or joint local state) and allows the channel to know everything that the receiver knows. In addition, our techniques can potentially be used to construct coding schemes that have information rates approaching the Shannon limit. Finally, our construction is qualitatively optimal: we show that unique decoding under high error rates is impossible in several natural relaxations of our model.

99 citations


Book ChapterDOI
Ran Canetti, Shai Halevi, Michael Steiner
10 Feb 2005
TL;DR: For weakly verifiable puzzles, it is shown that solving many independent puzzles simultaneously is harder than solving a single instance, and that when the puzzles are not even weakly verifiable, solving many puzzles may be no harder than solving a single one.
Abstract: Is it harder to solve many puzzles than it is to solve just one? This question has different answers, depending on how you define puzzles. For the case of inverting one-way functions it was shown by Yao that solving many independent instances simultaneously is indeed harder than solving a single instance (cf. the transformation from weak to strong one-way functions). The known proofs of that result, however, use in an essential way the fact that for one-way functions, verifying candidate solutions to a given puzzle is easy. We extend this result to the case where solutions are efficiently verifiable only by the party that generated the puzzle. We call such puzzles weakly verifiable. That is, for weakly verifiable puzzles we show that if no efficient algorithm can solve a single puzzle with probability more than ε, then no efficient algorithm can solve n independent puzzles simultaneously with probability more than ε^n. We also demonstrate that when the puzzles are not even weakly verifiable, solving many puzzles may be no harder than solving a single one. Hardness amplification of weakly verifiable puzzles turns out to be closely related to the reduction of soundness error under parallel repetition in computationally sound arguments. Indeed, the proof of Bellare, Impagliazzo and Naor that parallel repetition reduces soundness error in three-round argument systems implies a result similar to our first result, albeit with considerably worse parameters. Also, our second result is an adaptation of their proof that parallel repetition of four-round systems may not reduce the soundness error.
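
A quick numeric illustration of the ε to ε^n amplification stated above; the substance of the theorem is that this bound holds against arbitrary efficient solvers, which the arithmetic below of course does not capture.

```python
# If a single puzzle can be solved with probability at most eps, the theorem
# bounds the probability of solving all n independent puzzles by about eps**n.
eps, n = 0.9, 50
print(eps ** n)    # ~0.005: even a 90%-solvable puzzle becomes hard in bulk
```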

Book ChapterDOI
10 Feb 2005
TL;DR: Several new constructions of collision-resistant hash-functions (CRHFs) from general assumptions are presented, including constructions from two other primitives that are implied by homomorphic encryption: one-round private information retrieval (PIR) protocols and homomorphic one-way commitments.
Abstract: We present several new constructions of collision-resistant hash-functions (CRHFs) from general assumptions. We start with a simple construction of CRHF from any homomorphic encryption. Then, we strengthen this result by presenting constructions of CRHF from two other primitives that are implied by homomorphic encryption: one-round private information retrieval (PIR) protocols and homomorphic one-way commitments.

Book ChapterDOI
Michael Backes, Christian Cachin
10 Feb 2005
TL;DR: In this paper, a complexity-theoretic model for public-key steganography with active attacks is introduced and a relaxation called steganographic security against publicly-detectable replayable adaptive chosen-covertext attacks (SS-PDR-CCA) is formalized.
Abstract: A complexity-theoretic model for public-key steganography with active attacks is introduced. The notion of steganographic security against adaptive chosen-covertext attacks (SS-CCA) and a relaxation called steganographic security against publicly-detectable replayable adaptive chosen-covertext attacks (SS-PDR-CCA) are formalized. These notions are closely related to CCA-security and PDR-CCA-security for public-key cryptosystems. In particular, it is shown that any SS-(PDR-)CCA stegosystem is a (PDR-)CCA-secure public-key cryptosystem and that an SS-PDR-CCA stegosystem for any covertext distribution with sufficiently large min-entropy can be realized from any PDR-CCA-secure public-key cryptosystem with pseudorandom ciphertexts.

Book ChapterDOI
10 Feb 2005
TL;DR: A signature scheme known to be secure against known message attack is analyzed and proved secure against adaptive chosen message attack, and a new decisional assumption, related to high residuosity assumptions, is used to suggest a homomorphic public-key cryptosystem.
Abstract: We demonstrate the cryptographic usefulness of a small subgroup of $\mathbb{Z}_{n}^{*}$ of hidden order. Cryptographic schemes for integer commitment and digital signatures have been suggested over large subgroups of $\mathbb{Z}_{n}^{*}$; by reducing the order of the groups, we obtain quite similar but more efficient schemes. The underlying cryptographic assumption resembles the strong RSA assumption. We analyze a signature scheme known to be secure against known message attack and prove that it is secure against adaptive chosen message attack. This result does not necessarily rely on the use of a small subgroup, but the small subgroup can make the security reduction tighter. We also investigate the case where $\mathbb{Z}_{n}^{*}$ has semi-smooth order. Using a new decisional assumption, related to high residuosity assumptions, we suggest a homomorphic public-key cryptosystem.

Book ChapterDOI
10 Feb 2005
TL;DR: In this article, the authors describe how to securely compute the mean of an attribute value in a database that is shared between two parties, and demonstrate that existing solutions in the literature that could be used to do this leak information, therefore underlining the importance of applying rigorous theoretical analysis rather than settling for ad hoc techniques.
Abstract: In recent years there has been massive progress in the development of technologies for storing and processing of data. If statistical analysis could be applied to such data when it is distributed between several organisations, there could be huge benefits. Unfortunately, in many cases, for legal or commercial reasons, this is not possible. The idea of using the theory of multi-party computation to analyse efficient algorithms for privacy preserving data-mining was proposed by Pinkas and Lindell. The point is that algorithms developed in this way can be used to overcome the apparent impasse described above: the owners of data can, in effect, pool their data while ensuring that privacy is maintained. Motivated by this, we describe how to securely compute the mean of an attribute value in a database that is shared between two parties. We also demonstrate that existing solutions in the literature that could be used to do this leak information, therefore underlining the importance of applying rigorous theoretical analysis rather than settling for ad hoc techniques.

Book ChapterDOI
10 Feb 2005
TL;DR: In this paper, the authors proposed an adaptively secure, completely non-interactive encryption scheme that allows arbitrarily many parties to use a single encryption key to securely encrypt arbitrarily many messages to a given receiver who maintains only a single short decryption key.
Abstract: Adaptively-secure encryption schemes ensure secrecy even in the presence of an adversary who can corrupt parties in an adaptive manner based on public keys, ciphertexts, and secret data of already-corrupted parties. Ideally, an adaptively-secure encryption scheme should, like standard public-key encryption, allow arbitrarily-many parties to use a single encryption key to securely encrypt arbitrarily-many messages to a given receiver who maintains only a single short decryption key. However, it is known that these requirements are impossible to achieve: no non-interactive encryption scheme that supports encryption of an unbounded number of messages and uses a single, unchanging decryption key can be adaptively secure. Impossibility holds even if secure data erasure is possible. We show that this limitation can be overcome by updating the decryption key over time and making some mild assumptions about the frequency of communication between parties. Using this approach, we construct adaptively-secure, completely non-interactive encryption schemes supporting secure encryption of arbitrarily-many messages from arbitrarily-many senders. Our schemes additionally provide forward security and security against chosen-ciphertext attacks.

Book ChapterDOI
10 Feb 2005
TL;DR: A computational soundness theorem is proved for the symbolic analysis of cryptographic protocols which extends an analogous theorem of Abadi and Rogaway to a scenario where the adversary gets to see the encryption of a sequence of adaptively chosen symbolic expressions.
Abstract: We prove a computational soundness theorem for the symbolic analysis of cryptographic protocols which extends an analogous theorem of Abadi and Rogaway (J. of Cryptology 15(2):103–127, 2002) to a scenario where the adversary gets to see the encryption of a sequence of adaptively chosen symbolic expressions. The extension of the theorem of Abadi and Rogaway to such an adaptive scenario is nontrivial, and raises issues related to the classic problem of selective decommitment, which do not appear in the original formulation of the theorem. Although the theorem of Abadi and Rogaway applies only to passive adversaries, our extension to adaptive attacks makes it substantially stronger, and powerful enough to analyze the security of cryptographic protocols of practical interest. We exemplify the use of our soundness theorem in the analysis of group key distribution protocols like those that arise in multicast and broadcast applications. Specifically, we provide cryptographic definitions of security for multicast key distribution protocols both in the symbolic as well as the computational framework and use our theorem to prove soundness of the symbolic definition.

Book ChapterDOI
10 Feb 2005
TL;DR: Using a general computational framework, this work identifies the position of a “master process” in the distributed system, and some limitations on transparent message forwarding within computational complexity bounds, as two main factors.
Abstract: Several compositional forms of simulation-based security have been proposed in the literature, including universal composability, black-box simulatability, and variants thereof. These relations between a protocol and an ideal functionality are similar enough that they can be ordered from strongest to weakest according to the logical form of their definitions. However, determining whether two relations are in fact identical depends on some subtle features that have not been brought out in previous studies. We identify the position of a “master process” in the distributed system, and some limitations on transparent message forwarding within computational complexity bounds, as two main factors. Using a general computational framework, we clarify the relationships between the simulation-based security conditions.

Book ChapterDOI
10 Feb 2005
TL;DR: This work characterizes all weighted threshold access structures that are ideal; the proof of the characterization relies heavily on the strong connection between ideal secret sharing schemes and matroids, as proved by Brickell and Davenport.
Abstract: Weighted threshold secret sharing was introduced by Shamir in his seminal work on secret sharing. In such settings, there is a set of users where each user is assigned a positive weight. A dealer wishes to distribute a secret among those users so that a subset of users may reconstruct the secret if and only if the sum of weights of its users exceeds a certain threshold. A secret sharing scheme is ideal if the size of the domain of shares of each user is the same as the size of the domain of possible secrets (this is the smallest possible size for the domain of shares). The family of subsets authorized to reconstruct the secret in a secret sharing scheme is called an access structure. An access structure is ideal if there exists an ideal secret sharing scheme that realizes it. It is known that some weighted threshold access structures are not ideal, while other nontrivial weighted threshold access structures do have an ideal scheme that realizes them. In this work we characterize all weighted threshold access structures that are ideal. We show that a weighted threshold access structure is ideal if and only if it is a hierarchical threshold access structure (as introduced by Simmons), or a tripartite access structure (these structures, that we introduce here, generalize the concept of bipartite access structures due to Padro and Saez), or a composition of two ideal weighted threshold access structures that are defined on smaller sets of users. We further show that in all those cases the weighted threshold access structure may be realized by a linear ideal secret sharing scheme. The proof of our characterization relies heavily on the strong connection between ideal secret sharing schemes and matroids, as proved by Brickell and Davenport.
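
A small illustration of the definition above, with arbitrary toy weights of our own: the authorized sets are exactly the subsets whose total weight meets the threshold, and the minimal ones are enumerated below.

```python
from itertools import combinations

# Toy weighted threshold access structure: a subset is authorized iff the
# sum of its members' weights reaches the threshold T.
weights = {"A": 4, "B": 3, "C": 3, "D": 1, "E": 1}
T = 6

def authorized(subset):
    return sum(weights[u] for u in subset) >= T

minimal = []
for size in range(1, len(weights) + 1):
    for subset in combinations(sorted(weights), size):
        if authorized(subset) and not any(set(m) <= set(subset) for m in minimal):
            minimal.append(subset)

print(minimal)   # [('A', 'B'), ('A', 'C'), ('B', 'C'), ('A', 'D', 'E')]
```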

Book ChapterDOI
10 Feb 2005
TL;DR: It is shown that the existence of a time-lock puzzle gives an even stronger separation of standard and universal simulatability with respect to computational security.
Abstract: In this work, relations between the security notions standard simulatability and universal simulatability for cryptographic protocols are investigated. A simulatability-based notion of security considers a protocol π as secure as an idealization τ of the protocol task, if and only if every attack on π can be simulated by an attack on τ. Two formalizations, which both provide secure composition of protocols, are common: standard simulatability means that for every π-attack and protocol user H, there is a τ-attack, such that H cannot distinguish π from τ. Universal simulatability means that for every π-attack, there is a τ-attack, such that no protocol user H can distinguish π from τ. Trivially, universal simulatability implies standard simulatability. We show: the converse is true with respect to perfect security, but not with respect to computational or statistical security. Besides, we give a formal definition of a time-lock puzzle, which may be of independent interest. Although the described results do not depend on any computational assumption, we show that the existence of a time-lock puzzle gives an even stronger separation of standard and universal simulatability with respect to computational security.

Book ChapterDOI
10 Feb 2005
TL;DR: This paper provides a general paradigm for constructing secure and error-resilient private-key cryptosystems in the bounded storage model that tolerate a constant fraction of errors, and attain the near optimal parameters achieved by Vadhan's construction (JoC, 2004) in the errorless case.
Abstract: We initiate a study of Maurer's bounded storage model (JoC, 1992) in presence of transmission errors and perhaps other types of errors that cause different parties to have inconsistent views of the public random source. Such errors seem inevitable in any implementation of the model. All previous schemes and protocols in the model assume a perfectly consistent view of the public source from all parties, and do not function correctly in presence of errors, while the private-key encryption scheme of Aumann, Ding and Rabin (IEEE IT, 2002) can be extended to tolerate only an O(1/log(1/ε)) fraction of errors, where ε is an upper bound on the advantage of an adversary. In this paper, we provide a general paradigm for constructing secure and error-resilient private-key cryptosystems in the bounded storage model that tolerate a constant fraction of errors, and attain the near optimal parameters achieved by Vadhan's construction (JoC, 2004) in the errorless case. In particular, we show that any local fuzzy extractor yields a secure and error-resilient cryptosystem in the model, in analogy to the result of Lu (JoC, 2004) that any local strong extractor yields a secure cryptosystem in the errorless case, and construct efficient local fuzzy extractors by extending Vadhan's sample-then-extract paradigm. The main ingredients of our constructions are averaging samplers (Bellare and Rompel, FOCS '94), randomness extractors (Nisan and Zuckerman, JCSS, 1996), error correcting codes, and fuzzy extractors (Dodis, Reyzin and Smith, EUROCRYPT '04).
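
A rough sketch of the sample-then-extract flavor mentioned above, with parameters of our own: a short key selects which positions of the huge public source to read, and a two-universal hash of the sampled bits yields the derived key. The error-resilience (fuzzy-extractor) layer that is the paper's actual contribution is omitted, and naive independent sampling stands in for an averaging sampler.

```python
import secrets

SOURCE_BITS = 10**6                       # stands in for the huge public source
source = secrets.randbits(SOURCE_BITS)    # both parties can read it bit by bit

def sample_bits(positions):
    return sum(((source >> p) & 1) << i for i, p in enumerate(positions))

def parity(x):
    return bin(x).count("1") & 1

# Shared short key, part 1: which positions to read (only these are stored).
# Shared short key, part 2: a random GF(2) matrix used as a two-universal hash.
m, ell = 512, 128
positions = [secrets.randbelow(SOURCE_BITS) for _ in range(m)]
matrix = [secrets.randbits(m) for _ in range(ell)]

sampled = sample_bits(positions)
derived = 0
for row in matrix:
    derived = (derived << 1) | parity(row & sampled)
print(f"derived {ell}-bit key: {derived:032x}")
```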

Book ChapterDOI
10 Feb 2005
TL;DR: Statistical zero-knowledge protocols are shown for statements of the form “plaintext m corresponds to ciphertext c” and “ciphertext c and c' decrypt to the same value” for the Ajtai-Dwork cryptosystem.
Abstract: Ajtai and Dwork proposed a public-key encryption scheme in 1996 which they proved secure under the assumption that the unique shortest vector problem is hard in the worst case. This cryptosystem and its extension by Regev are the only ones known for which security can be proved under a worst case assumption, and as such present a particularly interesting case to study. In this paper, we show statistical zero-knowledge protocols for statements of the form “plaintext m corresponds to ciphertext c” and “ciphertext c and c' decrypt to the same value” for the Ajtai-Dwork cryptosystem. We then show an interactive zero-knowledge proof of plaintext knowledge (PPK) for the Ajtai-Dwork cryptosystem, based directly on the security of the cryptosystem rather than resorting to general interactive zero-knowledge constructions. The witness for these proofs is the randomness used in the encryption.

Book ChapterDOI
10 Feb 2005
TL;DR: It is proved that any “weakly black-box” construction with parameters q(n) and l(n) better than the above lower bounds implies the unconditional existence of strong one-way functions (and, therefore, the existence of a weakly black-box construction with q(n)=0).
Abstract: We continue the study of the efficiency of black-box reductions in cryptography. We focus on the question of constructing strong one-way functions (respectively, permutations) from weak one-way functions (respectively, permutations). To make our impossibility results stronger, we focus on the weakest type of constructions: those that start from a weak one-way permutation and define a strong one-way function. We show that for every “fully black-box” construction of an ε(n)-secure function based on a (1–δ(n))-secure permutation, if q(n) is the number of oracle queries used in the construction and l(n) is the input length of the new function, then we have $q \geq {\it \Omega}(\frac {1}{\delta}\cdot {\rm log}\frac {1}{\epsilon})$ and l ≥ n + ${\it \Omega}$(log 1/ε) – O(log q). This result is proved by showing that fully black-box reductions of strong to weak one-way functions imply the existence of “hitters” and then by applying known lower bounds for hitters. We also show a sort of reverse connection, and we revisit the construction of Goldreich et al. (FOCS 1990) in terms of this reverse connection. Finally, we prove that any “weakly black-box” construction with parameters q(n) and l(n) better than the above lower bounds implies the unconditional existence of strong one-way functions (and, therefore, the existence of a weakly black-box construction with q(n)=0). This result, like the one for fully black-box reductions, is proved by reasoning about the function defined by such a construction when using the identity permutation as an oracle.
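
For orientation, the parameters q(n) and l(n) refer to constructions of the following direct-product shape; the placeholder oracle below is our own stand-in for the weak primitive, and the snippet shows only the syntax and parameter relation, not any security claim.

```python
import hashlib

# A "direct product" construction builds a candidate strong one-way function
# from q calls to a weak one: F(x_1,...,x_q) = (f(x_1), ..., f(x_q)). Here q
# is the number of oracle queries and the new input length is l = q * n, the
# two quantities the lower bounds above constrain.

def weak_f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()      # placeholder for the oracle f

def direct_product(xs):
    """F(x_1,...,x_q): q oracle queries, input length l = q * n."""
    return b"".join(weak_f(x) for x in xs)

n, q = 16, 8                               # n-byte blocks, q oracle calls
xs = [bytes([i]) * n for i in range(q)]
print(len(b"".join(xs)), "input bytes ->", len(direct_product(xs)), "output bytes")
```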

Book ChapterDOI
10 Feb 2005
TL;DR: The limitations of steganography when the sender uses no properties of the underlying channel beyond its entropy and the ability to sample from it are studied; the number of samples the sender must obtain from the channel is shown to be exponential in the rate of the stegosystem, and the first secret-key stegosystem that essentially matches this lower bound regardless of the entropy of the underlying channel is presented.
Abstract: We study the limitations of steganography when the sender is not using any properties of the underlying channel beyond its entropy and the ability to sample from it. On the negative side, we show that the number of samples the sender must obtain from the channel is exponential in the rate of the stegosystem. On the positive side, we present the first secret-key stegosystem that essentially matches this lower bound regardless of the entropy of the underlying channel. Furthermore, for high-entropy channels, we present the first secret-key stegosystem that matches this lower bound statelessly (i.e., without requiring synchronized state between sender and receiver).
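
A sketch of the rejection-sampling idea that such entropy-only stegosystems rely on, with a toy channel sampler and a keyed hash of our own choosing: embedding b hidden bits costs about 2^b channel samples, which is the exponential-in-rate behavior the lower bound above speaks to. The stateful/stateless distinction made in the abstract is ignored here.

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(16)              # shared secret key

def sample_channel() -> bytes:
    """Stand-in for sampling an innocent-looking covertext from the channel."""
    return secrets.token_bytes(32)

def embed(hidden_bits: int, b: int):
    """Resample covertexts until a keyed hash of the sample encodes the bits."""
    attempts = 0
    while True:
        attempts += 1
        cover = sample_channel()
        tag = hmac.new(key, cover, hashlib.sha256).digest()
        if int.from_bytes(tag, "big") % (2 ** b) == hidden_bits:
            return cover, attempts         # expected attempts: about 2**b

def extract(cover: bytes, b: int) -> int:
    tag = hmac.new(key, cover, hashlib.sha256).digest()
    return int.from_bytes(tag, "big") % (2 ** b)

cover, attempts = embed(hidden_bits=5, b=3)
print(extract(cover, 3), "recovered after", attempts, "channel samples")
```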

Book ChapterDOI
10 Feb 2005
TL;DR: Fair Zero-Knowledge, a multi-verifier ZK system where every proof is guaranteed to be “zero-knowledge for all verifiers,” is introduced, along with tools, such as Unique Zero Knowledge, that may be of independent interest.

Abstract: We introduce Fair Zero-Knowledge, a multi-verifier ZK system where every proof is guaranteed to be “zero-knowledge for all verifiers.” That is, if an honest verifier accepts a fair zero-knowledge proof, then he is assured that all other verifiers also learn nothing more than the verity of the statement in question, even if they maliciously collude with a cheating prover. We construct Fair Zero-Knowledge systems based on standard complexity assumptions (specifically, the quadratic residuosity assumption) and an initial, one-time use of a physically secure communication channel (specifically, each verifier sends the prover a private message in an envelope). All other communication occurs (and must occur) on a broadcast channel. The main technical challenge of our construction consists of provably removing any possibility of using steganography in a ZK proof. To overcome this technical difficulty, we introduce tools, such as Unique Zero Knowledge, that may be of independent interest.

Book ChapterDOI
10 Feb 2005
TL;DR: An improved security analysis and a generalization of the [36] scheme are presented, showing that this scheme remains secure for smaller share sizes and leading to the same factor of 2 or 4 improvements for that scheme as well.
Abstract: We present a new robust proactive (and threshold) RSA signature scheme secure with the optimal threshold of t < n/2 corruptions. The new scheme offers a simpler alternative to the best previously known (static) proactive RSA scheme given by Tal Rabin [36], itself a simplification over the previous schemes given by Frankel et al. [18,17]. The new scheme is conceptually simple because all the sharing and proactive re-sharing of the RSA secret key is done modulo a prime, while the reconstruction of the RSA signature employs an observation that the secret can be recovered from such sharing using a simple equation over the integers. This equation was first observed and utilized by Luo and Lu in a design of a simple and efficient proactive RSA scheme [31] which was not proven secure and which, alas, turned out to be completely insecure [29] due to the fact that the aforementioned equation leaks some partial information about the shared secret. Interestingly, this partial information leakage can be proven harmless once the polynomial sharing used by [31] is replaced by top-level additive sharing with second-level polynomial sharing for back-up. Apart from its conceptual simplicity and new techniques of independent interest, the new scheme gives a factor of 2 improvement in speed and share size in the general case, and almost a factor of 4 improvement for the common RSA public exponents 3, 17, or 65537, over the scheme of [36] as analyzed in [63]. However, we also present an improved security analysis and a generalization of the [36] scheme, which shows that this scheme remains secure for smaller share sizes, leading to the same factor of 2 or 4 improvements for that scheme as well.

Book ChapterDOI
10 Feb 2005
TL;DR: This paper shows that a KEM semantically secure against adaptively chosen ciphertext attacks (IND-CCA2) and a DEM semantically secure against adaptively chosen plaintext/ciphertext attacks (IND-P2-C2), along with secure signatures and an ideal certification authority, are sufficient to realize a universally composable (UC) secure channel.
Abstract: For ISO standards on public-key encryption, Shoup introduced the framework of KEM (Key Encapsulation Mechanism), and DEM (Data Encapsulation Mechanism), for formalizing and realizing one-directional hybrid encryption; KEM is a formalization of asymmetric encryption specified for key distribution, and DEM is a formalization of symmetric encryption. This paper investigates a more general hybrid protocol, secure channel, using KEM and DEM, such that KEM is used for distribution of a session key and DEM, along with the session key, is used for multiple bi-directional encrypted transactions in a session. This paper shows that KEM semantically secure against adaptively chosen ciphertext attacks (IND-CCA2) and DEM semantically secure against adaptively chosen plaintext/ciphertext attacks (IND-P2-C2) along with secure signatures and ideal certification authority are sufficient to realize a universally composable (UC) secure channel. To obtain the main result, this paper also shows several equivalence results: UC KEM, IND-CCA2 KEM and NM-CCA2 (non-malleable against CCA2) KEM are equivalent, and UC DEM, IND-P2-C2 DEM and NM-P2-C2 DEM are equivalent.
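
A toy interface sketch of the KEM/DEM split described above, with our own stand-in primitives: a tiny Diffie-Hellman KEM encapsulates a session key, and a hash-based DEM handles the bulk encryption. Nothing here meets the IND-CCA2 / IND-P2-C2 requirements the theorem needs; it only illustrates the hybrid structure.

```python
import hashlib
import hmac
import secrets

p = 1019                                   # toy safe prime, NOT secure
g = 2

def kem_keygen():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

def kem_encap(pk):
    r = secrets.randbelow(p - 2) + 1
    shared = pow(pk, r, p)
    key = hashlib.sha256(shared.to_bytes(2, "big")).digest()
    return pow(g, r, p), key               # (KEM ciphertext, session key)

def kem_decap(sk, kem_ct):
    shared = pow(kem_ct, sk, p)
    return hashlib.sha256(shared.to_bytes(2, "big")).digest()

def keystream(key, length):
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def dem_encrypt(key, msg):                 # one message per key in this toy
    body = bytes(m ^ s for m, s in zip(msg, keystream(key, len(msg))))
    return body + hmac.new(key, body, hashlib.sha256).digest()

def dem_decrypt(key, ct):
    body, tag = ct[:-32], ct[-32:]
    assert hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest())
    return bytes(c ^ s for c, s in zip(body, keystream(key, len(body))))

sk, pk = kem_keygen()
kem_ct, k_sender = kem_encap(pk)           # sender: one KEM call per session
k_receiver = kem_decap(sk, kem_ct)         # receiver recovers the session key
msg = b"session payload"
assert dem_decrypt(k_receiver, dem_encrypt(k_sender, msg)) == msg
print("hybrid KEM/DEM round trip ok")
```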

Book ChapterDOI
10 Feb 2005
TL;DR: Both probabilistic constructions (which also have other properties such as K-wise independence and being computationally indistinguishable from G(N, p(n))) and deterministic constructions (where for each graph size the authors provide a specific graph that captures the properties of G(2^n, p(n)) for slightly smaller quantifier depths) are presented.
Abstract: We construct efficiently computable sequences of random-looking graphs that preserve properties of the canonical random graphs G(2^n, p(n)). We focus on first-order graph properties, namely properties that can be expressed by a formula φ in the language where variables stand for vertices and the only relations are equality and adjacency (e.g. having an isolated vertex is a first-order property ∃x∀y(¬EDGE(x, y))). Random graphs are known to have remarkable structure w.r.t. first order properties, as indicated by the following 0/1 law: for a variety of choices of p(n), any fixed first-order property φ holds for G(2^n, p(n)) with probability tending either to 0 or to 1 as n grows to infinity. We first observe that similar 0/1 laws are satisfied by G(2^n, p(n)) even w.r.t. sequences of formulas {φn}n∈ℕ with bounded quantifier depth, ${\it depth}(\phi_{n}) \leq {\frac {1}{{\rm lg} (1/p(n))}}$. We also demonstrate that 0/1 laws do not hold for random graphs w.r.t. properties of significantly larger quantifier depth. For most choices of p(n), we present efficient constructions of huge graphs with edge density nearly p(n) that emulate G(2^n, p(n)) by satisfying ${\it \Theta} ({\frac {1}{{\rm lg} (1/p(n))}})-0/1$ laws. We show both probabilistic constructions (which also have other properties such as K-wise independence and being computationally indistinguishable from G(N, p(n))), and deterministic constructions where for each graph size we provide a specific graph that captures the properties of G(2^n, p(n)) for slightly smaller quantifier depths.
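
The abstract's running example of a first-order property, evaluated on ordinary G(n, p) samples to show the 0/1 behavior it describes; the paper's pseudorandom constructions on 2^n vertices are not reproduced here.

```python
import random

# "There exists an isolated vertex", i.e. ∃x ∀y ¬EDGE(x, y), checked on
# samples of the ordinary random graph G(n, p) for growing n.

def sample_gnp(n, p, rng):
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def has_isolated_vertex(adj):
    return any(len(neigh) == 0 for neigh in adj)

rng = random.Random(0)
for n in (16, 64, 256):
    hits = sum(has_isolated_vertex(sample_gnp(n, 0.2, rng)) for _ in range(200))
    print(n, hits / 200)   # tends to 0 as n grows for constant p: a 0/1 law in action
```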

Book ChapterDOI
10 Feb 2005
TL;DR: The usability of the Monitored Functionalities framework is illustrated by providing much simpler protocols in the plain model than in an earlier work [19] for some limited functionalities in the server-client model.
Abstract: Definition of security under the framework of Environmental Security (a.k.a Network-Aware Security or Universally Composable Security) typically requires “extractability” of the private inputs of parties running a protocol. Formalizing concepts that appeared in an earlier work [19], we introduce a framework of “Monitored Functionalities,” which allows us to avoid such a requirement from the security definition, while still providing very strong composition properties. We also consider a specialization of the Environmental Security framework by designating one party as a “server” and all other parties as clients. Both these contributions in the work are aimed at being able to provide weaker Environmental Security guarantees to simpler protocols. We illustrate the usability of the Monitored Functionalities framework by providing much simpler protocols in the plain model than in [19] for some limited functionalities in the server-client model.

Book ChapterDOI
10 Feb 2005
TL;DR: Techniques for dealing with expected polynomial-time adversaries in the context of simulation-based security proofs are developed.
Abstract: The standard class of adversaries considered in cryptography is that of strict polynomial-time probabilistic machines (or circuits). However, expected polynomial-time machines are often also considered. For example, there are many zero-knowledge protocols for which the only simulation techniques known run in expected (and not strict) polynomial-time. In addition, it has been shown that expected polynomial-time simulation is essential for achieving constant-round black-box zero-knowledge protocols. This reliance on expected polynomial-time simulation introduces a number of conceptual and technical difficulties. In this paper, we develop techniques for dealing with expected polynomial-time adversaries in the context of simulation-based security proofs.