Showing papers presented at the Theory of Cryptography Conference in 2010


Book ChapterDOI
09 Feb 2010
TL;DR: In this paper, the authors construct a fully secure HIBE scheme with short ciphertexts by designing a new method for realizing Waters' dual system encryption, and prove security under three static assumptions; the previous construction of Boneh, Boyen, and Goh was proven secure only in the selective model, under a non-static assumption depending on the depth of the hierarchy.
Abstract: We construct a fully secure HIBE scheme with short ciphertexts. The previous construction of Boneh, Boyen, and Goh was only proven to be secure in the selective model, under a non-static assumption which depended on the depth of the hierarchy. To obtain full security, we apply the dual system encryption concept recently introduced by Waters. A straightforward application of this technique is insufficient to achieve short ciphertexts, since the original instantiation of the technique includes tags that do not compress. To overcome this challenge, we design a new method for realizing dual system encryption. We provide a system in composite order groups (of three primes) and prove the security of our scheme under three static assumptions.

562 citations


Book ChapterDOI
09 Feb 2010
TL;DR: This work constructs public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically).
Abstract: We construct public-key cryptosystems that remain secure even when the adversary is given any computationally uninvertible function of the secret key as auxiliary input (even one that may reveal the secret key information-theoretically). Our schemes are based on the decisional Diffie-Hellman (DDH) and the Learning with Errors (LWE) problems.

226 citations


Book ChapterDOI
09 Feb 2010
TL;DR: In this paper, it is shown that stateless hardware tokens suffice to base general secure (in fact, UC-secure) computation on the existence of one-way functions, and the first general feasibility result for program obfuscation using stateless tokens is obtained, strengthening the standard notion of obfuscation by providing security against a malicious sender.
Abstract: A number of works have investigated using tamper-proof hardware tokens as tools to achieve a variety of cryptographic tasks. In particular, Goldreich and Ostrovsky considered the problem of software protection via oblivious RAM. Goldwasser, Kalai, and Rothblum introduced the concept of one-time programs: in a one-time program, an honest sender sends a set of simple hardware tokens to a (potentially malicious) receiver. The hardware tokens allow the receiver to execute a secret program specified by the sender’s tokens exactly once (or, more generally, up to a fixed t times). A recent line of work initiated by Katz examined the problem of achieving UC-secure computation using hardware tokens. Motivated by the goal of unifying and strengthening these previous notions, we consider the general question of basing secure computation on hardware tokens. We show that the following tasks, which cannot be realized in the “plain” model, become feasible if the parties are allowed to generate and exchange tamper-proof hardware tokens. Unconditional and non-interactive secure computation. We show that by exchanging simple stateful hardware tokens, any functionality can be realized with unconditional security against malicious parties. In the case of two-party functionalities f(x,y) which take their inputs from a sender and a receiver and deliver their output to the receiver, our protocol is non-interactive and only requires a unidirectional communication of simple stateful tokens from the sender to the receiver. This strengthens previous feasibility results for one-time programs both by providing unconditional security and by offering general protection against malicious senders. As is typically the case for unconditionally secure protocols, our protocol is in fact UC-secure. This improves over previous works on UC-secure computation based on hardware tokens, which provided computational security under cryptographic assumptions. Interactive secure computation from stateless tokens based on one-way functions. We show that stateless hardware tokens are sufficient to base general secure (in fact, UC-secure) computation on the existence of one-way functions. Obfuscation from stateless tokens. We consider the problem of realizing non-interactive secure computation from stateless tokens for functionalities which allow the receiver to provide an arbitrary number of inputs (these are the only functionalities one can hope to realize non-interactively with stateless tokens). By building on recent techniques for resettably secure computation, we obtain a general positive result under standard cryptographic assumptions. This gives the first general feasibility result for program obfuscation using stateless tokens, while strengthening the standard notion of obfuscation by providing security against a malicious sender.

187 citations


Book ChapterDOI
09 Feb 2010
TL;DR: This paper puts forward the notion of "leakage-resilient signatures," which strengthens unforgeability under chosen message attacks (the strongest standard security notion for digital signatures) by additionally allowing the adversary to learn a bounded amount of arbitrary information about the secret state that was accessed during every signature generation.
Abstract: The strongest standard security notion for digital signature schemes is unforgeability under chosen message attacks. In practice, however, this notion can be insufficient due to “side-channel attacks” which exploit leakage of information about the secret internal state. In this work we put forward the notion of “leakage-resilient signatures,” which strengthens the standard security notion by giving the adversary the additional power to learn a bounded amount of arbitrary information about the secret state that was accessed during every signature generation. This notion naturally implies security against all side-channel attacks as long as the amount of information leaked on each invocation is bounded and “only computation leaks information.” The main result of this paper is a construction which gives a (tree-based, stateful) leakage-resilient signature scheme based on any 3-time signature scheme. The amount of information that our scheme can safely leak per signature generation is 1/3 of the information the underlying 3-time signature scheme can leak in total. Signature schemes that remain secure even if a bounded total amount of information is leaked were recently constructed, hence instantiating our construction with these schemes gives the first constructions of provably secure leakage-resilient signature schemes. The above construction assumes that the signing algorithm can sample truly random bits, and thus an implementation would need some special hardware (randomness gates). Simply generating this randomness using a leakage-resilient stream-cipher will in general not work. Our second contribution is a sound general principle to replace uniform random bits in any leakage-resilient construction with pseudorandom ones: run two leakage-resilient stream-ciphers (with independent keys) in parallel and then apply a two-source extractor to their outputs.
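The randomness-replacement principle in the final paragraph is concrete enough to sketch. Below is a minimal Python illustration under stated assumptions: toy_stream is a hypothetical stand-in for a leakage-resilient stream cipher (not a construction from the paper), and the two-source extractor is the inner product over GF(2).

```python
import hashlib

def toy_stream(key: bytes, index: int, nbytes: int = 32) -> bytes:
    """Hypothetical stand-in for one output block of a leakage-resilient
    stream cipher (NOT a leakage-resilient construction itself)."""
    return hashlib.sha256(key + index.to_bytes(8, "big")).digest()[:nbytes]

def inner_product_gf2(x: bytes, y: bytes) -> int:
    """Two-source extractor for a single bit: inner product <x, y> over GF(2)."""
    acc = 0
    for a, b in zip(x, y):
        v = a & b          # AND the bytes, then fold down to the parity bit
        v ^= v >> 4
        v ^= v >> 2
        v ^= v >> 1
        acc ^= v & 1
    return acc

def pseudorandom_bits(key1: bytes, key2: bytes, nbits: int) -> list:
    """Run two independently keyed streams in parallel and extract one
    bit per step, following the paper's general principle."""
    return [inner_product_gf2(toy_stream(key1, i), toy_stream(key2, i))
            for i in range(nbits)]

print(pseudorandom_bits(b"independent-key-1", b"independent-key-2", 16))
```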

153 citations


Book ChapterDOI
09 Feb 2010
TL;DR: In this article, the authors propose a new methodology for rational secret sharing leading to various instantiations (in both the two-party and multi-party settings) that are simple and efficient in terms of computation, share size, and round complexity.
Abstract: We propose a new methodology for rational secret sharing leading to various instantiations (in both the two-party and multi-party settings) that are simple and efficient in terms of computation, share size, and round complexity. Our protocols do not require physical assumptions or simultaneous channels, and can even be run over asynchronous, point-to-point networks. We also propose new equilibrium notions (namely, computational versions of strict Nash equilibrium and stability with respect to trembles) and prove that our protocols satisfy them. These notions guarantee, roughly speaking, that at each point in the protocol there is a unique legal message a party can send. This, in turn, ensures that protocol messages cannot be used as subliminal channels, something achieved in prior work only by making strong assumptions on the communication network.

123 citations


Book ChapterDOI
09 Feb 2010
TL;DR: This work examines several private learning tasks and gives tight bounds on their sample complexity, showing strong separations between the sample complexities of proper and improper private learners (such a separation does not exist for non-private learners), and between the sample complexities of efficient and inefficient proper private learners.
Abstract: Learning is a task that generalizes many of the analyses that are applied to collections of data, and in particular, collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. [Kasiviswanathan, Lee, Nissim, Raskhodnikova, and Smith; FOCS 2008] initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that, ignoring time complexity, every PAC learning task could be performed privately with polynomially many samples, and in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by [Blum, Ligett, and Roth; STOC 2008]), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.

122 citations


Book ChapterDOI
09 Feb 2010
TL;DR: This paper describes a new qTMC scheme where hard and soft position-wise openings both have constant size and shows how this scheme is amenable to constructing independent zero-knowledge sets (i.e., ZKS's that prevent adversaries from correlating their set to the sets of honest provers, as defined by Gennaro and Micali).
Abstract: Introduced by Micali, Rabin and Kilian (MRK), the basic primitive of zero-knowledge sets (ZKS) allows a prover to commit to a secret set S so as to be able to prove statements such as $x \in S$ or $x \not\in S$. Chase et al. showed that ZKS protocols are underlain by a cryptographic primitive termed mercurial commitment. A (trapdoor) mercurial commitment has two commitment procedures. At committing time, the committer can choose not to commit to a specific message and rather generate a dummy value which it will be able to softly open to any message without being able to completely open it. Hard commitments, on the other hand, can be hardly or softly opened to only one specific message. At Eurocrypt 2008, Catalano, Fiore and Messina (CFM) introduced an extension called trapdoor q-mercurial commitment (qTMC), which allows committing to a vector of q messages. These qTMC schemes are interesting since their openings w.r.t. specific vector positions can be short (ideally, the opening length should not depend on q), which provides zero-knowledge sets with much shorter proofs when such a commitment is combined with a Merkle tree of arity q. The CFM construction notably features short proofs of non-membership as it makes use of a qTMC scheme with short soft openings. A problem left open is that hard openings still have size O(q), which prevents proofs of membership from being as compact as those of non-membership. In this paper, we solve this open problem and describe a new qTMC scheme where hard and soft position-wise openings both have constant size. We then show how our scheme is amenable to constructing independent zero-knowledge sets (i.e., ZKS's that prevent adversaries from correlating their set to the sets of honest provers, as defined by Gennaro and Micali). Our solution retains the short proof property for this important primitive as well.
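The efficiency motivation is easy to see numerically: committing to N values with a tree of arity q shortens authentication paths from log_2(N) to log_q(N) levels. A plain hash-based sketch (an ordinary Merkle tree, not the paper's pairing-based qTMC) follows.

```python
import hashlib, math

def merkle_root(leaves: list, q: int) -> bytes:
    """Root of a Merkle tree of arity q over the given leaves."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + q])).digest()
                 for i in range(0, len(level), q)]
    return level[0]

N, q = 4096, 8
root = merkle_root([i.to_bytes(4, "big") for i in range(N)], q)
# A membership proof walks one path to the root: log_q(N) levels, with each
# level contributing one position-wise opening among the node's q children.
print(math.ceil(math.log(N, q)))   # 4 levels, versus log2(4096) = 12
```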

114 citations


Book ChapterDOI
09 Feb 2010
TL;DR: This work constructs an obfuscator for a more complex algebraic functionality: testing for membership in a hyperplane (of constant dimension), and proves the security of the obfuscator under a new strong variant of the Decisional Diffie-Hellman assumption.
Abstract: Previous work on program obfuscation gives strong negative results for general-purpose obfuscators, and positive results for obfuscating simple functions such as equality testing (point functions). In this work, we construct an obfuscator for a more complex algebraic functionality: testing for membership in a hyperplane (of constant dimension). We prove the security of the obfuscator under a new strong variant of the Decisional Diffie-Hellman assumption. Finally, we show a cryptographic application of the new obfuscator to digital signatures.
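For concreteness, the functionality being obfuscated is just a modular inner-product test. The sketch below shows the predicate in the clear; the obfuscator itself, roughly speaking, publishes the normal vector only in the exponent of a group where the paper's strong DDH variant is assumed to hold.

```python
p = 101  # toy prime; the vector a (the hyperplane's normal) is the secret

def in_hyperplane(a: list, x: list) -> bool:
    """Membership test: is <a, x> = 0 (mod p)?"""
    return sum(ai * xi for ai, xi in zip(a, x)) % p == 0

a = [3, 5, 7]
print(in_hyperplane(a, [1, 4, 40]))   # True:  3 + 20 + 280 = 303 = 3 * 101
print(in_hyperplane(a, [1, 4, 89]))   # False: 3 + 20 + 623 = 646 = 6*101 + 40
```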

110 citations


Book ChapterDOI
09 Feb 2010
TL;DR: In this paper, a variant of Regev's cryptosystem with a new choice of parameters is presented and proven semantically secure based on the worst-case lattice problem GapSVP, and a threshold cryptosystem with a very efficient, non-interactive decryption protocol is constructed from it.
Abstract: We present a variant of Regev's cryptosystem first presented in [Reg05], but with a new choice of parameters. By a recent classical reduction by Peikert, we prove the scheme semantically secure based on the worst-case lattice problem GapSVP. From this we construct a threshold cryptosystem which has a very efficient and non-interactive decryption protocol. We prove the threshold cryptosystem secure against passive adversaries corrupting all but one of the players, and against active adversaries corrupting less than one third of the players. We also describe how one can build a distributed key generation protocol. In the final part of the paper we show how one can, in zero-knowledge, prove knowledge of the plaintext contained in a given ciphertext from Regev's original cryptosystem or our variant. The proof is of size only a constant times the size of the public key.
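As background for the variant described above, here is a toy sketch of Regev's original cryptosystem with deliberately tiny, insecure demo parameters; the paper's actual new parameter choice (which enables the GapSVP reduction and the threshold decryption) is not reproduced here.

```python
import random

n, q, m_rows = 8, 257, 40   # toy parameters, far too small to be secure

def keygen():
    s = [random.randrange(q) for _ in range(n)]                    # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m_rows)]
    e = [random.choice([-1, 0, 1]) for _ in range(m_rows)]         # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m_rows)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m_rows) if random.random() < 0.5]        # random subset
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 0 if min(d, q - d) < q // 4 else 1       # closer to 0, or to q/2?

s, pk = keygen()
assert all(decrypt(s, encrypt(pk, b)) == b for b in (0, 1) for _ in range(20))
```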

82 citations


Book ChapterDOI
09 Feb 2010
TL;DR: This paper proposes a semantically-secure public-key encryption scheme whose security is polynomial-time equivalent to the hardness of solving random instances of the subset sum problem, along with an oblivious transfer protocol that is secure against semi-honest adversaries.
Abstract: We propose a semantically-secure public-key encryption scheme whose security is polynomial-time equivalent to the hardness of solving random instances of the subset sum problem. The subset sum assumption required for the security of our scheme is weaker than that of existing subset-sum based encryption schemes, namely the lattice-based schemes of Ajtai and Dwork (STOC’97), Regev (STOC’03, STOC’05), and Peikert (STOC’09). Additionally, our proof of security is simple and direct. We also present a natural variant of our scheme that is secure against key-leakage attacks, and an oblivious transfer protocol that is secure against semi-honest adversaries.

75 citations


Book ChapterDOI
09 Feb 2010
TL;DR: Tight connections between several cryptographic primitives, namely encryption with weakly random keys, encryption with key-dependent messages (KDM), and obfuscation of point functions with multi-bit output (which the authors call multi-bit point functions, or MBPFs, for short) are shown.
Abstract: We show tight connections between several cryptographic primitives, namely encryption with weakly random keys, encryption with key-dependent messages (KDM), and obfuscation of point functions with multi-bit output (which we call multi-bit point functions, or MBPFs, for short). These primitives, which have been studied mostly separately in recent works, bear some apparent similarities, both in the flavor of their security requirements and in the flavor of their constructions and assumptions. Still, rigorous connections have not been drawn. Our results can be interpreted as indicating that MBPF obfuscators imply a very strong form of encryption that simultaneously achieves security for weakly-random keys and key-dependent messages as special cases. Similarly, each one of the other primitives implies a certain restricted form of MBPF obfuscation. Our results carry both constructions and impossibility results from one primitive to others. In particular: The recent impossibility result for KDM security of Haitner and Holenstein (TCC ’09) carries over to MBPF obfuscators. The Canetti-Dakdouk construction of MBPF obfuscators based on a strong variant of the DDH assumption (EC ’08) gives an encryption scheme which is secure w.r.t. any weak key distribution of super-logarithmic min-entropy (and in particular, also has very strong leakage resilient properties). All the recent constructions of encryption schemes that are secure w.r.t. weak keys imply a weak form of MBPF obfuscators.

Book ChapterDOI
09 Feb 2010
TL;DR: In this article, a general parallel-repetition theorem with an efficient reduction is presented, and it is shown that parallel repetition reduces the soundness error at an exponential rate in any public-coin argument, and more generally, any argument where the verifier's messages, but not necessarily its decision to accept or reject, can be efficiently simulated with noticeable probability.
Abstract: We present a general parallel-repetition theorem with an efficient reduction. As a corollary of this theorem we establish that parallel repetition reduces the soundness error at an exponential rate in any public-coin argument, and more generally, any argument where the verifier’s messages, but not necessarily its decision to accept or reject, can be efficiently simulated with noticeable probability.

Book ChapterDOI
Vladimir Kolesnikov
09 Feb 2010
TL;DR: This work proposes a truly efficient String Oblivious Transfer (OT) technique relying on a resettable (actually, stateless) tamper-proof token, which is secure against a covert sender and a malicious receiver, and is sequentially composable.
Abstract: Secure function evaluation (SFE) requires expensive public key operations for each input bit of the function. This cost can be avoided by using tamper-proof hardware. However, all known efficient techniques require the hardware to have long-term secure storage and to be resistant to reset or duplication attacks. This is due to the intrinsic use of counters or erasures. Known techniques that use resettable tokens rely on expensive primitives, such as generic concurrent ZK, and are out of reach of practice. We propose a truly efficient String Oblivious Transfer (OT) technique relying on a resettable (actually, stateless) tamper-proof token. Our protocols require between 6 and 27 symmetric key operations, depending on the model. Our OT is secure against a covert sender and a malicious receiver, and is sequentially composable. If the token is semi-honest (e.g. if it is provided by a trusted entity, but adversarially initialized), then our protocol is secure against malicious adversaries in the concurrent execution setting. Only one party is required to provide the token, which makes it appropriate for typical asymmetric client-server scenarios (banking, TV, etc.).

Book ChapterDOI
09 Feb 2010
TL;DR: The first protocol for distributed RSA key generation is presented which is constant round, secure against malicious adversaries and has a negligibly small bound on the error probability, even using only one iteration of the underlying primality test on each candidate number.
Abstract: We present the first protocol for distributed RSA key generation which is constant round, secure against malicious adversaries and has a negligibly small bound on the error probability, even using only one iteration of the underlying primality test on each candidate number.

Book ChapterDOI
09 Feb 2010
TL;DR: This paper develops a theoretical framework of "related-secret secure" cryptographic primitives, a class of primitives which includes related-key secure blockciphers and PRFs, and shows that while a single related-secret pseudorandom bit is sufficient and necessary to create related-key secure blockciphers, hard-core bits with typical proofs are not related-secret pseudorandom.
Abstract: Related-key attacks are attacks against constructions which use a secret key (such as a blockcipher) in which an attacker attempts to exploit known or chosen relationships among keys to circumvent security properties. Security against related-key attacks has been a subject of study in numerous recent cryptographic papers. However, most of these results are attacks on specific constructions, while there has been little positive progress on constructing related-key secure primitives. In this paper, we attempt to address the question of whether related-key secure blockciphers can be built from traditional cryptographic primitives. We develop a theoretical framework of "related-secret secure" cryptographic primitives, a class of primitives which includes related-key secure blockciphers and PRFs. We show that while a single related-secret pseudorandom bit is sufficient and necessary to create related-key secure blockciphers, hard-core bits with typical proofs are not related-secret pseudorandom. Since the pseudorandomness of hard-core bits is the essential technique known to make pseudorandomness from assumptions of simple hardness, this presents a very strong barrier to the development of provably related-key secure blockciphers based on standard hardness assumptions.
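For readers who want the standard reference point, the canonical hard-core bit "with a typical proof" is the Goldreich-Levin predicate, sketched below; the paper's point is that proofs of this style do not yield related-secret pseudorandomness.

```python
def gl(x: int, r: int) -> int:
    """Goldreich-Levin hard-core predicate: inner product <x, r> over GF(2)."""
    return bin(x & r).count("1") % 2

# For a one-way function f, the bit gl(x, r) is unpredictable given (f(x), r)
# in the ordinary single-secret sense; the paper asks whether such bits stay
# pseudorandom when related secrets x, x', ... are used.
assert gl(0b1011, 0b0110) == 1   # 1011 AND 0110 = 0010, which has parity 1
```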

Book ChapterDOI
09 Feb 2010
TL;DR: This paper describes the first domain extender for ideal ciphers: a construction based on a 3-round Feistel that is indifferentiable from a 2n-bit ideal cipher given an n-bit ideal cipher, together with a simple attack showing that 2 rounds are not enough.
Abstract: We describe the first domain extender for ideal ciphers, i.e. we show a construction that is indifferentiable from a 2n-bit ideal cipher, given an n-bit ideal cipher. Our construction is based on a 3-round Feistel, and is more efficient than first building an n-bit random oracle from an n-bit ideal cipher (as in [9]) and then a 2n-bit ideal cipher from an n-bit random oracle (as in [10], using a 6-round Feistel). We also show that 2 rounds are not enough for indifferentiability by exhibiting a simple attack. We also consider our construction in the standard model: we show that 2 rounds are enough to get a 2n-bit tweakable block-cipher from an n-bit tweakable block-cipher, and we show that with 3 rounds we can get beyond the birthday security bound.
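A minimal sketch of the domain-doubling shape of the construction, under stated assumptions: a 3-round balanced Feistel turns an n-bit primitive into an invertible 2n-bit permutation. The round functions here are hypothetical stand-ins; the paper derives them from the n-bit ideal cipher with careful keying that this sketch does not reproduce.

```python
import hashlib

N = 8  # half-block size in bytes: builds a 16-byte block from an 8-byte one

def round_fn(round_key: bytes, half: bytes) -> bytes:
    """Stand-in for an n-bit ideal-cipher call used as a round function."""
    return hashlib.sha256(round_key + half).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel3_encrypt(keys, block: bytes) -> bytes:
    L, R = block[:N], block[N:]
    for k in keys:                                  # 3 rounds
        L, R = R, xor(L, round_fn(k, R))
    return L + R

def feistel3_decrypt(keys, block: bytes) -> bytes:
    L, R = block[:N], block[N:]
    for k in reversed(keys):                        # undo rounds in reverse
        L, R = xor(R, round_fn(k, L)), L
    return L + R

keys = [b"k1", b"k2", b"k3"]
pt = bytes(range(16))
assert feistel3_decrypt(keys, feistel3_encrypt(keys, pt)) == pt
```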

Book ChapterDOI
09 Feb 2010
TL;DR: A tight parallel repetition theorem for public-coin interactive arguments is shown by giving a tight analysis for a reduction algorithm of Hastad et al.
Abstract: We study efficient parallel repetition theorems for several classes of interactive arguments and obtain the following results: We show a tight parallel repetition theorem for public-coin interactive arguments by giving a tight analysis for a reduction algorithm of Hastad et al. [HPPW08]. That is, n-fold parallel repetition decreases the soundness error from δ to δ^n. The crux of our improvement is a new analysis that avoids using Raz's Sampling Lemma, which is the key ingredient in the previous results. We give a new security analysis to strengthen a parallel repetition theorem of Hastad et al. for a more general class of arguments. We show that n-fold parallel repetition decreases the soundness error from δ to δ^{n/2}, which is almost tight. In particular, we remove the dependency on the number of rounds in the bound, and as a consequence, extend the "concurrent" repetition theorem of Wikstrom [Wik09] to this model. We obtain a way to turn any interactive argument into one in the class above using fully homomorphic encryption schemes. This gives a way to amplify the soundness of any interactive argument without increasing the round complexity. We give a simple and generic transformation which shows that tight direct product theorems imply almost-tight Chernoff-type theorems. This extends our results to Chernoff-type theorems, and gives an alternative proof of the Chernoff-type theorem of Impagliazzo et al. [IJK09] for weakly-verifiable puzzles.
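The quantitative gap between the two bounds above is easy to illustrate numerically (arbitrary demo values δ = 1/2 and n = 20):

```python
delta, n = 0.5, 20   # arbitrary demo values
print(delta ** n)         # ~9.5e-07: tight bound for public-coin arguments
print(delta ** (n / 2))   # ~9.8e-04: almost-tight bound for the wider class
```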

Book ChapterDOI
09 Feb 2010
TL;DR: This work initiates a rigorous study of completeness for primitives that allow fair computation, and shows that for fair secure computation, no primitive of size O(log k) is complete, where k is a security parameter.
Abstract: For secure two-party and multi-party computation with abort, classification of which primitives are complete has been extensively studied in the literature. However, for fair secure computation, where (roughly speaking) either all parties learn the output or none do, the question of complete primitives has remained largely unstudied. In this work, we initiate a rigorous study of completeness for primitives that allow fair computation. We show the following results: No "short" primitive is complete for fairness. In surprising contrast to other notions of security for secure two-party computation, we show that for fair secure computation, no primitive of size O(log k) is complete, where k is a security parameter. This is the case even if we can enforce parallelism in calls to the primitives (i.e., the adversary does not get output from any primitive in a parallel call until it sends input to all of them). This negative result holds regardless of any computational assumptions. A fairness hierarchy. We clarify the fairness landscape further by exhibiting the existence of a "fairness hierarchy". We show that for every "short" l = O(log k), no protocol making (serial) access to any l-bit primitive can be used to construct even an (l+1)-bit simultaneous broadcast. Positive results. To complement the negative results, we exhibit a k-bit primitive that is complete for two-party fair secure computation. We show how to generalize this result to the multi-party setting. Fairness combiners. We also introduce the question of constructing a protocol for fair secure computation from primitives that may be faulty. We show that this is possible when a majority of the instances are honest. On the flip side, we show that this result is tight: no functionality is complete for fairness if half (or more) of the instances can be malicious.

Book ChapterDOI
09 Feb 2010
TL;DR: Aumann and Lindell defined security against covert attacks, where the adversary is malicious but is only caught cheating with a certain probability; this paper shows how to compile a passively secure protocol for honest majority into one that is secure against covert attacks and catches cheating with probability 1/4.
Abstract: Aumann and Lindell defined security against covert attacks, where the adversary is malicious, but is only caught cheating with a certain probability. The idea is that in many real-world cases, a large probability of being caught is sufficient to prevent the adversary from trying to cheat. In this paper, we show how to compile a passively secure protocol for honest majority into one that is secure against covert attacks, again for honest majority and catches cheating with probability 1/4. The cost of the modified protocol is essentially twice that of the original plus an overhead that only depends on the number of inputs.

Book ChapterDOI
09 Feb 2010
TL;DR: This paper shows a compiler that, starting from a public-coin honest-verifier statistical zero-knowledge argument of knowledge for a language L, produces an argument system for L that is concurrent non-malleable zero-knowledge under non-adaptive inputs (the best one can hope to achieve), a notion previously achieved only by Barak, Prabhakaran and Sahai.
Abstract: Ever since the invention of zero-knowledge by Goldwasser, Micali, and Rackoff [1], zero-knowledge has become a central building block in cryptography, with numerous applications ranging from electronic cash to digital signatures. The properties of zero-knowledge range from the simplest (and not particularly useful in practice) requirements, such as honest-verifier zero-knowledge, to the most demanding (and most useful in applications), such as non-malleable and concurrent zero-knowledge. In this paper, we study the complexity of efficient zero-knowledge reductions from the first type to the second type. More precisely, under a standard complexity assumption (DDH), on input a public-coin honest-verifier statistical zero-knowledge argument of knowledge π′ for a language L, we show a compiler that produces an argument system π for L that is concurrent non-malleable zero-knowledge (under non-adaptive inputs, which is the best one can hope to achieve [2,3]). If κ is the security parameter, the overhead of our compiler is as follows: the round complexity of π is $r+\tilde{O}(\log\kappa)$ rounds, where r is the round complexity of π′; the new prover $\mathcal{P}$ (resp., the new verifier $\mathcal{V}$) incurs an additional overhead of (at most) $r+\kappa\cdot\tilde{O}(\log^2\kappa)$ modular exponentiations; if tags of length $\tilde{O}(\log\kappa)$ are provided, the overhead is only $r+\tilde{O}(\log^2\kappa)$ modular exponentiations. The only previous concurrent non-malleable zero-knowledge (under non-adaptive inputs) argument was achieved by Barak, Prabhakaran and Sahai [4]. Their construction, however, mainly focuses on a feasibility result rather than efficiency, and requires expensive $\mathcal{NP}$-reductions.

Book ChapterDOI
09 Feb 2010
TL;DR: It is proved that there is no black box construction of correlation secure injective trapdoor functions from classic trapdoor permutations, even if the latter are assumed to be one-way for inputs from high-entropy, rather than uniform, distributions.
Abstract: A family of trapdoor functions is one-way under correlated inputs if no efficient adversary can invert it even when given the value of the function on multiple correlated inputs. This powerful primitive was introduced at TCC 2009 by Rosen and Segev, who use it in an elegant black box construction of a chosen-ciphertext secure public-key encryption scheme. In this work we continue the study of security under correlated inputs, and prove that there is no black box construction of correlation secure injective trapdoor functions from classic trapdoor permutations, even if the latter are assumed to be one-way for inputs from high-entropy, rather than uniform, distributions. Our negative result holds for all input distributions where each x_i is determined by the remaining n−1 coordinates. The techniques we employ for proving lower bounds about trapdoor permutations are new and quite general, and we believe that they will find other applications in the area of black-box separations.

Book ChapterDOI
Ueli Maurer, Stefano Tessaro
09 Feb 2010
TL;DR: This paper proves a computational version of an information-theoretic hardcore lemma and, as an application, shows an optimally efficient construction for converting a weak PRG for any ε < 1 into a strong PRG, by proving that the intuitive construction of applying an extractor to an appropriate number of independent weak PRG outputs yields a strong PRG.
Abstract: It is well known that two random variables X and Y with the same range can be viewed as being equal (in a well-defined sense) with probability 1−d(X,Y), where d(X,Y) is their statistical distance, which in turn is equal to the best distinguishing advantage for X and Y. In other words, if the best distinguishing advantage for X and Y is ε, then with probability 1−ε they are completely indistinguishable. This statement, which can be seen as an information-theoretic version of a hardcore lemma, is for example very useful for proving indistinguishability amplification results. In this paper we prove the computational version of such a hardcore lemma, thereby extending the concept of hardcore sets from the context of computational hardness to the context of computational indistinguishability. This paradigm promises to have many applications in cryptography and complexity theory. It is proven both in a non-uniform and a uniform setting. For example, for a weak pseudorandom generator (PRG) for which the (computational) distinguishing advantage is known to be bounded by ε (e.g. $\epsilon=\frac{1}{2}$), one can define an event on the seed of the PRG which has probability at least 1−ε and such that, conditioned on the event, the output of the PRG is essentially indistinguishable from a string with almost maximal min-entropy, namely log(1/(1−ε)) less than its length. As an application, we show an optimally efficient construction for converting a weak PRG for any ε<1 into a strong PRG by proving that the intuitive construction of applying an extractor to an appropriate number of independent weak PRG outputs yields a strong PRG. This improves strongly over the best previous construction for security amplification of PRGs, which does not work for $\epsilon \geq \frac{1}{2}$ and requires the seed of the constructed strong PRG to be very long.
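As a worked instance of the min-entropy statement above (plugging ε = 1/2 into the quoted formulas, with ℓ the PRG output length and E the event on the seed):

```latex
\Pr[\mathcal{E}] \ \ge\ 1 - \epsilon \ =\ \tfrac{1}{2},
\qquad
H_{\infty}\bigl(\text{PRG output} \,\big|\, \mathcal{E}\bigr)
\ \approx\ \ell - \log\frac{1}{1-\epsilon} \ =\ \ell - 1 .
```

That is, conditioned on a seed event of probability at least 1/2, the output is essentially indistinguishable from a string missing only one bit of min-entropy.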

Book ChapterDOI
09 Feb 2010
TL;DR: This paper gives a variant of the Naor-Yung (NY) paradigm that leads to practical, fully IND-CCA secure encryption schemes whose security can be based on a generic class of algebraic complexity assumptions, using hash proof systems in the NY way, namely solely as a device to prove consistency.
Abstract: The Naor-Yung (NY) paradigm shows how to build a chosen-ciphertext secure encryption scheme from three conceptual ingredients: a weakly (i.e., IND-CPA) secure encryption scheme; a "replication strategy" that specifies how to use the weakly secure encryption scheme (concretely, a NY-encryption contains several weak encryptions of the same plaintext); and a non-interactive zero-knowledge (NIZK) proof system to show that a given ciphertext is consistent, i.e., contains weak encryptions of the same plaintext. The NY paradigm served both as a breakthrough proof-of-concept, and as an inspiration to subsequent constructions. However, the NY construction leads to impractical encryption schemes, due to the usually prohibitively expensive NIZK proof. In this contribution, we give a variant of the NY paradigm that leads to practical, fully IND-CCA secure encryption schemes whose security can be based on a generic class of algebraic complexity assumptions. Our approach refines NY's approach as follows: Our sole computational assumption is that of a Diffie-Hellman (DH) type two-move key exchange protocol, interpreted as a weakly secure key encapsulation mechanism (KEM). Our "replication strategy" is as follows: key generation consists of replicating the KEM several times, but only the first pass. Encryption then consists of performing the second pass with respect to all of these, but with the same random coins in each instance. For proving consistency of a given ciphertext, we employ a practical universal hash proof system, case-tailored to our KEM and replication strategy. We instantiate our paradigm both from computational Diffie-Hellman (CDH) and from RSA type assumptions. This way, practical IND-CCA secure encryption schemes based on search problems can be built and explained in a generic, NY-like fashion. We would like to stress that while we generalize universal hash proof systems as a proof system, we do not follow or generalize the approach of Cramer and Shoup to build IND-CCA secure encryption. Their approach uses specific hash proof systems that feature, on top of a NIZK property, a computational indistinguishability property. Hence they necessarily build upon decisional assumptions, whereas we show how to implement our approach with search assumptions. Our approach uses hash proof systems in the NY way, namely solely as a device to prove consistency. In our case, secrecy is provided by the "weak encryption" component, which allows us to embed search problems.
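The "replication strategy" paragraph is concrete enough to sketch. Below is a hypothetical instantiation with hashed Diffie-Hellman over a toy group as the two-move KEM: key generation replicates only the first pass, and encapsulation reuses one set of random coins across all instances, so the ciphertext carries a single g^r. The consistency-proving hash proof system is omitted entirely.

```python
import hashlib, secrets

p, g, k = 2**61 - 1, 3, 3   # toy group parameters and replication factor k

def keygen():
    sks = [secrets.randbelow(p - 2) + 1 for _ in range(k)]   # k first passes
    pks = [pow(g, x, p) for x in sks]
    return sks, pks

def encapsulate(pks):
    r = secrets.randbelow(p - 2) + 1        # ONE set of coins for all instances
    c = pow(g, r, p)                        # the shared second pass
    keys = [hashlib.sha256(str(pow(y, r, p)).encode()).digest() for y in pks]
    return c, keys

def decapsulate(sks, c):
    return [hashlib.sha256(str(pow(c, x, p)).encode()).digest() for x in sks]

sks, pks = keygen()
c, keys = encapsulate(pks)
assert decapsulate(sks, c) == keys   # each instance derives the same key
```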

Book ChapterDOI
09 Feb 2010
TL;DR: It is proved that every ideal hierarchical access structure is the port of a representable matroid and, more specifically, every ideal structure in this family admits ideal linear secret sharing schemes over fields of all characteristics.
Abstract: Hierarchical secret sharing is among the most natural generalizations of threshold secret sharing, and it has attracted a lot of attention from the invention of secret sharing until nowadays. Several constructions of ideal hierarchical secret sharing schemes have been proposed, but it was not known what access structures admit such a scheme. We solve this problem by providing a natural definition for the family of the hierarchical access structures and, more importantly, by presenting a complete characterization of the ideal hierarchical access structures, that is, the ones admitting an ideal secret sharing scheme. Our characterization deals with the properties of the hierarchically minimal sets of the access structure, which are the minimal qualified sets whose participants are in the lowest possible levels in the hierarchy. By using our characterization, it can be efficiently checked whether any given hierarchical access structure that is defined by its hierarchically minimal sets is ideal. We use the well known connection between ideal secret sharing and matroids and, in particular, the fact that every ideal access structure is a matroid port. In addition, we use recent results on ideal multipartite access structures and the connection between multipartite matroids and integer polymatroids. We prove that every ideal hierarchical access structure is the port of a representable matroid and, more specifically, we prove that every ideal structure in this family admits ideal linear secret sharing schemes over fields of all characteristics. In addition, methods to construct such ideal schemes can be derived from the results in this paper and the aforementioned ones on ideal multipartite secret sharing. Finally, we use our results to find a new proof for the characterization of the ideal weighted threshold access structures that is simpler than the existing one.

Book ChapterDOI
09 Feb 2010
TL;DR: In this paper, the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions, were studied for the common reference string model, the uniform random string model and the key-registration authority model.
Abstract: We study the necessary and sufficient assumptions for universally composable (UC) computation, both in terms of setup and computational assumptions. We look at the common reference string model, the uniform random string model and the key-registration authority model (KRA), and provide new results for all of them. Perhaps most interestingly, we show that: For even the minimal meaningful KRA, where we only assume that the secret key is a value which is hard to compute from the public key, one can UC securely compute any poly-time functionality if there exists a passive secure oblivious-transfer protocol for the stand-alone model. Since a KRA where the secret keys can be computed from the public keys is useless, and some setup assumption is needed for UC secure computation, this establishes the best we could hope for in the KRA model: any non-trivial KRA is sufficient for UC computation. We show that in the KRA model one-way functions are sufficient for UC commitment and UC zero-knowledge. These are the first examples of UC secure protocols for non-trivial tasks which do not assume the existence of public-key primitives. In particular, the protocols show that non-trivial UC computation is possible in Minicrypt.

Book ChapterDOI
09 Feb 2010
TL;DR: Strong evidence is provided that while there are efficient protocols allowing a prover to cheat only with negligibly small probability in the case of known order groups, for hidden order groups this probability is bounded below by 1/2 for all efficient Σ-protocols not using common reference strings or the like.
Abstract: Efficient zero-knowledge proofs of knowledge for group homomorphisms are essential for numerous systems in applied cryptography. Especially, Σ-protocols for proving knowledge of discrete logarithms in known and hidden order groups are of prime importance. Yet, while these proofs can be performed very efficiently within groups of known order, for hidden order groups the respective proofs are far less efficient. This paper shows strong evidence that this efficiency gap cannot be bridged. Namely, while there are efficient protocols allowing a prover to cheat only with negligibly small probability in the case of known order groups, we provide strong evidence that for hidden order groups this probability is bounded below by 1/2 for all efficient Σ-protocols not using common reference strings or the like. We prove our results for a comprehensive class of Σ-protocols in the generic group model, and further strengthen them by investigating certain instantiations in the plain model.
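For contrast with the hidden-order setting, the efficient known-order case mentioned above is the classic Schnorr Σ-protocol for knowledge of a discrete logarithm; here is a toy, honest-parties-only sketch:

```python
import secrets

p, q = 2267, 103                 # toy parameters: q is prime and divides p - 1
g = pow(2, (p - 1) // q, p)      # generator of the order-q subgroup

x = secrets.randbelow(q)         # prover's witness
y = pow(g, x, p)                 # public statement: y = g^x

r = secrets.randbelow(q)
a = pow(g, r, p)                 # commitment (prover -> verifier)
c = secrets.randbelow(q)         # challenge  (verifier -> prover)
s = (r + c * x) % q              # response   (prover -> verifier)

assert pow(g, s, p) == (a * pow(y, c, p)) % p   # verifier's check: g^s = a*y^c
```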

Book ChapterDOI
Charanjit S. Jutla
09 Feb 2010
TL;DR: In this paper, the authors consider weakly-verifiable puzzles, where the responder may not be able to verify for itself whether it answered the challenge correctly, and show that the hardness of the threshold k-wise direct product of such puzzles is bounded arbitrarily close to the Chernoff bound.
Abstract: We consider weakly-verifiable puzzles, which are challenge-response puzzles such that the responder may not be able to verify for itself whether it answered the challenge correctly. We consider the k-wise direct product of such puzzles, where the responder now has to solve k puzzles chosen independently in parallel. Canetti et al. have earlier shown that such direct product puzzles have a hardness which rises exponentially with k. In the threshold case addressed by Impagliazzo et al., the responder is required to answer correctly a fraction of challenges above a threshold. The bound on the hardness of this threshold parallel version was shown to be similar to the Chernoff bound, but the constants in the exponent are rather weak. Namely, Impagliazzo et al. show that for a puzzle for which the probability of failure is δ, the probability of failing on less than (1−γ)δk out of k puzzles, for any parallel strategy, is at most $e^{-\gamma^2\delta k/40}$. In this paper, we develop new techniques to bound this probability, and show that it is arbitrarily close to the Chernoff bound. To be precise, the bound is $e^{-\gamma^2(1-\gamma) \delta k/2}$. We show that given any responder that solves k parallel puzzles with a good threshold, there is a uniformized parallel solver who has the same threshold of solving k parallel puzzles, while being oblivious to the permutation of the puzzles. This enhances the analysis considerably, and may be of independent interest.
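Plugging demo numbers into the two exponents quoted above shows the size of the improvement (δ = 0.1, γ = 0.5, k = 1000 are arbitrary illustrative values):

```python
from math import exp

delta, gamma, k = 0.1, 0.5, 1000   # arbitrary illustrative values
print(exp(-gamma**2 * delta * k / 40))               # ~0.535  (earlier bound)
print(exp(-gamma**2 * (1 - gamma) * delta * k / 2))  # ~0.0019 (this paper)
```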

Book ChapterDOI
09 Feb 2010
TL;DR: This work studies rationality in protocol design for the full-information model, a model characterized by computationally unbounded adversaries, no private communication, and no simultaneity within rounds, to design protocols that are faithful and resilient.
Abstract: We study rationality in protocol design for the full-information model, a model characterized by computationally unbounded adversaries, no private communication, and no simultaneity within rounds. Assuming that players derive some utility from the outcomes of an interaction, we wish to design protocols that are faithful: following the protocol should be an optimal strategy for every player, for various definitions of “optimal” and under various assumptions about the behavior of others and the presence, size, and incentives of coalitions. We first focus on leader election for players who only care about whether or not they are elected. We seek protocols that are both faithful and resilient, and for some notions of faithfulness we provide protocols, whereas for others we prove impossibility results. We then proceed to random sampling, in which the aim is for the players to jointly sample from a set of m items with a distribution that is a function of players’ preferences over them. We construct protocols for m≥3 that are faithful and resilient when players are single-minded. We also show that there are no such protocols for 2 items or for complex preferences.

Book ChapterDOI
09 Feb 2010
TL;DR: It is shown that only languages in BPP^Sam—where Sam is a "collision-finding" oracle in analogy with Simon (Eurocrypt '98) and Haitner et al. (FOCS '07)—can have constant-round fully black-box zero-knowledge proofs based on one-way functions.
Abstract: Goldreich-Krawczyk (SIAM J. of Comp. '96) showed that only languages in BPP have constant-round public-coin black-box zero-knowledge protocols. We extend their lower bound to "fully black-box" private-coin protocols based on one-way functions. More precisely, we show that only languages in BPP^Sam—where Sam is a "collision-finding" oracle in analogy with Simon (Eurocrypt '98) and Haitner et al. (FOCS '07)—can have constant-round fully black-box zero-knowledge proofs; the same holds for constant-round fully black-box zero-knowledge arguments with sublinear verifier communication complexity. We also establish near-linear lower bounds on the round complexity of fully black-box concurrent zero-knowledge proofs (or arguments with sublinear verifier communication) for languages outside BPP^Sam. The technique used to establish these results is a transformation from private-coin protocols into Sam-relativized public-coin protocols; for the case of fully black-box protocols based on one-way functions, this transformation preserves zero knowledge, round complexity and communication complexity.

Proceedings Article
Marthann Schulte
01 Jan 2010
TL;DR: In this paper, a study focused on the faculty of an established distance learning program at a small Midwestern university was conducted to understand instructors' perceptions of distance education transactions, which is becoming increasingly important as the mode of distance learning has become not only accepted, but preferred by many students.
Abstract: Understanding instructors' perceptions of distance education transactions is becoming increasingly important as the mode of distance learning has become not only accepted, but preferred by many students. A need for more empirical qualitative data was evident as anecdotal information still dominates the research literature. The study focused on the faculty of an established distance learning program at a small Midwestern university. Qualitative interviews were gathered, coded and analyzed. Faculty who used technology in their distance education courses were targeted. Faculty perceptions on course context, students, interpersonal and procedural transactions, learning and teaching transactions, and assessment transactions were gathered. The qualitative analysis provided rich data to further inform distance education programs' administrative, technological, and andragogical needs. Recommendations for future study, including a model for transactional hierarchy, were proffered.