
Showing papers presented at the Theory of Cryptography Conference (TCC), 2009


Book ChapterDOI
20 Feb 2009
TL;DR: The public-key encryption scheme of Regev, and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan, are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret key, or more generally, can compute an arbitrary function of the secret key of bounded output length.
Abstract: This paper considers two questions in cryptography. Cryptography Secure Against Memory Attacks. A particularly devastating side-channel attack against cryptosystems, termed the "memory attack", was proposed recently. In this attack, a significant fraction of the bits of a secret key of a cryptographic algorithm can be measured by an adversary if the secret key is ever stored in a part of memory which can be accessed even after power has been turned off for a short amount of time. Such an attack has been shown to completely compromise the security of various cryptosystems in use, including the RSA cryptosystem and AES. We show that the public-key encryption scheme of Regev (STOC 2005), and the identity-based encryption scheme of Gentry, Peikert and Vaikuntanathan (STOC 2008), are remarkably robust against memory attacks where the adversary can measure a large fraction of the bits of the secret key, or more generally, can compute an arbitrary function of the secret key of bounded output length. This is done without increasing the size of the secret key, and without introducing any complication of the natural encryption and decryption routines. Simultaneous Hardcore Bits. We say that a block of bits of x is simultaneously hardcore for a one-way function f(x) if, given f(x), they cannot be distinguished from a random string of the same length. Although any candidate one-way function can be shown to hide one hardcore bit and even a logarithmic number of simultaneously hardcore bits, there are few examples of one-way or trapdoor functions for which a linear number of the input bits have been proved simultaneously hardcore; the ones that are known relate the simultaneous security to the difficulty of factoring integers. We show that for a lattice-based (injective) trapdoor function, which is a variant of a function proposed earlier by Gentry, Peikert and Vaikuntanathan, an N − o(N) number of input bits are simultaneously hardcore, where N is the total length of the input. These two results rely on similar proof techniques.
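The phrase "simultaneously hardcore" has a standard indistinguishability formulation that matches the informal description above (the notation here is generic, not taken from the paper): a block of bits $h(x)$ of the input is simultaneously hardcore for $f$ if

    $\big(f(x),\, h(x)\big) \;\approx_c\; \big(f(x),\, U_{|h(x)|}\big)$, for $x \leftarrow \{0,1\}^N$,

where $U_m$ denotes the uniform distribution over $\{0,1\}^m$. The result above says that, for the lattice-based trapdoor function, $h$ may select $N - o(N)$ of the $N$ input bits.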

560 citations


Book ChapterDOI
20 Feb 2009
TL;DR: A symmetric-key predicate encryption scheme which supports inner product queries and it is proved that the scheme achieves both plaintext privacy and predicate privacy.
Abstract: Predicate encryption is a new encryption paradigm which gives a master secret key owner fine-grained control over access to encrypted data. The master secret key owner can generate secret key tokens corresponding to predicates. An encryption of data x can be evaluated using a secret token corresponding to a predicate f ; the user learns whether the data satisfies the predicate, i.e., whether f (x ) = 1. Prior work on public-key predicate encryption has focused on the notion of data or plaintext privacy, the property that ciphertexts reveal no information about the encrypted data to an attacker other than what is inherently revealed by the tokens the attacker possesses. In this paper, we consider a new notion called predicate privacy , the property that tokens reveal no information about the encoded query predicate. Predicate privacy is inherently impossible to achieve in the public-key setting and has therefore received little attention in prior work. In this work, we consider predicate encryption in the symmetric-key setting and present a symmetric-key predicate encryption scheme which supports inner product queries. We prove that our scheme achieves both plaintext privacy and predicate privacy.
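As an illustration of the functionality (not of the paper's construction, which works over bilinear groups of composite order), the inner-product predicate class supported by such schemes can be sketched as follows; the scheme interface named in the comments is a generic abstraction, and the toy check involves no cryptography.

    def inner_product_predicate(v, x, modulus):
        """Predicate f_v(x) = 1 iff <v, x> = 0 (mod modulus)."""
        assert len(v) == len(x)
        return sum(vi * xi for vi, xi in zip(v, x)) % modulus == 0

    # Abstract interface of a symmetric-key predicate encryption scheme:
    #   msk   <- Setup()              master secret key
    #   token <- GenToken(msk, v)     encodes the predicate vector v
    #   ct    <- Encrypt(msk, x)      encodes the attribute vector x
    #   Query(token, ct) reveals only whether f_v(x) = 1; predicate
    #   privacy additionally requires that token reveal nothing about v.

    p = 101
    x = [2, 3, 5]
    v = [1, 1, p - 1]                          # <v, x> = 2 + 3 - 5 = 0 (mod p)
    print(inner_product_predicate(v, x, p))    # True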

414 citations


Book ChapterDOI
20 Feb 2009
TL;DR: The main insight of this work comes from a simple connection between PoR schemes and the notion of hardness amplification, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
Abstract: Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we: formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JK07], without making any simplifying assumptions on the behavior of the adversary; build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [SW08]; and build the first bounded-use scheme with information-theoretic security. The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of PoR codes, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
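To make the audit flavor concrete, here is a minimal MAC-based spot-checking sketch in the spirit of bounded-use audits; it is an illustration only, not one of the paper's constructions, and the function names are hypothetical.

    import hmac, hashlib, os, random

    def tag_blocks(key, blocks):
        """Client tags each file block before uploading (block, tag) pairs."""
        return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
                for i, b in enumerate(blocks)]

    def audit(key, server_blocks, server_tags, num_challenges=10):
        """Challenge random block indices; the server must return each
        block and its tag, which the client re-verifies with its key."""
        n = len(server_blocks)
        for i in random.sample(range(n), min(num_challenges, n)):
            expected = hmac.new(key, str(i).encode() + server_blocks[i],
                                hashlib.sha256).digest()
            if not hmac.compare_digest(expected, server_tags[i]):
                return False
        return True

    key = os.urandom(32)
    blocks = [os.urandom(64) for _ in range(100)]
    tags = tag_blocks(key, blocks)
    print(audit(key, blocks, tags))    # True when the server is honest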

381 citations


Book ChapterDOI
20 Feb 2009
TL;DR: It is shown that the OPRF implies a new practical fully-simulatable adaptive (and committed) OT protocol secure without ROM, and implies the first secure computation protocol of set intersection on committed data with a computational cost of O(N) exponentiations, where N is the maximum size of both data sets.
Abstract: An Oblivious Pseudorandom Function (OPRF) [15] is a two-party protocol between sender S and receiver R for securely computing a pseudorandom function f_k(·) on key k contributed by S and input x contributed by R, in such a way that receiver R learns only the value f_k(x) while sender S learns nothing from the interaction. In other words, an OPRF protocol for PRF f_k(·) is a secure computation for functionality $\mathcal F_{\mathsf{OPRF}}:(k,x)\rightarrow(\perp,f_k(x))$. We propose an OPRF protocol on committed inputs which requires only O(1) modular exponentiations, and has a constant number of communication rounds (two in ROM). Our protocol is secure in the CRS model under the Composite Decisional Residuosity (CDR) assumption, while the PRF itself is secure on a polynomially-sized domain under the Decisional q-Diffie-Hellman Inversion assumption on a group of composite order, where q is the size of the PRF domain, and it has the useful feature that f_k is an injection for every k. A practical OPRF protocol for an injective PRF, even limited to a polynomially-sized domain, is a versatile tool with many uses in secure protocol design. We show that our OPRF implies a new practical fully-simulatable adaptive (and committed) OT protocol secure without ROM. In another example, this oblivious PRF construction implies the first secure computation protocol of set intersection on committed data with a computational cost of O(N) exponentiations, where N is the maximum size of both data sets.
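The ideal functionality $\mathcal F_{\mathsf{OPRF}}$ above is easy to mirror in code; the sketch below uses HMAC as a stand-in PRF purely to show the input/output behaviour (the sender learns nothing, the receiver learns only f_k(x)), not the committed-input protocol itself.

    import hmac, hashlib, os

    def ideal_oprf(sender_key, receiver_input):
        """Ideal functionality F_OPRF: (k, x) -> (None, f_k(x)).
        A trusted party evaluates the PRF; the real protocol replaces it
        with an interactive protocol costing O(1) exponentiations."""
        fkx = hmac.new(sender_key, receiver_input, hashlib.sha256).digest()
        return None, fkx               # (sender output, receiver output)

    k = os.urandom(32)                 # contributed by sender S
    _, y = ideal_oprf(k, b"receiver input x")
    print(y.hex())                     # receiver R learns only f_k(x)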

320 citations


Book ChapterDOI
20 Feb 2009
TL;DR: This study shows that any collection of injective trapdoor functions that is secure under a very natural correlated product can be used to construct a CCA-secure encryption scheme.
Abstract: We initiate the study of one-wayness under correlated products. We are interested in identifying necessary and sufficient conditions for a function f and a distribution on inputs (x_1, ..., x_k), so that the function (f(x_1), ..., f(x_k)) is one-way. The main motivation of this study is the construction of public-key encryption schemes that are secure against chosen-ciphertext attacks (CCA). We show that any collection of injective trapdoor functions that is secure under a very natural correlated product can be used to construct a CCA-secure encryption scheme. The construction is simple, black-box, and admits a direct proof of security. We provide evidence that security under correlated products is achievable by demonstrating that lossy trapdoor functions (Peikert and Waters, STOC '08) yield injective trapdoor functions that are secure under the above-mentioned correlated product. Although we currently base security under correlated products on existing constructions of lossy trapdoor functions, we argue that the former notion is potentially weaker as a general assumption. Specifically, there is no fully-black-box construction of lossy trapdoor functions from trapdoor functions that are secure under correlated products.
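Spelled out (following the abstract; the notation here is generic), the object of study is the $k$-wise product

    $F_k(x_1, \ldots, x_k) = \big(f_1(x_1), \ldots, f_k(x_k)\big)$, with $f_1, \ldots, f_k$ sampled independently from the collection,

and security under a correlated product asks that $F_k$ remain one-way when $(x_1, \ldots, x_k)$ is drawn from a correlated distribution rather than independently. The "very natural" case used for the CCA construction in Rosen and Segev's work is the uniform $k$-repetition distribution, in which a single uniform input is repeated, $x_1 = \cdots = x_k$.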

160 citations


Book ChapterDOI
20 Feb 2009
TL;DR: A new cut-and-choose based approach called LEGO (Large Efficient Garbled-circuit Optimization): It is specifically aimed at large circuits, and obtains a factor $\log\vert\mathcal{C}\vert$ improvement in computation and communication over previous cut-and-choose based solutions.
Abstract: This paper continues the recent line of work of making Yao's garbled circuit approach to two-party computation secure against an active adversary. We propose a new cut-and-choose based approach called LEGO (Large Efficient Garbled-circuit Optimization): It is specifically aimed at large circuits. Asymptotically it obtains a factor $\log\vert\mathcal{C}\vert$ improvement in computation and communication over previous cut-and-choose based solutions, where $\vert\mathcal{C}\vert$ is the size of the circuit being computed. The protocol is universally composable (UC) in the OT-hybrid model against a static, active adversary.

160 citations


Book ChapterDOI
20 Feb 2009
TL;DR: The first hierarchical identity based encryption (HIBE) system that has full security for more than a constant number of levels is presented, and the hardness assumption is similar to that underlying Gentry's IBE system.
Abstract: We present the first hierarchical identity based encryption (HIBE) system that has full security for more than a constant number of levels. In all prior HIBE systems in the literature, the security reductions suffered from exponential degradation in the depth of the hierarchy, so these systems were only proven fully secure for identity hierarchies of constant depth. (For deep hierarchies, previous work could only prove the weaker notion of selective-ID security.) In contrast, we offer a tight proof of security, regardless of the number of levels; hence our system is secure for polynomially many levels. Our result can very roughly be viewed as an application of Boyen's framework for constructing HIBE systems from exponent-inversion IBE systems to a (dramatically souped-up) version of Gentry's IBE system, which has a tight reduction. In more detail, we first describe a generic transformation from "identity based broadcast encryption with key randomization" (KR-IBBE) to a HIBE, and then construct KR-IBBE by modifying a recent construction of IBBE of Gentry and Waters, which is itself an extension of Gentry's IBE system. Our hardness assumption is similar to that underlying Gentry's IBE system.

153 citations


Book ChapterDOI
Cynthia Dwork
20 Feb 2009
TL;DR: The definition of differential privacy is reviewed and a handful of very recent contributions to the differential privacy frontier are surveyed.
Abstract: We review the definition of differential privacy and briefly survey a handful of very recent contributions to the differential privacy frontier.
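The definition being reviewed is standard and short enough to state here: a randomized mechanism $M$ gives $\varepsilon$-differential privacy if for every pair of databases $D, D'$ differing in a single record, and every set $S$ of outputs,

    $\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S]$.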

150 citations


Book ChapterDOI
20 Feb 2009
TL;DR: These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation to two-party protocols with security against malicious parties and present several solutions which differ in their efficiency, generality, and underlying intractability assumptions.
Abstract: We study the complexity of securely evaluating arithmetic circuits over finite rings. This question is motivated by natural secure computation tasks. Focusing mainly on the case of two-party protocols with security against malicious parties, our main goals are to: (1) only make black-box calls to the ring operations and standard cryptographic primitives, and (2) minimize the number of such black-box calls as well as the communication overhead. We present several solutions which differ in their efficiency, generality, and underlying intractability assumptions. These include: An unconditionally secure protocol in the OT-hybrid model which makes a black-box use of an arbitrary ring R, but where the number of ring operations grows linearly with (an upper bound on) log|R|. Computationally secure protocols in the OT-hybrid model which make a black-box use of an underlying ring, and in which the number of ring operations does not grow with the ring size. The protocols rely on variants of previous intractability assumptions related to linear codes. In the most efficient instance of these protocols, applied to a suitable class of fields, the (amortized) communication cost is a constant number of field elements per multiplication gate and the computational cost is dominated by O(log k) field operations per gate, where k is a security parameter. These results extend a previous approach of Naor and Pinkas for secure polynomial evaluation (SIAM J. Comput., 2006). A protocol for the rings $\mathbb{Z}_m = \mathbb{Z}/m\mathbb{Z}$ which only makes a black-box use of a homomorphic encryption scheme. When m is prime, the (amortized) number of calls to the encryption scheme for each gate of the circuit is constant. All of our protocols are in fact UC-secure in the OT-hybrid model and can be generalized to multiparty computation with an arbitrary number of malicious parties.

148 citations


Book ChapterDOI
20 Feb 2009
TL;DR: There exists no reduction from an encryption scheme secure against key-dependent messages to, essentially, any cryptographic assumption if the adversary can obtain an encryption of g (k ) for an arbitrary g, as long as the reduction's proof of security treats both the adversary and the function g as black boxes.
Abstract: We study the possibility of constructing encryption schemes secure under messages that are chosen depending on the key k of the encryption scheme itself. We give the following separation results that hold both in the private and in the public key settings: Let $\mathcal{H}$ be the family of poly(n )-wise independent hash-functions. There exists no fully-black-box reduction from an encryption scheme secure against key-dependent messages to one-way permutations (and also to families of trapdoor permutations) if the adversary can obtain encryptions of h (k ) for $h \in \mathcal{H}$. There exists no reduction from an encryption scheme secure against key-dependent messages to, essentially, any cryptographic assumption, if the adversary can obtain an encryption of g (k ) for an arbitrary g , as long as the reduction's proof of security treats both the adversary and the function g as black boxes.

128 citations


Book ChapterDOI
20 Feb 2009
TL;DR: This work obtains a constant-round black-box construction of secure two-party computation protocols starting from only semi-honest oblivious transfer, and, by combining the techniques with recent constructions of concurrent zero-knowledge and non-malleable primitives, obtains black-box constructions of concurrent zero-knowledge arguments for NP and non-malleable commitments starting from only one-way functions.
Abstract: We exhibit constructions of the following two-party cryptographic protocols given only black-box access to a one-way function: constant-round zero-knowledge arguments (of knowledge) for any language in NP; constant-round trapdoor commitment schemes; constant-round parallel coin-tossing. Previous constructions either require stronger computational assumptions (e.g. collision-resistant hash functions), non-black-box access to a one-way function, or a super-constant number of rounds. As an immediate corollary, we obtain a constant-round black-box construction of secure two-party computation protocols starting from only semi-honest oblivious transfer. In addition, by combining our techniques with recent constructions of concurrent zero-knowledge and non-malleable primitives, we obtain black-box constructions of concurrent zero-knowledge arguments for NP and non-malleable commitments starting from only one-way functions.

Book ChapterDOI
20 Feb 2009
TL;DR: In this article, a simple protocol for secret reconstruction in any threshold secret sharing scheme was proposed, and it was shown that all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest.
Abstract: We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by a set-Nash analogue of trembling hand perfect equilibrium). The protocol only requires a standard (synchronous) broadcast channel, tolerates both early stopping and incorrectly computed messages, and only requires 2 rounds of communication. Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria. They all also required a nonconstant number of rounds of communication.

Book ChapterDOI
20 Feb 2009
TL;DR: The optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols is established: an r -round protocol with bias O (1/r).
Abstract: We address one of the foundational problems in cryptography: the bias of coin-flipping protocols. Coin-flipping protocols allow mutually distrustful parties to generate a common unbiased random bit, guaranteeing that even if one of the parties is malicious, it cannot significantly bias the output of the honest party. A classical result by Cleve [STOC '86] showed that for any two-party r-round coin-flipping protocol there exists an efficient adversary that can bias the output of the honest party by $\Omega(1/r)$. However, the best previously known protocol only guarantees $O(1/\sqrt{r})$ bias, and the question of whether Cleve's bound is tight has remained open for more than twenty years. In this paper we establish the optimal trade-off between the round complexity and the bias of two-party coin-flipping protocols. Under standard assumptions (the existence of oblivious transfer), we show that Cleve's lower bound is tight: we construct an r-round protocol with bias O(1/r).
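For reference, the bias referred to throughout is the advantage with which a corrupted party can skew the honest party's output coin: a protocol has bias at most $\varepsilon$ if for every efficient adversary controlling one of the parties,

    $\left|\,\Pr[\text{honest party outputs } 1] - \tfrac{1}{2}\,\right| \;\le\; \varepsilon$.

Cleve's result says every $r$-round protocol has bias $\Omega(1/r)$; the protocol above matches this with bias $O(1/r)$.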

Book ChapterDOI
20 Feb 2009
TL;DR: This work introduces and study on-line deniability, where deniability should hold even when one of the parties colludes with a third party during execution of the protocol, and shows feasibility with respect to static corruptions and a relaxation termed deniability with incriminating abort under adaptive corruptions.
Abstract: Protocols for deniable authentication achieve seemingly paradoxical guarantees: upon completion of the protocol the receiver is convinced that the sender authenticated the message, but neither party can convince anyone else that the other party took part in the protocol. We introduce and study on-line deniability , where deniability should hold even when one of the parties colludes with a third party during execution of the protocol. This turns out to generalize several realistic scenarios that are outside the scope of previous models. We show that a protocol achieves our definition of on-line deniability if and only if it realizes the message authentication functionality in the generalized universal composability framework; any protocol satisfying our definition thus automatically inherits strong composability guarantees. Unfortunately, we show that our definition is impossible to realize in the PKI model if adaptive corruptions are allowed (even if secure erasure is assumed). On the other hand, we show feasibility with respect to static corruptions (giving the first separation in terms of feasibility between the static and adaptive setting), and show how to realize a relaxation termed deniability with incriminating abort under adaptive corruptions.

Book ChapterDOI
20 Feb 2009
TL;DR: Results by Alekhnovich, Hirsch and Itsykson imply that Goldreich's function is secure against "myopic" backtracking algorithms (an interesting subclass) if the 3-ary parity predicate $P(x_1, x_2, x_3) = x_1 \oplus x_2 \oplus x_3$ is used.
Abstract: Goldreich (ECCC 2000) proposed a candidate one-way function construction which is parameterized by the choice of a small predicate (over d = O(1) variables) and of a bipartite expanding graph of right-degree d. The function is computed by labeling the n vertices on the left with the bits of the input, labeling each of the n vertices on the right with the value of the predicate applied to the neighbors, and outputting the n-bit string of labels of the vertices on the right. Inverting Goldreich's one-way function is equivalent to finding solutions to a certain constraint satisfaction problem (which easily reduces to SAT) having a "planted solution," and so the use of SAT solvers constitutes a natural class of attacks. We perform an experimental analysis using MiniSat, which is one of the best publicly available algorithms for SAT. Our experiment shows that the running time required to invert the function grows exponentially with the length of the input, and that such an attack becomes infeasible already with small input length (a few hundred bits). Motivated by these encouraging experiments, we initiate a rigorous study of the limitations of backtracking-based SAT solvers as attacks against Goldreich's function. Results by Alekhnovich, Hirsch and Itsykson imply that Goldreich's function is secure against "myopic" backtracking algorithms (an interesting subclass) if the 3-ary parity predicate $P(x_1, x_2, x_3) = x_1 \oplus x_2 \oplus x_3$ is used. One must, however, use non-linear predicates in the construction, which otherwise succumbs to a trivial attack via Gaussian elimination. We generalize the work of Alekhnovich et al. to handle a more general class of predicates, and we present a lower bound for the construction that uses the predicate $P_d(x_1, \ldots, x_d) := x_1 \oplus x_2 \oplus \cdots \oplus x_{d-2} \oplus (x_{d-1} \wedge x_d)$ and a random graph.
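The construction is simple enough to state directly in code. The sketch below instantiates Goldreich's candidate with a random right-degree-d bipartite graph and the predicate $P_d$ from the abstract; the graph sampling and parameter choices are illustrative, not the expander families analysed in the paper.

    import random
    from functools import reduce

    def goldreich_function(x, graph, d):
        """Goldreich's candidate one-way function.
        x:     list of n input bits (labels of the left vertices)
        graph: one tuple per right vertex, listing its d left neighbours
        Output: n bits, one per right vertex, given by the predicate
                P_d(z) = z_1 xor ... xor z_{d-2} xor (z_{d-1} and z_d)."""
        out = []
        for nbrs in graph:
            z = [x[j] for j in nbrs]
            parity = reduce(lambda a, b: a ^ b, z[:d - 2], 0)
            out.append(parity ^ (z[d - 2] & z[d - 1]))
        return out

    n, d = 16, 5
    graph = [tuple(random.sample(range(n), d)) for _ in range(n)]
    x = [random.randrange(2) for _ in range(n)]
    print(goldreich_function(x, graph, d))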

Book ChapterDOI
20 Feb 2009
TL;DR: The compiler achieves security in the universal composability framework, assuming access to an ideal commitment functionality, and improves over previous work achieving the same security guarantee in two ways: it uses black-box access to the underlying protocol and achieves a constant multiplicative overhead in the round complexity.
Abstract: We present a compiler for transforming an oblivious transfer (OT) protocol secure against an adaptive semi-honest adversary into one that is secure against an adaptive malicious adversary. Our compiler achieves security in the universal composability framework, assuming access to an ideal commitment functionality, and improves over previous work achieving the same security guarantee in two ways: it uses black-box access to the underlying protocol and achieves a constant multiplicative overhead in the round complexity. As a corollary, we obtain the first constructions of adaptively secure protocols in the stand-alone model using black-box access to a low-level primitive.

Book ChapterDOI
20 Feb 2009
TL;DR: This work shows that for checkers that access the remote storage in a deterministic and non-adaptive manner (as do all known memory checkers), their query complexity must be at least $\Omega(\log n / \log\log n)$.
Abstract: We consider the problem of memory checking, where a user wants to maintain a large database on a remote server but has only limited local storage. The user wants to use the small (but trusted and secret) local storage to detect faults in the large (but public and untrusted) remote storage. A memory checker receives from the user store and retrieve operations to the large database. The checker makes its own requests to the (untrusted) remote storage and receives answers to these requests. It then uses these responses, together with its small private and reliable local memory, to ascertain that all requests were answered correctly, or to report faults in the remote storage (the public memory). A fruitful line of research investigates the complexity of memory checking in terms of the number of queries the checker issues per user request (query complexity) and the size of the reliable local memory (space complexity). Blum et al., who first formalized the question, distinguished between online checkers (that report faults as soon as they occur) and offline checkers (that report faults only at the end of a long sequence of operations). In this work we revisit the question of memory checking, asking how efficient memory checking can be. For online checkers, Blum et al. provided a checker with logarithmic query complexity in n, the database size. Our main result is a lower bound: we show that for checkers that access the remote storage in a deterministic and non-adaptive manner (as do all known memory checkers), their query complexity must be at least $\Omega(\log n / \log\log n)$. To cope with this negative result, we show how to trade off the read and write complexity of online memory checkers: for any desired logarithm base d, we construct an online checker where either reading or writing is inexpensive and has query complexity $O(\log_d n)$. The price for this is that the other operation (write or read, respectively) has query complexity $O(d \cdot \log_d n)$. Finally, if even this performance is unacceptable, offline memory checking may be an inexpensive alternative. We provide a scheme with O(1) amortized query complexity, improving Blum et al.'s construction, which only had such performance for long sequences of at least n operations.

Book ChapterDOI
20 Feb 2009
TL;DR: In this article, the feasibility of securely implementing symmetric secure function evaluation (SSFE) functions against passive and active (standalone), computationally unbounded adversaries has been investigated.
Abstract: In symmetric secure function evaluation (SSFE), Alice has an input x , Bob has an input y , and both parties wish to securely compute f (x ,y ). We show several new results classifying the feasibility of securely implementing these functions in several security settings. Namely, we give new alternate characterizations of the functions that have (statistically) secure protocols against passive and active (standalone), computationally unbounded adversaries. We also show a strict, infinite hierarchy of complexity for SSFE functions with respect to universally composable security against unbounded adversaries. That is, there exists a sequence of functions f 1 , f 2 , ... such that there exists a UC-secure protocol for f i in the f j -hybrid world if and only if i ≤ j . The main new technical tool that unifies our unrealizability results is a powerful protocol simulation theorem, which may be of independent interest. Essentially, in any adversarial setting (UC, standalone, or passive), f is securely realizable if and only if a very simple (deterministic) "canonical" protocol for f achieves the desired security. Thus, to show that f is unrealizable, one need simply demonstrate a single attack on a single simple protocol.

Book ChapterDOI
20 Feb 2009
TL;DR: It is demonstrated that a weak notion of extraction implies a strong one, and the possibility of constructing cryptographic primitives from simpler or weaker ones while maintaining extractability is studied, to make rigorous the intuition that extraction and obfuscation are complementary notions.
Abstract: Extractable functions are functions where any adversary that outputs a point in the range of the function is guaranteed to "know" a corresponding preimage. Here, knowledge is captured by the existence of an efficient extractor that recovers the preimage from the internal state of the adversary. Extractability of functions was defined by the authors (ICALP '08) in the context of perfectly one-way functions. It can be regarded as an abstraction from specific knowledge assumptions, such as the Knowledge of Exponent assumption (Hada and Tanaka, Crypto 1998). We initiate a more general study of extractable functions. We explore two different approaches. The first approach is aimed at understanding the concept of extractability in and of itself; in particular we demonstrate that a weak notion of extraction implies a strong one, and make rigorous the intuition that extraction and obfuscation are complementary notions. In the second approach, we study the possibility of constructing cryptographic primitives from simpler or weaker ones while maintaining extractability. Results are generally positive. Specifically, we show that several cryptographic reductions are either "knowledge-preserving" or can be modified to be so. Examples include reductions from extractable weak one-way functions to extractable strong ones, from extractable pseudorandom generators to extractable pseudorandom functions, and from extractable one-way functions to extractable commitments. Other questions, such as constructing extractable pseudorandom generators from extractable one-way functions, remain open.
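The knowledge requirement stated informally above is usually formalized along the following lines (a paraphrase, not the paper's exact definition): for every efficient adversary $A$ there is an efficient extractor $E$ such that, whenever $A$ (run on coins/auxiliary input $z$) outputs some $y$ in the range of $f$,

    $f\big(E(z)\big) = y$ except with negligible probability.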

Book ChapterDOI
20 Feb 2009
TL;DR: This work presents the first completely fair protocols for non-trivial functions in the multi-party setting, and provides evidence that achieving complete fairness is "harder" in the multi-party setting than in the two-party case, at least with regard to round complexity.
Abstract: Gordon et al. recently showed that certain (non-trivial) functions can be computed with complete fairness in the two-party setting. Motivated by their results, we initiate a study of complete fairness in the multi-party case and demonstrate the first completely-fair protocols for non-trivial functions in this setting. We also provide evidence that achieving fairness is "harder" in the multi-party setting, at least with regard to round complexity.

Book ChapterDOI
20 Feb 2009
TL;DR: A very efficient and purely rational solution to the rational secret sharing problem with a verifiable trusted channel is exhibited.
Abstract: Rational secret sharing is a problem at the intersection of cryptography and game theory. In essence, a dealer wishes to engineer a communication game that, when rationally played, guarantees that each of the players learns the dealer's secret. Yet, all solutions proposed so far did not rely solely on the players' rationality, but also on their beliefs , and were also quite inefficient. After providing a more complete definition of the problem, we exhibit a very efficient and purely rational solution to it with a verifiable trusted channel.

Book ChapterDOI
20 Feb 2009
TL;DR: In this article, the weak erasure channel and the weak binary symmetric channel are introduced, which are more general and use much weaker assumptions than unfair noisy channels, which makes implementation a more realistic prospect.
Abstract: Various results show that oblivious transfer can be implemented using the assumption of noisy channels . Unfortunately, this assumption is not as weak as one might think, because in a cryptographic setting, these noisy channels must satisfy very strong security requirements. Unfair noisy channels , introduced by Damgard, Kilian and Salvail [Eurocrypt '99], reduce these limitations: They give the adversary an unfair advantage over the honest player, and therefore weaken the security requirements on the noisy channel. However, this model still has many shortcomings: For example, the adversary's advantage is only allowed to have a very special form, and no error is allowed in the implementation. In this paper we generalize the idea of unfair noisy channels. We introduce two new models of cryptographic noisy channels that we call the weak erasure channel and the weak binary symmetric channel , and show how they can be used to implement oblivious transfer. Our models are more general and use much weaker assumptions than unfair noisy channels, which makes implementation a more realistic prospect. For example, these are the first models that allow the parameters to come from experimental evidence.

Book ChapterDOI
20 Feb 2009
TL;DR: Towards a characterization of the functions computable with long-term security, this work characterizes the classes of functions that can be computed with information-theoretic security in the authenticated channels model in the presence of passive, semi-honest, active, and quantum adversaries.
Abstract: While general secure function evaluation (SFE) with information-theoretical (IT) security is infeasible in presence of a corrupted majority in the standard model, there are SFE protocols (Goldreich et al. [STOC'87]) that are computationally secure (without fairness) in presence of an actively corrupted majority of the participants. Now, computational assumptions can usually be well justified at the time of protocol execution. The concern is rather a potential violation of the privacy of sensitive data by an attacker whose power increases over time. Therefore, we ask which functions can be computed with long-term security, where we admit computational assumptions for the duration of a computation, but require IT security (privacy) once the computation is concluded. Towards a combinatorial characterization of this class of functions, we also characterize the classes of functions that can be computed IT securely in the authenticated channels model in presence of passive, semi-honest, active, and quantum adversaries.

Book ChapterDOI
20 Feb 2009
TL;DR: This work proposes a general security definition for cryptographic quantum protocols that implement classical non-reactive two-party tasks and shows that recently proposed quantum protocols for secure identification and oblivious transfer in the bounded-quantum-storage model satisfy this definition, and thus compose in the above sense.
Abstract: We propose a general security definition for cryptographic quantum protocols that implement classical non-reactive two-party tasks. The definition is expressed in terms of simple quantum-information-theoretic conditions which must be satisfied by the protocol to be secure. The conditions are uniquely determined by the ideal functionality $\mathcal{F}$ defining the cryptographic task to be implemented. We then show the following composition result. If quantum protocols $\pi_1,\ldots,\pi_\ell$ securely implement ideal functionalities $\mathcal{F}_1,\ldots,\mathcal{F}_\ell$ according to our security definition, then any purely classical two-party protocol, which makes sequential calls to $\mathcal{F}_1,\ldots,\mathcal{F}_\ell$, is equally secure as the protocol obtained by replacing the calls to $\mathcal{F}_1,\ldots,\mathcal{F}_\ell$ with the respective quantum protocols $\pi_1,\ldots,\pi_\ell$. Hence, our approach yields the minimal security requirements which are strong enough for the typical use of quantum protocols as subroutines within larger classical schemes. Finally, we show that recently proposed quantum protocols for secure identification and oblivious transfer in the bounded-quantum-storage model satisfy our security definition, and thus compose in the above sense.

Book ChapterDOI
Chris Peikert
20 Feb 2009
TL;DR: This tutorial will survey the foundational results of the core hard cryptographic problems, some recurring techniques and abstractions, and a few notable applications of lattice-based cryptographic schemes.
Abstract: The past decade in computer science has witnessed tremendous progress in the understanding of lattices , which are a rich source of seemingly hard computational problems. One of their most promising applications is to the design of cryptographic schemes that enjoy exceptionally strong security guarantees and other desirable properties. Most notably, these schemes can be proved secure assuming only the worst-case hardness of well-studied lattice problems. Additionally, and in contrast with number-theoretic problems typically used in cryptography, the underlying problems have so far resisted attacks by subexponential-time and quantum algorithms. Yet even with these security advantages, lattice-based schemes also tend to be remarkably simple, asymptotically efficient, and embarrassingly parallelizable. This tutorial will survey the foundational results of the area, as well as some more recent developments. Our particular focus will be on the core hard cryptographic (average-case) problems, some recurring techniques and abstractions, and a few notable applications.

Book ChapterDOI
20 Feb 2009
TL;DR: In this article, the authors consider a similar but strictly weaker physical assumption, where a player (Alice) can partially isolate another player (Bob) for a brief portion of the computation and prevent Bob from communicating more than some limited number of bits with the environment.
Abstract: It is well known that universally composable multiparty computation cannot, in general, be achieved in the standard model without setup assumptions when the adversary can corrupt an arbitrary number of players. One way to get around this problem is by having a trusted third party generate some global setup such as a common reference string (CRS) or a public key infrastructure (PKI) . The recent work of Katz shows that we may instead rely on physical assumptions, and in particular tamper-proof hardware tokens . In this paper, we consider a similar but strictly weaker physical assumption. We assume that a player (Alice) can partially isolate another player (Bob) for a brief portion of the computation and prevent Bob from communicating more than some limited number of bits with the environment. For example, isolation might be achieved by asking Bob to put his functionality on a tamper-proof hardware token and assuming that Alice can prevent this token from communicating to the outside world. Alternatively, Alice may interact with Bob directly but in a special office which she administers and where there are no high-bandwidth communication channels to the outside world. We show that, under standard cryptographic assumptions, such physical setup can be used to UC-realize any two party and multiparty computation in the presence of an active and adaptive adversary corrupting any number of players. We also consider an alternative scenario, in which there are some trusted third parties but no single such party is trusted by all of the players. This compromise allows us to significantly limit the use of the physical set-up and hence might be preferred in practice.

Book ChapterDOI
20 Feb 2009
TL;DR: This work defines a natural relaxation of VRFs that is called weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs, and conducts a study of weak VRFs, focusing on applications, constructions, and their relationship to other cryptographic primitives.
Abstract: Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan, are pseudorandom functions in which the owner of the seed produces a public-key that constitutes a commitment to all values of the function and can then produce, for any input x , a proof that the function has been evaluated correctly on x , preserving pseudorandomness for all other inputs. No public-key (even a falsely generated one) should allow for proving more than one value per input. VRFs are both a natural and a useful primitive, and previous works have suggested a variety of constructions and applications. Still, there are many open questions in the study of VRFs, especially their relation to more widely studied cryptographic primitives and constructing them from a wide variety of cryptographic assumptions. In this work we define a natural relaxation of VRFs that we call weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs. We conduct a study of weak VRFs, focusing on applications, constructions, and their relationship to other cryptographic primitives. We show: Constructions. We present constructions of weak VRFs based on a variety of assumptions, including general assumptions such as (enhanced) trapdoor permutations, as well as constructions based on specific number-theoretic assumptions such as the Diffie-Hellman assumption in bilinear groups. Separations. Verifiable random functions (both weak and standard) cannot be constructed from one-way permutations in a black-box manner. This constitutes the first result separating (standard) VRFs from any cryptographic primitive. Applications. Weak VRFs capture the essence of constructing non-interactive zero-knowledge proofs for all NP languages.

Book ChapterDOI
20 Feb 2009
TL;DR: In this article, the authors formulate two natural flavors of non-malleability requirements for obfuscation, and show that they are incomparable in general, and construct nonmalleable obfuscators for some program families of interest.
Abstract: Existing definitions of program obfuscation do not rule out malleability attacks, where an adversary that sees an obfuscated program is able to generate another (potentially obfuscated) program that is related to the original one in some way. We formulate two natural flavors of non-malleability requirements for program obfuscation, and show that they are incomparable in general. We also construct non-malleable obfuscators of both flavors for some program families of interest. Some of our constructions are in the Random Oracle model, whereas another one is in the common reference string model. We also define the notion of verifiable obfuscation which is of independent interest.