
Showing papers presented at "Theory of Cryptography Conference in 2011"


Book ChapterDOI
28 Mar 2011
TL;DR: This paper initiates the formal study of functional encryption, giving precise definitions of the concept and its security, and shows that defining security for functional encryption is non-trivial.
Abstract: We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.
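To fix the interface the abstract describes, here is a minimal Python sketch of the functional-encryption syntax and its correctness contract dec(sk_f, ct) = f(x). The toy scheme below is deliberately insecure (the ciphertext contains the plaintext in the clear), and the method names are illustrative, not taken from the paper.

```python
# Syntax-only sketch of functional encryption; NOT a secure scheme.
class ToyFE:
    def setup(self):
        mpk, msk = object(), object()      # placeholders for real public/master keys
        return mpk, msk

    def keygen(self, msk, f):
        return {"f": f}                    # sk_f: a restricted key tied to function f

    def encrypt(self, mpk, x):
        return {"ct": x}                   # insecure placeholder: real schemes hide x

    def decrypt(self, sk_f, ct):
        return sk_f["f"](ct["ct"])         # key holder learns f(x) and nothing else

fe = ToyFE()
mpk, msk = fe.setup()
sk_parity = fe.keygen(msk, lambda x: x % 2)
assert fe.decrypt(sk_parity, fe.encrypt(mpk, 41)) == 1   # learns the parity, ideally nothing more
```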

877 citations


Book ChapterDOI
28 Mar 2011
TL;DR: In this paper, the authors present a protocol for secure two-party computation that follows the methodology of using cut-and-choose to boost Yao's protocol to be secure in the presence of malicious adversaries.
Abstract: Protocols for secure two-party computation enable a pair of parties to compute a function of their inputs while preserving security properties such as privacy, correctness and independence of inputs. Recently, a number of protocols have been proposed for the efficient construction of two-party computation secure in the presence of malicious adversaries (where security is proven under the standard simulation-based ideal/real model paradigm for defining security). In this paper, we present a protocol for this task that follows the methodology of using cut-and-choose to boost Yao's protocol to be secure in the presence of malicious adversaries. Relying on specific assumptions (DDH), we construct a protocol that is significantly more efficient and far simpler than the protocol of Lindell and Pinkas (Eurocrypt 2007) that follows the same methodology. We provide an exact, concrete analysis of the efficiency of our scheme and demonstrate that (at least for not very small circuits) our protocol is more efficient than any other known today.
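The cut-and-choose methodology the protocol builds on can be summarized in a few lines. The sketch below is a schematic of the generic technique, not the paper's optimized DDH-based protocol: the garbler prepares s copies of the circuit, the evaluator opens a random half to check honesty, evaluates the rest, and outputs the majority result. Here make_circuit, check, and evaluate are hypothetical callbacks standing in for the garbled-circuit machinery.

```python
import random

def cut_and_choose(make_circuit, check, evaluate, s=40):
    """Schematic cut-and-choose: open half the circuits, evaluate the rest."""
    circuits = [make_circuit() for _ in range(s)]
    opened = set(random.sample(range(s), s // 2))   # evaluator's random challenge
    for i in opened:
        if not check(circuits[i]):                  # any incorrectly built copy => abort
            raise RuntimeError("cheating detected")
    outputs = [evaluate(circuits[i]) for i in range(s) if i not in opened]
    return max(set(outputs), key=outputs.count)     # majority masks a few bad unopened copies
```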

224 citations


Book ChapterDOI
28 Mar 2011
TL;DR: In this paper, the authors present an oblivious RAM whose access pattern is perfectly hidden in the information-theoretic sense, without assuming that the CPU has access to a random oracle, and prove a lower bound on the amount of randomness needed for any information-theoretically secure oblivious RAM.
Abstract: We present an algorithm for implementing a secure oblivious RAM where the access pattern is perfectly hidden in the information theoretic sense, without assuming that the CPU has access to a random oracle. In addition we prove a lower bound on the amount of randomness needed for implementing an information theoretically secure oblivious RAM.
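As a point of reference for what "the access pattern is perfectly hidden" means, here is the trivial linear-scan baseline in Python: every logical access touches all N physical cells in the same fixed order, so the physical trace is independent of the addresses the program accesses. Real ORAM constructions, including the paper's, aim for far better than this O(N)-per-access cost; the sketch only illustrates the security notion.

```python
class LinearScanORAM:
    """Trivial ORAM: the physical access trace is always 0, 1, ..., N-1."""
    def __init__(self, n):
        self.cells = [0] * n

    def access(self, op, addr, value=None):
        out = None
        for i in range(len(self.cells)):   # touch every cell regardless of addr
            if i == addr:
                out = self.cells[i]
                if op == "write":
                    self.cells[i] = value
        return out

ram = LinearScanORAM(8)
ram.access("write", 3, 42)
assert ram.access("read", 3) == 42
```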

187 citations


Book ChapterDOI
28 Mar 2011
TL;DR: In this paper, the authors show that strong leakage resilience for cryptosystems with advanced functionalities can be obtained quite naturally within the methodology of dual system encryption, recently introduced by Waters.
Abstract: In this work, we show that strong leakage resilience for cryptosystems with advanced functionalities can be obtained quite naturally within the methodology of dual system encryption, recently introduced by Waters. We demonstrate this concretely by providing fully secure IBE, HIBE, and ABE systems which are resilient to bounded leakage from each of many secret keys per user, as well as many master keys. This can be realized as resilience against continual leakage if we assume keys are periodically updated and no (or logarithmic) leakage is allowed during the update process. Our systems are obtained by applying a simple modification to previous dual system encryption constructions: essentially this provides a generic tool for making dual system encryption schemes leakage-resilient.

167 citations


Book ChapterDOI
28 Mar 2011
TL;DR: A general framework for constructing password-based authenticated key exchange protocols with optimal round complexity - one message per party, sent simultaneously - in the standard model, assuming a common reference string.
Abstract: We show a general framework for constructing password-based authenticated key exchange protocols with optimal round complexity - one message per party, sent simultaneously - in the standard model, assuming a common reference string. When our framework is instantiated using bilinear-map cryptosystems, the resulting protocol is also (reasonably) efficient. Somewhat surprisingly, our framework can be adapted to give protocols in the standard model that are universally composable while still using only one (simultaneous) round.
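The round structure in question - each party sends a single message, and the two messages can cross simultaneously - is easiest to see in the shape of the classic SPEKE protocol, shown below as a familiar stand-in; it is not the paper's CRS-based framework and is not UC-secure. The modulus is a toy choice for illustration only; production use would need a large safe prime and subgroup checks.

```python
import hashlib, random

P = 2**127 - 1   # toy Mersenne prime; NOT a secure parameter choice

def pw_generator(pw: str) -> int:
    h = int.from_bytes(hashlib.sha256(pw.encode()).digest(), "big")
    return pow(h, 2, P)                      # password-derived group generator

def round_message(pw: str, x: int) -> int:
    return pow(pw_generator(pw), x, P)       # the party's single message

x_a, x_b = random.randrange(2, P), random.randrange(2, P)
msg_a = round_message("hunter2", x_a)        # sent simultaneously ...
msg_b = round_message("hunter2", x_b)        # ... with this one
key_a, key_b = pow(msg_b, x_a, P), pow(msg_a, x_b, P)
assert key_a == key_b                        # both sides derive g(pw)^(x_a*x_b)
```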

122 citations


Book ChapterDOI
28 Mar 2011
TL;DR: A zero-knowledge-based definition of privacy is put forward that is strictly stronger than differential privacy and particularly attractive when modeling privacy in social networks; it can be meaningfully achieved for tasks such as computing averages, fractions, histograms, and a variety of graph parameters and properties.
Abstract: We put forward a zero-knowledge based definition of privacy. Our notion is strictly stronger than the notion of differential privacy and is particularly attractive when modeling privacy in social networks. We furthermore demonstrate that it can be meaningfully achieved for tasks such as computing averages, fractions, histograms, and a variety of graph parameters and properties, such as average degree and distance to connectivity. Our results are obtained by establishing a connection between zero-knowledge privacy and sample complexity, and by leveraging recent sublinear time algorithms.
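For contrast with the paper's stronger notion, here is the weaker baseline it strictly strengthens: an ε-differentially-private average computed with the standard Laplace mechanism. This is a textbook construction, not taken from this paper, and the clipping bounds lo/hi are assumed to be public.

```python
import random

def dp_average(values, lo, hi, eps):
    """Differentially private mean via the Laplace mechanism."""
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / n                 # effect of changing one row on the mean
    laplace = random.expovariate(1) - random.expovariate(1)  # Laplace(0, 1) sample
    return sum(clipped) / n + laplace * sensitivity / eps

print(dp_average([23, 35, 41, 29, 52], lo=0, hi=100, eps=0.5))
```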

119 citations


Book ChapterDOI
28 Mar 2011
TL;DR: In this paper, the authors presented the first signature scheme that is resilient to full continual leakage: memory leakage as well as leakage from processing during signing (both from the secret key and the randomness), in key generation, and in update.
Abstract: Recent breakthrough results by Brakerski et al. and Dodis et al. have shown that signature schemes can be made secure even if the adversary continually obtains information leakage from the secret key of the scheme. However, the schemes currently do not allow leakage on the secret key and randomness during signing, except in the random oracle model. Further, the random oracle based schemes require updates to the secret key in order to maintain security, even when no leakage during computation is present. We present the first signature scheme that is resilient to full continual leakage: memory leakage as well as leakage from processing during signing (both from the secret key and the randomness), in key generation, and in update. Our scheme can tolerate leakage of a 1 - o(1) fraction of the secret key between updates, and is proven secure in the standard model based on the symmetric external DDH (SXDH) assumption in bilinear groups. The time periods between updates are a function of the amount of leakage in the period (and nothing more). As an additional technical contribution, we introduce a new tool: independent pre-image resistant hash functions, which may be of independent interest.

118 citations


Book ChapterDOI
28 Mar 2011
TL;DR: It is shown that there is no polynomial-time, differentially private algorithm A that takes a database D and outputs a "synthetic database" D̂ all of whose two-way marginals are approximately equal to those of D.
Abstract: Assuming the existence of one-way functions, we show that there is no polynomial-time, differentially private algorithm A that takes a database D ∈ ({0,1}^d)^n and outputs a "synthetic database" D̂ all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x ∈ {0,1}^d with a given pair of values in a given pair of columns.) This answers a question of Barak et al. (PODS '07), who gave an algorithm running in time poly(n, 2^d). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by Dwork et al., STOC '09) with encodings based on probabilistically checkable proofs. We also present both negative and positive results for generating "relaxed" synthetic data, where the fraction of rows in D satisfying a predicate c is estimated by applying c to each row of D̂ and aggregating the results in some way.
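The statistic at stake is simple to state in code. The sketch below computes all two-way marginals of a database D ∈ ({0,1}^d)^n exactly; the paper's result says no efficient differentially private algorithm can output a synthetic database approximately preserving all of them.

```python
from itertools import combinations

def two_way_marginals(D):
    """For each column pair (i, j) and values (a, b): fraction of rows with x_i=a, x_j=b."""
    n, d = len(D), len(D[0])
    marginals = {}
    for i, j in combinations(range(d), 2):
        for a in (0, 1):
            for b in (0, 1):
                marginals[(i, j, a, b)] = sum(
                    1 for row in D if row[i] == a and row[j] == b
                ) / n
    return marginals

D = [(0, 1, 1), (1, 1, 0), (0, 1, 0)]
print(two_way_marginals(D)[(0, 1, 0, 1)])   # 2/3 of rows have x_0 = 0 and x_1 = 1
```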

109 citations


Book ChapterDOI
28 Mar 2011
TL;DR: A general study of hash functions secure under correlated inputs, meaning that security should be maintained when the adversary sees hash values of many related high-entropy inputs; the paper also shows relations between correlated-input secure hash functions and cryptographic primitives secure under related-key attacks.
Abstract: We undertake a general study of hash functions secure under correlated inputs, meaning that security should be maintained when the adversary sees hash values of many related high-entropy inputs. Such a property is satisfied by a random oracle, and its importance is illustrated by study of the "avalanche effect," a well-known heuristic in cryptographic hash function design. One can interpret "security" in different ways: e.g., asking for one-wayness or that the hash values look uniformly and independently random; the latter case can be seen as a generalization of correlation-robustness introduced by Ishai et al. (CRYPTO 2003). We give specific applications of these notions to password-based login and efficient search on encrypted data. Our main construction achieves them (without random oracles) for inputs related by polynomials over the input space (namely Z_p), based on corresponding variants of the q-Diffie-Hellman Inversion assumption. Additionally, we show relations between correlated-input secure hash functions and cryptographic primitives secure under related-key attacks. Using our techniques, we are also able to obtain a host of new results for such related-key attack secure cryptographic primitives.
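The "avalanche effect" heuristic mentioned in the abstract is easy to observe empirically: flipping a single input bit of a well-designed hash flips roughly half the output bits. The quick check below uses SHA-256 purely as an illustration; the paper's correlated-input security notions are formal and stronger than this heuristic.

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"correlated-input test vector")
h0 = hashlib.sha256(bytes(msg)).digest()
msg[0] ^= 1                                   # flip one input bit
h1 = hashlib.sha256(bytes(msg)).digest()
print(hamming(h0, h1), "of 256 output bits flipped")   # typically close to 128
```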

108 citations


Book ChapterDOI
28 Mar 2011
TL;DR: The first IBE schemes that are proven secure against selective opening attack (SOA) are presented, which means that if an adversary, given a vector of ciphertexts, adaptively corrupts some fraction of the senders, exposing not only their messages but also their coins, the privacy of the unopened messages is guaranteed.
Abstract: We present the first IBE schemes that are proven secure against selective opening attack (SOA). This means that if an adversary, given a vector of ciphertexts, adaptively corrupts some fraction of the senders, exposing not only their messages but also their coins, the privacy of the unopened messages is guaranteed. Achieving security against such attacks is well-known to be challenging and was only recently done in the PKE case. We show that IBE schemes having a property we call 1-sided public openability (1SPO) yield SOA secure IBE schemes and then provide two 1SPO IBE schemes, the first based on the Boyen-Waters anonymous IBE and the second on Waters' dual-system approach.

94 citations


Book ChapterDOI
28 Mar 2011
TL;DR: The notion of entropic leakage-resilient PKE is formulated in this paper, which captures the intuition that as long as the entropy of the encrypted message is higher than the amount of leakage, the message still has some (pseudo) entropy left.
Abstract: What does it mean for an encryption scheme to be leakage-resilient? Prior formulations require that the scheme remains semantically secure even in the presence of leakage, but only considered leakage that occurs before the challenge ciphertext is generated. Although seemingly necessary, this restriction severely limits the usefulness of the resulting notion. In this work we study after-the-fact leakage, namely leakage that the adversary obtains after seeing the challenge ciphertext. We seek a "natural" and realizable notion of security, which is usable in higher-level protocols and applications. To this end, we formulate entropic leakage-resilient PKE. This notion captures the intuition that as long as the entropy of the encrypted message is higher than the amount of leakage, the message still has some (pseudo) entropy left. We show that this notion is realized by the Naor-Segev constructions (using hash proof systems). We demonstrate that entropic leakage-resilience is useful by showing a simple construction that uses it to get semantic security in the presence of after-the-fact leakage, in a model of bounded memory leakage from a split state.

Book ChapterDOI
28 Mar 2011
TL;DR: This work shows how to transform any additively homomorphic private-key encryption scheme that is compact into a public-key encryption scheme, where compact means that the length of a homomorphically generated encryption is independent of the number of ciphertexts from which it was created.
Abstract: We show how to transform any additively homomorphic private-key encryption scheme that is compact, into a public-key encryption scheme. By compact we mean that the length of a homomorphically generated encryption is independent of the number of ciphertexts from which it was created. We do not require anything else on the distribution of homomorphically generated encryptions (in particular, we do not require them to be distributed like real ciphertexts). Our resulting public-key scheme is homomorphic in the following sense. If the private-key scheme is (i+1)-hop homomorphic with respect to some set of operations then the public-key scheme we construct is i-hop homomorphic with respect to the same set of operations.

Book ChapterDOI
28 Mar 2011
TL;DR: This paper shows that it is impossible to achieve optimally-fair coin tossing via a black-box construction from one-way functions for r that is less than O(n/log n), where n is the input/output length of the one-way function used.
Abstract: A fair two-party coin tossing protocol is one in which both parties output the same bit that is almost uniformly distributed (i.e., it equals 0 and 1 with probability that is at most negligibly far from one half). It is well known that it is impossible to achieve fair coin tossing even in the presence of fail-stop adversaries (Cleve, FOCS 1986). In fact, Cleve showed that for every coin tossing protocol running for r rounds, an efficient fail-stop adversary can bias the output by Ω(1/r). Since this is the best possible, a protocol that limits the bias of any adversary to O(1/r) is called optimally-fair. The only optimally-fair protocol that is known to exist relies on the existence of oblivious transfer, because it uses general secure computation (Moran, Naor and Segev, TCC 2009). However, it is possible to achieve a bias of O(1/√r) in r rounds relying only on the assumption that there exist one-way functions. In this paper we show that it is impossible to achieve optimally-fair coin tossing via a black-box construction from one-way functions for r that is less than O(n/log n), where n is the input/output length of the one-way function used. An important corollary of this is that it is impossible to construct an optimally-fair coin tossing protocol via a black-box construction from one-way functions whose round complexity is independent of the security parameter n determining the security of the one-way function being used. Informally speaking, the main ingredient of our proof is to eliminate the random oracle from "secure" protocols with "low round-complexity" and simulate the protocol securely against semi-honest adversaries in the plain model. We believe our simulation lemma to be of broader interest.
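A toy simulation makes the fail-stop bias concrete. Below is the one-round Blum coin flip (commit, respond, open) with the common convention that the honest party outputs a fresh random bit if its peer aborts; a malicious P1 who aborts whenever it dislikes the outcome pushes Pr[output = 1] from 1/2 up to about 3/4. This is an illustration of Cleve-style bias, not the paper's black-box separation.

```python
import hashlib, os, random

def commit(bit: int):
    r = os.urandom(16)
    return hashlib.sha256(r + bytes([bit])).hexdigest(), r   # hiding, binding commitment

def run_once() -> int:
    a = random.randrange(2)
    commitment, _opening = commit(a)   # P1 -> P2: commitment to a
    b = random.randrange(2)            # P2 -> P1: random bit b
    if a ^ b == 0:                     # fail-stop P1 wants output 1, so it aborts on 0
        return random.randrange(2)     # abort convention: P2 outputs a fallback coin
    return a ^ b                       # honest completion: P1 opens, output is a XOR b

trials = 200_000
print(sum(run_once() for _ in range(trials)) / trials)   # ~0.75 instead of 0.5
```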

Book ChapterDOI
28 Mar 2011
TL;DR: This work presents the first protocol realizing universally composable two-party computations with information-theoretic security using only a single tamper-proof device issued by one of the mutually distrusting parties.
Abstract: Cryptographic assumptions regarding tamper-proof hardware tokens have gained increasing attention. Even if the tamper-proof hardware is issued by one of the parties, and hence not necessarily trusted by the other, many tasks become possible: tamper-proof hardware is sufficient for universally composable protocols and for information-theoretically secure protocols, and even allows creating software that can only be used once (one-time programs). However, all known protocols employing tamper-proof hardware are either indirect, i.e., additional computational assumptions must be used to obtain general two-party computations, or a large number of devices must be used. In this work we present the first protocol realizing universally composable two-party computations (and even trusted one-time programs) with information-theoretic security using only a single tamper-proof device issued by one of the mutually distrusting parties.

Book ChapterDOI
28 Mar 2011
TL;DR: In this paper, it was shown that for any constant d ∈ N, there exists a public-key encryption scheme that can securely encrypt any function f of its own secret key, assuming f can be expressed as a polynomial of total degree d; such a scheme is said to be key-dependent message (KDM) secure w.r.t. degree-d polynomials.
Abstract: We show how to achieve public-key encryption schemes that can securely encrypt nonlinear functions of their own secret key. Specifically, we show that for any constant d ∈ N, there exists a public-key encryption scheme that can securely encrypt any function f of its own secret key, assuming f can be expressed as a polynomial of total degree d. Such a scheme is said to be key-dependent message (KDM) secure w.r.t. degree-d polynomials. We also show that for any constants c, e, there exists a public-key encryption scheme that is KDM secure w.r.t. all Turing machines with description size c log λ and running time λ^e, where λ is the security parameter. The security of such public-key schemes can be based either on the standard decisional Diffie-Hellman (DDH) assumption or on the learning with errors (LWE) assumption (with certain parameter settings). In the case of functions that can be expressed as degree-d polynomials, we show that the resulting schemes are also secure with respect to key cycles of any length. Specifically, for any polynomial number n of key pairs, our schemes can securely encrypt a degree-d polynomial whose variables are the collection of coordinates of all n secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions. Our key idea is a general transformation that amplifies KDM security. The transformation takes an encryption scheme that is KDM secure w.r.t. some functions even when the secret keys are weak (i.e. chosen from an arbitrary distribution with entropy k), and outputs a scheme that is KDM secure w.r.t. a richer class of functions. The resulting scheme may no longer be secure with weak keys. Thus, in some sense, this transformation converts security with weak keys into amplified KDM security.

Book ChapterDOI
28 Mar 2011
TL;DR: A new cryptographic notion is introduced: a one-time computable pseudorandom function (PRF), which can be evaluated on at most one input, even by an adversary who controls the device storing the key K; it is also shown that this tool can be used to improve the communication complexity of proofs-of-erasure schemes.
Abstract: This paper studies the design of cryptographic schemes that are secure even if implemented on untrusted machines that fall under adversarial control. For example, this includes machines that are infected by a software virus. We introduce a new cryptographic notion that we call a one-time computable pseudorandom function (PRF), which is a PRF F_K(·) that can be evaluated on at most one input, even by an adversary who controls the device storing the key K, as long as: (1) the adversary cannot "leak" the key K out of the device completely (this is similar to the assumptions made in the Bounded-Retrieval Model), and (2) the local read/write memory of the machine is restricted, and not too much larger than the size of K. In particular, the only way to evaluate F_K(x) on such a device is to overwrite part of the key K during the computation, thus preventing all future evaluations of F_K(·) at any other point x′ ≠ x. We show that this primitive can be used to construct schemes for password-protected storage that are secure against dictionary attacks, even by a virus that infects the machine. Our constructions rely on the random-oracle model, and lower bounds for graph pebbling problems. We show that our techniques can also be used to construct another primitive, called uncomputable hash functions, which are hash functions that have a short description but require a large amount of space to compute on any input. We show that this tool can be used to improve the communication complexity of proofs-of-erasure schemes, introduced recently by Perito and Tsudik (ESORICS 2010).
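A rough sketch of the mechanism described above, under the model's assumptions: the device's only large storage holds K, and evaluating the function necessarily overwrites K, so no second evaluation is possible. HMAC-SHA256 stands in for the PRF, and the space bound that forces the overwrite is assumed by the model rather than enforced by Python (the sketch also assumes K is not all zeros).

```python
import hmac, hashlib

class OneTimeDevice:
    """Evaluate F_K(x) once; the evaluation destroys K in place."""
    def __init__(self, key: bytes):
        self._storage = bytearray(key)          # the device's only large memory

    def evaluate(self, x: bytes) -> bytes:
        if not any(self._storage):
            raise RuntimeError("key already consumed")
        out = hmac.new(bytes(self._storage), x, hashlib.sha256).digest()
        for i in range(len(self._storage)):     # overwrite K during the computation
            self._storage[i] = 0
        return out

dev = OneTimeDevice(b"\x13" * 32)
dev.evaluate(b"password-guess-1")               # first evaluation succeeds
# dev.evaluate(b"password-guess-2")             # would raise: K is gone
```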

Book ChapterDOI
28 Mar 2011
TL;DR: A new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle, which is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one.
Abstract: For over 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). A widely recognized limitation of such impossibility results, however, is that they say nothing about the usefulness of (known) non-black-box techniques. This is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal. With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on zero-knowledge proofs relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.

Book ChapterDOI
28 Mar 2011
TL;DR: There exist events on the choices of the respective states, each occurring with probability at least 1 - ε, such that the two systems are computationally indistinguishable conditioned on these events; this settles a long-standing open problem due to Luby and Rackoff (STOC '86).
Abstract: We consider the task of amplifying the security of a weak pseudorandom permutation (PRP), called an ε-PRP, for which the computational distinguishing advantage is only guaranteed to be bounded by some (possibly non-negligible) quantity ε < 1. We prove that the cascade (i.e., sequential composition) of m ε-PRPs (with independent keys) is an ((m - (m-1)ε)ε^m + ν)-PRP, where ν is a negligible function. In the asymptotic setting, this implies security amplification for all ε < 1 - 1/poly, and the result extends to two-sided PRPs, where the inverse of the given permutation is also queried. Furthermore, we show that this result is essentially tight. This settles a long-standing open problem due to Luby and Rackoff (STOC '86). Our approach relies on the first hardcore lemma for computational indistinguishability of interactive systems: given two systems whose states do not depend on the interaction, and which no efficient adversary can distinguish with advantage better than ε, we show that there exist events on the choices of the respective states, each occurring with probability at least 1 - ε, such that the two systems are computationally indistinguishable conditioned on these events.
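The object being analyzed is just sequential composition under independent keys. The sketch below builds a cascade from toy affine permutations over Z_256, chosen only so the permutation property is easy to verify; the theorem says the cascade's distinguishing advantage decays roughly like (m - (m-1)ε)·ε^m.

```python
def cascade(perms):
    """Sequential composition: apply each independently keyed permutation in order."""
    def forward(x):
        for p in perms:
            x = p(x)
        return x
    return forward

def keyed_perm(a, b):
    assert a % 2 == 1                 # odd multiplier => bijection mod 256
    return lambda x: (a * x + b) % 256

pi = cascade([keyed_perm(5, 17), keyed_perm(9, 3), keyed_perm(11, 200)])
assert len({pi(x) for x in range(256)}) == 256   # the cascade is still a permutation
```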

Book ChapterDOI
28 Mar 2011
TL;DR: The first adaptive OT protocol simultaneously satisfying a strong notion of security under a simple assumption in the standard model is presented; the paper also observes that a secure signature scheme is not necessary for this paradigm, provided that signatures can only be forged in certain ways.
Abstract: In an adaptive oblivious transfer (OT) protocol, a sender commits to a database of messages and then repeatedly interacts with a receiver in such a way that the receiver obtains one message per interaction of his choice (and nothing more) while the sender learns nothing about any of the choices. Recently, there has been significant effort to design practical adaptive OT schemes and to use these protocols as a building block for larger database applications. To be well suited for these applications, the underlying OT protocol should: (1) support an efficient initialization phase where one commitment can support an arbitrary number of receivers who are guaranteed of having the same view of the database, (2) execute transfers in time independent of the size of the database, and (3) satisfy a strong notion of security under a simple assumption in the standard model. We present the first adaptive OT protocol simultaneously satisfying these requirements. The sole complexity assumption required is that given (g, g^a, g^b, g^c, Q), where g generates a bilinear group of prime order p and a, b, c are selected randomly from Z_p, it is hard to decide if Q = g^{abc}. All prior protocols in the standard model either do not meet our efficiency requirements or require dynamic "q-based" assumptions. Our construction makes an important change to the established "assisted decryption" technique for designing adaptive OT. As in prior works, the sender commits to a database of n messages by publishing an encryption of each message and a signature on each encryption. Then, each transfer phase can be executed in time independent of n as the receiver blinds one of the encryptions and proves knowledge of the blinding factors and a signature on this encryption, after which the sender helps the receiver decrypt the chosen ciphertext. One of the main obstacles to designing an adaptive OT scheme from a simple assumption is realizing a suitable signature for this purpose (i.e., enabling signatures on group elements in a manner that later allows for efficient proofs). We make the observation that a secure signature scheme is not necessary for this paradigm, provided that signatures can only be forged in certain ways.

Book ChapterDOI
28 Mar 2011
TL;DR: This paper shows that there is no fully black-box construction of a OWP from a length-increasing injective one-way function (OWF), even if the latter is just 1-bit-increasing and achieves a strong form of one-wayness called adaptive one-wayness.
Abstract: A one-way permutation (OWP) is one of the most fundamental cryptographic primitives, and can be used as a building block for most basic symmetric-key cryptographic primitives. However, despite its importance and usefulness, previous black-box separation results have shown that constructing a OWP from another primitive seems hopeless, unless the building blocks already achieve the "one-way" property and the "permutation" property simultaneously. In this paper, in order to clarify more about the constructions of a OWP from other primitives, we study the construction of a OWP from primitives that are very close to a OWP. Concretely, as a negative result, we show that there is no fully black-box construction of a OWP from a length-increasing injective one-way function (OWF), even if the latter is just 1-bit-increasing and achieves a strong form of one-wayness which we call adaptive one-wayness. As a corollary, we show that there is no fully black-box construction of a OWP from a regular OWF with regularity greater than 1. Since a permutation is length-preserving and injective, and is a regular OWF with regularity 1, our negative result indicates that constructing a OWP from another primitive is quite difficult, even if we use primitives very close to a OWP as building blocks. Moreover, we extend our separation result of a OWP from a length-increasing injective OWF, and show a certain restrictive form of black-box separations among injective OWFs in terms of how much a function stretches its input. This result shows a hierarchy among injective OWFs (including a OWP).

Proceedings Article
01 Jan 2011
TL;DR: This research explores this emerging technology and how it serves to collaborate, innovate, and produce positive learning outcomes.
Abstract: The emergence of augmented reality technology in the form of interactive games has produced a valuable tool for education. The "Live" communal nature of these games, blending virtual content with global access and communication, has resulted in a new research arena previously called "edutainment" but more recently called "learning games". Windows Live combined with Xbox 360 with Kinect technology provides an agile, real-time environment with case-based reasoning, where learners can enjoy games, simulations and face-to-face chat, stream HD movies and television, music, sports and even Twitter and Facebook, with others around the world, or alone, in the privacy of the home. This research explores this emerging technology and how it serves to collaborate, innovate, and produce positive learning outcomes.

Book ChapterDOI
28 Mar 2011
TL;DR: Every finite deterministic 2-party function is either complete or can be considered equivalent to a non-complete symmetric 2-party function; this assertion holds true with respect to active adversaries as well as passive adversaries.
Abstract: In this paper we present simple but comprehensive combinatorial criteria for completeness of finite deterministic 2-party functions with respect to information-theoretic security. We give a general protocol construction for efficient and statistically secure reduction of oblivious transfer to any finite deterministic 2-party function that fulfills our criteria. For the resulting protocols we prove universal composability. Our results are tight in the sense that our criteria still are necessary for any finite deterministic 2-party function to allow for implementation of oblivious transfer with statistical privacy and correctness. We unify and generalize results of Joe Kilian (1991, 2000) in two ways. Firstly, we show that his completeness criteria also hold in the UC framework. Secondly, and this is our main contribution, our criteria also cover a wide class of primitives that are not subject of previous criteria. We show that there are non-trivial examples of finite deterministic 2-party functions that are neither symmetric nor asymmetric and therefore have not been covered by existing completeness criteria so far. As a corollary of our work, every finite deterministic 2-party function is either complete or can be considered equivalent to a non-complete symmetric 2-party function; this assertion holds true with respect to active adversaries as well as passive adversaries. Thereby known results on non-complete symmetric 2-party functions are strengthened.

Book ChapterDOI
28 Mar 2011
TL;DR: This work rules out black-box constructions of blind signature schemes from one-way functions, even from a random permutation oracle; these results hold even for blind signature schemes for 1-bit messages that achieve security only against honest-but-curious behavior.
Abstract: A seminal result in cryptography is that signature schemes can be constructed (in a black-box fashion) from any one-way function. The minimal assumptions needed to construct blind signature schemes, however, have remained unclear. Here, we rule out black-box constructions of blind signature schemes from one-way functions. In fact, we rule out constructions even from a random permutation oracle, and our results hold even for blind signature schemes for 1-bit messages that achieve security only against honest-but-curious behavior.

Book ChapterDOI
28 Mar 2011
TL;DR: It is shown, for queries with output in R^n and with respect to a large class of utilities, that any computationally private mechanism can be converted to a statistically private mechanism that is equally efficient and achieves roughly the same utility.
Abstract: Differential privacy is a well established definition guaranteeing that queries to a database do not reveal "too much" information about specific individuals who have contributed to the database. The standard definition of differential privacy is information theoretic in nature, but it is natural to consider computational relaxations and to explore what can be achieved with respect to such notions. Mironov et al. (Crypto 2009) and McGregor et al. (FOCS 2010) recently introduced and studied several variants of computational differential privacy, and show that in the two-party setting (where data is split between two parties) these relaxations can offer significant advantages. Left open by prior work was the extent, if any, to which computational differential privacy can help in the usual client/server setting where the entire database resides at the server, and the client poses queries on this data. We show, for queries with output in R^n (for constant n) and with respect to a large class of utilities, that any computationally private mechanism can be converted to a statistically private mechanism that is equally efficient and achieves roughly the same utility.

Book ChapterDOI
28 Mar 2011
TL;DR: Average-case strengthenings of the traditional assumption that coNP is not contained in AM are considered, which rule out generic and potentially non-black-box constructions of various cryptographic primitives from one-way functions, assuming the security reductions are black-box.
Abstract: We consider average-case strengthenings of the traditional assumption that coNP is not contained in AM. Under these assumptions, we rule out generic and potentially non-black-box constructions of various cryptographic primitives (e.g., one-way permutations, collision-resistant hash-functions, constant-round statistically hiding commitments, and constant-round black-box zero-knowledge proofs for NP) from one-way functions, assuming the security reductions are black-box.

Book ChapterDOI
28 Mar 2011
TL;DR: Surprisingly, it is shown that when parties have different beliefs, UC security can be achieved with a more limited "trust" than what is necessary in the traditional setting (where all parties have a common belief).
Abstract: Known constructions of UC secure protocols are based on the premise that different parties collectively agree on some trusted setup. In this paper, we consider the following two intriguing questions: Is it possible to achieve UC if the parties do not want to put all their trust in one entity (or more generally, in one setup)? What if the parties have a difference of opinion about what they are willing to trust? The first question has been studied in only a limited way, while the second has never been considered before. In this paper, we initiate a systematic study to answer the above questions. We consider a scenario with multiple setup instances where each party in the system has some individual belief (setup assumption in terms of the given setups). The belief of a party corresponds to what it is willing to trust and its security is guaranteed given that its belief "holds." The question considered is: "Given some setups and the (possibly) different beliefs of all the parties, when can UC security be achieved?" We present a general condition on the setups and the beliefs of all the parties under which UC security is possible. Surprisingly, we show that when parties have different beliefs, UC security can be achieved with a more limited "trust" than what is necessary in the traditional setting (where all parties have a common belief).

Proceedings Article
01 Jan 2011
TL;DR: In this paper, the authors present the results of a study assessing the comparative effectiveness of teaching an undergraduate intermediate accounting course in the online classroom format, where students were required to complete several objective homework assignments and write an essay on what it means to be a professional.
Abstract: This paper presents the results of a study assessing the comparative effectiveness of teaching an undergraduate intermediate accounting course in the online classroom format. Students in a large state university were offered an opportunity to complete the first course in intermediate accounting either online or on-campus. Students were required to complete several objective homework assignments and write an essay on what it means to be a professional. In addition, students were asked to report their progress in achieving seven stated objectives of the course. Students in the online course performed as well as students in the on-campus course.

Book ChapterDOI
28 Mar 2011
TL;DR: New proofs are given for the hardness amplification of efficiently samplable predicates and of weakly verifiable puzzles which generalize to new settings; in particular, any weak cryptographic protocol whose security is given by the unpredictability of single bits can be strengthened with a natural information-theoretic protocol.
Abstract: We give new proofs for the hardness amplification of efficiently samplable predicates and of weakly verifiable puzzles which generalize to new settings. More concretely, in the first part of the paper, we give a new proof of Yao's XOR-Lemma that additionally applies to related theorems in the cryptographic setting. Our proof seems simpler than previous ones, yet immediately generalizes to statements similar in spirit such as the extraction lemma used to obtain pseudo-random generators from one-way functions [Hastad, Impagliazzo, Levin, Luby, SIAM J. on Comp. 1999]. In the second part of the paper, we give a new proof of hardness amplification for weakly verifiable puzzles, which is more general than previous ones in that it gives the right bound even for an arbitrary monotone function applied to the checking circuit of the underlying puzzle. Both our proofs are applicable in many settings of interactive cryptographic protocols because they satisfy a property that we call "non-rewinding". In particular, we show that any weak cryptographic protocol whose security is given by the unpredictability of single bits can be strengthened with a natural information-theoretic protocol. As an example, we show how these theorems solve the main open question from [Halevi and Rabin, TCC 2008] concerning bit commitment.
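The statement of Yao's XOR-Lemma is easy to check by simulation: if each bit can be predicted with advantage δ, the XOR of k independent bits can be predicted with advantage only about δ^k. For the artificial noisy predictor below this follows from the piling-up lemma; the lemma's actual content is that the same decay holds against every efficient predictor.

```python
import random

def noisy_predict(bit: int, delta: float) -> int:
    """Artificial predictor: correct with probability 1/2 + delta/2, i.e. advantage delta."""
    return bit if random.random() < 0.5 + delta / 2 else 1 - bit

def xor_advantage(k: int, delta: float, trials: int = 200_000) -> float:
    hits = 0
    for _ in range(trials):
        bits = [random.randrange(2) for _ in range(k)]
        guess = 0
        for b in bits:
            guess ^= noisy_predict(b, delta)     # predict each bit, XOR the guesses
        hits += guess == sum(bits) % 2
    return 2 * hits / trials - 1                 # advantage = 2*Pr[correct] - 1

for k in (1, 2, 4):
    print(k, round(xor_advantage(k, 0.4), 3))    # ≈ 0.4, 0.16, 0.026, i.e. 0.4**k
```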

Book ChapterDOI
28 Mar 2011
TL;DR: This work provides the first construction of a CNMZK protocol that, without any trusted set-up, remains secure even if the attacker may adaptively select the statements to receive proofs of.
Abstract: Concurrent non-malleable zero-knowledge (CNMZK) considers the concurrent execution of zero-knowledge protocols in a setting where the attacker can simultaneously corrupt multiple provers and verifiers. We provide the first construction of a CNMZK protocol that, without any trusted set-up, remains secure even if the attacker may adaptively select the statements to receive proofs of; previous works only handle scenarios where the statements are fixed at the beginning of the execution, or chosen adaptively from a restricted set of statements.

Book ChapterDOI
28 Mar 2011
TL;DR: A stronger counterexample is given ruling out the interesting possibility that for any scheme there exists a constant c > 0 such that n-fold repetition remains secure against c·n·l bits of leakage.
Abstract: If a cryptographic primitive remains secure even if l bits about the secret key are leaked to the adversary, one would expect that at least one of n independent instantiations of the scheme remains secure given n·l bits of leakage. This intuition has been proven true for schemes satisfying some special information-theoretic properties by Alwen et al. [Eurocrypt '10]. On the negative side, Lewko and Waters [FOCS '10] construct a CPA-secure public-key encryption scheme for which this intuition fails. The counterexample of Lewko and Waters leaves open the interesting possibility that for any scheme there exists a constant c > 0 such that n-fold repetition remains secure against c·n·l bits of leakage. Furthermore, their counterexample requires the n copies of the encryption scheme to share a common reference parameter, leaving open the possibility that the intuition is true for all schemes without common setup. In this work we give a stronger counterexample ruling out these possibilities. We construct a signature scheme such that: (1) a single instantiation remains secure given l = log(k) bits of leakage, where k is a security parameter; (2) any polynomial number of independent instantiations can be broken (in the strongest sense of key-recovery) given l′ = poly(k) bits of leakage. Note that l′ does not depend on the number of instances. The computational assumption underlying our counterexample is that non-interactive computationally sound proofs exist. Moreover, under a stronger (non-standard) assumption about such proofs, our counterexample does not require a common reference parameter. The underlying idea of our counterexample is rather generic and can be applied to other primitives like encryption schemes.