Showing papers presented at the Theory of Cryptography Conference in 2007


Book ChapterDOI
21 Feb 2007
TL;DR: This work constructs public-key systems that support comparison queries (x ≥ a) on encrypted data as well as more general queries such as subset queries (x ∈ S), and that support arbitrary conjunctive queries without leaking information on individual conjuncts.
Abstract: We construct public-key systems that support comparison queries (x ≥ a) on encrypted data as well as more general queries such as subset queries (x ∈ S). Furthermore, these systems support arbitrary conjunctive queries (P1 ∧ ... ∧ Pl) without leaking information on individual conjuncts. We present a general framework for constructing and analyzing public-key systems supporting queries on encrypted data.
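A minimal mock of the query interface such systems expose, to pin down the functionality only (all names here are hypothetical, and this sketch provides no security whatsoever: in a real scheme of the kind described above, the token is applied to an encryption of x and reveals nothing but the predicate's overall truth value):

```python
# Functionality-only mock of query tokens; in the actual system, gen_token
# would take a secret key and the token would act on a ciphertext, not on x.
def gen_token(predicates):
    """Token for the conjunction P1 AND ... AND Pl; a secure scheme reveals
    only the overall truth value, never which individual conjunct failed."""
    def token(x):
        return all(p(x) for p in predicates)
    return token

tok = gen_token([lambda x: x >= 17,             # comparison query (x >= a)
                 lambda x: x in {17, 23, 42}])  # subset query (x in S)
print(tok(23), tok(5))  # True False
```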

1,310 citations


Book ChapterDOI
Melissa Chase1
21 Feb 2007
TL;DR: This work answers in the affirmative the open question of whether an attribute-based encryption scheme can be constructed in which multiple authorities are allowed to distribute attributes, and shows how to apply the techniques to achieve a multi-authority version of the large-universe fine-grained access control ABE.
Abstract: In an identity based encryption scheme, each user is identified by a unique identity string. An attribute based encryption scheme (ABE), in contrast, is a scheme in which each user is identified by a set of attributes, and some function of those attributes is used to determine decryption ability for each ciphertext. Sahai and Waters introduced a single authority attribute encryption scheme and left open the question of whether a scheme could be constructed in which multiple authorities were allowed to distribute attributes [SW05]. We answer this question in the affirmative. Our scheme allows any polynomial number of independent authorities to monitor attributes and distribute secret keys. An encryptor can choose, for each authority k, a number d_k and a set of attributes; he can then encrypt a message such that a user can only decrypt if he has at least d_k of the given attributes from each authority k. Our scheme can tolerate an arbitrary number of corrupt authorities. We also show how to apply our techniques to achieve a multi-authority version of the large universe fine grained access control ABE presented by Goyal et al. [GPSW06].
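A minimal sketch of just the access policy described above (hypothetical helper names, none of the actual cryptography): a ciphertext fixes, for each authority k, a threshold d_k and an attribute set, and decryption succeeds only if the user holds at least d_k of the listed attributes from every authority.

```python
# Policy check only: models who *could* decrypt in the multi-authority
# scheme; the real scheme enforces this with secret keys, not a predicate.
def can_decrypt(ct_policy, user_attrs):
    # ct_policy:  {authority k: (d_k, attributes chosen by the encryptor)}
    # user_attrs: {authority k: attributes the user was issued keys for}
    return all(len(attrs & user_attrs.get(k, set())) >= d_k
               for k, (d_k, attrs) in ct_policy.items())

policy = {"authA": (2, {"student", "cs", "year3"}), "authB": (1, {"staff"})}
print(can_decrypt(policy, {"authA": {"student", "cs"}, "authB": {"staff"}}))  # True
print(can_decrypt(policy, {"authA": {"student"}, "authB": {"staff"}}))        # False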

1,046 citations


Book ChapterDOI
21 Feb 2007
TL;DR: The notion of universally composable (UC) security is extended in a way that re-establishes its original intuitive guarantee even for protocols that use globally available set-up, and guarantees deniability.
Abstract: Cryptographic protocols are often designed and analyzed under some trusted set-up assumptions, namely in settings where the participants have access to global information that is trusted to have some basic security properties. However, current modeling of security in the presence of such set-up falls short of providing the expected security guarantees. A quintessential example of this phenomenon is the deniability concern: there exist natural protocols that meet the strongest known composable security notions, and are still vulnerable to bad interactions with rogue protocols that use the same set-up. We extend the notion of universally composable (UC) security in a way that re-establishes its original intuitive guarantee even for protocols that use globally available set-up. The new formulation prevents bad interactions even with adaptively chosen protocols that use the same set-up. In particular, it guarantees deniability. While for protocols that use no setup the proposed requirements are the same as in traditional UC security, for protocols that use global set-up the proposed requirements are significantly stronger. In fact, realizing Zero Knowledge or commitment becomes provably impossible, even in the Common Reference String model. Still, we propose reasonable alternative set-up assumptions and protocols that allow realizing practically any cryptographic task under standard hardness assumptions even against adaptive corruptions.

392 citations


Book ChapterDOI
21 Feb 2007
TL;DR: The main construction generalizes the approach of Kushilevitz and Ostrovsky for constructing single-server Private Information Retrieval protocols and shows how to strengthen the above so that c′ does not contain additional information about P (other than P(x) for some x) even if the public key and the ciphertext c are maliciously formed.
Abstract: We present a public-key encryption scheme with the following properties. Given a branching program P and an encryption c of an input x, it is possible to efficiently compute a succinct ciphertext c′ from which P(x) can be efficiently decoded using the secret key. The size of c′ depends polynomially on the size of x and the length of P, but does not further depend on the size of P. As interesting special cases, one can efficiently evaluate finite automata, decision trees, and OBDDs on encrypted data, where the size of the resulting ciphertext c′ does not depend on the size of the object being evaluated. These are the first general representation models for which such a feasibility result is shown. Our main construction generalizes the approach of Kushilevitz and Ostrovsky (FOCS 1997) for constructing single-server Private Information Retrieval protocols. We also show how to strengthen the above so that c′ does not contain additional information about P (other than P(x) for some x) even if the public key and the ciphertext c are maliciously formed. This yields a two-message secure protocol for evaluating a length-bounded branching program P held by a server on an input x held by a client. A distinctive feature of this protocol is that it hides the size of the server's input P from the client. In particular, the client's work is independent of the size of P.

250 citations


Book ChapterDOI
21 Feb 2007
TL;DR: This work shows a natural obfuscation task that can be achieved under the best-possible definition but cannot be achieved under the black-box definition, and shows that strong (information-theoretic) best-possible obfuscation implies a collapse in the polynomial hierarchy.
Abstract: An obfuscator is a compiler that transforms any program (which we will view in this work as a boolean circuit) into an obfuscated program (also a circuit) that has the same input-output functionality as the original program, but is "unintelligible". Obfuscation has applications for cryptography and for software protection. Barak et al. initiated a theoretical study of obfuscation, which focused on black-box obfuscation, where the obfuscated circuit should leak no information except for its (black-box) input-output functionality. A family of functionalities that cannot be obfuscated was demonstrated. Subsequent research has shown further negative results as well as positive results for obfuscating very specific families of circuits, all with respect to black box obfuscation. This work is a study of a new notion of obfuscation, which we call best-possible obfuscation. Best possible obfuscation makes the relaxed requirement that the obfuscated program leaks as little information as any other program with the same functionality (and of similar size). In particular, this definition allows the program to leak non black-box information. Best-possible obfuscation guarantees that any information that is not hidden by the obfuscated program is also not hidden by any other similar-size program computing the same functionality, and thus the obfuscation is (literally) the best possible. In this work we study best-possible obfuscation and its relationship to previously studied definitions. Our main results are: 1. A separation between black-box and best-possible obfuscation. We show a natural obfuscation task that can be achieved under the best-possible definition, but cannot be achieved under the black-box definition. 2. A hardness result for best-possible obfuscation, showing that strong (information-theoretic) best-possible obfuscation implies a collapse in the polynomial hierarchy. 3. An impossibility result for efficient best-possible (and black-box) obfuscation in the presence of random oracles. This impossibility result uses a random oracle to construct hard-to-obfuscate circuits, and thus it does not imply impossibility in the standard model.

209 citations


Book ChapterDOI
21 Feb 2007
TL;DR: This paper shows how to guarantee that if an adversary deviates from the protocol in a way that would enable it to "cheat", then the honest parties detect this cheating with good probability, and argues that this level of security is sufficient in many settings.
Abstract: In the setting of secure multiparty computation, a set of mutually distrustful parties wish to securely compute some joint function of their private inputs. The computation should be carried out in a secure way, meaning that no coalition of corrupted parties should be able to learn more than specified or somehow cause the result to be "incorrect". Typically, corrupted parties are either assumed to be semi-honest (meaning that they follow the protocol specification) or malicious (meaning that they may deviate arbitrarily from the protocol). However, in many settings, the assumption regarding semi-honest behavior does not suffice and security in the presence of malicious adversaries is excessive and expensive to achieve. In this paper, we introduce the notion of covert adversaries, which we believe faithfully models the adversarial behavior in many commercial, political, and social settings. Covert adversaries have the property that they may deviate arbitrarily from the protocol specification in an attempt to cheat, but do not wish to be "caught" doing so. We provide a definition of security for covert adversaries and show that it is possible to obtain highly efficient protocols that are secure against such adversaries. We stress that in our definition, we quantify over all (possibly malicious) adversaries and do not assume that the adversary behaves in any particular way. Rather, we guarantee that if an adversary deviates from the protocol in a way that would enable it to "cheat", then the honest parties are guaranteed to detect this cheating with good probability. We argue that this level of security is sufficient in many settings.

200 citations


Book ChapterDOI
21 Feb 2007
TL;DR: This work presents the first positive obfuscation result for a traditional cryptographic functionality: re-encryption, which takes a ciphertext for a message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key.
Abstract: We present the first positive obfuscation result for a traditional cryptographic functionality. This positive result stands in contrast to well-known negative impossibility results [BGI+01] for general obfuscation and recent negative impossibility and improbability [GK05] results for obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions, our obfuscation result applies to the significantly more complicated and widely-used re-encryption functionality. This functionality takes a ciphertext for message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key. To overcome impossibility results and to make our results meaningful for cryptographic functionalities, we use a new definition of obfuscation. This new definition incorporates more security-aware provisions.

150 citations


Book ChapterDOI
21 Feb 2007
TL;DR: In this paper, the authors present a very simple and efficient adaptively sound perfect NIZK argument system for any NP-language, which does not pose any restriction on the statements to be proven.
Abstract: This paper presents a very simple and efficient adaptively-sound perfect NIZK argument system for any NP-language. In contrast to recently proposed schemes by Groth, Ostrovsky and Sahai, our scheme does not pose any restriction on the statements to be proven. Besides, it enjoys a number of desirable properties: it allows re-use of the common reference string (CRS), it can handle arithmetic circuits, and the CRS can be set up very efficiently without the need for an honest party. We then show an application of our techniques in constructing efficient NIZK schemes for proving arithmetic relations among committed secrets, whereas previous methods required expensive generic NP-reductions. The security of the proposed schemes is based on a strong non-standard assumption, an extended version of the so-called Knowledge-of-Exponent Assumption (KEA) over bilinear groups. We give some justification for using such an assumption by showing that the commonly-used approach for proving NIZK arguments sound does not allow for adaptively-sound statistical NIZK arguments (unless NP ⊂ P/poly). Furthermore, we show that the assumption used in our construction holds with respect to generic adversaries that do not exploit the specific representation of the group elements. We also discuss how to avoid the non-standard assumption in a pre-processing model.

99 citations


Book ChapterDOI
21 Feb 2007
TL;DR: An efficient communication-optimal two-round PSMT protocol for messages of length polynomial in n that is almost optimally resilient in that it requires a number of channels n ≥ (2 + ε)t, for any arbitrarily small constant ε > 0.
Abstract: Perfectly secure message transmission (PSMT), a problem formulated by Dolev, Dwork, Waarts and Yung, involves a sender S and a recipient R who are connected by n synchronous channels of which up to t may be corrupted by an active adversary. The goal is to transmit, with perfect security, a message from S to R. PSMT is achievable if and only if n > 2t. For the case n > 2t, the lower bound on the number of communication rounds between S and R required for PSMT is 2, and the only known efficient (i.e., polynomial in n) two-round protocol involves a communication complexity of O(n^3 ℓ) bits, where ℓ is the length of the message. A recent solution by Agarwal, Cramer and de Haan is provably communication-optimal by achieving an asymptotic communication complexity of O(nℓ) bits; however, it requires the messages to be exponentially large, i.e., ℓ = Ω(2^n). In this paper we present an efficient communication-optimal two-round PSMT protocol for messages of length polynomial in n that is almost optimally resilient in that it requires a number of channels n ≥ (2 + ε)t, for any arbitrarily small constant ε > 0. In this case, the optimal communication complexity is O(ℓ) bits.

92 citations


Book ChapterDOI
21 Feb 2007
TL;DR: In particular, this article showed that if the proposed CCA-secure construction's decryption algorithm does not query the semantically secure primitive's encryption algorithm, then the proposed construction cannot be CCA secure.
Abstract: We address the question of whether or not semantically secure public-key encryption primitives imply the existence of chosen ciphertext attack (CCA) secure primitives. We show a black-box separation, following the methodology introduced by Impagliazzo and Rudich [23], for a large non-trivial class of constructions. In particular, we show that if the proposed CCA construction's decryption algorithm does not query the semantically secure primitive's encryption algorithm, then the proposed construction cannot be CCA secure.

88 citations


Book ChapterDOI
21 Feb 2007
TL;DR: This work constructs an intrusion-resilient symmetric-key authenticated key exchange (AKE) protocol in the bounded retrieval model, and shows how to instantiate it without random oracles.
Abstract: We construct an intrusion-resilient symmetric-key authenticated key exchange (AKE) protocol in the bounded retrieval model. The model employs a long shared private key to cope with an active adversary who can repeatedly compromise the user's machine and perform any efficient computation on the entire shared key. However, we assume that the attacker is communication bounded and unable to retrieve too much information during each successive break-in. In contrast, the users read only a small portion of the shared key, making the model quite realistic in situations where storage is much cheaper than bandwidth. The problem was first studied by Dziembowski [Dzi06a], who constructed a secure AKE protocol using random oracles. We present a general paradigm for constructing intrusion-resilient AKE protocols in this model, and show how to instantiate it without random oracles. The main ingredients of our construction are UC-secure password authenticated key exchange and tools from the bounded storage model.

Book ChapterDOI
21 Feb 2007
TL;DR: A new protocol for blind signatures in which security is preserved even under arbitrarily-many concurrent executions is shown, which is the first to be proven secure in a concurrent setting without random oracles or a trusted setup assumption such as a common reference string.
Abstract: We show a new protocol for blind signatures in which security is preserved even under arbitrarily-many concurrent executions. The protocol can be based on standard cryptographic assumptions and is the first to be proven secure in a concurrent setting (under any assumptions) without random oracles or a trusted setup assumption such as a common reference string. Along the way, we also introduce new definitions of security for blind signature schemes.

Book ChapterDOI
21 Feb 2007
TL;DR: This work constructs public-key obfuscations of a decryption shuffle based on the Boneh-Goh-Nissim (BGN) cryptosystem and a re-encryption shuffle based on the Paillier cryptosystem; both allow efficient distributed verifiable decryption.
Abstract: We show how to obfuscate a secret shuffle of ciphertexts: shuffling becomes a public operation. Given a trusted party that samples and obfuscates a shuffle before any ciphertexts are received, this reduces the problem of constructing a mix-net to verifiable joint decryption. We construct public-key obfuscations of a decryption shuffle based on the Boneh-Goh-Nissim (BGN) cryptosystem and a re-encryption shuffle based on the Paillier cryptosystem. Both allow efficient distributed verifiable decryption. Finally, we give a distributed protocol for sampling and obfuscating each of the above shuffles and show how it can be used in a trivial way to construct a universally composable mix-net. Our constructions are practical when the number of senders N is small, yet large enough to handle a number of practical cases, e.g. N = 350 in the BGN case and N = 2000 in the Paillier case.
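For intuition, here is the textbook Paillier re-encryption step that a re-encryption shuffle is built from: multiplying a ciphertext by a fresh encryption of 0 re-randomizes it without decryption, so a permuted, re-randomized list cannot be matched to the input list. This is a toy sketch with unsafely small parameters, not the paper's construction, whose public-key obfuscation additionally hides the permutation inside a published program.

```python
import math, random

p, q = 1009, 1013                # toy primes; real deployments use far larger n
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:   # r must be a unit mod n
        r = random.randrange(2, n)
    return pow(1 + n, m, n2) * pow(r, n, n2) % n2

def reenc(c):
    return c * enc(0) % n2       # fresh randomness, same plaintext

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

cts = [enc(m) for m in (11, 22, 33)]
mixed = [reenc(c) for c in random.sample(cts, len(cts))]  # permute + re-encrypt
print(sorted(dec(c) for c in mixed))  # [11, 22, 33]
```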

Book ChapterDOI
21 Feb 2007
TL;DR: A new definition of obfuscation is given, with an argument for its reasonability and usefulness: it is strong enough for cryptographic applications, yet it has the potential for interesting positive results.
Abstract: An obfuscation O of a function F should satisfy two requirements: firstly, using O it should be possible to evaluate F; secondly, O should not reveal anything about F that cannot be learnt from oracle access to F. Several definitions for obfuscation exist. However, most of them are either too weak for or incompatible with cryptographic applications, or have been shown impossible to achieve, or both. We give a new definition of obfuscation and argue for its reasonability and usefulness. In particular, we show that it is strong enough for cryptographic applications, yet it has the potential for interesting positive results. We illustrate this with the following two results: 1. If the encryption algorithm of a secure secret-key encryption scheme can be obfuscated according to our definition, then the result is a secure public-key encryption scheme. 2. A uniformly random point function can be easily obfuscated according to our definition, by simply applying a one-way permutation. Previous obfuscators for point functions, under varying notions of security, are either probabilistic or in the random oracle model (but work for arbitrary distributions on the point function). On the negative side, we show that 1. Following Hada [12] and Wee [25], any family of deterministic functions that can be obfuscated according to our definition must already be "approximately learnable." Thus, many deterministic functions cannot be obfuscated. However, a probabilistic functionality such as a probabilistic secret-key encryption scheme can potentially be obfuscated. In particular, this is possible for a public-key encryption scheme when viewed as a secret-key scheme. 2. There exists a secure probabilistic secret-key encryption scheme that cannot be obfuscated according to our definition. Thus, we cannot hope for a general-purpose cryptographic obfuscator for encryption schemes.

Book ChapterDOI
21 Feb 2007
TL;DR: This paper studies key distillation from a tripartite state ρ_ABE shared by Alice, Bob, and the adversary Eve, bounding the key rate, i.e., the number of key bits that can be extracted per copy of ρ_ABE, and shows that assumptions on the nature of Eve's memory matter for the security threshold in QKD.
Abstract: Assume that two distant parties, Alice and Bob, as well as an adversary, Eve, have access to (quantum) systems prepared jointly according to a tripartite state ρ_ABE. In addition, Alice and Bob can use local operations and authenticated public classical communication. Their goal is to establish a key which is unknown to Eve. We initiate the study of this scenario as a unification of two standard scenarios: (i) key distillation (agreement) from classical correlations and (ii) key distillation from pure tripartite quantum states. Firstly, we obtain generalisations of fundamental results related to scenarios (i) and (ii), including upper bounds on the key rate, i.e., the number of key bits that can be extracted per copy of ρ_ABE. Moreover, based on an embedding of classical distributions into quantum states, we are able to find new connections between protocols and quantities in the standard scenarios (i) and (ii). Secondly, we study specific properties of key distillation protocols. In particular, we show that every protocol that makes use of pre-shared key can be transformed into an equally efficient protocol which needs no pre-shared key. This result is of practical significance as it applies to quantum key distribution (QKD) protocols, but it also implies that the key rate cannot be locked with information on Eve's side. Finally, we exhibit an arbitrarily large separation between the key rate in the standard setting where Eve is equipped with quantum memory and the key rate in a setting where Eve is only given classical memory. This shows that assumptions on the nature of Eve's memory are important in order to determine the correct security threshold in QKD.

Book ChapterDOI
21 Feb 2007
TL;DR: A protocol compiler is described that transforms any provably secure authenticated 2-party key establishment into a provably secure authenticated group key establishment with 2 more rounds of communication.
Abstract: A protocol compiler is described that transforms any provably secure authenticated 2-party key establishment into a provably secure authenticated group key establishment with 2 more rounds of communication. The compiler introduces neither idealizing assumptions nor high-entropy secrets, e.g., for signing. In particular, applying the compiler to a password-authenticated 2-party key establishment without random oracle assumption yields a password-authenticated group key establishment without random oracle assumption. Our main technical tools are non-interactive and non-malleable commitment schemes that can be implemented in the common reference string (CRS) model.

Book ChapterDOI
21 Feb 2007
TL;DR: This work presents a lower bound on the round complexity of a natural class of black-box constructions of statistically hiding commitments from one-way permutations, which matches the round complexity of the protocol studied by Naor et al.
Abstract: We present a lower bound on the round complexity of a natural class of black-box constructions of statistically hiding commitments from one-way permutations. This implies an Ω(n/log n) lower bound on the round complexity of a computational form of interactive hashing, which has been used to construct statistically hiding commitments (and related primitives) from various classes of one-way functions, starting with the work of Naor, Ostrovsky, Venkatesan and Yung (J. Cryptology, 1998). Our lower bound matches the round complexity of the protocol studied by Naor et al.

Book ChapterDOI
21 Feb 2007
TL;DR: In this paper, a new complexity-theoretic definition of security for watermarking schemes is proposed, together with two weaker security conditions that seem to capture the security goals of practice-oriented work on watermarking.
Abstract: The informal goal of a watermarking scheme is to "mark" a digital object, such as a picture or video, in such a way that it is difficult for an adversary to remove the mark without destroying the content of the object. Although there has been considerable work proposing and breaking watermarking schemes, there has been little attention given to the formal security goals of such a scheme. In this work, we provide a new complexity-theoretic definition of security for watermarking schemes. We describe some shortcomings of previous attempts at defining watermarking security, and show that security under our definition also implies security under previous definitions. We also propose two weaker security conditions that seem to capture the security goals of practice-oriented work on watermarking and show how schemes satisfying these weaker goals can be strengthened to satisfy our definition.

Book ChapterDOI
21 Feb 2007
TL;DR: An OT-combiner is proposed that guarantees secure OT even when only one candidate is secure for both parties and every remaining candidate is flawed for one of the parties; a simple impossibility result shows that the proposed OT-combiners achieve optimal robustness.
Abstract: A (k; n)-robust combiner for a primitive F takes as input n candidate implementations of F and constructs an implementation of F, which is secure assuming that at least k of the input candidates are secure. Such constructions provide robustness against insecure implementations and wrong assumptions underlying the candidate schemes. In a recent work Harnik et al. (Eurocrypt 2005) have proposed a (2; 3)-robust combiner for oblivious transfer (OT), and have shown that (1; 2)-robust OT-combiners of a certain type are impossible. In this paper we propose new, generalized notions of combiners for two-party primitives, which capture the fact that in many two-party protocols the security of one of the parties is unconditional, or is based on an assumption independent of the assumption underlying the security of the other party. This fine-grained approach results in OT-combiners strictly stronger than the constructions known before. In particular, we propose an OT-combiner which guarantees secure OT even when only one candidate is secure for both parties, and every remaining candidate is flawed for one of the parties. Furthermore, we present an efficient uniform OT-combiner, i.e., a single combiner which is secure simultaneously for a wide range of candidates' failures. Finally, our definition allows for a very simple impossibility result, which shows that the proposed OT-combiners achieve optimal robustness.

Book ChapterDOI
21 Feb 2007
TL;DR: The soundness theorem shows that if the encryption scheme used in the protocol is semantically secure, and encryption cycles are absent, then security against adaptive corruptions is achievable via a reduction factor of O(n · (2n)^ℓ), with n and ℓ being the size and depth of the key graph generated during any protocol execution.
Abstract: We prove a computational soundness theorem for symmetric-key encryption protocols that can be used to analyze security against adaptively corrupting adversaries (that is, adversaries who corrupt protocol participants during protocol execution). Our soundness theorem shows that if the encryption scheme used in the protocol is semantically secure, and encryption cycles are absent, then security against adaptive corruptions is achievable via a reduction factor of O(n · (2n)^ℓ), with n and ℓ being (respectively) the size and depth of the key graph generated during any protocol execution. Since, in most protocols of practical interest, the depth of key graphs (measured as the longest chain of ciphertexts of the form E_{k1}(k2), E_{k2}(k3), E_{k3}(k4), ...) is much smaller than their size (the total number of keys), this gives us a powerful tool to argue about the adaptive security of such protocols, without resorting to non-standard techniques (like non-committing encryption). We apply our soundness theorem to the security analysis of multicast encryption protocols and show that a variant of the Logical Key Hierarchy (LKH) protocol is adaptively secure (its security being quasi-polynomially related to the security of the underlying encryption scheme).
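As a sketch of the quantities involved (hypothetical helper, not code from the paper): the reduction factor depends only on the key graph's size n and depth ℓ, where an edge k1 → k2 records that k2 was sent encrypted under k1, and the theorem's no-encryption-cycles hypothesis makes the graph acyclic.

```python
# Measure key-graph size n, depth l, and the O(n * (2n)**l) reduction factor.
def key_graph_params(edges):          # (k1, k2) means "k2 encrypted under k1"
    nodes = {k for e in edges for k in e}
    memo = {}
    def chain(k):                     # longest chain of nested encryptions from k
        if k not in memo:             # terminates because the graph is acyclic
            memo[k] = 1 + max((chain(b) for a, b in edges if a == k), default=0)
        return memo[k]
    n = len(nodes)
    l = max(map(chain, nodes)) - 1    # edges on the longest path
    return n, l, n * (2 * n) ** l

# The chain E_{k1}(k2), E_{k2}(k3), E_{k3}(k4): size 4, depth 3, factor 2048.
print(key_graph_params([("k1", "k2"), ("k2", "k3"), ("k3", "k4")]))
```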

Book ChapterDOI
21 Feb 2007
TL;DR: In this article, it was shown that if the n-bit source S allows for a secure encryption of b bits, where b > log n, then one can deterministically extract nearly b almost perfect random bits from S.
Abstract: Most cryptographic primitives require randomness (for example, to generate their secret keys). Usually, one assumes that perfect randomness is available, but, conceivably, such primitives might be built under weaker, more realistic assumptions. This is known to be true for many authentication applications, when entropy alone is typically sufficient. In contrast, all known techniques for achieving privacy seem to fundamentally require (nearly) perfect randomness. We ask the question whether this is just a coincidence, or, perhaps, privacy inherently requires true randomness? We completely resolve this question for the case of (information-theoretic) private-key encryption, where parties wish to encrypt a b-bit value using a shared secret key sampled from some imperfect source of randomness S. Our main result shows that if such an n-bit source S allows for a secure encryption of b bits, where b > log n, then one can deterministically extract nearly b almost perfect random bits from S. Further, the restriction that b > log n is nearly tight: there exist sources S allowing one to perfectly encrypt (log n - log log n) bits, but not to deterministically extract even a single slightly unbiased bit. Hence, to a large extent, true randomness is inherent for encryption: either the key length must be exponential in the message length b, or one can deterministically extract nearly b almost unbiased random bits from the key. In particular, the one-time pad scheme is essentially "universal". Our technique also extends to related computational primitives which are perfectly-binding, such as perfectly-binding commitment and computationally secure private- or public-key encryption, showing the necessity to efficiently extract almost b pseudorandom bits.
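Two classical ingredients behind this statement, sketched for intuition (neither is the paper's construction): the one-time pad, whose perfect secrecy is exactly what demands a (nearly) uniform key, and von Neumann's deterministic extractor, which shows how unbiased bits can sometimes be extracted deterministically, though only under an i.i.d. assumption far stronger than the general sources S considered here.

```python
# One-time pad: perfectly secret iff the key bits are uniform and independent.
def one_time_pad(key_bits, msg_bits):
    assert len(key_bits) >= len(msg_bits)
    return [k ^ m for k, m in zip(key_bits, msg_bits)]

# von Neumann extractor: pair up bits, map 01 -> 0 and 10 -> 1, discard 00/11;
# on i.i.d. biased coin flips the surviving bits are perfectly unbiased.
def von_neumann(bits):
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

print(one_time_pad([1, 0, 1, 1], [1, 1, 0, 0]))     # [0, 1, 1, 1]
print(von_neumann([1, 1, 0, 1, 1, 0, 0, 0, 0, 1]))  # [0, 1, 0]
```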

Book ChapterDOI
21 Feb 2007
TL;DR: This work generalizes the result of Brickell and Davenport by proving that, if the information rate of a secret sharing scheme is greater than 2/3, then its access structure is matroid-related, which subsumes several results that were obtained for particular families of access structures.
Abstract: One of the main open problems in secret sharing is the characterization of the access structures of ideal secret sharing schemes. As a consequence of the results by Brickell and Davenport, every one of those access structures is related in a certain way to a unique matroid. Matroid ports are combinatorial objects that are almost equivalent to matroid-related access structures. They were introduced by Lehman in 1964 and a forbidden minor characterization was given by Seymour in 1976. These and other subsequent works on that topic have not been noticed until now by the researchers interested in secret sharing. By combining those results with some techniques in secret sharing, we obtain new characterizations of matroid-related access structures. As a consequence, we generalize the result by Brickell and Davenport by proving that, if the information rate of a secret sharing scheme is greater than 2/3, then its access structure is matroid-related. This generalizes several results that were obtained for particular families of access structures. In addition, we study the use of polymatroids for obtaining upper bounds on the optimal information rate of access structures. We prove that every bound that is obtained by this technique for an access structure applies to its dual structure as well. Finally, we present lower bounds on the optimal information rate of the access structures that are related to two matroids that are not associated with any ideal secret sharing scheme: the Vamos matroid and the non-Desargues matroid.
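For orientation, a sketch of the canonical ideal scheme, Shamir's t-out-of-n threshold scheme: each share is a single field element (information rate 1), and its access structure is the one induced by a uniform matroid. This is standard material for contrast, not a construction from the paper.

```python
import random

P = 2**61 - 1  # a prime, so arithmetic below is over the field GF(P)

def share(secret, t, n):
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 from any t shares.
    s = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

shs = share(123456789, t=3, n=5)
print(reconstruct(shs[:3]) == reconstruct(shs[2:]) == 123456789)  # True
```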

Book ChapterDOI
21 Feb 2007
TL;DR: This work gives an interactive protocol to obliviously decide singularity of an encrypted matrix: Bob holds an n×n matrix, encrypted with Alice's secret key, and wants to learn whether or not the matrix is singular (while leaking nothing further).
Abstract: In this work we present secure two-party protocols for various core problems in linear algebra. Our main result is a protocol to obliviously decide singularity of an encrypted matrix: Bob holds an n×n matrix, encrypted with Alice's secret key, and wants to learn whether or not the matrix is singular (while leaking nothing further). We give an interactive protocol between Alice and Bob that solves the above problem in O(log n) communication rounds and with overall communication complexity of roughly O(n^2) (note that the input size is n^2). Our techniques exploit certain nice mathematical properties of linearly recurrent sequences and their relation to the minimal and characteristic polynomial of the input matrix, following [Wiedemann, 1986]. With our new techniques we are able to improve the round complexity of the communication efficient solution of [Nissim and Weinreb, 2006] from O(n^0.275) to O(log n). At the core of our results we use a protocol that securely computes the minimal polynomial of an encrypted matrix. Based on this protocol we exploit certain algebraic reductions to further extend our results to the problems of securely computing rank and determinant, and to solving systems of linear equations (again with low round and communication complexity).
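The core non-secure computation underlying the protocol, following Wiedemann's approach: recover the minimal polynomial of a linearly recurrent sequence with the Berlekamp-Massey algorithm. The paper's protocol runs this kind of computation on encrypted values; the sketch below is the plain, unencrypted version.

```python
def berlekamp_massey(s, p):
    """Connection polynomial C (with C[0] = 1) of the shortest linear
    recurrence satisfied by sequence s over GF(p); its degree L is the
    recurrence length, and reversing C gives the monic minimal polynomial."""
    C, B = [1], [1]            # current and previous candidates
    L, m, b = 0, 1, 1
    for i in range(len(s)):
        # Discrepancy between s[i] and the current recurrence's prediction.
        d = (s[i] + sum(C[j] * s[i - j] for j in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        T = C[:] if 2 * L <= i else None
        coef = d * pow(b, p - 2, p) % p
        C = C + [0] * max(0, len(B) + m - len(C))
        for j, Bj in enumerate(B):         # C(x) -= coef * x^m * B(x)
            C[j + m] = (C[j + m] - coef * Bj) % p
        if T is not None:
            L, B, b, m = i + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]

# Fibonacci mod 101 satisfies s[i] = s[i-1] + s[i-2]: C(x) = 1 - x - x^2.
print(berlekamp_massey([1, 1, 2, 3, 5, 8, 13, 21], 101))  # [1, 100, 100]
```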

Book ChapterDOI
21 Feb 2007
TL;DR: This paper gives the first computationally sound protocol where k-fold parallel repetition does not decrease the error probability below some constant for any polynomial k (and where the communication complexity does not depend on k).
Abstract: Parallel repetition is well known to reduce the error probability at an exponential rate for single- and multi-prover interactive proofs. Bellare, Impagliazzo and Naor (1997) show that this is also true for protocols where the soundness only holds against computationally bounded provers (e.g. interactive arguments) if the protocol has at most three rounds. On the other hand, for four rounds they give a protocol where this is no longer the case: the error probability does not decrease below some constant even if the protocol is repeated a polynomial number of times. Unfortunately, this protocol is not very convincing as the communication complexity of each instance of the protocol grows linearly with the number of repetitions, and for such protocols the error does not even decrease for some types of interactive proofs. Noticing this, Bellare et al. construct a (quite artificial) oracle relative to which a four-round protocol exists whose communication complexity does not depend on the number of parallel repetitions. This shows that there is no "black-box" error reduction theorem for four-round protocols. In this paper we give the first computationally sound protocol where k-fold parallel repetition does not decrease the error probability below some constant for any polynomial k (and where the communication complexity does not depend on k). The protocol has eight rounds and uses the universal arguments of Barak and Goldreich (2001). We also give another four-round protocol relative to an oracle; unlike the artificial oracle of Bellare et al., we just need a generic group. This group can then potentially be instantiated with some real group satisfying some well-defined hardness assumptions (we do not know of any candidate for such a group at the moment).

Book ChapterDOI
21 Feb 2007
TL;DR: It is shown that the usual set-up assumptions used for UC protocols are not sufficient to achieve long-term secure and composable protocols for commitments or general zero knowledge arguments, but nontrivial zero knowledge protocols are possible based on a coin tossing functionality.
Abstract: Algorithmic progress and future technology threaten today's cryptographic protocols. Long-term secure protocols should not reveal more information even to an adversary that later becomes computationally unlimited. In this work we initiate the study of protocols which are long-term secure and universally composable. We show that the usual set-up assumptions used for UC protocols (e.g., a common reference string) are not sufficient to achieve long-term secure and composable protocols for commitments or general zero knowledge arguments. Surprisingly, nontrivial zero knowledge protocols are possible based on a coin tossing functionality: we give a long-term secure composable zero knowledge protocol proving the knowledge of the factorisation of a Blum integer. Furthermore we give practical alternatives (e.g., signature cards) to the usual set-up assumptions and show that these allow implementing the important primitives commitment and zero-knowledge argument.

Book ChapterDOI
21 Feb 2007
TL;DR: Faced with our lack of understanding of the share complexity of secret-sharing schemes, this work investigates a weaker notion of privacy in secret-sharing schemes where each unauthorized set can never rule out any secret (rather than being unable to learn any "probabilistic" information on the secret).
Abstract: Secret-sharing schemes are an important tool in cryptography that is used in the construction of many secure protocols. However, the shares' size in the best known secret-sharing schemes realizing general access structures is exponential in the number of parties in the access structure, making them impractical. On the other hand, the best lower bound known for sharing an ℓ-bit secret with respect to an access structure with n parties is Ω(ℓn/log n) (Csirmaz, EUROCRYPT 94). No major progress on closing this gap has been obtained in the last decade. Faced by our lack of understanding of the share complexity of secret-sharing schemes, we investigate a weaker notion of privacy in secret-sharing schemes where each unauthorized set can never rule out any secret (rather than not learn any "probabilistic" information on the secret). Such schemes were used previously to prove lower bounds on the shares' size of perfect secret-sharing schemes. Our main results are somewhat surprising upper bounds on the shares' size in weakly-private schemes. - For every access structure, we construct a scheme for sharing an ℓ-bit secret with (ℓ+c)-bit shares, where c is a constant depending on the access structure (alas, c can be exponential in n). Thus, our schemes become more efficient as the secret size ℓ grows. For example, for the above-mentioned access structure of Csirmaz, we construct a scheme with shares' size ℓ + n log n. - We construct efficient weakly-private schemes for threshold access structures for sharing a one-bit secret. Most impressively, for the 2-out-of-n threshold access structure, we construct a scheme with 2-bit shares (compared to Ω(log n) in any perfect secret sharing scheme).

Book ChapterDOI
21 Feb 2007
TL;DR: It is shown that security in the stand-alone model proven using black-box simulators in the information-theoretic setting does not imply security under concurrent composition, not even security under 2-bounded concurrent self-composition with an inefficient simulator and fixed inputs.
Abstract: We investigate whether security of multiparty computation in the information-theoretic setting implies security under concurrent composition. We show that security in the stand-alone model proven using black-box simulators in the information-theoretic setting does not imply security under concurrent composition, not even security under 2-bounded concurrent self-composition with an inefficient simulator and fixed inputs. This in particular refutes recently made claims on the equivalence of security in the stand-alone model and concurrent composition for perfect and statistical security (STOC'06). Our result strongly relies on the question whether every rewinding simulator can be transformed into an equivalent, potentially inefficient non-rewinding (straight-line) simulator. We answer this question in the negative by giving a protocol that can be proven secure using a rewinding simulator, yet that is not secure for any non-rewinding simulator.

Book ChapterDOI
Douglas Wikström1
21 Feb 2007
TL;DR: A practical scheme is constructed that is provably secure with respect to the security definition under the strong RSA-assumption, the decision composite residuosity assumption, and the decision Diffie-Hellman assumption.
Abstract: Previous definitions of designated confirmer signatures in the literature are incomplete, and the proposed security definitions fail to capture key security properties, such as unforgeability against malicious confirmers and non-transferability. We propose new definitions. Previous schemes rely on the random oracle model or set-up assumptions, or are secure with respect to relaxed security definitions. We construct a practical scheme that is provably secure with respect to our security definition under the strong RSA-assumption, the decision composite residuosity assumption, and the decision Diffie-Hellman assumption. To achieve our results we introduce several new relaxations of standard notions. We expect these techniques to be useful in the construction and analysis of other efficient cryptographic schemes.