
Showing papers in "IACR Cryptology ePrint Archive in 2012"


Posted Content
TL;DR: This paper ports Brakerski's fully homomorphic scheme based on the Learning With Errors (LWE) problem to the ring-LWE setting, and provides a detailed but simple analysis of the various homomorphic operations, such as multiplication, relinearisation and bootstrapping.

1,103 citations


Posted Content
TL;DR: This work proposes the first SSE scheme to satisfy all the properties required of a practical searchable symmetric encryption scheme, extending the inverted index approach in several non-trivial ways and introducing new techniques for the design of SSE.
Abstract: Searchable symmetric encryption (SSE) allows a client to encrypt its data in such a way that this data can still be searched. The most immediate application of SSE is to cloud storage, where it enables a client to securely outsource its data to an untrusted cloud provider without sacrificing the ability to search over it. SSE has been the focus of active research and a multitude of schemes that achieve various levels of security and efficiency have been proposed. Any practical SSE scheme, however, should (at a minimum) satisfy the following properties: sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently. Unfortunately, none of the previously-known SSE constructions achieve all these properties at the same time. This severely limits the practical value of SSE and decreases its chance of deployment in real-world cloud storage systems. To address this, we propose the first SSE scheme to satisfy all the properties outlined above. Our construction extends the inverted index approach (Curtmola et al., CCS 2006) in several non-trivial ways and introduces new techniques for the design of SSE. In addition, we implement our scheme and conduct a performance evaluation, showing that our approach is highly efficient and ready for deployment.
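
To make the inverted-index approach concrete, the sketch below shows the basic pattern behind index-based SSE: keyword labels and payload pads are derived with a PRF, so a per-query token lets the server locate and decrypt exactly one posting list. This is an illustrative toy (all names such as build_index and search are ours), not the paper's construction, which additionally achieves adaptive security, compactness, and efficient updates.

```python
# Minimal sketch of a PRF-keyed encrypted inverted index (illustrative only;
# not the paper's scheme -- no adaptive security, updates, or padding).
import hmac, hashlib, os

def prf(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_index(k_label: bytes, k_enc: bytes, postings: dict) -> dict:
    """Map PRF(keyword) -> encrypted list of file ids."""
    index = {}
    for word, file_ids in postings.items():
        label = prf(k_label, word.encode())
        blob = ",".join(file_ids).encode()
        # Repeating the PRF output as a pad is NOT secure; toy only.
        pad = prf(k_enc, word.encode()) * (len(blob) // 32 + 1)
        index[label] = xor(blob, pad[:len(blob)])
    return index

def search(index: dict, k_label: bytes, k_enc: bytes, word: str):
    token = prf(k_label, word.encode())  # search token sent to the server
    blob = index.get(token)
    if blob is None:
        return []
    pad = prf(k_enc, word.encode()) * (len(blob) // 32 + 1)
    return xor(blob, pad[:len(blob)]).decode().split(",")

k1, k2 = os.urandom(32), os.urandom(32)
idx = build_index(k1, k2, {"crypto": ["f1", "f3"], "cloud": ["f2"]})
print(search(idx, k1, k2, "crypto"))  # ['f1', 'f3']
```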

671 citations


Posted Content
TL;DR: The ring-LWE distribution is pseudorandom, as the authors prove, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms.
Abstract: The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives). We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE.
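
As a toy illustration of the ring-LWE distribution described above, the following sketch draws samples (a, b = a·s + e) in R_q = Z_q[x]/(x^n + 1). The parameters are ours and far too small for any security; real schemes use n in the thousands and much larger q.

```python
# Toy ring-LWE sample in R_q = Z_q[x]/(x^n + 1); illustrative parameters only.
import random

N, Q = 8, 97  # toy ring dimension and modulus

def poly_mul(a, b):
    """Multiply in Z_q[x]/(x^n + 1): x^n wraps around to -1."""
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - N] = (res[k - N] - ai * bj) % Q
    return res

def small_poly():
    """Polynomial with small (error-sized) coefficients."""
    return [random.randint(-1, 1) % Q for _ in range(N)]

def rlwe_sample(s):
    a = [random.randrange(Q) for _ in range(N)]           # uniform a in R_q
    e = small_poly()                                      # small error
    b = [(x + y) % Q for x, y in zip(poly_mul(a, s), e)]  # b = a*s + e
    return a, b  # the pair claimed to be pseudorandom

secret = small_poly()
print(rlwe_sample(secret))
```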

457 citations


Posted Content
TL;DR: A covertly secure key generation protocol for obtaining a BGV public key and a shared associated secret key and both a covertly and actively secure preprocessing phase are constructed, both of which compare favourably with previous work in terms of efficiency and provable security.
Abstract: SPDZ (pronounced “Speedz”) is the nickname of the MPC protocol of Damgard et al. from Crypto 2012. SPDZ provided various efficiency innovations on both the theoretical and practical sides compared to previous work in the preprocessing model. In this paper we both resolve a number of open problems with SPDZ; and present several theoretical and practical improvements to the protocol. In detail, we start by designing and implementing a covertly secure key generation protocol for obtaining a BGV public key and a shared associated secret key. In prior work this was assumed to be provided by a given setup functionality. Protocols for generating such shared BGV secret keys are likely to be of wider applicability than to the SPDZ protocol alone. We then construct both a covertly and actively secure preprocessing phase, both of which compare favourably with previous work in terms of efficiency and provable security. We also build a new online phase, which solves a major problem of the SPDZ protocol: namely prior to this work preprocessed data could be used for only one function evaluation and then had to be recomputed from scratch for the next evaluation, while our online phase can support reactive functionalities. This improvement comes mainly from the fact that our construction does not require players to reveal the MAC keys to check correctness of MAC’d values. Since our focus is also on practical instantiations, our implementation offloads as much computation as possible into the preprocessing phase, thus resulting in a faster online phase. Moreover, a better analysis of the parameters of the underlying cryptoscheme and a more specific choice of the field where computation is performed allow us to obtain a better optimized implementation. Improvements are also due to the fact that our construction is in the random oracle model, and the practical implementation is multi-threaded. This article is based on an earlier article: ESORICS 2013, pp 1–18, Springer LNCS 8134, 2013, http://dx.doi.org/10.1007/978-3-642-40203-6_1.
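
The online-phase MAC check can be illustrated with additive sharing over a prime field: each shared value x comes with shares of α·x under a global MAC key α, and after opening x the parties verify a linear relation involving only their shares of α, never α itself. This is a simplified single-machine sketch of the idea (the commit-then-open step is indicated only in a comment), not the actual SPDZ protocol:

```python
# Simplified sketch of SPDZ-style MAC'd additive secret sharing over F_p.
import random

P = 2**61 - 1  # toy prime field

def share(x, n):
    """Additively share x among n parties."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

n = 3
alpha = random.randrange(P)          # global MAC key, itself shared
alpha_sh = share(alpha, n)

x = 42
x_sh = share(x, n)
mac_sh = share(alpha * x % P, n)     # shares of gamma(x) = alpha * x

# Opening x with a MAC check; no party reveals its share of alpha.
x_open = sum(x_sh) % P
sigma = [(mac_sh[i] - alpha_sh[i] * x_open) % P for i in range(n)]
# In the protocol the sigma_i are first committed to, then opened.
assert sum(sigma) % P == 0, "MAC check failed: opened value was tampered with"
print("opened", x_open, "and MAC check passed")
```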

324 citations


Posted Content
TL;DR: It is shown how to further increase the number of representations, yielding a new information set decoding algorithm with running time 2^(0.0494n), improving on the 2^(0.0537n) bound of May, Meurer and Thomae.
Abstract: Decoding random linear codes is a well studied problem with many applications in complexity theory and cryptography. The security of almost all coding and LPN/LWE-based schemes relies on the assumption that it is hard to decode random linear codes. Recently, there has been progress in improving the running time of the best decoding algorithms for binary random codes. The ball collision technique of Bernstein, Lange and Peters lowered the complexity of Stern’s information set decoding algorithm to 2^(0.0556n). Using representations this bound was improved to 2^(0.0537n) by May, Meurer and Thomae. We show how to further increase the number of representations and propose a new information set decoding algorithm with running time 2^(0.0494n).

271 citations


Posted Content
TL;DR: In this article, the authors used the learning with errors (LWE) problem to build a new simple and provably secure key exchange scheme, which can be viewed as a certain extension of the Diffie-Hellman problem with errors.
Abstract: We use the learning with errors (LWE) problem to build a new simple and provably secure key exchange scheme. The basic idea of the construction can be viewed as a certain extension of the Diffie-Hellman problem with errors. The mathematical structure behind comes from the commutativity of computing a bilinear form in two different ways due to the associativity of matrix multiplication: (x^T × A) × y = x^T × (A × y), where x, y are column vectors and A is a square matrix. We show that our new schemes are more efficient in terms of communication and computation complexity compared with key exchange schemes or key transport schemes via encryption schemes based on the LWE problem. Furthermore, we extend our scheme to the ring learning with errors (RLWE) problem, resulting in small key size and better efficiency.
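
A toy version of this associativity trick is sketched below, with our own (insecure) parameters and without the reconciliation step the real scheme needs so the two sides agree on exact key bits: each party publishes its secret times the public matrix plus a small error, and the two bilinear forms differ only by a small quantity.

```python
# Toy Diffie-Hellman-like key exchange from LWE via x^T (A y) = (x^T A) y.
# Illustrative and insecure; the real scheme adds a reconciliation step.
import random

N, Q = 16, 2**13  # toy dimension and modulus

def small(n):
    """Small secret/error vector."""
    return [random.randint(-2, 2) for _ in range(n)]

A = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]  # public

xa, ea = small(N), small(N)  # Alice's secret and error
yb, eb = small(N), small(N)  # Bob's secret and error

# Alice publishes pa = x^T A + ea ; Bob publishes pb = A y + eb.
pa = [(sum(xa[i] * A[i][j] for i in range(N)) + ea[j]) % Q for j in range(N)]
pb = [(sum(A[i][j] * yb[j] for j in range(N)) + eb[i]) % Q for i in range(N)]

ka = sum(xa[i] * pb[i] for i in range(N)) % Q  # Alice: x^T (A y + eb)
kb = sum(pa[j] * yb[j] for j in range(N)) % Q  # Bob:  (x^T A + ea) y

# ka and kb differ only by x^T eb - ea^T y, which is small, so rounding
# to the high-order bits yields a common key.
diff = (ka - kb) % Q
print(ka, kb, min(diff, Q - diff))
```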

211 citations


Posted Content
TL;DR: In this article, the authors analyze the security of using Bitcoin for fast payments, where the time between the exchange of currency and goods is short (i.e., on the order of a few seconds).
Abstract: Bitcoin is a decentralized payment system that is based on Proof-of-Work. Bitcoin is currently gaining popularity as a digital currency; several businesses are starting to accept Bitcoin transactions. An example case of the growing use of Bitcoin was recently reported in the media; here, Bitcoins were used as a form of fast payment in a local fast-food restaurant. In this paper, we analyze the security of using Bitcoin for fast payments, where the time between the exchange of currency and goods is short (i.e., on the order of a few seconds). We focus on double-spending attacks on fast payments and demonstrate that these attacks can be mounted at low cost on currently deployed versions of Bitcoin. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast transactions are not always effective in resisting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we leverage our findings and propose a lightweight countermeasure that enables the detection of double-spending attacks in fast transactions.

173 citations


Posted Content
TL;DR: This work gives the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data by running an efficient protocol with the server.
Abstract: Proofs of retrievability allow a client to store her data on a remote server (e.g., “in the cloud”) and periodically execute an efficient audit protocol to check that all of the data is being maintained correctly and can be recovered from the server. For efficiency, the computation and communication of the server and client during an audit protocol should be significantly smaller than reading/transmitting the data in its entirety. Although the server is only asked to access a few locations of its storage during an audit, it must maintain full knowledge of all client data to be able to pass. Starting with the work of Juels and Kaliski (CCS ’07), all prior solutions to this problem crucially assume that the client data is static and do not allow it to be efficiently updated. Indeed, they all store a redundant encoding of the data on the server, so that the server must delete a large fraction of its storage to ‘lose’ any actual content. Unfortunately, this means that even a single bit modification to the original data will need to modify a large fraction of the server storage, which makes updates highly inefficient. Overcoming this limitation was left as the main open problem by all prior works. In this work, we give the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data by running an efficient protocol with the server. At any point in time, the client can execute an efficient audit protocol to ensure that the server maintains the latest version of the client data. The computation and communication complexity of the server and client in our protocols is only polylogarithmic in the size of the client’s data. The starting point of our solution is to split up the data into small blocks and redundantly encode each block of data individually, so that an update inside any data block only affects a few codeword symbols. The main difficulty is to prevent the server from identifying and deleting too many codeword symbols belonging to any single data block. We do so by hiding where the various codeword symbols for any individual data block are stored on the server and when they are being accessed by the client, using the algorithmic techniques of oblivious RAM.
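
The locality idea in the encoding layer can be seen in miniature below: each block is encoded independently, so a single-block write touches a constant number of codeword symbols. In this sketch a trivial repetition code stands in for the real erasure code, and the crucial ORAM layer that hides symbol locations is omitted entirely.

```python
# Toy illustration of per-block redundant encoding for dynamic PoR.
# Sketch only: repetition code instead of a real erasure code, no ORAM.
BLOCK, COPIES = 4, 3  # 4-byte blocks, 3-fold repetition "code"

def encode(data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [[blk] * COPIES for blk in blocks]  # encode each block on its own

def write(encoded, index: int, new_block: bytes):
    encoded[index] = [new_block] * COPIES  # touches COPIES symbols, not all

def read(encoded, index: int) -> bytes:
    # Majority vote recovers the block if the server dropped some copies.
    cands = encoded[index]
    return max(set(cands), key=cands.count)

enc = encode(b"hello world!....")
write(enc, 1, b"WRLD")  # updates O(1) codeword symbols
print(read(enc, 1))     # b'WRLD'
```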

155 citations


Posted Content
TL;DR: The authors define the notion of a differing-input obfuscator for Turing machines, extending the circuit notion introduced by Barak et al., and give a construction for the same (without converting to a circuit) with input-specific running times.
Abstract: In this paper, we study the notion of differing-input obfuscation, introduced by Barak et al. (CRYPTO 2001, JACM 2012). For any two circuits C0 and C1, a differing-input obfuscator diO guarantees that the non-existence of an adversary that can find an input on which C0 and C1 differ implies that diO(C0) and diO(C1) are computationally indistinguishable. We show many applications of this notion: We define the notion of a differing-input obfuscator for Turing machines and give a construction for the same (without converting it to a circuit) with input-specific running times. More specifically, for each input, our obfuscated Turing machine takes time proportional to the running time of the Turing machine on that specific input rather than the machine’s worst-case running time. We give a functional encryption scheme that allows for secret-keys to be associated with Turing machines, and thereby achieves input-specific running times. Further, we can equip our functional encryption scheme with delegation properties. We construct a multi-party non-interactive key exchange protocol with no trusted setup where all parties post only logarithmic-size messages. It is the first such scheme with such short messages. We similarly obtain a broadcast encryption system where the ciphertext overhead and secret-key size is constant (i.e. independent of the number of users), and the public key is logarithmic in the number of users. All our constructions make inherent use of the power provided by differing-input obfuscation. It is not currently known how to construct systems with these properties from the weaker notion of indistinguishability obfuscation.

147 citations



Posted Content
TL;DR: It is proved that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves bridge arbitrary and ideal lattices); as these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes.
Abstract: Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiencies can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes along with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. In this work, we define the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. We prove that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves bridge arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
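
Paraphrasing the standard formulation, a Module-LWE sample over a polynomial ring R_q with module rank d looks as follows; d = 1 recovers Ring-LWE, while taking R = Z (with d playing the role of the LWE dimension) recovers plain LWE.

```latex
% Module-LWE sample for a secret s in R_q^d, where R_q = Z_q[x]/(f(x)):
\[
  \bigl(\,\mathbf{a},\; b = \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q\,\bigr)
  \in R_q^{d} \times R_q,
  \qquad \mathbf{a} \leftarrow U\!\bigl(R_q^{d}\bigr),\; e \text{ small}.
\]
% d = 1 gives Ring-LWE; R = Z with d = n gives plain LWE.
```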

Posted Content
TL;DR: Gennaro et al. propose Quadratic Span Programs (QSPs), an extension of the span programs defined by Karchmer and Wigderson, and construct a NIZK argument in the CRS model for Circuit-SAT consisting of just 7 group elements.
Abstract: We introduce a new characterization of the NP complexity class, called Quadratic Span Programs (QSPs), which is a natural extension of span programs defined by Karchmer and Wigderson. Our main motivation is the construction of succinct arguments of NP-statements that are quick to construct and verify. QSPs seem well-suited for this task, perhaps even better than Probabilistically Checkable Proofs (PCPs). In 2010, Groth constructed a NIZK argument in the common reference string (CRS) model for Circuit-SAT consisting of only 42 elements in a bilinear group. Interestingly, his argument does not (explicitly) use PCPs. But his scheme has some disadvantages – namely, the CRS size and prover computation are both quadratic in the circuit size. In 2011, Lipmaa reduced the CRS size to quasi-linear, but with prover computation still quadratic. Using QSPs we construct a NIZK argument in the CRS model for Circuit-SAT consisting of just 7 group elements. The CRS size is linear in the circuit size, and prover computation is quasi-linear, making our scheme seemingly quite practical. (The prover only needs to do a linear number of group operations; the quasi-linear computation is a multipoint evaluation and interpolation.) Our results are complementary to those of Valiant (TCC 2008) and Bitansky et al. (2012), who use “bootstrapping” (recursive composition) of arguments to reduce CRS size and prover and verifier computation. QSPs also provide a crisp mathematical abstraction of some of the techniques underlying Groth’s and Lipmaa’s constructions.

Posted Content
TL;DR: In this paper, the authors performed a sanity check of public keys collected on the web and found that the vast majority of the public keys work as intended and that two out of every one thousand RSA moduli that they collected offer no security.
Abstract: We performed a sanity check of public keys collected on the web. Our main goal was to test the validity of the assumption that different random choices are made each time keys are generated. We found that the vast majority of public keys work as intended. A more disconcerting finding is that two out of every one thousand RSA moduli that we collected offer no security. Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for ``multiple-secrets'' cryptosystems such as RSA is significantly riskier than for ``single-secret'' ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.
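
The failure mode behind the "two out of every one thousand" figure is two RSA moduli sharing a prime factor: a single gcd computation then factors both keys. A minimal demonstration with toy-sized primes (the study applied this idea, suitably batched, to millions of collected moduli):

```python
# Two RSA moduli that share a prime factor are both factored by one gcd.
# Toy-sized primes for illustration only.
from math import gcd

p = 1000000007                     # shared prime (bad randomness)
q1, q2 = 1000000009, 1000000021    # distinct second primes
n1, n2 = p * q1, p * q2            # two "independently generated" moduli

g = gcd(n1, n2)
if g not in (1, n1, n2):
    print("shared prime:", g)
    print("n1 =", g, "*", n1 // g)
    print("n2 =", g, "*", n2 // g)
```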

Posted Content
TL;DR: SipHash, as described in this paper, is a family of pseudorandom functions optimized for short inputs, targeting network-traffic authentication and hash-table lookups protected against hash-flooding denial-of-service attacks.
Abstract: SipHash is a family of pseudorandom functions optimized for short inputs. Target applications include network traffic authentication and hash-table lookups protected against hash-flooding denial-of-service attacks. SipHash is simpler than MACs based on universal hashing, and faster on short inputs. Compared to dedicated designs for hash-table lookup, SipHash has well-defined security goals and competitive performance. For example, SipHash processes a 16-byte input with a fresh key in 140 cycles on an AMD FX-8150 processor, which is much faster than state-of-the-art MACs. We propose that hash tables switch to SipHash as a hash function.
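
For reference, here is a Python transcription of SipHash-2-4 following the published specification; the authors' reference implementation is in C, so treat this transcription as illustrative rather than authoritative.

```python
# SipHash-2-4 transcribed from the published specification (illustrative).
M = 0xFFFFFFFFFFFFFFFF  # 64-bit mask

def rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & M

def siphash24(key: bytes, msg: bytes) -> int:
    assert len(key) == 16
    k0 = int.from_bytes(key[:8], "little")
    k1 = int.from_bytes(key[8:], "little")
    v0 = k0 ^ 0x736F6D6570736575  # "somepseu"
    v1 = k1 ^ 0x646F72616E646F6D  # "dorandom"
    v2 = k0 ^ 0x6C7967656E657261  # "lygenera"
    v3 = k1 ^ 0x7465646279746573  # "tedbytes"

    def rounds(n):
        nonlocal v0, v1, v2, v3
        for _ in range(n):
            v0 = (v0 + v1) & M; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32)
            v2 = (v2 + v3) & M; v3 = rotl(v3, 16); v3 ^= v2
            v0 = (v0 + v3) & M; v3 = rotl(v3, 21); v3 ^= v0
            v2 = (v2 + v1) & M; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32)

    # Pad with zeros to 7 mod 8 bytes; final byte encodes the length mod 256.
    padded = msg + b"\x00" * (7 - len(msg) % 8) + bytes([len(msg) % 256])
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        v3 ^= m
        rounds(2)   # c = 2 compression rounds
        v0 ^= m
    v2 ^= 0xFF
    rounds(4)       # d = 4 finalization rounds
    return v0 ^ v1 ^ v2 ^ v3

# e.g. keying a hash table's bucket choice with a secret key:
import os
key = os.urandom(16)
print(hex(siphash24(key, b"example input")))
```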

Posted Content
TL;DR: In this paper, the authors revisit meet-in-the-middle attacks on AES in the single-key model and improve on the attacks of Dunkelman, Keller and Shamir from Asiacrypt 2010.
Abstract: In this paper, we revisit meet-in-the-middle attacks on AES in the single-key model and improve on the Dunkelman, Keller and Shamir attacks of Asiacrypt 2010. We present the best attack on 7 rounds of AES-128 where data/time/memory complexities are below 2^100. Moreover, we are able to extend the number of rounds to reach attacks on 8 rounds for both AES-192 and AES-256. This gives the best attacks on those two versions with a data complexity of 2^107 chosen-plaintexts, a memory complexity of 2^96 and a time complexity of 2^172 for AES-192 and 2^196 for AES-256. Finally, we also describe the best attack on 9 rounds of AES-256 with 2^120 chosen plaintexts and time and memory complexities of 2^203. All these attacks have been found by carefully studying the number of reachable multisets in Dunkelman et al. attacks.

Posted Content
TL;DR: Ginger as discussed by the authors is a built system for unconditional, general-purpose, and nearly practical verification of outsourced computation based on PEPPER, which uses the PCP theorem and cryptographic techniques to implement an efficient argument system.
Abstract: We describe GINGER, a built system for unconditional, general-purpose, and nearly practical verification of outsourced computation. GINGER is based on PEPPER, which uses the PCP theorem and cryptographic techniques to implement an efficient argument system (a kind of interactive protocol). GINGER slashes the query size and costs via theoretical refinements that are of independent interest; broadens the computational model to include (primitive) floating-point fractions, inequality comparisons, logical operations, and conditional control flow; and includes a parallel GPU-based implementation that dramatically reduces latency.

Posted Content
TL;DR: In this paper, the authors show how to construct garbled RAM programs (GRAM) whose size depends only on a fixed polynomial in the security parameter times the program's running time.
Abstract: Assuming solely the existence of one-way functions, we show how to construct Garbled RAM Programs (GRAM) whose size depends only on a fixed polynomial in the security parameter times the program's running time. We stress that we avoid converting the RAM programs into circuits. As an example, our techniques imply the first garbled binary search program (searching over sorted encrypted data stored in a cloud) whose complexity is poly-logarithmic in the data size instead of linear. Our result requires only the existence of one-way functions and enjoys the same non-interactive properties as Yao’s original garbled circuits.

Posted Content
TL;DR: It is argued that many of the next research questions in verified computation are questions in secure systems, and it is shown that Quadratic Arithmetic Programs, a new formalism for representing computations efficiently, can yield a particularly efficient PCP that integrates easily into the core protocols.
Abstract: The area of proof-based verified computation (outsourced computation built atop probabilistically checkable proofs and cryptographic machinery) has lately seen renewed interest. Although recent work has made great strides in reducing the overhead of naive applications of the theory, these schemes still cannot be considered practical. A core issue is that the work for the prover is immense, in general; it is near-practical only for hand-compiled computations that can be expressed in special forms. This paper addresses that problem. Provided one is willing to batch verification, we develop a protocol that achieves the efficiency of the best manually constructed protocols in the literature yet applies to most computations. Our protocol is built on the observation that the recently-proposed QAPs of Gennaro et al. (ePrint 2012/215) yield a linear PCP that works with the efficient argument protocol of Setty et al. (ePrint 2012/598, Security 2012, NDSS 2012), itself based on the proposal of Ishai et al. (CCC 2007). The consequence is a prover whose total work is not much more than linear in the running time of the computation. We implement the protocol in the context of a built system that includes a compiler and a GPU implementation. The result, as indicated by an experimental evaluation, is a system that is almost usable for real problems—without special-purpose tailoring.

Posted Content
TL;DR: In this paper, the authors consider application scenarios where an untrusted aggregator wishes to continually monitor the heavy-hitters across a set of distributed streams, and they propose protocols that guarantee low memory usage and processing overhead by each data source, and low communication overhead between the data sources and the aggregator.
Abstract: We consider application scenarios where an untrusted aggregator wishes to continually monitor the heavy-hitters across a set of distributed streams. Since each stream can contain sensitive data, such as the purchase history of customers, we wish to guarantee the privacy of each stream, while allowing the untrusted aggregator to accurately detect the heavy hitters and their approximate frequencies. Our protocols are scalable in settings where the volume of streaming data is large, since we guarantee low memory usage and processing overhead by each data source, and low communication overhead between the data sources and the aggregator.

Posted Content
TL;DR: In this article, the authors propose a method for automatically deriving upper bounds on the amount of information about the input that an adversary can extract from a program by observing the CPU's cache behavior.
Abstract: The latency gap between caches and main memory has been successfully exploited for recovering sensitive input to programs, such as cryptographic keys from implementation of AES and RSA. So far, there are no practical general-purpose countermeasures against this threat. In this paper we propose a novel method for automatically deriving upper bounds on the amount of information about the input that an adversary can extract from a program by observing the CPU’s cache behavior. At the heart of our approach is a novel technique for efficient counting of concretizations of abstract cache states that enables us to connect state-of-the-art techniques for static cache analysis and quantitative information-flow. We implement our counting procedure on top of the AbsInt TimingExplorer, one of the most advanced engines for static cache analysis. We use our tool to perform a case study where we derive upper bounds on the cache leakage of a 128-bit AES executable on an ARM processor with a realistic cache configuration. We also analyze this implementation with a commonly suggested (but until now heuristic) countermeasure applied, obtaining a formal account of the corresponding increase in security.


Posted Content
TL;DR: In this paper, the authors design single-server-aided secure function evaluation (SFE) protocols in which an untrusted server that has no input to the computation and receives no output makes its computational resources available to the parties.
Abstract: Secure function evaluation (SFE) allows a set of mutually distrustful parties to evaluate a function of their joint inputs without revealing their inputs to each other. SFE has been the focus of active research and recent work suggests that it can be made practical. Unfortunately, current protocols and implementations have inherent limitations that are hard to overcome using standard and practical techniques. Among them are: (1) requiring participants to do work linear in the size of the circuit representation of the function; (2) requiring all parties to do the same amount of work; and (3) not being able to provide complete fairness. A promising approach for overcoming these limitations is to augment the SFE setting with a small set of untrusted servers that have no input to the computation and that receive no output, but that make their computational resources available to the parties. In this model, referred to as server-aided SFE, the goal is to trade off the parties’ work at the expense of the servers. Motivated by the emergence of public cloud services such as Amazon EC2 and Microsoft Azure, recent work has explored the extent to which server-aided SFE can be achieved with a single server. In this work, we revisit the server-aided setting from a practical perspective and design single-server-aided SFE protocols that are considerably more efficient than all previously-known protocols. We achieve this in part by introducing several new techniques for garbled-circuit-based protocols, including a new and efficient input-checking mechanism for cut-and-choose and a new pipelining technique that works in the presence of malicious adversaries. Furthermore, we extend the server-aided model to guarantee fairness which is an important property to achieve in practice. Finally, we implement and evaluate our constructions experimentally and show that our protocols (regardless of the number of parties involved) yield implementations that are 4 and 6 times faster than the most optimized two-party SFE implementation when the server is assumed to be malicious and covert, respectively.

Posted Content
TL;DR: In this paper, the authors formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message.
Abstract: We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side the challenge is standard model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
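
The canonical MLE scheme, and one of the deployed ones the paper analyzes, is convergent encryption: the key is a hash of the message, so equal plaintexts produce equal ciphertexts and can be deduplicated by tag. A self-contained sketch follows (a SHA-256 counter-mode keystream stands in for a real cipher so the example has no external dependencies):

```python
# Convergent encryption, the canonical MLE scheme. Sketch only.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream from SHA-256."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def mle_encrypt(m: bytes):
    k = hashlib.sha256(m).digest()                       # key derived from message
    c = bytes(a ^ b for a, b in zip(m, keystream(k, len(m))))
    tag = hashlib.sha256(c).hexdigest()                  # tag for deduplication
    return k, c, tag

def mle_decrypt(k: bytes, c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, keystream(k, len(c))))

k1, c1, t1 = mle_encrypt(b"same file contents")
k2, c2, t2 = mle_encrypt(b"same file contents")
assert (c1, t1) == (c2, t2)  # equal messages -> equal ciphertexts: dedup works
assert mle_decrypt(k1, c1) == b"same file contents"
print("deduplicable tag:", t1)
```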

Posted Content
TL;DR: This paper outlines a new elliptic curve signature and key agreement implementation which achieves record speeds for signatures while remaining relatively compact: the library is under 55kB, and a signature verifies in under 618k Tegra-2 cycles.
Abstract: Elliptic curve cryptosystems have improved greatly in speed over the past few years. In this paper we outline a new elliptic curve signature and key agreement implementation. We achieve record speeds for signatures while remaining relatively compact. For example, on Intel Sandy Bridge, a curve with about 2^250 points produces a signature in just under 60k clock cycles, verifies in under 169k clock cycles, and computes a Diffie-Hellman shared secret in under 153k clock cycles. Our implementation has a small footprint: the library is under 55kB. We also post competitive timings on ARM processors, verifying a signature in under 618k Tegra-2 cycles. We introduce faster field arithmetic, a new point compression algorithm, an improved fixed-base scalar multiplication algorithm and a new way to verify signatures without inversions or coordinate recovery. Some of these improvements should be applicable to other systems.

Posted Content
Elli Androulaki, Ghassan Karame, Marc Roeschlin, Tobias Scherer, Srdjan Capkun
TL;DR: In this paper, the authors investigate the privacy guarantees of Bitcoin when it is used as a primary currency for the daily transactions of individuals, evaluating the privacy it provides both by analyzing the genuine Bitcoin system and through a simulator that mimics Bitcoin client behavior in a setting where Bitcoin is used for all transactions within a university.
Abstract: Bitcoin is quickly emerging as a popular digital payment system. However, in spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place are publicly announced in the system. In this paper, we investigate the privacy guarantees of Bitcoin in the setting where Bitcoin is used as a primary currency for the daily transactions of individuals. More specifically, we evaluate the privacy that is provided by Bitcoin (i) by analyzing the genuine Bitcoin system and (ii) through a simulator that mimics Bitcoin client’s behavior in the context where Bitcoin is used for all transactions within a university. In this setting, our results show that the profiles of almost 40% of the users can be, to a large extent, recovered even when users adopt privacy measures recommended by Bitcoin. To the best of our knowledge, this is the first work that comprehensively analyzes, and evaluates the privacy implications of Bitcoin. As a by-product, we have designed and implemented the first simulator of Bitcoin; our simulator can be used to model the interaction between Bitcoin users in generic settings.

Posted Content
TL;DR: Boldyreva et al. initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by Agrawal et al. for allowing efficient range queries on encrypted data.
Abstract: We initiate the cryptographic study of order-preserving symmetric encryption (OPE), a primitive suggested in the database community by Agrawal et al. (SIGMOD ’04) for allowing efficient range queries on encrypted data. Interestingly, we first show that a straightforward relaxation of standard security notions for encryption such as indistinguishability against chosen-plaintext attack (IND-CPA) is unachievable by a practical OPE scheme. Instead, we propose a security notion in the spirit of pseudorandom functions (PRFs) and related primitives asking that an OPE scheme look “as-random-as-possible” subject to the order-preserving constraint. We then design an efficient OPE scheme and prove its security under our notion based on pseudorandomness of an underlying blockcipher. Our construction is based on a natural relation we uncover between a random order-preserving function and the hypergeometric probability distribution. In particular, it makes black-box use of an efficient sampling algorithm for the latter.
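
Order preservation means m1 ≤ m2 implies Enc(m1) ≤ Enc(m2), which is what enables range queries over ciphertexts. The toy construction below (names like gap and ope_encrypt are ours) achieves monotonicity by summing secret PRF-derived gaps; it mimics only the functional property and is nothing like the paper's scheme, which lazily samples a random order-preserving function via hypergeometric sampling to be both efficient and analyzable.

```python
# Toy order-preserving encryption: ciphertext(m) = sum of secret positive
# gaps for 0..m. Monotonicity demo only; not the paper's construction.
import hmac, hashlib

def gap(key: bytes, i: int) -> int:
    d = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return 1 + d[0]  # secret gap in [1, 256]; positivity gives strictness

def ope_encrypt(key: bytes, m: int) -> int:
    return sum(gap(key, i) for i in range(m + 1))  # strictly increasing in m

key = b"\x01" * 32
cts = [ope_encrypt(key, m) for m in (3, 17, 42)]
assert cts == sorted(cts)  # order preserved => range queries still work
print(cts)
```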

Posted Content
TL;DR: New PUF definitions that require only weak average security properties of the PUF are provided, and it is proved that these definitions suffice to realize secure PUF-based oblivious transfer (OT), bit commitment (BC) and key exchange (KE) in said setting.
Abstract: We investigate the power of physical unclonable functions (PUFs) as a new primitive in cryptographic protocols. Our contributions split into three parts. Firstly, we focus on the realizability of PUF-protocols in a special type of stand-alone setting (the “stand-alone, good PUF setting”) under minimal assumptions. We provide new PUF definitions that require only weak average security properties of the PUF, and prove that these definitions suffice to realize secure PUF-based oblivious transfer (OT), bit commitment (BC) and key exchange (KE) in said setting. Our protocols for OT, BC and KE are partly new, and have certain practicality and security advantages compared to existing schemes. In the second part of the paper, we formally prove that there are very sharp limits on the usability of PUFs for OT and KE beyond the above stand-alone, good PUF scenario. We introduce two new and realistic attack models, the so-called posterior access model (PAM) and the bad PUF model, and prove several impossibility results in these models. First, OT and KE protocols whose security is solely based on PUFs are generally impossible in the PAM. More precisely, one-time access of an adversary to the PUF after the end of a single protocol (sub-)session makes all previous (sub-)sessions provably insecure. Second, OT whose security is solely based on PUFs is impossible in the bad PUF model, even if only a stand-alone execution of the protocol is considered (i.e., even if no adversarial PUF access after the protocol is allowed). Our impossibility proofs hold not only for the weak PUF definition of the first part of the paper, but even apply if ideal randomness and unpredictability is assumed in the PUF, i.e., if the PUF is modeled as a random permutation oracle. In the third part, we investigate the feasibility of PUF-based bit commitment beyond the stand-alone, good PUF setting. For a number of reasons, this case is more complicated than OT and KE. We first prove that BC is impossible in the bad PUF model if players have got access to the PUF between the commit and the reveal phase. Again, this result holds even if the PUF is “ideal” and modeled as a random permutation oracle. Secondly, we sketch (without proof) two new BC-protocols, which can deal with bad PUFs or with adversarial access between the commit and reveal phase, but not with both. We hope that our results can contribute to a clarification of the usability of PUFs in cryptographic protocols. They show that new hardware properties such as offline certifiability and the erasure of PUF responses would be required in order to make PUFs a broadly applicable cryptographic tool. These features have not yet been realized in practical PUF-implementations and generally seem hard to achieve at low costs. Our findings also show that the question how PUFs can be modeled comprehensively in a UC-setting must be considered at least partly open.

Posted Content
TL;DR: A flaw in the standard security definitions used in the literature on provable concrete security is discussed in this paper, where the authors analyze the magnitude of the flaw in detail and discuss several strategies for fixing the definitions.
Abstract: There is a flaw in the standard security definitions used in the literature on provable concrete security. The definitions are frequently conjectured to assign a security level of 2^128 to AES, the NIST P-256 elliptic curve, DSA-3072, RSA-3072, and various higher-level protocols, but they actually assign a far lower security level to each of these primitives and protocols. This flaw undermines security evaluations and comparisons throughout the literature. This paper analyzes the magnitude of the flaw in detail and discusses several strategies for fixing the definitions.

Posted Content
TL;DR: In this article, the authors provide a framework enabling the construction of IBE schemes that are secure under related-key attacks (RKAs) for sets of related key derivation functions that are non-linear.
Abstract: We provide a framework enabling the construction of IBE schemes that are secure under related-key attacks (RKAs). Specific instantiations of the framework yield RKA-secure IBE schemes for sets of related key derivation functions that are non-linear, thus overcoming a current barrier in RKA security. In particular, we obtain IBE schemes that are RKA secure for sets consisting of all affine functions and all polynomial functions of bounded degree. Based on this we obtain the first constructions of RKA-secure schemes for the same sets for the following primitives: CCA-secure public-key encryption, CCA-secure symmetric encryption and signatures. All our results are in the standard model and hold under reasonable hardness assumptions.

Posted Content
TL;DR: This paper proposes the first broadcast encryption scheme with sublinear ciphertexts to attain meaningful guarantees of receiver anonymity, and formalizes the notion of outsider-anonymous broadcast encryption (oABE), and describes generic constructions in the standard model that achieve outsider- anonymity under adaptive corruption in the chosen-plaintext and chosen-ciphertext settings.
Abstract: In the standard setting of broadcast encryption, information about the receivers is transmitted as part of the ciphertext. In several broadcast scenarios, however, the identities of the users authorized to access the content are often as sensitive as the content itself. In this paper, we propose the first broadcast encryption scheme with sublinear ciphertexts to attain meaningful guarantees of receiver anonymity. We formalize the notion of outsider-anonymous broadcast encryption (oABE), and describe generic constructions in the standard model that achieve outsider-anonymity under adaptive corruptions in the chosen-plaintext and chosen-ciphertext settings. We also describe two constructions with enhanced decryption, one under the gap Diffie-Hellman assumption, in the random oracle model, and the other under the decisional Diffie-Hellman assumption, in the standard model.