
Showing papers in "IACR Cryptology ePrint Archive in 2009"


Posted Content
TL;DR: In this paper, a somewhat homomorphic encryption scheme using only elementary modular arithmetic is described, whose main appeal is its conceptual simplicity. The security of the scheme is reduced to finding an approximate integer gcd, i.e., given a list of integers that are near-multiples of a hidden integer, output that hidden integer.
Abstract: We describe a very simple “somewhat homomorphic” encryption scheme using only elementary modular arithmetic, and use Gentry’s techniques to convert it into a fully homomorphic scheme. Compared to Gentry’s construction, our somewhat homomorphic scheme merely uses addition and multiplication over the integers rather than working with ideal lattices over a polynomial ring. The main appeal of our approach is the conceptual simplicity. We reduce the security of our somewhat homomorphic scheme to finding an approximate integer gcd – i.e., given a list of integers that are near-multiples of a hidden integer, output that hidden integer. We investigate the hardness of this task, building on earlier work of Howgrave-Graham.
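The scheme's conceptual simplicity comes through even in a toy sketch. The Python below uses my own illustrative parameters (far too small for any security) and shows only the symmetric-key core of the construction: a ciphertext is a noisy multiple of a secret odd integer, and homomorphic evaluation is plain integer arithmetic.

```python
import random

P = 10007  # secret key: an odd integer (toy size; real keys are huge)

def encrypt(m, p=P):
    """Ciphertext is a near-multiple of p: c = q*p + 2*r + m, with m in {0, 1}."""
    q = random.randrange(2**20, 2**21)  # random large multiplier
    r = random.randrange(-15, 16)       # small noise, |2r| << p
    return q * p + 2 * r + m

def decrypt(c, p=P):
    """Centered residue mod p, then parity; correct while the noise stays < p/2."""
    z = c % p
    if z > p // 2:
        z -= p
    return z % 2

c0, c1 = encrypt(0), encrypt(1)
assert decrypt(c0 + c1) == 1  # ciphertext addition acts as XOR of plaintext bits
assert decrypt(c0 * c1) == 0  # ciphertext multiplication acts as AND of plaintext bits
```

Each operation grows the noise term, which is why the basic scheme is only "somewhat" homomorphic until Gentry's bootstrapping is applied.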

1,297 citations


Posted Content
Qian Wang, Cong Wang, Jin Li, Kui Ren, Wenjing Lou 
TL;DR: The task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of dynamic data stored in the cloud is considered, and a verification scheme is constructed that seamlessly integrates public verifiability and dynamic data operations in the protocol design.
Abstract: Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to the centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of a TPA eliminates the involvement of the client in auditing whether his data stored in the cloud is indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack the support of either public verifiability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the Proof of Retrievability model [1] by manipulating the classic Merkle Hash Tree (MHT) construction for block tag authentication. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.
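The Merkle Hash Tree used for block tag authentication can be sketched in a few lines. This is a generic MHT with sibling-path verification (function names are mine), not the paper's full protocol:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    """All tree levels, leaves first; an odd level duplicates its last node."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof(levels, index):
    """Sibling hashes from leaf to root for the block at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        path.append((level[sib], sib < index))  # (hash, sibling-is-left?)
        index //= 2
    return path

def verify(block, path, root):
    node = h(block)
    for sib, sib_left in path:
        node = h(sib + node) if sib_left else h(node + sib)
    return node == root

blocks = [b"tag0", b"tag1", b"tag2", b"tag3", b"tag4"]
levels = build_levels(blocks)
root = levels[-1][0]
assert verify(blocks[2], proof(levels, 2), root)
assert not verify(b"forged", proof(levels, 2), root)
```

The auditor only needs the signed root: any block (or tag) plus its logarithmic-length sibling path suffices to check integrity, which is what makes efficient block-level updates possible.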

969 citations


Posted Content
TL;DR: In this article, the authors present Dual System Encryption, a new methodology for proving security of IBE and related encryption systems.
Abstract: We present a new methodology for proving security of encryption systems using what we call Dual System Encryption. Our techniques result in fully secure Identity-Based Encryption (IBE) and Hierarchical Identity-Based Encryption (HIBE) systems under the simple and established decisional Bilinear Diffie-Hellman and decisional Linear assumptions. Our IBE system has ciphertexts, private keys, and public parameters each consisting of a constant number of group elements. These are the first HIBE system and the first IBE system with short parameters under simple assumptions. In a Dual System Encryption system both ciphertexts and private keys can take on one of two indistinguishable forms. A private key or ciphertext will be normal if they are generated respectively from the system’s key generation or encryption algorithm. These keys and ciphertexts will behave as one expects in an IBE system. In addition, we define semi-functional keys and ciphertexts. A semi-functional private key will be able to decrypt all normally generated ciphertexts; however, decryption will fail if one attempts to decrypt a semi-functional ciphertext with a semi-functional private key. Analogously, semi-functional ciphertexts will be decryptable only by normal private keys. Dual System Encryption opens up a new way to prove security of IBE and related encryption systems. We define a sequence of games where we change first the challenge ciphertext and then the private keys one by one to be semi-functional. We finally end up in a game where the challenge ciphertext and all private keys are semi-functional at which point proving security is straightforward.

650 citations


Posted Content
TL;DR: A differential fault attack that can be applied to the AES using a single fault is presented, demonstrating that when a single random byte fault is induced at the input of the eighth round, the AES key can be deduced using a two-stage algorithm.
Abstract: In this paper we present a differential fault attack that can be applied to the AES using a single fault. We demonstrate that when a single random byte fault is induced at the input of the eighth round, the AES key can be deduced using a two-stage algorithm. The first step has a statistical expectation of reducing the possible key hypotheses to 2, and the second step to a mere 2. Furthermore, we show that, with certain faults, this can be reduced to two key hypotheses.

273 citations


Posted Content
TL;DR: In this article, the authors presented efficient secure protocols for set intersection and pattern matching based on secure pseudorandom function evaluations, in contrast to previous protocols that are based on polynomials.
Abstract: In this paper we construct efficient secure protocols for set intersection and pattern matching. Our protocols for securely computing the set intersection functionality are based on secure pseudorandom function evaluations, in contrast to previous protocols that are based on polynomials. In addition to the above, we also use secure pseudorandom function evaluation in order to achieve secure pattern matching. In this case, we utilize specific properties of the Naor-Reingold pseudorandom function in order to achieve high efficiency. Our results are presented in two adversary models. Our protocol for secure pattern matching and one of our protocols for set intersection achieve security against malicious adversaries under a relaxed definition where one corruption case is simulatable and for the other only privacy (formalized through indistinguishability) is guaranteed. We also present a protocol for set intersection that is fully simulatable in the model of covert adversaries. Loosely speaking, this means that a malicious adversary can cheat, but will then be caught with good probability.
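The PRF-based set intersection idea can be sketched as follows. HMAC stands in for the Naor-Reingold PRF here, and the oblivious evaluation step (the crux of the real protocol, which keeps the key hidden from the client and the inputs hidden from the key holder) is modeled by a direct call:

```python
import hashlib
import hmac

def prf(key: bytes, x: str) -> bytes:
    # Stand-in keyed PRF; the paper uses the Naor-Reingold PRF instead.
    return hmac.new(key, x.encode(), hashlib.sha256).digest()

key = b"server-prf-key"  # held by the PRF owner only
server_set = {"alice", "bob", "carol"}
client_set = {"bob", "dave"}

# The server publishes PRF values of its elements (pseudorandom tags).
server_tags = {prf(key, x) for x in server_set}

# The client obtains PRF values of its own elements via oblivious
# evaluation -- modeled here by calling prf directly.
client_tags = {prf(key, x): x for x in client_set}

intersection = {x for t, x in client_tags.items() if t in server_tags}
assert intersection == {"bob"}
```

Because the tags are pseudorandom, elements outside the intersection reveal nothing about either party's set beyond its size.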

211 citations


Posted Content
TL;DR: In this thesis, the author investigates lightweight cryptographic designs, modifying DES into DESL and designing the new block cipher PRESENT, whose serialized implementation is the smallest published; hash functions built from PRESENT and the crypto-GPS identification scheme are also presented.
Abstract: Increasingly, everyday items are enhanced into pervasive devices by embedding computing power, and their interconnection leads to Mark Weiser’s famous vision of ubiquitous computing (ubicomp), which is widely believed to be the next paradigm in information technology. The mass deployment of pervasive devices promises on the one hand many benefits (e.g. optimized supply-chains), but on the other hand, many foreseen applications are security sensitive (military, financial or automotive applications), not to mention possible privacy issues. Even worse, pervasive devices are deployed in a hostile environment, i.e. an adversary has physical access to or control over the devices, which enables the whole field of physical attacks. Not only is the adversary model different for ubicomp, but its optimisation goals are also significantly different from those of traditional application scenarios: high throughput is usually not an issue but power, energy and area are sparse resources. Due to the harsh cost constraints for ubicomp applications only the least required amount of computing power will be realized. If computing power is fixed and costs are variable, Moore’s Law leads to the paradox of an increasing demand for lightweight solutions. In this Thesis different approaches are followed to investigate new lightweight cryptographic designs for block ciphers, hash functions and asymmetric identification schemes. A strong focus is put on lightweight hardware implementations that require as little area (measured in Gate Equivalents (GE)) as possible. We start by scrutinizing the Data Encryption Standard (DES)—a standardized and well-investigated algorithm—and subsequently slightly modify it (yielding DESL) to decrease the area requirements. Then we start from scratch and design a complete new algorithm, called PRESENT, where we could build upon the results of the first step.
A variety of implementation results of PRESENT—both in software and hardware—using different design strategies and different platforms is presented. Our serialized ASIC implementation (1,000 GE) is the smallest published and enabled PRESENT to be considered as a suitable candidate for the upcoming ISO/IEC standard on lightweight cryptography (ISO/IEC JTC1 SC27 WG2). Inspired by these implementation results, we propose several lightweight hash functions that are based on PRESENT in Davies-Meyer mode (DM-PRESENT-80, DM-PRESENT-128) and in Hirose mode (H-PRESENT-128). For their security level of 64 (DM-PRESENT-80, DM-PRESENT-128) and 128 bits (H-PRESENT-128) the implementation results are the smallest published. Finally, we use PRESENT in output feedback mode (OFB) as a pseudo-random number generator within the asymmetric identification scheme crypto-GPS. Its design trade-offs are discussed and the implementation results of different architectures (starting from 2,181 GE) are backed with figures from a manufactured prototype ASIC. We conclude that block ciphers drew level with stream ciphers with regard to low area requirements. Consequently, hash functions that are based on block ciphers can be implemented efficiently in hardware as well. However, it is not easy to obtain lightweight hash functions with a digest size greater than or equal to 160 bits. Given the required parameters, it is very unlikely that the NIST SHA-3 hash competition will lead to a lightweight approach. Hence, lightweight hash functions with a digest size greater than or equal to 160 bits remain an open research problem.
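The Davies-Meyer mode at the core of DM-PRESENT is simple enough to sketch. In the toy Python below, a hash-based stand-in replaces PRESENT itself, so only the mode is illustrated; the block and key sizes mirror PRESENT-80's 64-bit block and 80-bit key:

```python
import hashlib

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher on 8-byte blocks (NOT PRESENT, not even a
    # permutation; for illustrating the mode only).
    return hashlib.sha256(key + block).digest()[:8]

def dm_hash(message: bytes, iv: bytes = b"\x00" * 8) -> bytes:
    """Davies-Meyer chaining: H_i = E(m_i, H_{i-1}) XOR H_{i-1}."""
    if len(message) % 10:  # pad to 10-byte (80-bit) "key" blocks
        message += b"\x80" + b"\x00" * (9 - len(message) % 10)
    h = iv
    for i in range(0, len(message), 10):
        e = toy_block_cipher(message[i:i + 10], h)  # message block keys the cipher
        h = bytes(a ^ b for a, b in zip(e, h))      # feed-forward XOR
    return h

assert dm_hash(b"abc") == dm_hash(b"abc")
assert dm_hash(b"abc") != dm_hash(b"abd")
```

The appeal for lightweight hardware is that the hash reuses the existing cipher datapath: the message block drives the key schedule, and only the feed-forward XOR is added.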

187 citations


Posted Content
TL;DR: In this paper, the chosen-prefix collision construction for MD5 has been improved to allow creation of a rogue CA certificate based on a collision with a regular end-user website certificate provided by a commercial CA.
Abstract: We present a refined chosen-prefix collision construction for MD5 that allowed creation of a rogue Certification Authority (CA) certificate, based on a collision with a regular end-user website certificate provided by a commercial CA. Compared to the previous construction from Eurocrypt 2007, this paper describes a more flexible family of differential paths and a new variable birthdaying search space. Combined with a time-memory trade-off, these improvements lead to just three pairs of near-collision blocks to generate the collision, enabling construction of RSA moduli that are sufficiently short to be accepted by current CAs. The entire construction is fast enough to allow for adequate prediction of certificate serial number and validity period: it can be made to require about 2^49 MD5 compression function calls. Finally, we improve the complexity of identical-prefix collisions for MD5 to about 2^16 MD5 compression function calls and use it to derive a practical single-block chosen-prefix collision construction of which an example is given.
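For context, the generic baseline such constructions compete with is the naive birthday search, which finds a collision on an n-bit hash in roughly 2^(n/2) evaluations. The sketch below (MD5 truncated to 24 bits so it finishes quickly; this is NOT the paper's differential technique) demonstrates that baseline:

```python
import hashlib

def h24(data: bytes) -> bytes:
    """MD5 truncated to 24 bits, so a collision takes ~2^12 tries on average."""
    return hashlib.md5(data).digest()[:3]

def birthday_collision(prefix: bytes):
    """Counter-based search for two distinct messages with equal truncated hash."""
    seen = {}
    i = 0
    while True:
        msg = prefix + i.to_bytes(8, "big")
        tag = h24(msg)
        if tag in seen:
            return seen[tag], msg
        seen[tag] = msg
        i += 1

m1, m2 = birthday_collision(b"chosen-prefix-")
assert m1 != m2 and h24(m1) == h24(m2)
```

Against full 128-bit MD5 this generic approach would cost about 2^64 calls; the differential-path constructions in the paper get chosen-prefix collisions down to around 2^49.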

161 citations


Posted Content
TL;DR: This paper shows attacks of practical, experimentally verifiable complexity on reduced-round variants of AES-256 with up to 10 rounds, increasing the understanding of AES security.
Abstract: AES is the best known and most widely used block cipher. Its three versions (AES-128, AES-192, and AES-256) differ in their key sizes (128 bits, 192 bits and 256 bits) and in their number of rounds (10, 12, and 14, respectively). In the case of AES-128, there is no known attack which is faster than the 2^128 complexity of exhaustive search. However, AES-192 and AES-256 were recently shown to be breakable by attacks which require 2 and 2 time, respectively. While these complexities are much faster than exhaustive search, they are completely non-practical, and do not seem to pose any real threat to the security of AES-based systems. In this paper we describe several attacks which can break with practical complexity variants of AES-256 whose number of rounds is comparable to that of AES-128. One of our attacks uses only two related keys and 2 time to recover the complete 256-bit key of a 9-round version of AES-256 (the best previous attack on this variant required 4 related keys and 2 time). Another attack can break a 10 round version of AES-256 in 2 time, but it uses a stronger type of related subkey attack (the best previous attack on this variant required 64 related keys and 2 time). While neither AES-128 nor AES-256 can be directly broken by these attacks, the fact that their hybrid (which combines the smaller number of rounds from AES-128 along with the larger key size from AES-256) can be broken with such a low complexity raises serious concern about the remaining safety margin offered by the AES family of cryptosystems.

160 citations


Posted Content
TL;DR: In this paper, a simple way to reduce the key size in McEliece and related cryptosystems using a subclass of Goppa codes was described, while keeping the capability of correcting the full designed number of errors in the binary case.
Abstract: The classical McEliece cryptosystem is built upon the class of Goppa codes, which remains secure to this date in contrast to many other families of codes but leads to very large public keys. Previous proposals to obtain short McEliece keys have primarily centered around replacing that class by other families of codes, most of which were shown to contain weaknesses, at the cost of halving the error-correction capability. In this paper we describe a simple way to significantly reduce the key size in McEliece and related cryptosystems using a subclass of Goppa codes, while also improving the efficiency of cryptographic operations to O(n) time, and keeping the capability of correcting the full designed number of errors in the binary case.

158 citations


Posted Content
TL;DR: Naor and Segev present a generic construction of a public-key encryption scheme that is resilient to key leakage from any hash proof system, which can be based on the decisional Diffie-Hellman assumption and its d-Linear variants.
Abstract: Most of the work in the analysis of cryptographic schemes is concentrated in abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. Inspired by recent side-channel attacks, especially the “cold boot attacks” of Halderman et al. (USENIX Security ’08), Akavia, Goldwasser and Vaikuntanathan (TCC ’09) formalized a realistic framework for modeling the security of encryption schemes against a wide class of sidechannel attacks in which adversarially chosen functions of the secret key are leaked. In the setting of public-key encryption, Akavia et al. showed that Regev’s lattice-based scheme (STOC ’05) is resilient to any leakage of L/polylog(L) bits, where L is the length of the secret key. In this paper we revisit the above-mentioned framework and our main results are as follows: • We present a generic construction of a public-key encryption scheme that is resilient to key leakage from any hash proof system. The construction does not rely on additional computational assumptions, and the resulting scheme is as efficient as the underlying hash proof system. Existing constructions of hash proof systems imply that our construction can be based on a variety of number-theoretic assumptions, including the decisional Diffie-Hellman assumption (and its progressively weaker d-Linear variants), the quadratic residuosity assumption, and Paillier’s composite residuosity assumption. • We construct a new hash proof system based on the decisional Diffie-Hellman assumption (and its d-Linear variants), and show that the resulting scheme is resilient to any leakage of L(1−o(1)) bits. In addition, we prove that the recent scheme of Boneh et al. (CRYPTO ’08), constructed to be a “circular-secure” encryption scheme, fits our generic approach and is also resilient to any leakage of L(1− o(1)) bits. 
• We extend the framework of key leakage to the setting of chosen-ciphertext attacks. On the theoretical side, we prove that the Naor-Yung paradigm is applicable in this setting as well, and obtain as a corollary encryption schemes that are CCA2-secure with any leakage of L(1 − o(1)) bits. On the practical side, we prove that variants of the Cramer-Shoup cryptosystem (along the lines of our generic construction) are CCA1-secure with any leakage of L/4 bits, and CCA2-secure with any leakage of L/6 bits. A preliminary version of this work appeared in Advances in Cryptology – CRYPTO ’09, pages 18–35, 2009. This is the full version that will appear in SIAM Journal on Computing.

157 citations


Posted Content
TL;DR: In this paper, the authors investigate the foundations of PUFs from several perspectives and present alternative definitions and a new formalism, which is based on concrete time bounds and on the concept of a security experiment.
Abstract: We investigate the foundations of Physical Unclonable Functions from several perspectives. Firstly, we discuss formal and conceptual issues in the various current definitions of PUFs. As we argue, they have the effect that many PUF candidates formally meet no existing definition. Next, we present alternative definitions and a new formalism. It avoids asymptotic concepts like polynomial time, but is based on concrete time bounds and on the concept of a security experiment. The formalism splits the notion of a PUF into two new notions, Strong t-PUFs and Obfuscating t-PUFs. Then, we provide a comparative analysis between the existing definitions and our new notions, by classifying existing PUF implementations with respect to them. In this process, we use several new and unpublished machine learning results. The outcome of this comparative classification is that our definitions seem to match the current PUF landscape well, perhaps better than previous definitions. Finally, we analyze the security and practicality features of Strong and Obfuscating t-PUFs in concrete applications, obtaining further justification for the split into two notions.

Posted Content
TL;DR: A protocol for anonymous access to a database in which different records have different access control permissions is presented: only authorized users can access a record, and the database provider learns neither which record is accessed nor which attributes or roles the user holds.
Abstract: We present a protocol for anonymous access to a database where the different records have different access control permissions. These permissions could be attributes, roles, or rights that the user needs to have in order to access the record. Our protocol offers maximal security guarantees for both the database and the user, namely (1) only authorized users can access the record; (2) the database provider does not learn which record the user accesses; and (3) the database provider does not learn which attributes or roles the user has when she accesses the database. We prove our protocol secure in the standard model (i.e., without random oracles) under the bilinear Diffie-Hellman exponent and the strong Diffie-Hellman assumptions.

Posted Content
TL;DR: In this article, the authors revisit the problem of constructing efficient secure two-party protocols for the problems of set intersection and set-union, focusing on the model of malicious parties, and present constant-round protocols that exhibit linear communication and a (practically) linear number of exponentiations with simulation based security.
Abstract: We revisit the problem of constructing efficient secure two-party protocols for the problems of set intersection and set union, focusing on the model of malicious parties. Our main results are constant-round protocols that exhibit linear communication and a (practically) linear number of exponentiations with simulation-based security. At the heart of these constructions is a technique based on a combination of a perfectly hiding commitment and an oblivious pseudorandom function evaluation protocol. Our protocols readily transform into protocols that are UC-secure, and we discuss how to perform these transformations.
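The perfectly hiding commitment mentioned above can be illustrated with a Pedersen-style commitment, a standard instantiation of that primitive (the paper does not mandate this particular one, and the group parameters below are toy-sized):

```python
import random

# Toy group: p = 2q + 1 with q prime; the squares mod p form a subgroup of order q.
p, q = 23, 11
g, h = 4, 9  # two squares mod p; in practice log_g(h) must be unknown to the committer

def commit(m, r=None):
    """Pedersen commitment c = g^m * h^r mod p; r is the blinding randomness."""
    r = random.randrange(q) if r is None else r
    return pow(g, m, p) * pow(h, r, p) % p, r

def open_commitment(c, m, r):
    return c == pow(g, m, p) * pow(h, r, p) % p

c, r = commit(5)
assert open_commitment(c, 5, r)      # honest opening verifies
assert not open_commitment(c, 6, r)  # a different message is rejected
```

The random exponent r makes the commitment perfectly hiding (every message is equally consistent with c), while binding rests on the hardness of computing log_g(h).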

Posted Content
TL;DR: The notion of leakage-resilient signatures was introduced in this paper, which strengthened the standard security notion by giving the adversary the additional power to learn a bounded amount of arbitrary information about the secret state accessed during every signature generation.
Abstract: The strongest standard security notion for digital signature schemes is unforgeability under chosen message attacks. In practice, however, this notion can be insufficient due to “side-channel attacks” which exploit leakage of information about the secret internal state of the scheme’s hardware implementation. In this work we put forward the notion of “leakage-resilient signatures,” which strengthens the standard security notion by giving the adversary the additional power to learn a bounded amount of arbitrary information about the secret state that was accessed during every signature generation. This notion naturally implies security against all possible side-channel attacks as long as the amount of information leaked on each invocation is bounded and “only computation leaks information.” The main result of this paper is a construction which gives a (tree-based, stateful) leakage-resilient signature scheme based on any 3-time signature scheme. The amount of information that our scheme can safely leak per signature generation is 1/3 of the information the underlying 3-time signature scheme can leak in total. Based on recent works by Alwen, Dodis, Wichs and by Katz we give several efficient instantiations of 3-time signature schemes with the required security properties, hence yielding the first constructions of provably secure leakage-resilient signature schemes.

Posted Content
TL;DR: The notion of Strong CP-ABE is proposed, allowing each attribute private key to be linked to the corresponding user’s secret that is unknown to the attribute authority, and it is shown how to construct such a Strong CP-ABE scheme and prove its security based on the computational Diffie-Hellman assumption.
Abstract: As a recently proposed public key primitive, attribute-based encryption (ABE) (including Ciphertext-policy ABE (CP-ABE) and Key-policy ABE (KP-ABE)) is a highly promising tool for secure fine-grained access control. For the purpose of secure access control, there is, however, still one critical functionality missing in the existing ABE schemes, which is the prevention of key abuse. In particular, two kinds of key abuse problems are considered in this paper, i) illegal key sharing among colluding users and ii) misbehavior of the semi-trusted attribute authority including illegal key (re-)distribution. Both problems are extremely important as in an ABE-based access control system, the attribute private keys directly imply users’ privileges to the protected resources. To the best of our knowledge, such key abuse problems exist in all current ABE schemes as the attribute private keys assigned to the users are never designed to be linked to any user specific information except the commonly shared user attributes. To be concrete, we focus on the prevention of key abuse in CP-ABE in this paper. The notion of accountable CP-ABE is first proposed to prevent illegal key sharing among colluding users. Accountability for the user is achieved by embedding additional user specific information in the attribute private key issued to the user. To further obtain accountability for the attribute authority as well, the notion of Strong CP-ABE is proposed, allowing each attribute private key to be linked to the corresponding user’s secret that is unknown to the attribute authority. We show how to construct such a Strong CP-ABE scheme and prove its security based on the computational Diffie-Hellman assumption.

Posted Content
TL;DR: It is shown that inducing a random fault anywhere in one of the four diagonals of the state matrix at the input of the eighth round of the cipher leads to the deduction of the entire AES key.
Abstract: The present paper develops an attack on the AES algorithm, exploiting multiple byte faults in the state matrix. The work shows that inducing a random fault anywhere in one of the four diagonals of the state matrix at the input of the eighth round of the cipher leads to the deduction of the entire AES key. We also propose a more generalized fault attack which works if the fault induction does not stay confined to one diagonal. To the best of our knowledge, we present for the first time actual chip results for a fault attack on an iterative AES hardware running on a Xilinx FPGA platform. We show that when the fault stays within a diagonal, the AES key can be deduced with a brute force complexity of approximately 2, which was successfully performed in about 400 seconds on an Intel Xeon Server with 8 cores. We show further that even if the fault induction corrupts two or three diagonals, 2 and 4 faulty ciphertexts are necessary to uniquely identify the correct key.

Posted Content
TL;DR: In this article, a distributed PKG setup and private key extraction protocols in an asynchronous communication model for three important identity-based encryption (IBE) schemes were proposed, including Boneh and Franklin's IBE, Sakai and Kasahara's IBE, and Boneh and Boyen's BB1-IBE.
Abstract: An identity-based encryption (IBE) scheme can greatly reduce the complexity of sending encrypted messages over the Internet. However, an IBE scheme necessarily requires a private-key generator (PKG), which can create private keys for clients, and so can passively eavesdrop on all encrypted communications. Although a distributed PKG has been suggested as a way to mitigate this problem for Boneh and Franklin’s IBE scheme, the security of this distributed protocol has not been proven and the proposed solution does not work over the asynchronous Internet. Further, a distributed PKG has not been considered for any other IBE scheme. In this paper, we design distributed PKG setup and private key extraction protocols in an asynchronous communication model for three important IBE schemes; namely, Boneh and Franklin’s IBE, Sakai and Kasahara’s IBE, and Boneh and Boyen’s BB1-IBE. We give special attention to the applicability of our protocols to all possible types of bilinear pairings and prove their IND-ID-CCA security in the random oracle model. Finally, we also perform a comparative analysis of these protocols and present recommendations for their use.

Posted Content
TL;DR: It is concluded that for 1024-bit RSA the risk is small at least until the year 2014, and that 160-bit ECC may safely be used for much longer – with the current state of the art in cryptanalysis the authors would be surprised if a public effort can make a dent in 160-bit ECC by the year 2020.
Abstract: Meeting the requirements of NIST’s new cryptographic standard ‘Suite B Cryptography’ means phasing out usage of 1024-bit RSA and 160-bit Elliptic Curve Cryptography (ECC) by the year 2010. This write-up comments on the vulnerability of these systems to an open community attack effort and aims to assess the risk of their continued usage beyond 2010. We conclude that for 1024-bit RSA the risk is small at least until the year 2014, and that 160-bit ECC may safely be used for much longer – with the current state of the art in cryptanalysis we would be surprised if a public effort can make a dent in 160-bit ECC by the year 2020. Our assessment is based on the latest practical data of large-scale integer factorization and elliptic curve discrete logarithm computation efforts.

Posted Content
TL;DR: Das’s two-factor authentication protocol for wireless sensor networks is shown to be vulnerable to an offline password guessing attack, and a countermeasure is presented that overcomes the vulnerability without sacrificing efficiency or usability.
Abstract: User authentication is essential for customized services and privileged access control in wireless sensor networks. In 2009, Das proposed a novel two-factor authentication scheme for wireless sensor networks, where a user must prove the possession of both a password and a smart card. His scheme is well-designed for sensor nodes which typically have limited resources in the sense that its authentication procedure requires no public key operations but utilizes only a cryptographic hash function. In this letter, we point out that Das’s protocol is vulnerable to an offline password guessing attack, and also show a countermeasure to overcome the vulnerability without sacrificing any efficiency and usability. Besides the patch, we suggest a method to protect query response messages from a wireless sensor node to a user, which is necessary in serving a user in a confidential and authentic way.
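The offline password guessing threat is easy to demonstrate: once an attacker extracts any verifier value derived from a low-entropy password, every candidate can be tested locally with no further network interaction. The sketch below uses a hypothetical h(password || salt) verifier, not the actual message format of Das's scheme:

```python
import hashlib

def verifier(password: str, salt: bytes) -> bytes:
    # Hypothetical card-stored verifier; any offline-checkable function
    # of the password enables the same attack.
    return hashlib.sha256(password.encode() + salt).digest()

salt = b"card-stored-salt"
stolen = verifier("sunshine", salt)  # value the attacker extracted

dictionary = ["123456", "password", "sunshine", "letmein"]
recovered = next((pw for pw in dictionary
                  if verifier(pw, salt) == stolen), None)
assert recovered == "sunshine"
```

Countermeasures work by removing the offline-checkable value, e.g. by binding the verifier to a secret the attacker cannot extract, so each guess forces an online interaction that can be rate-limited.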

Posted Content
TL;DR: In this paper, the authors proposed a multi-authority attribute-based encryption scheme, in which only the set of recipients defined by the encrypting party can decrypt a corresponding ciphertext.
Abstract: An attribute based encryption scheme capable of handling multiple authorities was recently proposed by Chase. The scheme is built upon a single-authority attribute based encryption scheme presented earlier by Sahai and Waters. Chase’s construction uses a trusted central authority that is inherently capable of decrypting arbitrary ciphertexts created within the system. We present a multi-authority attribute based encryption scheme in which only the set of recipients defined by the encrypting party can decrypt a corresponding ciphertext. The central authority is viewed as “honest-but-curious”: on the one hand it honestly follows the protocol, and on the other hand it is curious to decrypt arbitrary ciphertexts thus violating the intent of the encrypting party. The proposed scheme, which like its predecessors relies on the Bilinear Diffie-Hellman assumption, has a complexity comparable to that of Chase’s scheme. We prove that our scheme is secure in the selective ID model and can tolerate an honest-but-curious central authority.

Posted Content
TL;DR: The notion of automorphic signatures is introduced, which satisfy the following properties: the verification keys lie in the message space, messages and signatures consist of elements of a bilinear group, and verification is done by evaluating a set of pairing-product equations.
Abstract: We introduce the notion of automorphic signatures, which satisfy the following properties: the verification keys lie in the message space, messages and signatures consist of elements of a bilinear group, and verification is done by evaluating a set of pairing-product equations. These signatures make a perfect counterpart to the powerful proof system by Groth and Sahai (Eurocrypt 2008). We provide practical instantiations of automorphic signatures under appropriate assumptions and use them to construct the first efficient round-optimal blind signatures. By combining them with Groth-Sahai proofs, we moreover give practical instantiations of various other cryptographic primitives, such as fully-secure group signatures, non-interactive anonymous credentials and anonymous proxy signatures. To do so, we show how to transform signature schemes whose message space is a group to a scheme that signs arbitrarily many messages at once.

Posted Content
TL;DR: In this article, the authors examined the relationship between and the efficiency of different approaches to standard (univariate) DPA attacks and established a link between the correlation coefficient and the conditional entropy in side-channel attacks.
Abstract: In this paper, we examine the relationship between and the efficiency of different approaches to standard (univariate) DPA attacks. We first show that, when fed with the same assumptions about the target device (i.e., with the same leakage model), the most popular approaches such as using a distance-of-means test, correlation analysis, and Bayes attacks are essentially equivalent in this setting. Differences observed in practice are not due to differences in the statistical tests but to statistical artifacts. Then, we establish a link between the correlation coefficient and the conditional entropy in side-channel attacks. In a first-order attack scenario, this relationship allows linking currently used metrics to evaluate standard DPA attacks (such as the number of power traces needed to perform a key recovery) with an information theoretic metric (the mutual information). Our results show that in the practical scenario defined formally in this paper, both measures are equally suitable to compare devices with respect to their susceptibility to DPA attacks. Together with observations regarding key and algorithm independence we consequently extend theoretical strategies for the sound evaluation of leaking devices towards the practice of side-channel attacks.
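The correlation-analysis variant of such a univariate DPA attack can be sketched on simulated measurements. The 4-bit S-box, Hamming-weight leakage model, and noise level below are illustrative assumptions; the attack ranks key hypotheses by the Pearson correlation between predicted and observed leakage.

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

# A 4-bit S-box (the PRESENT S-box) as a small stand-in target;
# the attack principle is identical for AES.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]

secret_key = 0x9
random.seed(1)
plaintexts = [random.randrange(16) for _ in range(200)]
# Simulated traces: Hamming weight of the S-box output plus Gaussian noise.
traces = [hamming_weight(SBOX[p ^ secret_key]) + random.gauss(0, 0.5)
          for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Correlation DPA: for each key guess, predict the leakage and correlate
# it against the measured traces; the correct key maximizes |rho|.
scores = {k: abs(pearson([hamming_weight(SBOX[p ^ k]) for p in plaintexts],
                         traces))
          for k in range(16)}
best = max(scores, key=scores.get)
print(hex(best))
```

With the same Hamming-weight predictions, a distance-of-means or Bayesian ranking on these traces recovers the same key, illustrating the equivalence the paper establishes.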

Posted Content
TL;DR: The fastest known algorithm for the shortest lattice vector problem runs in time 2^(2.465n+o(n)) and space 2^(1.233n+o(n)), where n is the lattice dimension as mentioned in this paper.
Abstract: The Shortest lattice Vector Problem is central in lattice-based cryptography, as well as in many areas of computational mathematics and computer science, such as computational number theory and combinatorial optimisation. We present an algorithm for solving it in time 2^(2.465n+o(n)) and space 2^(1.233n+o(n)), where n is the lattice dimension. This improves the best previously known algorithm, by Micciancio and Voulgaris (SODA 2010), which runs in time 2^(3.199n+o(n)) and space 2^(1.325n+o(n)).
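For contrast with the exponential-time sieving required in high dimensions, the two-dimensional case of the problem is solved exactly by Lagrange-Gauss reduction. A minimal sketch (assuming linearly independent integer input vectors):

```python
def lagrange_reduce(u, v):
    """Lagrange-Gauss reduction: return a shortest nonzero vector of the
    2-D lattice spanned by linearly independent integer vectors u, v."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract the nearest-integer multiple of u from v
        # (size reduction step).
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u
        u, v = v, u

shortest = lagrange_reduce((90, 123), (56, 77))
print(shortest)
```

Each iteration shrinks the longer basis vector, so the loop terminates with a reduced basis whose first vector attains the lattice minimum; no analogue of this exact polynomial-time procedure is known beyond very small dimensions, which is what makes SVP a viable hardness assumption.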

Posted Content
TL;DR: In this article, the authors construct a fully homomorphic encryption scheme from a somewhat homomorphic scheme, in which the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer.
Abstract: We present a fully homomorphic encryption scheme which has both relatively small key and ciphertext size. Our construction follows that of Gentry by producing a fully homomorphic scheme from a “somewhat” homomorphic scheme. For the somewhat homomorphic scheme the public and private keys consist of two large integers (one of which is shared by both the public and private key) and the ciphertext consists of one large integer. As such, our scheme has smaller message expansion and key size than Gentry’s original scheme. In addition, our proposal allows efficient fully homomorphic encryption over any field of characteristic two.
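Following Gentry's blueprint, the somewhat homomorphic layer supports a limited number of additions and multiplications before the noise inside a ciphertext overwhelms decryption. A toy secret-key sketch in the integer-based style of van Dijk et al. (summarized at the top of this listing) illustrates the mechanism; the parameters are illustrative and far too small for any security.

```python
import random

random.seed(0)
# Toy "somewhat homomorphic" scheme over the integers: the secret key is
# an odd integer p; a bit m is encrypted as m + 2r + p*q for small noise r.
p = random.randrange(2**30, 2**31) | 1   # odd secret key

def encrypt(m):
    q = random.randrange(2**60, 2**61)
    r = random.randrange(-2**8, 2**8)    # noise; must stay below p/2
    return m + 2 * r + p * q

def decrypt(c):
    z = c % p
    if z > p // 2:                       # centered reduction mod p
        z -= p
    return z % 2

a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)
# Integer addition gives homomorphic XOR; multiplication gives AND.
# Each operation grows the noise term, hence only "somewhat" homomorphic.
assert decrypt(ca + cb) == a ^ b
assert decrypt(ca * cb) == a & b
print("homomorphic XOR/AND verified")
```

Gentry's bootstrapping then turns such a noise-limited scheme into a fully homomorphic one by homomorphically evaluating its own decryption circuit to refresh ciphertexts.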

Posted Content
TL;DR: In this paper, the authors analyzed the Password Authenticated Connection Establishment (PACE) protocol for authenticated key agreement, recently proposed by the German Federal Office for Information Security (BSI) for the deployment in machine readable travel documents and showed that the PACE protocol is secure in the real or random sense of Abdalla, Fouque and Pointcheval, under a number-theoretic assumption related to the Diffie-Hellman problem and assuming random oracles and ideal ciphers.
Abstract: We analyze the Password Authenticated Connection Establishment (PACE) protocol for authenticated key agreement, recently proposed by the German Federal Office for Information Security (BSI) for the deployment in machine readable travel documents. We show that the PACE protocol is secure in the real-or-random sense of Abdalla, Fouque and Pointcheval, under a number-theoretic assumption related to the Diffie-Hellman problem and assuming random oracles and ideal ciphers.
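The shape of the protocol can be sketched as a toy run of a PACE-style flow with a "generic mapping": a nonce encrypted under a password-derived key, a first Diffie-Hellman exchange to map the generator, and a second exchange on the mapped generator. Everything below is an illustrative simplification (toy group, toy stream cipher, no key confirmation); the actual protocol is specified in BSI TR-03110 over standardized groups.

```python
import hashlib
import secrets

p = 2**61 - 1          # Mersenne prime as a toy modulus
g = 3                  # toy generator

def h_int(*parts):
    m = hashlib.sha256()
    for part in parts:
        m.update(part)
    return int.from_bytes(m.digest(), "big")

password = b"123456"
k_pi = h_int(b"pace-kdf", password)          # password-derived key

# Step 1: the chip encrypts a random nonce s under k_pi (toy stream cipher);
# the terminal, knowing the password, recovers it.
s = secrets.randbelow(p)
pad = h_int(b"pace-enc", k_pi.to_bytes(32, "big")) % 2**61
z = s ^ pad
s_terminal = z ^ pad
assert s_terminal == s

# Step 2: generic mapping -- an anonymous DH yields h, then ghat = g^s * h.
x1, y1 = secrets.randbelow(p - 2) + 1, secrets.randbelow(p - 2) + 1
h_shared = pow(pow(g, x1, p), y1, p)
ghat = (pow(g, s, p) * h_shared) % p

# Step 3: a second DH on the mapped generator yields the session key.
x2, y2 = secrets.randbelow(p - 2) + 1, secrets.randbelow(p - 2) + 1
key_chip = pow(pow(ghat, y2, p), x2, p)
key_terminal = pow(pow(ghat, x2, p), y2, p)
assert key_chip == key_terminal
print("PACE-style toy handshake: shared key established")
```

Without the password, z reveals nothing useful about s, so an eavesdropper cannot relate ghat to g; this is the intuition behind the real-or-random password security that the paper proves formally.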

Posted Content
TL;DR: In this article, the authors present a general attack framework for the McEliece cryptosystem and apply it to both schemes in order to recover the private key for most parameters in a few days on a single PC.
Abstract: The McEliece cryptosystem is a promising alternative to conventional public key encryption systems like RSA and ECC. In particular, it is supposed to resist even attackers equipped with quantum computers. Moreover, the encryption process requires only simple binary operations making it a good candidate for low cost devices like RFID tags. However, McEliece's original scheme has the drawback that the keys are very large. Two promising variants have been proposed to overcome this disadvantage. The first one is due to Berger et al. presented at AFRICACRYPT 2009 and the second is due to Barreto and Misoczki presented at SAC 2009. In this paper we first present a general attack framework and apply it to both schemes subsequently. Our framework allows us to recover the private key for most parameters proposed by the authors of both schemes within at most a few days on a single PC.

Posted Content
TL;DR: In this paper, the authors present new software speed records for encryption and decryption using the block cipher AES-128 for different architectures, including 8-bit AVR microcontrollers, NVIDIA graphics processing units (GPUs), and the Cell broadband engine.
Abstract: This paper presents new software speed records for encryption and decryption using the block cipher AES-128 for different architectures. Target platforms are 8-bit AVR microcontrollers, NVIDIA graphics processing units (GPUs) and the Cell broadband engine. The new AVR implementation requires 124.6 and 181.3 cycles per byte for encryption and decryption with a code size of less than two kilobytes. Compared to the previous AVR records for encryption our code is 38 percent smaller and 1.24 times faster. The byte-sliced implementation for the synergistic processing elements of the Cell architecture achieves speeds of 11.7 and 14.4 cycles per byte for encryption and decryption. Similarly, our fastest GPU implementation, running on the GTX 295 and handling many input streams in parallel, delivers throughputs of 0.17 and 0.19 cycles per byte for encryption and decryption respectively. Furthermore, this is the first AES implementation for the GPU which implements both encryption and decryption.

Posted Content
TL;DR: In this paper, a new structural attack on the McEliece/Niederreiter public key cryptosystem based on subcodes of generalized Reed-Solomon codes proposed by Berger and Loidreau is described.
Abstract: In this paper a new structural attack on the McEliece/Niederreiter public key cryptosystem based on subcodes of generalized Reed-Solomon codes proposed by Berger and Loidreau is described. It allows the reconstruction of the private key for almost all practical parameter choices in polynomial time with high probability.

Posted Content
TL;DR: Direct links between Boolean bent functions, generalized Boolean bent programs, and quaternary bent functions are explored and Gray images of bent functions and notions of generalized nonlinearity for functions that are relevant to generalized linear cryptanalysis are studied.
Abstract: Boolean bent functions were introduced by Rothaus (1976) as combinatorial objects related to difference sets, and have since enjoyed a great popularity in symmetric cryptography and low correlation sequence design. In this paper direct links between Boolean bent functions, generalized Boolean bent functions (Schmidt, 2006) and quaternary bent functions (Kumar, Scholtz, Welch, 1985) are explored. We also study Gray images of bent functions and notions of generalized nonlinearity for functions that are relevant to generalized linear cryptanalysis.
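Bentness is easy to check computationally for small n: a Boolean function on n variables (n even) is bent exactly when every Walsh-Hadamard coefficient has absolute value 2^(n/2), i.e. the function is at maximum distance from all affine functions. A minimal sketch, using the classic bent function x1*x2 XOR x3*x4:

```python
from itertools import product

def walsh_spectrum(f, n):
    """Walsh-Hadamard spectrum of f: {0,1}^n -> {0,1},
    computed naively in O(4^n) -- fine for small n."""
    spectrum = []
    for a in product((0, 1), repeat=n):
        s = 0
        for x in product((0, 1), repeat=n):
            dot = sum(ai * xi for ai, xi in zip(a, x)) % 2
            s += (-1) ** (f(x) ^ dot)
        spectrum.append(s)
    return spectrum

# f(x1,x2,x3,x4) = x1*x2 XOR x3*x4, the standard 4-variable bent function.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
spec = walsh_spectrum(f, 4)
is_bent = all(abs(s) == 4 for s in spec)   # 2^(n/2) = 4 for n = 4
print(is_bent)  # True
```

The flat spectrum is what gives bent functions their maximal nonlinearity, and the generalized and quaternary notions studied in the paper impose analogous flatness conditions on the corresponding generalized Walsh transforms.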

Posted Content
TL;DR: A repaired version of the pairing based DAA protocol in the asymmetric setting is given, along with a highly detailed security proof, for the community to check and comment on.
Abstract: In [17, 18] we presented a pairing based DAA protocol in the asymmetric setting, along with a “security proof”. Jiangtao Li has pointed out to us an attack against this published protocol, thus our prior work should not be considered sound. In this paper we give a repaired version, along with a highly detailed security proof. A full paper will be made available shortly. However in the meantime we present this paper for the community to check and comment on.