
Showing papers in "IACR Cryptology ePrint Archive in 2008"


Posted Content
TL;DR: In this article, the authors present a new methodology for realizing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model.
Abstract: We present a new methodology for realizing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption time, and decryption time scale linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under an assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption, which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions.
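The access-control model described here is easy to picture in code. The sketch below (hypothetical names, and no cryptography at all) evaluates the kind of boolean access formula an encryptor would attach to a ciphertext; in the actual schemes such a formula is compiled into a linear secret-sharing structure rather than evaluated directly.

```python
# Toy evaluator for a CP-ABE-style access formula over attributes.
# Illustrates only the policy language; none of the cryptography is modeled.

def satisfies(formula, attrs):
    """formula: ("ATTR", name) | ("AND", f1, f2, ...) | ("OR", f1, f2, ...)"""
    op = formula[0]
    if op == "ATTR":
        return formula[1] in attrs
    if op == "AND":
        return all(satisfies(f, attrs) for f in formula[1:])
    if op == "OR":
        return any(satisfies(f, attrs) for f in formula[1:])
    raise ValueError("unknown operator: " + op)

# e.g. decryptable by any doctor in cardiology or oncology
policy = ("AND", ("ATTR", "doctor"),
                 ("OR", ("ATTR", "cardiology"), ("ATTR", "oncology")))
```

A user's key is associated with a set of attributes; decryption succeeds exactly when that set satisfies the ciphertext's formula.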

1,416 citations


Posted Content
TL;DR: This framework provides an efficient generic transformation from 1-universal to 2-universal hash proof systems and allows one to prove IND-CCA2 security of a hybrid version of Damgård's ElGamal public-key encryption scheme from 1991 under the DDH assumption.
Abstract: We present a new approach to the design of IND-CCA2 secure hybrid encryption schemes in the standard model. Our approach provides an efficient generic transformation from 1-universal to 2-universal hash proof systems. The transformation involves a randomness extractor based on a 4-wise independent hash function as the key derivation function. Our methodology can be instantiated with efficient schemes based on standard intractability assumptions such as Decisional Diffie-Hellman, Quadratic Residuosity, and Paillier's Decisional Composite Residuosity. Interestingly, our framework also allows us to prove IND-CCA2 security of a hybrid version of Damgård's ElGamal public-key encryption scheme from 1991 under the DDH assumption.

361 citations


Posted Content
TL;DR: The cube attack, a technique for solving tweakable polynomials over GF(2) which contain both secret variables (e.g., key bits) and public variables (e.g., plaintext bits or IV bits), was first proposed in this paper.
Abstract: Almost any cryptographic scheme can be described by tweakable polynomials over GF(2), which contain both secret variables (e.g., key bits) and public variables (e.g., plaintext bits or IV bits). The cryptanalyst is allowed to tweak the polynomials by choosing arbitrary values for the public variables, and his goal is to solve the resultant system of polynomial equations in terms of their common secret variables. In this paper we develop a new technique (called a cube attack) for solving such tweakable polynomials, which is a major improvement over several previously published attacks of the same type. For example, on the stream cipher Trivium with a reduced number of initialization rounds, the best previous attack (due to Fischer, Khazaei, and Meier) requires a barely practical complexity of 2^55 to attack 672 initialization rounds, whereas a cube attack can find the complete key of the same variant in 2^19 bit operations (which take less than a second on a single PC). Trivium with 735 initialization rounds (which could not be attacked by any previous technique) can now be broken with 2^30 bit operations. Trivium with 767 initialization rounds can now be broken with 2^45 bit operations, and the complexity of the attack can almost certainly be further reduced to about 2^36 bit operations. Whereas previous attacks were heuristic, had to be adapted to each cryptosystem, had no general complexity bounds, and were not expected to succeed on random-looking polynomials, cube attacks are provably successful when applied to random polynomials of degree d over n secret variables whenever the number m of public variables exceeds d + log_2 n. Their complexity is 2^(d-1)·n + n^2 bit operations, which is polynomial in n and amazingly low when d is small.
Cube attacks can be applied to any block cipher, stream cipher, or MAC which is provided as a black box (even when nothing is known about its internal structure) as long as at least one output bit can be represented by (an unknown) polynomial of relatively low degree in the secret and public variables.
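The core mechanics can be demonstrated on a toy polynomial (the example below is made up, and vastly smaller than Trivium): XOR-summing a degree-3 polynomial over a "cube" of two public variables leaves a superpoly that is linear in the secret variables, whose coefficients an attacker can learn in a preprocessing phase by probing the black box with chosen keys.

```python
from itertools import product

# Hypothetical degree-3 "cipher": 2 secret bits, 3 public bits, over GF(2).
def f(secret, public):
    x1, x2 = secret
    v1, v2, v3 = public
    return (v1 & v2 & x1) ^ (v1 & v2 & v3) ^ (v2 & x2 & v3) ^ v1 ^ x2 ^ (x1 & x2 & v1)

def cube_sum(secret, v3):
    # XOR f over the cube {v1, v2}; every monomial not divisible by v1*v2
    # cancels in pairs, leaving the superpoly x1 + v3.
    total = 0
    for v1, v2 in product((0, 1), repeat=2):
        total ^= f(secret, (v1, v2, v3))
    return total

# Preprocessing: learn the superpoly's constant and linear coefficients
# by probing the black box with chosen secrets.
const = cube_sum((0, 0), 0)
coeffs = [cube_sum((1, 0), 0) ^ const, cube_sum((0, 1), 0) ^ const]
```

Online, one cube sum on the device holding the unknown key then yields one linear equation in the key bits (here it reveals x1 directly).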

350 citations


Posted Content
TL;DR: In this article, the authors considered predicate privacy in the symmetric-key setting and presented a symmetric-key predicate encryption scheme which supports inner product queries, and proved that their scheme achieves both plaintext privacy and predicate privacy.
Abstract: Predicate encryption is a new encryption paradigm which gives a master secret key owner fine-grained control over access to encrypted data. The master secret key owner can generate secret key tokens corresponding to predicates. An encryption of data x can be evaluated using a secret token corresponding to a predicate f; the user learns whether the data satisfies the predicate, i.e., whether f(x) = 1. Prior work on public-key predicate encryption has focused on the notion of data or plaintext privacy, the property that ciphertexts reveal no information about the encrypted data to an attacker other than what is inherently revealed by the tokens the attacker possesses. In this paper, we consider a new notion called predicate privacy, the property that tokens reveal no information about the encoded query predicate. Predicate privacy is inherently impossible to achieve in the public-key setting and has therefore received little attention in prior work. In this work, we consider predicate encryption in the symmetric-key setting and present a symmetric-key predicate encryption scheme which supports inner product queries. We prove that our scheme achieves both plaintext privacy and predicate privacy.
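Inner-product predicates are more expressive than they first look. The plaintext-level sketch below (toy values, with none of the encryption) shows the bare predicate and how an equality test on an attribute is encoded as an inner product; the scheme in the paper additionally hides both the vector inside the ciphertext and the vector inside the token.

```python
q = 97  # toy modulus, for illustration only

def predicate(v, x):
    # inner-product predicate: f_v(x) = 1  iff  <v, x> = 0 (mod q)
    return sum(a * b for a, b in zip(v, x)) % q == 0

def equality_token(t):
    # encode the query "attribute == t": <(-t, 1), (1, a)> = a - t
    return (-t % q, 1)

def encode_attribute(a):
    return (1, a % q)
```

More complex queries (polynomial evaluation, CNF/DNF over equalities) reduce to inner products the same way, which is why this predicate class is the standard target.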

340 citations


Posted Content
TL;DR: In this article, the problem of generating a hard random lattice together with a basis of relatively short vectors was revisited and improved in several ways, most notably by making the output basis asymptotically as short as possible.
Abstract: We revisit the problem of generating a ‘hard’ random lattice together with a basis of relatively short vectors. This problem has gained in importance lately due to new cryptographic schemes that use such a procedure to generate public/secret key pairs. In these applications, a shorter basis corresponds to milder underlying complexity assumptions and smaller key sizes. The contributions of this work are twofold. First, we simplify and modularize an approach originally due to Ajtai (ICALP 1999). Second, we improve the construction and its analysis in several ways, most notably by making the output basis asymptotically as short as possible.

318 citations


Posted Content
TL;DR: The solution removes the burden of verification from the customer, alleviates both the customer and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts.
Abstract: A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible. To make storage services accountable for data loss, we present protocols that allow a third-party auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer's and storage service's fear of data leakage, and provides a method for independent arbitration of data retention contracts.

264 citations


Posted Content
TL;DR: In this article, the authors presented several improvements to Stern's attack on the McEliece cryptosystem, achieving results considerably better than those of Canteaut et al. The attack has been implemented and is now in progress.
Abstract: This paper presents several improvements to Stern’s attack on the McEliece cryptosystem and achieves results considerably better than Canteaut et al. This paper shows that the system with the originally proposed parameters can be broken in just 1400 days by a single 2.4GHz Core 2 Quad CPU, or 7 days by a cluster of 200 CPUs. This attack has been implemented and is now in progress. This paper proposes new parameters for the McEliece and Niederreiter cryptosystems achieving standard levels of security against all known attacks. The new parameters take account of the improved attack; the recent introduction of list decoding for binary Goppa codes; and the possibility of choosing code lengths that are not a power of 2. The resulting public-key sizes are considerably smaller than previous parameter choices for the same level of security.

243 citations


Posted Content
TL;DR: In this article, the authors proposed a new dynamic accumulator scheme based on bilinear maps and showed how to apply it to the problem of revocation of anonymous credentials, proving a credential's validity and updating witnesses both come at (virtually) no cost for credential owners and verifiers.
Abstract: The success of electronic authentication systems, be it e-ID card systems or Internet authentication systems such as CardSpace, highly depends on the provided level of user privacy. An important requirement here is an efficient means for revocation of the authentication credentials. In this paper we consider the problem of revocation for certificate-based privacy-protecting authentication systems. To date, the most efficient solutions for revocation for such systems are based on cryptographic accumulators. Here, an accumulator of all currently valid certificates is published regularly, and each user holds a witness enabling her to prove the validity of her (anonymous) credential while retaining anonymity. Unfortunately, the users' witnesses must be updated at least each time a credential is revoked. For the known solutions, these updates are computationally very expensive for users and/or certificate issuers, which is problematic since, as practice shows, revocation is a frequent event. In this paper, we propose a new dynamic accumulator scheme based on bilinear maps and show how to apply it to the problem of revocation of anonymous credentials. In the resulting scheme, proving a credential's validity and updating witnesses both come at (virtually) no cost for credential owners and verifiers. In particular, updating a witness requires the issuer to do only one multiplication per addition or revocation of a credential, and this task can also be delegated to untrusted entities from which a user could just retrieve the updated witness. We believe that we thereby provide the first authentication system offering privacy protection suitable for implementation with electronic tokens such as eID cards or drivers' licenses.

233 citations


Posted Content
TL;DR: Attribute-Based Signatures (ABS) as discussed by the authors is a new cryptographic primitive, in which a signature attests not to the identity of the individual who endorsed a message, but instead to a (possibly complex) claim regarding the attributes she possesses.
Abstract: We introduce a new and versatile cryptographic primitive called Attribute-Based Signatures (ABS), in which a signature attests not to the identity of the individual who endorsed a message, but instead to a (possibly complex) claim regarding the attributes she possesses. ABS offers: – A strong unforgeability guarantee for the verifier, that the signature was produced by a single party whose attributes satisfy the claim being made; i.e., not by a collusion of individuals who pooled their attributes together. – A strong privacy guarantee for the signer, that the signature reveals nothing about the identity or attributes of the signer beyond what is explicitly revealed by the claim being made. We formally define the security requirements of ABS as a cryptographic primitive, and then describe an efficient ABS construction based on groups with bilinear pairings. We prove that our construction is secure in the generic group model. Finally, we illustrate several applications of this new tool; in particular, ABS fills a critical security requirement in attribute-based messaging (ABM) systems. A powerful feature of our ABS construction is that unlike many other attribute-based cryptographic primitives, it can be readily used in a multi-authority setting, wherein users can make claims involving combinations of attributes issued by independent and mutually distrusting authorities.

185 citations


Posted Content
TL;DR: In this article, the authors studied an adaptive variant of oblivious transfer in which a sender has N messages and a receiver can adaptively choose to receive k of them, one after the other, in such a way that the sender learns nothing about the receiver's selections, and the receiver only learns about the k requested messages.
Abstract: We study an adaptive variant of oblivious transfer in which a sender has N messages, of which a receiver can adaptively choose to receive k one-after-the-other, in such a way that (a) the sender learns nothing about the receiver’s selections, and (b) the receiver only learns about the k requested messages. We propose two practical protocols for this primitive that achieve a stronger security notion than previous schemes with comparable efficiency. In particular, by requiring full simulatability for both sender and receiver security, our notion prohibits a subtle selective-failure attack not addressed by the security notions achieved by previous practical schemes. Our first protocol is a very efficient generic construction from unique blind signatures in the random oracle model. The second construction does not assume random oracles, but achieves remarkable efficiency with only a constant number of group elements sent during each transfer. This second construction uses novel techniques for building efficient simulatable protocols.

181 citations


Posted Content
TL;DR: In this article, the authors introduce batch verifiers for a wide variety of regular, identity-based, group, ring and aggregate signature schemes, which answers an open problem of Camenisch et al. on batching group signatures, and provides an efficient, effective approach to verifying multiple signatures from (possibly) different signers.
Abstract: In many applications, it is desirable to work with signatures that are both short and yet allow many messages from different signers to be verified very quickly. RSA signatures satisfy the latter condition, but are generally thousands of bits in length. Recent developments in pairing-based cryptography produced a number of "short" signatures which provide equivalent security in a fraction of the space. Unfortunately, verifying these signatures is computationally intensive due to the expensive pairing operation. In an attempt to simultaneously achieve "short and fast" signatures, Camenisch, Hohenberger and Pedersen (Eurocrypt 2007) showed how to batch verify two pairing-based schemes so that the total number of pairings was independent of the number of signatures to verify. In this work, we present both theoretical and practical contributions. On the theoretical side, we introduce new batch verifiers for a wide variety of regular, identity-based, group, ring and aggregate signature schemes. These are the first constructions for batching group signatures, which answers an open problem of Camenisch et al. On the practical side, we implement each of these algorithms and compare each batching algorithm to doing individual verifications. Our goal is to test whether batching is practical; that is, whether the benefits of removing pairings significantly outweigh the cost of the additional operations required for batching, such as group membership testing, randomness generation, and additional modular exponentiations and multiplications. We experimentally verify that the theoretical results of Camenisch et al. and this work, indeed, provide an efficient, effective approach to verifying multiple signatures from (possibly) different signers.
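The idea underneath all of these batch verifiers is the small-exponents test of Bellare, Garay, and Rabin: combine the individual verification equations with short random coefficients so that one combined check replaces n separate ones. A toy discrete-log version (parameters made up for illustration; the paper's verifiers apply the same trick to pairing equations):

```python
import secrets

p, g = 101, 2  # toy group: 2 generates Z_101^* (illustration only)

def batch_verify(claims, ell=20):
    # claims: list of (x_i, y_i), each claiming y_i = g^x_i mod p.
    # Combine with random odd ell-bit exponents r_i (odd => nonzero) and
    # check the single equation  prod y_i^{r_i} = g^{sum r_i * x_i}.
    rs = [secrets.randbits(ell) | 1 for _ in claims]
    lhs, exp = 1, 0
    for (x, y), r in zip(claims, rs):
        lhs = lhs * pow(y, r, p) % p
        exp += r * x
    return lhs == pow(g, exp, p)
```

A batch containing a bad claim fails the combined check except with small probability over the r_i; the savings come from the combined equation being far cheaper than n individual verifications.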

Posted Content
TL;DR: This work shows how to build a certain "trapdoor test" that allows us to effectively answer decision oracle queries for the twin Diffie-Hellman problem without knowing any of the corresponding discrete logarithms, and presents a new variant of ElGamal encryption with very short ciphertexts, and with a very simple and tight security proof, in the random oracle model, under the assumption that the ordinary Diffie-Hellman problem is hard.
Abstract: We propose a new computational problem called the twin Diffie-Hellman problem. This problem is closely related to the usual (computational) Diffie-Hellman problem and can be used in many of the same cryptographic constructions that are based on the Diffie-Hellman problem. Moreover, the twin Diffie-Hellman problem is at least as hard as the ordinary Diffie-Hellman problem. However, we are able to show that the twin Diffie-Hellman problem remains hard, even in the presence of a decision oracle that recognizes solutions to the problem — this is a feature not enjoyed by the Diffie-Hellman problem in general. Specifically, we show how to build a certain “trapdoor test” that allows us to effectively answer decision oracle queries for the twin Diffie-Hellman problem without knowing any of the corresponding discrete logarithms. Our new techniques have many applications. As one such application, we present a new variant of ElGamal encryption with very short ciphertexts, and with a very simple and tight security proof, in the random oracle model, under the assumption that the ordinary Diffie-Hellman problem is hard. We present several other applications as well, including: a new variant of Diffie and Hellman’s non-interactive key exchange protocol; a new variant of Cramer-Shoup encryption, with a very simple proof in the standard model; a new variant of Boneh-Franklin identity-based encryption, with very short ciphertexts; a more robust version of a password-authenticated key exchange protocol of Abdalla and Pointcheval.
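The trapdoor test itself is compact enough to sketch. In a group of prime order q (toy parameters below: the order-11 subgroup of Z_23^*), given an X1 whose discrete log nobody knows, choose random r and s, publish X2 = g^s · X1^(-r), and accept a claimed pair of twin-DH values (Z1, Z2) for a query Y exactly when Z1^r · Z2 = Y^s; a wrong pair slips through with probability about 1/q.

```python
import secrets

p, q, g = 23, 11, 4  # g = 2^2 generates the order-11 subgroup of Z_23^*

def make_trapdoor_test(X1):
    r = 1 + secrets.randbelow(q - 1)   # kept nonzero in this sketch
    s = secrets.randbelow(q)
    X2 = pow(g, s, p) * pow(X1, q - r, p) % p   # g^s * X1^(-r)
    def check(Y, Z1, Z2):
        # accepts iff Z1 = dh(X1, Y) and Z2 = dh(X2, Y), up to ~1/q error
        return pow(Z1, r, p) * Z2 % p == pow(Y, s, p)
    return X2, check
```

Correctness: in the exponent x2 = s - r·x1, so an honest pair gives Z1^r · Z2 = Y^(r·x1 + x2) = Y^s, and the checker never needed x1 or x2.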

Posted Content
TL;DR: This paper constructs a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption, and allows outsourcing of dynamic data, i.e, it efficiently supports operations, such as block modification, deletion and append.
Abstract: Storage outsourcing is a rising trend which prompts a number of interesting security issues, many of which have been extensively investigated in the past. However, Provable Data Possession (PDP) is a topic that has only recently appeared in the research literature. The main issue is how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) outsourced data. The storage server is assumed to be untrusted in terms of both security and reliability. (In other words, it might maliciously or accidentally erase hosted data; it might also relegate it to slow or off-line storage.) The problem is exacerbated by the client being a small computing device with limited resources. Prior work has addressed this problem using either public key cryptography or requiring the client to outsource its data in encrypted form. In this paper, we construct a highly efficient and provably secure PDP technique based entirely on symmetric key cryptography, while not requiring any bulk encryption. Also, in contrast with its predecessors, our PDP technique allows outsourcing of dynamic data, i.e., it efficiently supports operations such as block modification, deletion and append.
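A stripped-down version of the precomputed-challenge idea behind symmetric-key PDP (names and details here are illustrative; the actual construction additionally supports updates and can store the tokens at the server in protected form): before uploading, the client derives a limited supply of one-time challenges, each committing to a digest of a few randomly chosen blocks.

```python
import hashlib, secrets

def precompute_challenges(blocks, num_challenges, per_challenge=2):
    # Client, offline: one-time (nonce, indices, expected digest) triples.
    chals = []
    for _ in range(num_challenges):
        nonce = secrets.token_bytes(16)
        idx = sorted(secrets.randbelow(len(blocks)) for _ in range(per_challenge))
        expected = hashlib.sha256(nonce + b"".join(blocks[j] for j in idx)).digest()
        chals.append((nonce, idx, expected))
    return chals

def prove(stored_blocks, nonce, idx):
    # Server: recompute the digest over the challenged blocks.
    return hashlib.sha256(nonce + b"".join(stored_blocks[j] for j in idx)).digest()
```

Each verification costs the server a few hashes and the client a comparison; a server that discarded a challenged block can answer correctly only by guessing the digest.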

Posted Content
TL;DR: In this article, the authors introduce twisted Edwards curves, a generalization of the recently introduced Edwards curve, which includes more curves over finite fields, and in particular every elliptic curve in Montgomery form.
Abstract: This paper introduces “twisted Edwards curves,” a generalization of the recently introduced Edwards curves; shows that twisted Edwards curves include more curves over finite fields, and in particular every elliptic curve in Montgomery form; shows how to cover even more curves via isogenies; presents fast explicit formulas for twisted Edwards curves in projective and inverted coordinates; and shows that twisted Edwards curves save time for many curves that were already expressible as Edwards curves.
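The curve shape and its unified addition law are short enough to state concretely. The sketch below uses a made-up toy curve over F_13 with a = -1 a square and d = 2 a non-square, the condition under which the twisted Edwards addition law is complete (no exceptional cases).

```python
p, a, d = 13, 12, 2   # toy curve a*x^2 + y^2 = 1 + d*x^2*y^2 over F_13, a = -1

def on_curve(P):
    x, y = P
    return (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p

def add(P, Q):
    # unified twisted Edwards addition; (0, 1) is the neutral element
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, p - 2, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - t, p - 2, p) % p
    return x3, y3
```

The same formulas work for doubling and for adding the neutral element, which is exactly the uniformity that makes Edwards-shaped curves attractive for fast, exception-free implementations.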


Posted Content
TL;DR: Dodis et al. as mentioned in this paper proposed a two-round authenticated key agreement protocol in which Alice and Bob use a weak secret W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve.
Abstract: We study the question of basing symmetric-key cryptography on weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information-theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e., one-message) protocols do not work when k ≤ n/2, and require poor parameters even when n/2 < k < n. On the other hand, for arbitrary values of k, we design a communication-efficient two-round (challenge-response) protocol extracting nearly k random bits. This dramatically improves the previous construction of Renner and Wolf [RW03], which requires Θ(λ + log n) rounds where λ is the security parameter. Our solution takes a new approach by studying and constructing "non-malleable" seeded randomness extractors: if an attacker sees a random seed X and comes up with an arbitrarily related seed X′, then we bound the relationship between R = Ext(W; X) and R′ = Ext(W; X′). We also extend our two-round key agreement protocol to the "fuzzy" setting, where Alice and Bob share "close" (but not equal) secrets W_A and W_B, and to the Bounded Retrieval Model (BRM), where the size of the secret W is huge.

Posted Content
TL;DR: This work presents an efficient and UC-secure adaptive k-out-of-N OT protocol in the same model as Peikert et al.
Abstract: In an oblivious transfer (OT) protocol, a Sender with messages M_1, ..., M_N and a Receiver with indices σ_1, ..., σ_k ∈ [1, N] interact in such a way that at the end the Receiver obtains M_{σ_1}, ..., M_{σ_k} without learning anything about the other messages and the Sender does not learn anything about σ_1, ..., σ_k. In an adaptive protocol, the Receiver may obtain M_{σ_{i-1}} before deciding on σ_i. Efficient adaptive OT protocols are interesting both as a building block for secure multiparty computation and for enabling oblivious searches on medical and patent databases. Historically, adaptive OT protocols were analyzed with respect to a "half-simulation" definition which Naor and Pinkas showed to be flawed. In 2007, Camenisch, Neven, and shelat, and subsequent other works, demonstrated efficient adaptive protocols in the full-simulation model. These protocols, however, all use standard rewinding techniques in their proofs of security and thus are not universally composable. Recently, Peikert, Vaikuntanathan and Waters presented universally composable (UC) non-adaptive OT protocols (for the 1-out-of-2 variant). However, it is not clear how to preserve UC security while extending these protocols to the adaptive k-out-of-N setting. Further, any such attempt would seem to require O(N) computation per transfer for a database of size N. In this work, we present an efficient and UC-secure adaptive k-out-of-N OT protocol, where after an initial commitment to the database, the cost of each transfer is constant. Our construction is secure under bilinear assumptions in the standard model.

Posted Content
TL;DR: This document consists of a description of attack methodologies and a collection of detailed attacks upon RFID protocols to serve as a quick and easy reference and it will be updated as new attacks are found.
Abstract: This document consists of a description of attack methodologies and a collection of detailed attacks upon RFID protocols. It is meant to serve as a quick and easy reference and it will be updated as new attacks are found. Currently the only attacks on protocols shown in full detail are the authors’ original attacks with references to similar attacks on other protocols. The main security properties considered are authentication, untraceability, and desynchronization resistance.

Posted Content
TL;DR: Two desynchronization attacks that break the ultralightweight RFID authentication protocol are found, and two patches that slightly modify the protocol are presented in the paper.
Abstract: Recently, Chien proposed an ultralightweight RFID authentication protocol to prevent all possible attacks. However, we find two de-synchronization attacks to break the protocol.

Posted Content
TL;DR: In this paper, it was shown that the Feistel construction with 6 rounds is sufficient to construct an ideal cipher from a random oracle and that 5 rounds are insufficient by providing a simple attack.
Abstract: The Random Oracle Model and the Ideal Cipher Model are two well known idealised models of computation for proving the security of cryptosystems. At Crypto 2005, Coron et al. showed that security in the random oracle model implies security in the ideal cipher model; namely they showed that a random oracle can be replaced by a block cipher-based construction, and the resulting scheme remains secure in the ideal cipher model. The other direction was left as an open problem, i.e. constructing an ideal cipher from a random oracle. In this paper we solve this open problem and show that the Feistel construction with 6 rounds is enough to obtain an ideal cipher; we also show that 5 rounds are insufficient by providing a simple attack. This contrasts with the classical Luby-Rackoff result that 4 rounds are necessary and sufficient to obtain a (strong) pseudo-random permutation from a pseudo-random function.
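The object being analyzed is easy to exhibit: a balanced Feistel network whose round functions are independent "random oracles" (instantiated below with SHA-256 purely for illustration). Six such rounds give the paper's indifferentiability result; the sketch just shows the construction and that it is an efficiently invertible permutation regardless of the round functions.

```python
import hashlib

MASK = 0xFFFFFFFF  # 32-bit halves of a 64-bit block

def F(i, half):
    # round function i, modeled as a random oracle via SHA-256
    h = hashlib.sha256(bytes([i]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big")

def encrypt(block, rounds=6):
    L, R = block >> 32, block & MASK
    for i in range(rounds):
        L, R = R, L ^ F(i, R)
    return (L << 32) | R

def decrypt(block, rounds=6):
    L, R = block >> 32, block & MASK
    for i in reversed(range(rounds)):
        L, R = R ^ F(i, L), L
    return (L << 32) | R
```

The point of the paper is not this code but the simulation argument: with 6 rounds the permutation is indistinguishable from an ideal cipher even by a distinguisher that queries the round functions directly, while 5 rounds admit a concrete attack.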

Posted Content
TL;DR: In this paper, the authors show how to change one coordinate function of an almost perfect nonlinear (APN) function in order to obtain new examples, and show that the approach can be used to construct "non-quadratic" APN functions.
Abstract: Following an example in [13], we show how to change one coordinate function of an almost perfect nonlinear (APN) function in order to obtain new examples. It turns out that this is a very powerful method to construct new APN functions. In particular, we show that the approach can be used to construct "non-quadratic" APN functions. This new example is in remarkable contrast to all recently constructed functions, which have all been quadratic.

1 Preliminaries

In this paper, we consider functions F: F_2^n → F_2^n with "good" differential and linear properties. Motivated by applications in cryptography, a lot of research has been done to construct functions which are "as nonlinear as possible". We discuss two possibilities to define nonlinearity: one approach uses differential properties of linear functions, the other measures the "distance" to linear functions. Let us begin with the differential properties. Given F: F_2^n → F_2^n, we define ∆_F(a, b) := |{x : F(x + a) − F(x) = b}|. We have ∆_F(0, 0) = 2^n, and ∆_F(0, b) = 0 if b ≠ 0. Since we are working in fields of characteristic 2, we may replace the "−" by "+" and write F(x + a) + F(x) instead of F(x − a) − F(x). We say that F is almost perfect nonlinear (APN) if ∆_F(a, b) ∈ {0, 2} for all a, b ∈ F_2^n, a ≠ 0. Note that ∆_F(a, b) ∈ {0, 2^n} if F is linear, hence the condition ∆_F(a, b) ∈ {0, 2} identifies functions which are quite different from linear mappings. Since we are working in characteristic 2, it is impossible that ∆_F(a, b) = 1 for some a, b, since the values ∆_F(a, b) must be even: if x is a solution of F(x + a) + F(x) = b, then so is x + a. In the case of odd characteristic, functions F: F_q^n → F_q^n with ∆_F(a, b) = 1 for all a ≠ 0 do exist, and they are called perfect nonlinear or planar. In the last few years, many new APN functions have been constructed. The first example of a non-power mapping has been described in [26]. Infinite series are contained in [5, 10, 11, 12, 13, 16, 17].
Also some new planar functions have been found, see [15, 22, 36]. There may be a possibility for a unified treatment of (some of) these constructions in the even and odd characteristic case. In particular, we suggest to look more carefully at the underlying design of an APN function, similar to the designs corresponding to planar functions, which are projective planes, see [29]. An equivalent function has been found independently by Brinkmann and Leander [7]. However, they claimed that their function is CCZ equivalent to a quadratic one. In this paper we give several reasons why this new function is not equivalent to a quadratic one.
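The definitions above are directly checkable by machine. As a sanity check (field size and modulus chosen here purely for illustration), the Gold function F(x) = x^3 is APN over F_2^4: every nonzero a gives ∆_F(a, b) ∈ {0, 2}.

```python
N = 4            # work in GF(2^4)
MOD = 0b10011    # x^4 + x + 1, an irreducible modulus for GF(2^4)

def gf_mul(a, b):
    # carry-less (polynomial) multiplication modulo MOD
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> N) & 1:
            a ^= MOD
    return r

def F(x):
    return gf_mul(gf_mul(x, x), x)   # the Gold function x^3

def delta(a, b):
    # |{x : F(x + a) + F(x) = b}|; in characteristic 2, + is XOR
    return sum(1 for x in range(1 << N) if F(x ^ a) ^ F(x) == b)
```

Exhaustive differential tables like this one are how candidate functions from the paper's modification method are screened before any structural analysis.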

Posted Content
TL;DR: This paper gives the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski; the first scheme is built from BLS signatures and is secure in the random oracle model.
Abstract: In a proof-of-retrievability system, a data storage center must prove to a verifier that he is actually storing all of a client’s data. The central challenge is to build systems that are both efficient and provably secure — that is, it should be possible to extract the client’s data from any prover that passes a verification check. In this paper, we give the first proof-of-retrievability schemes with full proofs of security against arbitrary adversaries in the strongest model, that of Juels and Kaliski. Our first scheme, built from BLS signatures and secure in the random oracle model, has the shortest query and response of any proof-of-retrievability with public verifiability. Our second scheme, which builds elegantly on pseudorandom functions (PRFs) and is secure in the standard model, has the shortest response of any proof-of-retrievability scheme with private verifiability (but a longer query). Both schemes rely on homomorphic properties to aggregate a proof into one small authenticator value.
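The homomorphic aggregation behind the PRF-based scheme can be sketched in a few lines (heavily simplified; the real construction also erasure-codes the file and splits blocks into sectors). The secret is a PRF key and a field element alpha; block i gets authenticator sigma_i = PRF(i) + alpha·m_i, and any weighted combination of blocks can be checked against the same weighted combination of authenticators.

```python
import hashlib

P = (1 << 127) - 1   # a Mersenne prime, used as a toy field modulus

def prf(key, i):
    return int.from_bytes(hashlib.sha256(key + i.to_bytes(8, "big")).digest(), "big") % P

def authenticate(key, alpha, blocks):
    # client: sigma_i = PRF_key(i) + alpha * m_i, stored with the data
    return [(prf(key, i) + alpha * m) % P for i, m in enumerate(blocks)]

def prove(blocks, sigmas, challenge):
    # server: aggregate challenged blocks/authenticators with weights nu_i
    mu = sum(nu * blocks[i] for i, nu in challenge) % P
    sigma = sum(nu * sigmas[i] for i, nu in challenge) % P
    return mu, sigma

def verify(key, alpha, challenge, mu, sigma):
    return sigma == (sum(nu * prf(key, i) for i, nu in challenge) + alpha * mu) % P
```

The response is two field elements however many blocks are challenged, which is the "shortest response" property; extraction works because any prover answering many random challenges correctly must effectively know the challenged blocks.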

Posted Content
TL;DR: In this paper, the authors propose a new methodology for rational secret sharing leading to various instantiations (in both the two-party and multi-party settings) that are simple and efficient in terms of computation, share size, and round complexity.
Abstract: We propose a new methodology for rational secret sharing leading to various instantiations (in both the two-party and multi-party settings) that are simple and efficient in terms of computation, share size, and round complexity. Our protocols do not require physical assumptions or simultaneous channels, and can even be run over asynchronous, point-to-point networks. We also propose new equilibrium notions (namely, computational versions of strict Nash equilibrium and stability with respect to trembles) and prove that our protocols satisfy them. These notions guarantee, roughly speaking, that at each point in the protocol there is a unique legal message a party can send. This, in turn, ensures that protocol messages cannot be used as subliminal channels, something achieved in prior work only by making strong assumptions on the communication network.

Posted Content
TL;DR: A wide variety of common CPU architectures — amd64, ppc32, sparcv9, and x86 — are discussed in detail, along with several specific microarchitectures.
Abstract: This paper presents new speed records for AES software, taking advantage of (1) architecture-dependent reduction of instructions used to compute AES and (2) microarchitecture-dependent reduction of cycles used for those instructions. A wide variety of common CPU architectures—amd64, ppc32, sparcv9, and x86—are discussed in detail, along with several specific microarchitectures.

Posted Content
TL;DR: In this paper, a pseudorandom generator with small locality was constructed by connecting the outputs to the inputs using any sufficiently good unbalanced expander, and it is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to fewer than |S| inputs.
Abstract: We construct a new public-key encryption scheme based on two assumptions: 1. One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently good unbalanced expander. 2. It is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to fewer than |S| inputs. The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state of the art.

Posted Content
TL;DR: In this article, the authors show that the question of complete fairness in two-party secure computation without an honest majority is far from closed, and show feasibility of obtaining complete fairness when computing any function over polynomial-size domains that does not contain an embedded XOR.
Abstract: In the setting of secure two-party computation, two mutually distrusting parties wish to compute some function of their inputs while preserving, to the extent possible, various security properties such as privacy, correctness, and more. One desirable property is fairness which guarantees, informally, that if one party receives its output, then the other party does too. Cleve (STOC 1986) showed that complete fairness cannot be achieved in general without an honest majority. Since then, the accepted folklore has been that nothing non-trivial can be computed with complete fairness in the two-party setting, and the problem has been treated as closed since the late ’80s. In this paper, we demonstrate that this folklore belief is false by showing completely-fair protocols for various non-trivial functions in the two-party setting based on standard cryptographic assumptions. We first show feasibility of obtaining complete fairness when computing any function over polynomial-size domains that does not contain an “embedded XOR”; this class of functions includes boolean AND/OR as well as Yao’s “millionaires’ problem”. We also demonstrate feasibility for certain functions that do contain an embedded XOR, and prove a lower bound showing that any completely-fair protocol for such functions must have round complexity super-logarithmic in the security parameter. Our results demonstrate that the question of completely-fair secure computation without an honest majority is far from closed.

Posted Content
TL;DR: This paper exhibits similar algebraic and differential attacks that will reduce published Rainbow-like schemes below their security levels, and discusses how parameters for Rainbow and TTS schemes should be chosen for practical applications.
Abstract: A recently proposed class of multivariate Public-Key Cryptosystems, the Rainbow-like Digital Signature Schemes, in which successive sets of central variables are obtained from previous ones by solving linear equations, seems to lead to efficient schemes (TTS, TRMS, and Rainbow) that perform well on systems with low computational resources. Recently SFLASH (C∗−) was broken by Dubois, Fouque, Shamir, and Stern via a differential attack. In this paper, we exhibit similar algebraic and differential attacks that will reduce published Rainbow-like schemes below their security levels. We will also discuss how parameters for Rainbow and TTS schemes should be chosen for practical applications.

Posted Content
TL;DR: SMS4 is a Chinese block cipher standard, mandated for use in protecting wireless networks, and issued in January 2006 as discussed by the authors, which has 32 rounds, each of which modifies one of the four 32-bit words by xoring it with a keyed function of the other three words.
Abstract: SMS4 is a Chinese block cipher standard, mandated for use in protecting wireless networks, and issued in January 2006. The input, output, and key of SMS4 are each 128 bits. The algorithm has 32 rounds, each of which modifies one of the four 32-bit words that make up the block by xoring it with a keyed function of the other three words. Encryption and decryption have the same structure except that the round key schedule for decryption is the reverse of the round key schedule for encryption.
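The round structure described above can be sketched as a generalized Feistel network. This is not the real cipher: the keyed function T below is an illustrative placeholder standing in for SMS4's actual byte-wise S-box and linear diffusion transform, and no key schedule is implemented.

```python
# Sketch of SMS4's generalized-Feistel structure. Hedged simplification:
# T is a placeholder, not SMS4's real round function (an 8-bit S-box applied
# to each byte followed by a rotation-XOR linear transform).

MASK32 = 0xFFFFFFFF

def T(word: int, round_key: int) -> int:
    # Placeholder keyed function; illustrative rotation only.
    x = (word ^ round_key) & MASK32
    return ((x << 2) | (x >> 30)) & MASK32

def encrypt_block(x0, x1, x2, x3, round_keys):
    # 32 rounds; each round replaces one 32-bit word with itself XORed
    # against a keyed function of the other three words.
    x = [x0, x1, x2, x3]
    for rk in round_keys:  # len(round_keys) == 32
        new = x[0] ^ T(x[1] ^ x[2] ^ x[3], rk)
        x = [x[1], x[2], x[3], new]
    return x[3], x[2], x[1], x[0]  # output words are reversed, as in SMS4

def decrypt_block(y0, y1, y2, y3, round_keys):
    # Decryption is the same structure with the round-key schedule reversed.
    return encrypt_block(y0, y1, y2, y3, list(reversed(round_keys)))
```

The word reversal at the end is what lets encryption and decryption share one routine: running the same rounds with the reversed key schedule unwinds each round in turn.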

Posted Content
TL;DR: In this article, the authors investigate how template attacks can be applied to implementations of an asymmetric cryptographic algorithm on a 32-bit platform and show that even SPA-resistant implementations of ECDSA on a typical 32-bit platform succumb to template-based SPA attacks.
Abstract: Template attacks have been considered exclusively in the context of implementations of symmetric cryptographic algorithms on 8-bit devices. Within these scenarios, they have proven to be the most powerful attacks. This is not surprising because they assume the most powerful adversaries. In this article we investigate how template attacks can be applied to implementations of an asymmetric cryptographic algorithm on a 32-bit platform. The asymmetric cryptosystem under scrutiny is the elliptic curve digital signature algorithm (ECDSA). ECDSA is particularly suitable for 32-bit platforms. In this article we show that even SPA-resistant implementations of ECDSA on a typical 32-bit platform succumb to template-based SPA attacks. The only way to secure such implementations against template-based SPA attacks is to make them resistant against DPA attacks.

Posted Content
TL;DR: In this paper, the authors improved a discrete variant of Tardos's codes and gave a security proof of their codes under an assumption weaker than the original Marking Assumption.
Abstract: It has been proven that the code lengths of Tardos's collusion-secure fingerprinting codes are of theoretically minimal order with respect to the number of adversarial users (pirates). However, the code lengths can be further reduced as some preceding studies have revealed. In this article we improve a recent discrete variant of Tardos's codes, and give a security proof of our codes under an assumption weaker than the original Marking Assumption. Our analysis shows that our codes have significantly shorter lengths than Tardos's codes. For example, when c = 8, our code length is about 4.94% of Tardos's code in a practical setting and about 4.62% in a certain limit case. Our code lengths for large c are asymptotically about 5.35% of Tardos's codes.