
Showing papers presented at "International Conference on Cryptology in Africa in 2009"


Book ChapterDOI
19 Jun 2009
TL;DR: The result suggests that a decoding attack against the variant has little chance of being better than the general decoding attack against the classical McEliece cryptosystem; a new NP-complete decision problem called quasi-cyclic syndrome decoding is also introduced.
Abstract: The McEliece cryptosystem is one of the oldest public-key cryptosystems ever designed. It is also the first public-key cryptosystem based on linear error-correcting codes. Its main advantage is to have very fast encryption and decryption functions. However, it suffers from a major drawback: it requires a very large public key, which makes it very difficult to use in many practical situations. A possible solution is to take advantage of quasi-cyclic codes because of their compact representation. On the other hand, for a fixed level of security, the use of optimal codes like Maximum Distance Separable (MDS) ones makes it possible to use smaller codes. Essentially the only known family of MDS codes with an efficient decoding algorithm is the class of Generalized Reed-Solomon (GRS) codes. However, it is well known that GRS codes and quasi-cyclic codes do not by themselves represent secure solutions. In this paper we propose a new general method to reduce the public key size by constructing quasi-cyclic Alternant codes over a relatively small field like ${\mathbb{F}}_{2^8}$. We introduce a new method of hiding the structure of a quasi-cyclic GRS code. The idea is to start from a Reed-Solomon code in quasi-cyclic form defined over a large field. We then apply three transformations that preserve the quasi-cyclic feature. First, we randomly block-shorten the RS code. Next, we transform it to get a Generalized Reed-Solomon code, and lastly we take the subfield subcode over a smaller field. We show that all existing structural attacks are infeasible. We also introduce a new NP-complete decision problem called quasi-cyclic syndrome decoding. This result suggests that a decoding attack against our variant has little chance of being better than the general one against the classical McEliece cryptosystem. We propose a system with several parameter sizes, from 6,800 to 20,000 bits, with a security level ranging from $2^{80}$ to $2^{120}$.
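
As a quick illustration of why quasi-cyclicity shrinks public keys, here is a minimal Python sketch (toy sizes and layout of our own choosing, not the paper's construction): a quasi-cyclic generator matrix consists of circulant blocks, each fully determined by its first row, so only one row per block needs to be published.

```python
# Minimal sketch: quasi-cyclic matrices compress because each circulant
# block is determined by its first row. Toy parameters, illustration only.
import numpy as np

def circulant(first_row: np.ndarray) -> np.ndarray:
    """Build a circulant matrix over GF(2) from its first row."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)]) % 2

block, rows, cols = 8, 2, 4          # 8x8 circulant blocks in a 2x4 layout
rng = np.random.default_rng(1)
first_rows = rng.integers(0, 2, size=(rows, cols, block))

# Full generator matrix vs. its compact quasi-cyclic description.
G = np.block([[circulant(first_rows[i, j]) for j in range(cols)]
              for i in range(rows)])
full_bits = G.size                   # everything, row by row
compact_bits = first_rows.size       # one row per circulant block
print(G.shape, full_bits, compact_bits)   # (16, 32) 512 64 -> 8x smaller
```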

235 citations


Book ChapterDOI
19 Jun 2009
TL;DR: In this paper, the authors propose threshold attribute-based signatures (t-ABS), which enable a signature holder to prove possession of signatures by revealing only the relevant attributes of the signer, hence providing signer-attribute privacy for the signature holder.
Abstract: In this paper we propose threshold attribute-based signatures (t-ABS). A t-ABS scheme enables a signature holder to prove possession of signatures by revealing only the relevant attributes of the signer, hence providing signer-attribute privacy for the signature holder. We define t-ABS schemes, formalize their security and propose two t-ABS schemes: a basic scheme secure against selective forgery and a second one secure against existential forgery, both provable in the standard model, assuming hardness of the CDH problem. We show that our basic t-ABS scheme can be augmented with two extra protocols that are used for efficiently issuing and verifying t-ABS signatures on committed values. We call the augmented scheme a threshold attribute-based c-signature scheme (t-ABCS). We show how a t-ABCS scheme can be used to realize a secure threshold attribute-based anonymous credential system (t-ABACS) providing issuer-attribute privacy. We propose a security model for t-ABACS, give a concrete scheme using the t-ABCS scheme, and prove that the credential system is secure if the t-ABCS scheme is secure.

222 citations


Book ChapterDOI
19 Jun 2009
TL;DR: Simulations show that by inducing a single random byte fault at the input of the eighth round of AES, the block cipher key can be deduced; with two faulty ciphertext pairs, no brute-force search is needed.
Abstract: In the present paper a new fault-based attack is proposed against AES-Rijndael. The paper shows that by inducing a single random byte fault at the input of the eighth round of the AES algorithm, the block cipher key can be deduced. Simulations show that when two faulty ciphertext pairs are generated, the key can be deduced exactly without any brute-force search. Further results show that with one single faulty ciphertext pair, the AES key can be ascertained with a brute-force search of $2^{32}$.
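
The attack hinges on how AES diffuses a single-byte fault. The self-contained Python sketch below (our illustration, not the paper's attack code) shows the first diffusion step: MixColumns is linear over GF(2^8), so a one-byte difference becomes a predictable four-byte column difference.

```python
# Sketch of the diffusion a fault attack exploits: one faulty byte spreads
# to all four bytes of a column after MixColumns. Standalone illustration.

def xtime(b: int) -> int:
    """Multiply by x in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    b <<= 1
    return (b ^ 0x1B) & 0xFF if b & 0x100 else b

def gmul(a: int, b: int) -> int:
    """GF(2^8) multiplication."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def mix_column(col):
    """AES MixColumns applied to a single 4-byte column."""
    a0, a1, a2, a3 = col
    return [gmul(a0, 2) ^ gmul(a1, 3) ^ a2 ^ a3,
            a0 ^ gmul(a1, 2) ^ gmul(a2, 3) ^ a3,
            a0 ^ a1 ^ gmul(a2, 2) ^ gmul(a3, 3),
            gmul(a0, 3) ^ a1 ^ a2 ^ gmul(a3, 2)]

col = [0x32, 0x88, 0x31, 0xE0]
faulty = [0x32 ^ 0x55, 0x88, 0x31, 0xE0]     # single-byte fault 0x55
diff = [x ^ y for x, y in zip(mix_column(col), mix_column(faulty))]
print([hex(d) for d in diff])  # ['0xaa', '0x55', '0x55', '0xff']: all 4 bytes differ
```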

166 citations


Book ChapterDOI
19 Jun 2009
TL;DR: The conclusion is drawn that the Schnorr-like identity-based signature scheme is arguably the most efficient such scheme known to date.
Abstract: The use of concatenated Schnorr signatures [Sch91] for the hierarchical delegation of public keys is a well-known technique. In this paper we carry out a thorough analysis of the identity-based signature scheme that this technique yields. The resulting scheme is of interest since it is intuitive, simple and does not require pairings. We prove that the scheme is secure against existential forgery on adaptive chosen message and adaptive identity attacks using a variant of the Forking Lemma [PS00]. The security is proven in the Random Oracle Model under the discrete logarithm assumption. Next, we provide an estimation of its performance, including a comparison with the state of the art on identity-based signatures. We draw the conclusion that the Schnorr-like identity-based signature scheme is arguably the most efficient such scheme known to date.
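
To make the concatenated-Schnorr idea concrete, here is a toy Python sketch with deliberately insecure parameters: the user key is a Schnorr signature by the master key on the identity, and a message signature is a second Schnorr signature chained to it. This is one plausible instantiation consistent with the description above, not necessarily the paper's exact scheme; the hash inputs and parameter choices are ours.

```python
# Toy sketch of a Schnorr-like identity-based signature (insecure parameters).
import hashlib, secrets

q = 1019                      # toy subgroup order (use >= 256-bit groups in practice)
p = 2 * q + 1                 # p = 2q + 1 is prime for this q
g = 4                         # generator of the order-q subgroup of Z_p^*

def H(*parts) -> int:
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

z = secrets.randbelow(q)      # master secret key
Y = pow(g, z, p)              # master public key

def extract(identity: str):
    """User key = Schnorr signature of the master key on the identity."""
    r = secrets.randbelow(q)
    R = pow(g, r, p)
    return R, (r + z * H(R, identity)) % q

def sign(identity: str, R: int, s: int, msg: str):
    """Second Schnorr signature, chained to the extracted key."""
    a = secrets.randbelow(q)
    A = pow(g, a, p)
    return A, (a + s * H(identity, A, msg)) % q, R

def verify(identity: str, msg: str, sig) -> bool:
    A, b, R = sig
    pk_id = R * pow(Y, H(R, identity), p) % p      # implicit per-identity key
    return pow(g, b, p) == A * pow(pk_id, H(identity, A, msg), p) % p

R, s = extract("alice@example.com")
assert verify("alice@example.com", "hello", sign("alice@example.com", R, s, "hello"))
```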

109 citations


Book ChapterDOI
Ueli Maurer
19 Jun 2009
TL;DR: It is shown that a single simple treatment, at a high level of abstraction, can replace the individual previous treatments of many protocols, and that new instantiations of the protocol can be devised.
Abstract: We present a simple zero-knowledge proof of knowledge protocol of which many protocols in the literature are instantiations. These include Schnorr's protocol for proving knowledge of a discrete logarithm, the Fiat-Shamir and Guillou-Quisquater protocols for proving knowledge of a modular root, protocols for proving knowledge of representations (like Okamoto's protocol), protocols for proving equality of secret values, a protocol for proving the correctness of a Diffie-Hellman key, protocols for proving the multiplicative relation of three commitments (as required in secure multi-party computation), and protocols used in credential systems. This shows that a single simple treatment (and proof), at a high level of abstraction, can replace the individual previous treatments. Moreover, one can devise new instantiations of the protocol.
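
The unified protocol is easy to state in code. The sketch below (toy group; the setup is ours) proves knowledge of a preimage x of y = f(x) for a group homomorphism f. Choosing f(x) = g^x mod p recovers Schnorr's discrete-log protocol; other choices of f (e.g. x -> x^e mod N) yield Guillou-Quisquater and the other instantiations listed above.

```python
# Sketch of the unifying sigma protocol: prove knowledge of x with y = f(x)
# for a group homomorphism f. Toy parameters, honest run in one place.
import secrets

p, q, g = 2039, 1019, 4            # toy group: order-q subgroup of Z_p^*

def f(x: int) -> int:
    """Homomorphism (Z_q, +) -> (<g>, *); this choice instantiates Schnorr."""
    return pow(g, x, p)

x = secrets.randbelow(q)           # prover's secret
y = f(x)                           # public image

r = secrets.randbelow(q)           # prover: random mask
t = f(r)                           # prover -> verifier: commitment t = f(r)
c = secrets.randbelow(q)           # verifier -> prover: random challenge
s = (r + c * x) % q                # prover -> verifier: response in the domain
assert f(s) == t * pow(y, c, p) % p   # verifier checks f(s) = t * y^c
```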

84 citations


Book ChapterDOI
19 Jun 2009
TL;DR: This work presents how an adaptive approach to modular exponentiation involving implementations based on both a radix and a residue number system gives the best all-around performance on the GPU both in terms of latency and throughput.
Abstract: Graphics processing units (GPUs) are increasingly being used for general-purpose computing. We present implementations of large-integer modular exponentiation, the core of public-key cryptosystems such as RSA, on a DirectX 10 compliant GPU. DirectX 10 compliant graphics processors are the latest generation of GPU architecture, which provides increased programming flexibility and support for integer operations. We present high-performance modular exponentiation implementations based on integers represented in both standard radix form and residue number system form. We show how a GPU implementation of a 1024-bit RSA decrypt primitive can outperform a comparable CPU implementation by up to 4 times, and also improve the performance of previous GPU implementations by decreasing latency by up to 7 times and doubling throughput. We present how an adaptive approach to modular exponentiation, involving implementations based on both a radix and a residue number system, gives the best all-around performance on the GPU in terms of both latency and throughput. We also highlight the usage criteria necessary to allow the GPU to reach peak performance on public-key cryptographic operations.
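
To see why the residue number system (RNS) suits GPUs, consider this minimal Python sketch (toy moduli of our own choosing; real implementations add RNS base extension and Montgomery reduction): multiplication decomposes into independent per-channel operations that map naturally onto parallel lanes.

```python
# Sketch of the RNS idea: a big integer becomes independent residues, so
# multiplication splits into small parallel channels. Toy moduli only.
from math import prod

moduli = [2**13 - 1, 2**17 - 1, 2**19 - 1]   # pairwise-coprime Mersenne primes
M = prod(moduli)

def to_rns(x: int) -> list[int]:
    return [x % m for m in moduli]

def rns_mul(a: list[int], b: list[int]) -> list[int]:
    # Each channel is independent: this is the parallelism GPUs exploit.
    return [ai * bi % m for ai, bi, m in zip(a, b, moduli)]

def from_rns(res: list[int]) -> int:
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for r, m in zip(res, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

x, y = 123456789, 987654321
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == x * y % M
```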

82 citations


Book ChapterDOI
19 Jun 2009
TL;DR: Methods of recoding exponents to allow for regular implementations of m-ary exponentiation algorithms that use both signed and unsigned exponent digits are described.
Abstract: This paper describes methods of recoding exponents to allow for regular implementations of m-ary exponentiation algorithms. Recoding algorithms previously proposed in the literature do not lend themselves to being implemented in a regular manner, which is required if the implementation needs to resist side-channel attacks based on simple power analysis. The advantage of the algorithms proposed in this paper over previous work is that the recoding can be readily implemented in a regular manner. Recoding algorithms are proposed for exponentiation algorithms that use both signed and unsigned exponent digits.
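
One recoding in the spirit of the paper (a sketch under our own toy parameters, not necessarily the exact algorithm proposed) replaces the digit set {0, ..., m-1} with {1, ..., m}: with no zero digits, every iteration of left-to-right m-ary exponentiation performs the identical square-and-multiply pattern, which is the regularity needed against simple power analysis.

```python
# Sketch: recode the exponent with digit set {1..m} so that the m-ary loop
# always executes one multiplication per iteration (SPA-regular flow).

def recode(n: int, m: int) -> list[int]:
    """Digits d_i in {1..m} with n = sum d_i * m^i (requires n > 0)."""
    digits = []
    while n > 0:
        d = n % m
        if d == 0:
            d = m              # borrow: use digit m instead of 0
        digits.append(d)
        n = (n - d) // m
    return digits              # least-significant digit first

def mary_exp(x: int, n: int, mod: int, m: int = 4) -> int:
    table = {d: pow(x, d, mod) for d in range(1, m + 1)}  # x^1 .. x^m
    acc = 1
    for d in reversed(recode(n, m)):
        acc = pow(acc, m, mod)        # fixed work per iteration
        acc = acc * table[d] % mod    # always exactly one multiplication
    return acc

assert mary_exp(7, 123456, 1000003) == pow(7, 123456, 1000003)
```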

77 citations


Book ChapterDOI
19 Jun 2009
TL;DR: A natural formalization is given to capture the notion of known-key distinguishers, in an effort to view block cipher security from an alternative perspective, e.g., a block cipher viewed as a primitive underlying some other cryptographic construction such as a hash function.
Abstract: Knudsen and Rijmen introduced the notion of known-key distinguishers in an effort to view block cipher security from an alternative perspective, e.g., a block cipher viewed as a primitive underlying some other cryptographic construction such as a hash function; they applied this new concept to construct a 7-round distinguisher for the AES and for a 7-round Feistel cipher. In this paper, we give a natural formalization to capture this notion, and present new distinguishers that we then use to construct known-key distinguishers for Rijndael with large blocks, up to 7 and 8 rounds.

50 citations


Book ChapterDOI
19 Jun 2009
TL;DR: This work presents the first simple power analysis of software implementations of KeeLoq, and introduces techniques for effectively realizing an automatic SPA and a method for circumventing a simple countermeasure that can also be applied for analyzing other implementations of cryptography on microcontrollers.
Abstract: We present the first simple power analysis (SPA) of software implementations of KeeLoq. Our attack drastically reduces the effort required for a complete break of remote keyless entry (RKE) systems based on KeeLoq. We analyze implementations of KeeLoq on microcontrollers and exploit timing vulnerabilities to develop an attack that allows for practical key recovery within seconds of computation time, thereby significantly outperforming all existing attacks: one single measurement of a section of a KeeLoq decryption is sufficient to extract the 64-bit master key of commercial products, without prior knowledge of either plaintext or ciphertext. We further introduce techniques for effectively realizing an automatic SPA, and a method for circumventing a simple countermeasure, both of which can also be applied to analyzing other implementations of cryptography on microcontrollers.

49 citations


Book ChapterDOI
19 Jun 2009
TL;DR: In this paper, the authors investigated the power of the Cell Broadband Engine for state-of-the-art public-key cryptography and showed that it is competitive in terms of cost-performance ratio to other recent processors such as the Intel Core 2 for public key cryptography.
Abstract: This paper is the first to investigate the power of the Cell Broadband Engine for state-of-the-art public-key cryptography. We present a high-speed implementation of elliptic-curve Diffie-Hellman (ECDH) key exchange for this processor, which needs 697080 cycles on one Synergistic Processor Unit for a scalar multiplication on a 255-bit elliptic curve, including the costs for key verification and key compression. This cycle count is independent of inputs, therefore protecting against timing attacks. This speed relies on a new representation of elements of the underlying finite field suited for the unconventional instruction set of this architecture. Furthermore, we demonstrate that an implementation based on the multi-precision integer arithmetic functions provided by IBM's multi-precision math (MPM) library would take at least 2227040 cycles. Comparison with implementations of the same function for other architectures shows that the Cell Broadband Engine is competitive in terms of cost-performance ratio to other recent processors such as the Intel Core 2 for public-key cryptography. Specifically, the state-of-the-art Galbraith-Lin-Scott ECDH software performs 27370 scalar multiplications per second using all four cores of a 2.5GHz Intel Core 2 Quad Q9300 inside a $296 computer, while the new software reported in this paper performs 27474 scalar multiplications per second on a Playstation 3 that costs just $221. Both of these speed reports are for high-security 256-bit elliptic-curve cryptography.
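
The "cycle count independent of inputs" property comes from a ladder whose operation sequence never depends on secret bits. Below is a plain-Python sketch of the standard x-only Montgomery ladder on Curve25519, illustrating only that branch-free structure; the paper's contribution is a Cell-specific field representation, which this sketch does not model (scalar clamping is also omitted).

```python
# Sketch: constant-structure Montgomery ladder on Curve25519 (RFC 7748 style).
import secrets

P = 2**255 - 19
A24 = (486662 - 2) // 4            # (A - 2) / 4 for curve25519's A = 486662

def cswap(swap: int, a: int, b: int):
    """Branch-free conditional swap: same operations whether swap is 0 or 1."""
    mask = -swap                   # 0 -> 0, 1 -> all-ones
    d = mask & (a ^ b)
    return a ^ d, b ^ d

def ladder(k: int, u: int) -> int:
    """x-only scalar multiplication: one fixed pattern per scalar bit."""
    x1 = u % P
    x2, z2, x3, z3, swap = 1, 0, x1, 1, 0
    for t in reversed(range(255)):
        bit = (k >> t) & 1
        swap ^= bit
        x2, x3 = cswap(swap, x2, x3)
        z2, z3 = cswap(swap, z2, z3)
        swap = bit
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3 = (da + cb) * (da + cb) % P
        z3 = x1 * (da - cb) * (da - cb) % P
        x2 = aa * bb % P
        z2 = (aa - bb) * (aa + A24 * (aa - bb)) % P
    x2, x3 = cswap(swap, x2, x3)
    z2, z3 = cswap(swap, z2, z3)
    return x2 * pow(z2, P - 2, P) % P       # affine x = x2 / z2

# Diffie-Hellman sanity check: both shared-secret computations must agree.
a_sk, b_sk = secrets.randbelow(2**255), secrets.randbelow(2**255)
assert ladder(a_sk, ladder(b_sk, 9)) == ladder(b_sk, ladder(a_sk, 9))
```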

46 citations


Book ChapterDOI
19 Jun 2009
TL;DR: This paper gives a formal definition of contributory protocols and defines an ideal functionality for password-based group key exchange with explicit authentication and contributiveness in the UC framework and provides the first steps toward realizing this functionality in the above strong adaptive setting.
Abstract: Adaptively-secure key exchange allows the establishment of secure channels even in the presence of an adversary that can corrupt parties adaptively and obtain their internal states. In this paper, we give a formal definition of contributory protocols and define an ideal functionality for password-based group key exchange with explicit authentication and contributiveness in the UC framework. As with previous definitions in the same framework, our definitions do not assume any particular distribution on passwords or independence between passwords of different parties. We also provide the first steps toward realizing this functionality in the above strong adaptive setting by analyzing an efficient existing protocol and showing that it realizes the ideal functionality in the random-oracle and ideal-cipher models based on the CDH assumption.

Book ChapterDOI
19 Jun 2009
TL;DR: This paper describes generic attacks on Feistel networks with internal permutations, instead of Feistel networks with internal functions as designed originally, including attacks that make it possible to distinguish a k-round Feistel network generator from a random permutation generator.
Abstract: In this paper, we describe generic attacks on Feistel networks with internal permutations, instead of Feistel networks with internal functions as designed originally. By generic attacks, we mean that in these attacks the internal permutations are supposed to be random. Despite the fact that some real Feistel ciphers actually use internal permutations, like Twofish, Camellia, or DEAL, these ciphers have not been studied much. We will see that they do not always behave like the original Feistel networks with round functions. More precisely, we will see that the attacks (known plaintext attacks or chosen plaintext attacks) are often less efficient, namely on all $3i$ rounds, $i \in \mathbb{N}^{*}$. For a plaintext of size $2n$ bits, the complexity of the attacks will be strictly less than $2^{2n}$ when the number of rounds is less than or equal to 5. When the number $k$ of rounds is greater, we also describe some attacks that make it possible to distinguish a $k$-round Feistel network generator from a random permutation generator.
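
A small experiment (ours, not from the paper) shows the basic behavioural difference such attacks exploit: one Feistel round maps (L, R) to (R, L XOR F(R)), and when F is a permutation, inputs sharing the same L can never collide on the new right half, whereas a random round function produces birthday-style collisions.

```python
# Sketch: one Feistel round with a random permutation vs. a random function.
import random

n = 8                                      # toy branch width: 8-bit halves
random.seed(0)
perm = random.sample(range(2**n), 2**n)               # random permutation F
func = [random.randrange(2**n) for _ in range(2**n)]  # random function F

def one_round(F, L, R):
    """One Feistel round: (L, R) -> (R, L ^ F(R))."""
    return R, L ^ F[R]

L = 0x5A
for name, F in (("permutation", perm), ("function", func)):
    right_halves = [one_round(F, L, R)[1] for R in range(2**n)]
    collisions = len(right_halves) - len(set(right_halves))
    print(name, "collisions on new right half:", collisions)
# permutation -> 0 collisions; function -> dozens (birthday behaviour)
```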

Book ChapterDOI
19 Jun 2009
TL;DR: This paper presents two variations of the notion of co-soundness previously defined and used by [Groth et al. - EUROCRYPT 2006] in the common reference string model and shows a constant-round resettable zero-knowledge argument system in the Bare Public-Key model using black-box techniques only.
Abstract: In this paper we present two variations of the notion of co-soundness previously defined and used by [Groth et al. - EUROCRYPT 2006] in the common reference string model. The first variation holds in the Bare Public-Key (BPK, for short) model and closely follows that of [Groth et al. - EUROCRYPT 2006]. The second variation (which we call weak co-soundness) is a weaker notion, since it imposes a stronger requirement, and it holds in the Registered Public-Key (RPK, for short) model. We then show techniques to construct co-sound argument systems that can be proved secure under standard assumptions. More specifically, in the main result of this paper we show a constant-round resettable zero-knowledge argument system in the BPK model using black-box techniques only (previously this was achieved in [Canetti et al. - STOC 2000, Di Crescenzo et al. - CRYPTO 2004] with complexity leveraging); additionally, we show an efficient statistical non-interactive zero-knowledge argument system in the RPK model (previously achieved in [Damgard et al. - TCC 2006] with complexity leveraging). We stress that no alternative solution preserving all properties enjoyed by ours is currently known using the classical notion of soundness.

Book ChapterDOI
19 Jun 2009
TL;DR: This paper designs and analyzes new and practical (selectively) convertible undeniable signature (SCUS) schemes in both the random oracle and standard models, and introduces the first practical RSA-based SCUS schemes secure in the standard model.
Abstract: In this paper, we design and analyze some new and practical (selectively) convertible undeniable signature (SCUS) schemes, in both the random oracle and standard models, which enjoy several merits over existing schemes in the literature. In particular, we design the first practical RSA-based SCUS schemes secure in the standard model. Along the way, we also introduce two new RSA-related assumptions, including the strong twin RSA assumption, which is the RSA analogue of the strong twin Diffie-Hellman assumption (Eurocrypt '08).

Book ChapterDOI
19 Jun 2009
TL;DR: If unlinkability could be achieved from the other properties of group signature schemes, then it would become possible to construct a chosen-ciphertext secure cryptosystem from any one-way function, implying that it would be possible to drastically improve the efficiency of group signature schemes if unlinkability is not taken into account.
Abstract: We investigate a theoretical gap between unlinkability of group signature schemes and their other requirements, and show that this gap is significantly large. Specifically, we clarify that if unlinkability could be achieved from any other property of group signature schemes, then it would become possible to construct a chosen-ciphertext secure cryptosystem from any one-way function. This result implies that it would be possible to drastically improve the efficiency of group signature schemes if unlinkability is not taken into account. We also demonstrate how to construct a significantly more efficient scheme (without unlinkability) than the best known full-fledged scheme.

Book ChapterDOI
19 Jun 2009
TL;DR: It is shown that anonymity and indistinguishability are not as orthogonal to each other (i.e., independent) as previously believed, and they are equivalent under certain circumstances.
Abstract: Anonymity or "key privacy" was introduced in [1] as a new security notion a cryptosystem must fulfill, in some settings, in addition to the traditional indistinguishability property. It requires an adversary not be able to distinguish pairs of ciphertexts based on the keys under which they are created. Anonymity for undeniable signatures is defined along the same lines, and is considered a relevant requirement for such signatures. Our results in this paper are twofold. First, we show that anonymity and indistinguishability are not as orthogonal to each other (i.e., independent) as previously believed. In fact, they are equivalent under certain circumstances. Consequently, we confirm the results of [1] on the anonymity of ElGamal's and of Cramer-Shoup's schemes, based on existing work about their indistinguishability. Next, we constructively use anonymous encryption together with secure digital signature schemes to build anonymous convertible undeniable signatures. In this context, we revisit a well known undeniable signature scheme, whose security remained an open problem for over than a decade, and prove that it is not anonymous. Moreover, we repair this scheme so that it provides the anonymity feature and analyze its security in our proposed framework. Finally, we analyze an efficient undeniable signature scheme, which was proposed recently, in our framework; we confirm its security results and show that it also enjoys the selective conversion feature.

Book ChapterDOI
19 Jun 2009
TL;DR: A generic modelling technique that can be used to extend existing frameworks for theoretical security analysis in order to capture the use of timestamps is proposed and applied to two of the most popular models adopted in the literature.
Abstract: We propose a generic modelling technique that can be used to extend existing frameworks for theoretical security analysis in order to capture the use of timestamps. We apply this technique to two of the most popular models adopted in the literature (Bellare-Rogaway and Canetti-Krawczyk). We analyse previous results obtained using these models in light of the proposed extensions, and demonstrate their application to a new class of protocols. In the timed CK model we concentrate on modular design and analysis of protocols, and propose a more efficient timed authenticator relying on timestamps. The structure of this new authenticator implies that an authentication mechanism standardised in ISO-9798 is secure. Finally, we use our timed extension to the BR model to establish the security of an efficient ISO protocol for key transport and unilateral entity authentication.

Book ChapterDOI
19 Jun 2009
TL;DR: If the public exponent e satisfies an equation $eX - (N - (ap + bq))Y = Z$ with suitably small integers X, Y, Z, then N can be factored efficiently, and the number of such exponents is at least $N^{\frac{3}{4}-\varepsilon}$, where $\varepsilon$ is arbitrarily small for large N.
Abstract: Let N = pq be an RSA modulus, i.e., the product of two large unknown primes of equal bit-size. In the ANSI X9.31-1997 standard for public-key cryptography, Section 4.1.2, there are a number of recommendations for the generation of the primes of an RSA modulus. Among them, the ratio of the primes shall not be close to the ratio of small integers. In this paper, we show that if the public exponent e satisfies an equation $eX - (N - (ap + bq))Y = Z$ with suitably small integers X, Y, Z, where $\frac{a}{b}$ is an unknown convergent of the continued fraction expansion of $\frac{q}{p}$, then N can be factored efficiently. In addition, we show that the number of such exponents is at least $N^{\frac{3}{4}-\varepsilon}$, where $\varepsilon$ is arbitrarily small for large N.
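
A worked sketch of the final factoring step (toy primes; recovering ap + bq from e and N is the hard part and the subject of the paper): once S = ap + bq is known for a known convergent a/b of q/p, N factors via the identity (ap + bq)^2 - 4abN = (ap - bq)^2.

```python
# Sketch: factoring N = p*q given S = a*p + b*q for known small a, b.
from math import isqrt

p, q = 10007, 15013              # toy primes with q/p close to 3/2
N = p * q
a, b = 3, 2                      # known small convergent a/b of q/p
S = a * p + b * q                # assumed recovered by the attack

T = isqrt(S * S - 4 * a * b * N)             # = |a*p - b*q|
assert T * T == S * S - 4 * a * b * N        # the identity is exact
cand = (S + T) // (2 * a)                    # a*p is (S + T)/2 or (S - T)/2
if N % cand != 0:
    cand = (S - T) // (2 * a)                # try the other ordering
assert N % cand == 0 and 1 < cand < N
print("factored:", cand, N // cand)          # recovers p = 10007
```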

Book ChapterDOI
19 Jun 2009
TL;DR: The relationships between security models for these two primitives, IBE and CLE, and that for certified encryption are explored; it is shown that an identity-based encryption scheme is secure if and only if it is secure when viewed as a certified encryption scheme, and an extension is proposed where the adversary is allowed to partially modify the secret keys of honest parties.
Abstract: The notion of certified encryption had recently been suggested as a suitable setting for analyzing the security of encryption against adversaries that tamper with the key-registration process. The flexible syntax afforded by certified encryption suggests that identity-based and certificateless encryption schemes can be analyzed using the models for certified encryption. In this paper we explore the relationships between security models for these two primitives and that for certified encryption. We obtain the following results. We show that an identity-based encryption scheme is secure if and only if it is secure when viewed as a certified encryption scheme. This result holds under the (unavoidable) restriction that registration occurs over private channels. In the case of certificateless encryption we observe that a similar result cannot hold. The reason is that existing models explicitly account for attacks against the non-monolithic structure of the secret keys whereas certified encryption models treat secret keys as whole entities. We propose an extension for certified encryption where the adversary is allowed to partially modify the secret keys of honest parties. The extension that we propose is very general and may lead to unsatisfiable notions. Nevertheless, we exhibit one instantiation for which we can prove the desired result: a certificateless encryption is secure if and only if its associated certified encryption scheme is secure. As part of our analysis, and as a result of separate interest, we confirm the folklore belief that for both IBE and CLE, security in the single-user setting (as captured by existing models) is equivalent to security in the multi-user setting.

Book ChapterDOI
19 Jun 2009
TL;DR: A general protocol enabling polynomial evaluations is introduced and the protocol for Hamming distance computation is revisited to obtain a simpler construction.
Abstract: Extended Private Information Retrieval (EPIR) was introduced at CANS'07 by Bringer et al. as a generalization of the notion of Private Information Retrieval (PIR). The principle is to enable a user to privately evaluate a fixed and public function with two inputs, a chosen block from a database and an additional string. The main contribution of our work is to extend this notion in order to add more flexibility during the system's life. As an example, we introduce a general protocol enabling polynomial evaluations. We also revisit the protocol for Hamming distance computation which was described at CANS'07, obtaining a simpler construction. As for practical concerns, we explain how to amortize database computations when dealing with several requests.

Book ChapterDOI
19 Jun 2009
TL;DR: This paper investigates how threshold cryptography can be conducted with any linear secret sharing scheme and presents a function sharing scheme for the RSA cryptosystem, a generalization of Shoup's Shamir-based scheme that is similarly robust and provably secure under the static adversary model.
Abstract: Function sharing deals with the problem of distribution of the computation of a function (such as decryption or signature) among several parties. The necessary values for the computation are distributed to the participating parties using a secret sharing scheme (SSS). Several function sharing schemes have been proposed in the literature, with most of them using Shamir secret sharing as the underlying SSS. In this paper, we investigate how threshold cryptography can be conducted with any linear secret sharing scheme and present a function sharing scheme for the RSA cryptosystem. The challenge is that constructing the secret in a linear SSS requires the solution of a linear system, which normally involves computing inverses, while computing an inverse modulo φ(N) cannot be tolerated in a threshold RSA system in any way. The threshold RSA scheme we propose is a generalization of Shoup's Shamir-based scheme. It is similarly robust and provably secure under the static adversary model. At the end of the paper, we show how this scheme can be extended to other public key cryptosystems and give an example on the Paillier cryptosystem.
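
For intuition about why recombination must avoid inversion modulo φ(N), here is a deliberately simplified n-of-n additive sketch (not Shoup's scheme or the paper's linear-SSS generalization): partial signatures simply multiply into the full RSA signature, and neither the parties nor the combiner ever invert anything.

```python
# Sketch: function sharing of RSA signing with an additive n-of-n split of d.
import secrets

# Toy RSA key (use a real generator and real sizes in practice).
p, q = 10007, 15013
N, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)

# Deal additive shares d = d_1 + ... + d_n (mod phi(N)).
n_parties = 3
shares = [secrets.randbelow(phi) for _ in range(n_parties - 1)]
shares.append((d - sum(shares)) % phi)

msg = 424242                                     # stand-in for a padded hash
partials = [pow(msg, d_i, N) for d_i in shares]  # each party computes its own

sig = 1
for part in partials:                            # the combiner only multiplies
    sig = sig * part % N
assert pow(sig, e, N) == msg                     # verifies as an ordinary RSA signature
```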

Book ChapterDOI
19 Jun 2009
TL;DR: A preimage attack on Tiger with two passes (16 rounds) with a complexity of about $2^{174}$ compression function evaluations is shown; while the attacks are only slightly faster than brute-force search, they represent a step forward in the cryptanalysis of Tiger.
Abstract: Tiger is a cryptographic hash function proposed by Anderson and Biham in 1996 that produces a 192-bit hash value. Recently, weaknesses have been shown in round-reduced variants of the Tiger hash function. Collision attacks have been presented for Tiger reduced to 16 and 19 (out of 24) rounds at FSE 2006 and Indocrypt 2006. Furthermore, Mendel and Rijmen presented a 1-bit pseudo-near-collision for the full Tiger hash function at ASIACRYPT 2007. That attack has a complexity of about $2^{47}$ compression function evaluations. While there exist several collision-style attacks on Tiger, the picture is different for preimage attacks. At WEWoRC 2007, Indesteege and Preneel presented a preimage attack on Tiger reduced to 12 and 13 rounds with a complexity of $2^{64.5}$ and $2^{128.5}$, respectively. In this article, we show a preimage attack on Tiger with two passes (16 rounds) with a complexity of about $2^{174}$ compression function evaluations. Furthermore, we show how the attack can be extended to 17 rounds with a complexity of about $2^{185}$. Even though the attacks are only slightly faster than brute-force search, they represent a step forward in the cryptanalysis of Tiger.

Book ChapterDOI
19 Jun 2009
TL;DR: This is the first analysis of the distribution of random decompositions in GLV that allows the entropy to be derived, and thus an answer to the question first posed by Gallant in 1999.
Abstract: At Crypto 2001, Gallant et al. showed how to exploit fast endomorphisms on some specific classes of elliptic curves to obtain fast scalar multiplication. The GLV method works by decomposing scalars into two small portions using multiplications, divisions, and rounding operations in the rationals. We present a new simple method based on the extended Euclidean algorithm that uses notably different operations than that of traditional decomposition. We obtain strict bounds on each component. Additionally, we examine the use of random decompositions, useful for key generation or cryptosystems requiring ephemeral keys. Specifically, we provide a complete description of the probability distribution of random decompositions and give bounds for each component in such a way that ensures a concrete level of entropy. This is the first analysis on distribution of random decompositions in GLV allowing the derivation of the entropy and thus an answer to the question first posed by Gallant in 1999.
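
For context, the baseline GLV decomposition via the extended Euclidean algorithm can be sketched as follows (toy parameters of our own; this shows only the classical recipe, not the paper's variant with strict per-component bounds or its random-decomposition analysis).

```python
# Sketch: classical GLV decomposition k = k1 + k2*lam (mod n), |k1|,|k2| ~ sqrt(n).
from math import isqrt

n = 1000003                        # toy group order with n ≡ 1 (mod 3)
for base in range(2, 50):          # find a nontrivial cube root of unity mod n
    lam = pow(base, (n - 1) // 3, n)
    if lam != 1:
        break
assert pow(lam, 3, n) == 1         # lam plays the endomorphism eigenvalue

# Extended Euclid on (n, lam): each row (r, s, t) satisfies s*n + t*lam = r,
# so (r, -t) is a lattice vector v with v[0] + v[1]*lam ≡ 0 (mod n).
rows = [(n, 1, 0), (lam, 0, 1)]
while rows[-1][0] >= isqrt(n):     # stop once the remainder drops below sqrt(n)
    (r0, s0, t0), (r1, s1, t1) = rows[-2], rows[-1]
    qt = r0 // r1
    rows.append((r0 - qt * r1, s0 - qt * s1, t0 - qt * t1))
v1 = (rows[-2][0], -rows[-2][2])
v2 = (rows[-1][0], -rows[-1][2])

def decompose(k: int):
    """Round (k, 0) to the nearest lattice point c1*v1 + c2*v2, keep the rest."""
    det = v1[0] * v2[1] - v1[1] * v2[0]          # = ±n
    c1 = round(v2[1] * k / det)
    c2 = round(-v1[1] * k / det)
    return k - c1 * v1[0] - c2 * v2[0], -c1 * v1[1] - c2 * v2[1]

k = 876543
k1, k2 = decompose(k)
assert (k1 + k2 * lam - k) % n == 0
print(k1, k2, "vs sqrt(n) ~", isqrt(n))          # both components are short
```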

Book ChapterDOI
19 Jun 2009
TL;DR: It is shown that the proposed oblivious transfer protocol realizes universally composable security in the presence of static adversaries in the common reference string model, assuming that the decisional Diffie-Hellman problem over a squared composite modulus of the form N = pq is hard.
Abstract: In this paper, a new implementation of universally composable, 1-out-of-2 oblivious transfer in the presence of static adversaries is presented and analyzed. Our scheme is constructed from the state-of-the-art Bresson-Catalano-Pointcheval double-trapdoor public-key encryption scheme, where a trapdoor string comprises a master key and local keys. The idea behind our implementation is that the master key is used to extract the input messages of a corrupted sender (as a result, a simulator designated for the corrupted sender is constructed), while the local keys are used to extract the input messages of a corrupted receiver (as a result, a simulator designated for the corrupted receiver is defined). We show that the proposed oblivious transfer protocol realizes universally composable security in the presence of static adversaries in the common reference string model, assuming that the decisional Diffie-Hellman problem over a squared composite modulus of the form N = pq is hard.

Book ChapterDOI
19 Jun 2009
TL;DR: A second-preimage differential path for 5-pass HAVAL with probability $2^{-227}$ is presented and exploited to devise a second-preimage attack on 5-pass HAVAL.
Abstract: HAVAL is a cryptographic hash function with variable hash value sizes proposed by Zheng, Pieprzyk and Seberry in 1992. It has 3, 4, or 5 passes, and each pass contains 32 steps. There was a collision attack on 5-pass HAVAL, but no second-preimage attack. In this paper, we present a second-preimage differential path for 5-pass HAVAL with probability $2^{-227}$ and exploit it to devise a second-preimage attack on 5-pass HAVAL. Furthermore, we utilize the path to recover the partial key of HMAC/NMAC-5-pass-HAVAL with $2^{235}$ oracle queries and $2^{35}$ memory bytes.

Book ChapterDOI
19 Jun 2009
TL;DR: Several attacks on both versions of Vortex are described, including collisions, second preimages, preimages, and distinguishers, which exploit flaws both in the high-level design and in the lower-level algorithms.
Abstract: Vortex is a hash function that was first presented at ISC'2008, then submitted to the NIST SHA-3 competition after some modifications. This paper describes several attacks on both versions of Vortex, including collisions, second preimages, preimages, and distinguishers. Our attacks exploit flaws both in the high-level design and in the lower-level algorithms.