
Showing papers on "Collision resistance published in 2010"


Book
01 Jan 2010
TL;DR: Proceedings volume spanning cryptosystems, obfuscation and side-channel security, two-party and multiparty protocols, cryptanalysis, automated tools and formal methods, models and proofs, hash functions and MACs, and foundational primitives, including "A Simple BGN-Type Cryptosystem from LWE" and "Bonsai Trees, or How to Delegate a Lattice Basis," with the IACR Distinguished Lecture "Cryptography between Wonderland and Underland."
Abstract: Cryptosystems I.- On Ideal Lattices and Learning with Errors over Rings.- Fully Homomorphic Encryption over the Integers.- Converting Pairing-Based Cryptosystems from Composite-Order Groups to Prime-Order Groups.- Fully Secure Functional Encryption: Attribute-Based Encryption and (Hierarchical) Inner Product Encryption.- Obfuscation and Side Channel Security.- Secure Obfuscation for Encrypted Signatures.- Public-Key Encryption in the Bounded-Retrieval Model.- Protecting Circuits from Leakage: the Computationally-Bounded and Noisy Cases.- 2-Party Protocols.- Partial Fairness in Secure Two-Party Computation.- Secure Message Transmission with Small Public Discussion.- On the Impossibility of Three-Move Blind Signature Schemes.- Efficient Device-Independent Quantum Key Distribution.- Cryptanalysis.- New Generic Algorithms for Hard Knapsacks.- Lattice Enumeration Using Extreme Pruning.- Algebraic Cryptanalysis of McEliece Variants with Compact Keys.- Key Recovery Attacks of Practical Complexity on AES-256 Variants with up to 10 Rounds.- IACR Distinguished Lecture.- Cryptography between Wonderland and Underland.- Automated Tools and Formal Methods.- Automatic Search for Related-Key Differential Characteristics in Byte-Oriented Block Ciphers: Application to AES, Camellia, Khazad and Others.- Plaintext-Dependent Decryption: A Formal Security Treatment of SSH-CTR.- Computational Soundness, Co-induction, and Encryption Cycles.- Models and Proofs.- Encryption Schemes Secure against Chosen-Ciphertext Selective Opening Attacks.- Cryptographic Agility and Its Relation to Circular Encryption.- Bounded Key-Dependent Message Security.- Multiparty Protocols.- Perfectly Secure Multiparty Computation and the Computational Overhead of Cryptography.- Adaptively Secure Broadcast.- Universally Composable Quantum Multi-party Computation.- Cryptosystems II.- A Simple BGN-Type Cryptosystem from LWE.- Bonsai Trees, or How to Delegate a Lattice Basis.- Efficient Lattice (H)IBE in the 
Standard Model.- Hash and MAC.- Multi-property-preserving Domain Extension Using Polynomial-Based Modes of Operation.- Stam's Collision Resistance Conjecture.- Universal One-Way Hash Functions via Inaccessible Entropy.- Foundational Primitives.- Constant-Round Non-malleable Commitments from Sub-exponential One-Way Functions.- Constructing Verifiable Random Functions with Large Input Spaces.- Adaptive Trapdoor Functions and Chosen-Ciphertext Security.

320 citations


Book ChapterDOI
17 Aug 2010
TL;DR: This paper proposes a novel design philosophy for lightweight hash functions, based on a single security level and on the sponge construction to minimize memory requirements, and presents the hash function family QUARK, composed of the three instances U-QUARK, D-QUARK, and T-QUARK.
Abstract: The need for lightweight cryptographic hash functions has been repeatedly expressed by application designers, notably for implementing RFID protocols. However, not many designs are available, and the ongoing SHA-3 Competition probably won't help, as it concerns general-purpose designs and focuses on software performance. In this paper, we thus propose a novel design philosophy for lightweight hash functions, based on a single security level and on the sponge construction, to minimize memory requirements. Inspired by the lightweight ciphers Grain and KATAN, we present the hash function family QUARK, composed of the three instances U-QUARK, D-QUARK, and T-QUARK. Hardware benchmarks show that QUARK compares well to previous lightweight hashes. For example, our lightest instance U-QUARK conjecturally provides at least 64-bit security against all attacks (collisions, multicollisions, distinguishers, etc.), fits in 1379 gate-equivalents, and consumes on average 2.44 µW at 100 kHz in 0.18 µm ASIC. For 112-bit security, we propose T-QUARK, which we implemented with 2296 gate-equivalents.
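The sponge construction the QUARK designers build on can be illustrated with a minimal sketch. The permutation below is a stand-in (SHA-256 truncation), not QUARK's Grain/KATAN-style shift-register permutation, and the rate and width are toy parameters chosen for readability:

```python
import hashlib

RATE = 1    # bytes absorbed/squeezed per permutation call (toy parameter)
WIDTH = 8   # total state width in bytes: rate + capacity

def permutation(state):
    """Stand-in permutation (QUARK's real one is a Grain/KATAN-style
    shift-register construction); SHA-256 truncation keeps the sketch short."""
    return hashlib.sha256(state).digest()[:WIDTH]

def sponge_hash(message, out_len=8):
    # Simple padding: one 0x80 byte, then zeros up to a RATE multiple.
    padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % RATE)
    state = bytes(WIDTH)
    # Absorbing phase: XOR each RATE-byte block into the state, then permute.
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE].ljust(WIDTH, b"\x00")
        state = bytes(s ^ b for s, b in zip(state, block))
        state = permutation(state)
    # Squeezing phase: emit RATE bytes per permutation call.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = permutation(state)
    return out[:out_len]
```

The memory advantage the paper exploits is visible here: the only persistent storage is the WIDTH-byte state, regardless of message or output length.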

202 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: A (strictly) more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system is proved and applies to arbitrary δ-almost two-universal families of hash functions.
Abstract: The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, we prove a (strictly) more general version of the Leftover Hash Lemma that is valid even if side information is represented by the state of a quantum system. Furthermore, our result applies to arbitrary δ-almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing.
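The two-universal families the lemma applies to can be made concrete with the classic Carter–Wegman construction h_{a,b}(x) = ((a·x + b) mod p) mod m. The sketch below (toy parameters, classical side information only) empirically checks the defining property: a collision probability of roughly 1/m over the choice of the function:

```python
import random

P = 2_147_483_647   # prime modulus (2^31 - 1)
M = 256             # output range m

def draw_hash():
    """Draw h_{a,b}(x) = ((a*x + b) mod p) mod m from the Carter-Wegman
    two-universal family, with a != 0."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % M

# Two-universality: for fixed x != y, h(x) == h(y) happens with probability
# about 1/m over the random choice of h -- the hypothesis the lemma needs.
x, y = 12345, 67890
trials = 20_000
collisions = sum(1 for _ in range(trials) if (h := draw_hash())(x) == h(y))
rate = collisions / trials   # empirically near 1/M, i.e. about 0.0039
```

The lemma then says that applying a freshly drawn h to a high-min-entropy input yields an almost uniform output, even (per this paper) against quantum side information.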

115 citations


Journal ArticleDOI
TL;DR: The main idea of this letter is that credentials generated using a collision-resistant hash function provide an authenticated ephemeral Diffie-Hellman key exchange between a mobile node and an access point, without communicating with an authentication server, whenever a handover authentication occurs.
Abstract: This letter proposes a handover authentication scheme using credentials based on chameleon hashing. The main challenges in handover authentication are to provide robust security and efficiency. The main idea of this letter is that credentials generated using the collision-resistant hash function provide an authenticated ephemeral Diffie-Hellman key exchange between only a mobile node and an access point, without communicating with an authentication server, whenever a handover authentication occurs. Our scheme supports robust key exchange and an efficient authentication procedure.

86 citations


Journal ArticleDOI
TL;DR: Simulation results show that the improved algorithm has strong diffusion and confusion capability, good collision resistance, and extreme sensitivity to the message and the secret key; the corresponding improvement measures are proposed.

77 citations


Book ChapterDOI
12 Aug 2010
TL;DR: This work shows a general construction for transforming any chameleon hash function to a strongly unforgeable one-time signature scheme and demonstrates the usefulness of this general construction by studying and optimizing specific instantiations based on the hardness of factoring, the discrete-log problem, and the worst-case lattice-based assumptions.
Abstract: In this work we show a general construction for transforming any chameleon hash function to a strongly unforgeable one-time signature scheme. Combined with the result of [Bellare and Ristov, PKC 2007], this also implies a general construction of strongly unforgeable one-time signatures from Σ-protocols in the standard model. Our results explain and unify several works in the literature which either use chameleon hash functions or one-time signatures, by showing that several of the constructions in the former category can be interpreted as efficient instantiations of those in the latter. They also imply that any "noticeable" improvement to the efficiency of constructions for chameleon hash functions leads to similar improvements for one-time signatures. This makes such improvements challenging since efficiency of one-time signatures has been studied extensively. We further demonstrate the usefulness of our general construction by studying and optimizing specific instantiations based on the hardness of factoring, the discrete-log problem, and the worst-case lattice-based assumptions. Some of these signature schemes match or improve the efficiency of the best previous constructions or relax the underlying hardness assumptions. Two of the schemes have very fast signing (no exponentiations) which makes them attractive in scenarios where the signer has limited computational resources.
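The chameleon hash functions this construction starts from can be illustrated with the standard discrete-log instantiation CH(m, r) = g^m · h^r mod p, where the trapdoor x = log_g h lets its holder find collisions at will. The sketch below uses hypothetical toy parameters far too small for real security and is not the paper's scheme:

```python
import secrets

# Toy discrete-log parameters: p = 2q + 1 with q prime; g generates the
# order-q subgroup. Demo values only, far too small for real security.
q = 1019
p = 2 * q + 1        # 2039, also prime
g = 4                # a quadratic residue mod p, hence of order q

x = secrets.randbelow(q - 1) + 1   # trapdoor
h = pow(g, x, p)                   # public key

def chameleon_hash(m, r):
    """CH(m, r) = g^m * h^r mod p; collision-resistant without the trapdoor."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def trapdoor_collision(m1, r1, m2):
    """With x, solve m1 + x*r1 = m2 + x*r2 (mod q) for r2, so that
    CH(m2, r2) == CH(m1, r1)."""
    return ((m1 - m2) * pow(x, -1, q) + r1) % q

m1, r1 = 42, secrets.randbelow(q)
r2 = trapdoor_collision(m1, r1, 777)
```

The one-time-signature transformation in the paper exploits exactly this asymmetry: anyone can verify the hash, but only the trapdoor holder can open it to a second message.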

69 citations


Journal ArticleDOI
TL;DR: In the proposed scheme, both the combiner and the participants can verify the correctness of the information exchanged among themselves, and high complexity operations like modular multiplication, exponentiation and inversion are avoided to increase its efficiency.

50 citations


Book ChapterDOI
01 Mar 2010
TL;DR: In this paper, the first cryptanalytic attacks on reduced-round versions of the Grøstl hash functions were presented via several extensions of the rebound attack, including collision attacks on 4/10 rounds of Grøstl-256 and 5/14 rounds of Grøstl-512.
Abstract: Grøstl is one of 14 second-round candidates of the NIST SHA-3 competition. Cryptanalytic results on the wide-pipe compression function of Grøstl-256 have already been published. However, little is known about the hash function, arguably a much more interesting cryptanalytic setting. Also, Grøstl-512 has not been analyzed yet. In this paper, we show the first cryptanalytic attacks on reduced-round versions of the Grøstl hash functions. These results are obtained by several extensions of the rebound attack. We present a collision attack on 4/10 rounds of the Grøstl-256 hash function and 5/14 rounds of the Grøstl-512 hash function. Additionally, we give the best collision attacks for reduced-round (7/10 and 7/14) versions of the compression functions of Grøstl-256 and Grøstl-512.

48 citations


Book ChapterDOI
25 Oct 2010
TL;DR: In this paper, the authors compare the state-of-the-art provable security reductions of the second-round SHA-3 candidates and derive some security bounds from the literature which the hash function designers seem to be unaware of.
Abstract: In 2007, the US National Institute for Standards and Technology announced a call for the design of a new cryptographic hash algorithm in response to vulnerabilities identified in existing hash functions, such as MD5 and SHA-1. NIST received many submissions, 51 of which got accepted to the first round. At present, 14 candidates are left in the second round. An important criterion in the selection process is the security of the SHA-3 hash function and, more concretely, the possible security reductions of the hash function to the security of its underlying building blocks. While some of the candidates are supported with firm security reductions, for most of the schemes these results are still incomplete. In this paper, we compare the state-of-the-art provable security reductions of the second-round SHA-3 candidates. Surprisingly, we derive some security bounds from the literature which the hash function designers seem to be unaware of. Additionally, we generalize the well-known proof of collision resistance preservation, such that all SHA-3 candidates with a suffix-free padding are covered.

47 citations


Proceedings ArticleDOI
23 Aug 2010
TL;DR: This paper generates a combination of symmetric hash functions, which increases the security of fingerprint matching by an exponential factor and suggests that the EER obtained using the combination of hash functions is comparable with the baseline system, with the added advantage of being more secure.
Abstract: Fingerprint-based secure biometric authentication systems have received considerable research attention lately, where the major goal is to provide an anonymous, multipliable and easily revocable methodology for fingerprint verification. In our previous work, we have shown that symmetric hash functions are very effective in providing such secure fingerprint representation and matching, since they are independent of the order of minutiae triplets as well as the location of singular points (e.g. core and delta). In this paper, we extend our prior work by generating a combination of symmetric hash functions, which increases the security of fingerprint matching by an exponential factor. First, we extract k-plets from each fingerprint image and generate a unique key for combining multiple hash functions up to an order of (k-1). Each of these keys is generated using features extracted from the minutiae k-plets, such as the bin index of the smallest angles in each k-plet. This combination provides extra security against brute-force attacks, where the compromise of a few hash functions does not compromise the overall matching. Our experimental results suggest that the EER obtained using the combination of hash functions (4.98%) is comparable with the baseline system (3.0%), with the added advantage of being more secure.
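The order-independence of symmetric hash functions can be sketched with elementary symmetric polynomials of minutiae points encoded as complex numbers, which is the flavor of construction the authors' prior work describes; the concrete encoding here is illustrative, not the paper's exact scheme:

```python
from itertools import combinations

def symmetric_hash(points):
    """Order-independent digest of a point set: the first three elementary
    symmetric polynomials of the points (encoded as complex numbers) are
    invariant under any permutation of the input."""
    e1 = sum(points)
    e2 = sum(a * b for a, b in combinations(points, 2))
    e3 = sum(a * b * c for a, b, c in combinations(points, 3))
    return (e1, e2, e3)
```

Reordering the minutiae leaves the digest unchanged, which is the property that frees matching from any canonical ordering of minutiae or singular points.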

40 citations


Posted Content
TL;DR: A package of statistical tests is designed based on certain cryptographic properties of block ciphers and hash functions to evaluate their randomness; applied to the AES finalists, it produced more precise results than those obtained in similar applications.
Abstract: One of the most basic properties expected from block ciphers and hash functions is passing statistical randomness testing, as they are expected to behave like random mappings. Previously, testing of AES candidate block ciphers was done by concatenating the outputs of the algorithms obtained from various input types. In this work, a more convenient method, namely cryptographic randomness testing, is introduced. A package of statistical tests is designed based on certain cryptographic properties of block ciphers and hash functions to evaluate their randomness. The package is applied to the AES finalists, and produced more precise results than those obtained in similar applications.
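One cryptographic property such randomness tests probe is the avalanche effect: flipping a single input bit should flip about half of the output bits. A minimal sketch of such a check (not the paper's actual test package) against SHA-256:

```python
import hashlib
import random

def avalanche_rate(hash_fn, trials=200):
    """Flip one random input bit per trial and count flipped output bits;
    a random-looking mapping averages ~50%."""
    changed = total = 0
    for _ in range(trials):
        msg = bytearray(random.randbytes(16))
        h0 = hash_fn(bytes(msg))
        bit = random.randrange(len(msg) * 8)
        msg[bit // 8] ^= 1 << (bit % 8)       # flip one input bit
        h1 = hash_fn(bytes(msg))
        diff = int.from_bytes(h0, "big") ^ int.from_bytes(h1, "big")
        changed += bin(diff).count("1")
        total += len(h0) * 8
    return changed / total

rate = avalanche_rate(lambda m: hashlib.sha256(m).digest())
```

A strong deviation from 0.5 in such a statistic is the kind of signal a randomness test package is designed to detect.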

Book ChapterDOI
08 Aug 2010
TL;DR: This paper proves a conjecture left as an open problem in Icart's paper concerning his deterministic, efficiently computable function Fq → E(Fq), which allowed him and Coron to define well-behaved hash functions with values in E(Fq).
Abstract: Let E be a non-supersingular elliptic curve over a finite field Fq. At CRYPTO 2009, Icart introduced a deterministic function Fq → E(Fq) which can be computed efficiently, and allowed him and Coron to define well-behaved hash functions with values in E(Fq). Some properties of this function rely on a conjecture which was left as an open problem in Icart's paper. We prove this conjecture below as well as analogues for other hash functions.

Book ChapterDOI
12 Aug 2010
TL;DR: This paper presents two algorithms for computing preimages, each having its own advantages in terms of speed and preimage lengths, and produces theoretical and experimental evidence that both are very efficient and succeed with a very large probability on the function parameters.
Abstract: After 15 years of unsuccessful cryptanalysis attempts by the research community, Grassl et al. have recently broken the collision resistance property of the Tillich-Zemor hash function. In this paper, we extend their cryptanalytic work and consider the preimage resistance of the function. We present two algorithms for computing preimages, each algorithm having its own advantages in terms of speed and preimage lengths. We produce theoretical and experimental evidence that both our algorithms are very efficient and succeed with a very large probability on the function parameters. Furthermore, for an important subset of these parameters, we provide a full proof that our second algorithm always succeeds in deterministic cubic time. Our attacks definitely break the Tillich-Zemor hash function and show that it is not even one-way. Nevertheless, we point out that other hash functions based on a similar design may still be secure.
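The Tillich–Zémor design hashes a bit string to the product of two fixed 2×2 matrices over GF(2^n), one per bit value, which is what makes its collision and preimage problems group-theoretic. A toy sketch over GF(2^8) (the real function uses much larger n, roughly 130–170; the field size here is illustrative only):

```python
POLY, N = 0x11B, 8   # AES irreducible polynomial x^8+x^4+x^3+x+1 (toy field)

def gf_mul(a, b):
    """Carry-less multiplication reduced mod POLY, i.e. in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> N:
            a ^= POLY
    return r

def mat_mul(M, K):
    """2x2 matrix product over GF(2^8); addition is XOR."""
    return [
        [gf_mul(M[0][0], K[0][0]) ^ gf_mul(M[0][1], K[1][0]),
         gf_mul(M[0][0], K[0][1]) ^ gf_mul(M[0][1], K[1][1])],
        [gf_mul(M[1][0], K[0][0]) ^ gf_mul(M[1][1], K[1][0]),
         gf_mul(M[1][0], K[0][1]) ^ gf_mul(M[1][1], K[1][1])],
    ]

X = 0b10                      # the field element x
A = [[X, 1], [1, 0]]          # matrix multiplied in for a 0 bit
B = [[X, X ^ 1], [1, 1]]      # matrix multiplied in for a 1 bit

def tz_hash(bits):
    """Hash = ordered product of A/B matrices along the bit string."""
    M = [[1, 0], [0, 1]]
    for bit in bits:
        M = mat_mul(M, A if bit == 0 else B)
    return M
```

A collision is precisely two distinct bit strings whose matrix products coincide, which is the structure Grassl et al.'s attack, and the preimage attacks here, exploit.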

Journal ArticleDOI
TL;DR: This paper analyzes the security of Xiao et al.'s parallel keyed hash function based on a chaotic neural network and demonstrates weak keys and forgery attacks against the scheme.

Book ChapterDOI
05 Jul 2010
TL;DR: In this paper, the authors present a study of Hamsi's resistance to differential and higher-order differential cryptanalysis, with focus on the 256-bit version of the hash function.
Abstract: Hamsi is one of 14 remaining candidates in NIST's Hash Competition for the future hash standard SHA-3. Until now, little analysis has been published on its resistance to differential cryptanalysis, the main technique used to attack hash functions. We present a study of Hamsi's resistance to differential and higher-order differential cryptanalysis, with focus on the 256-bit version of Hamsi. Our main results are efficient distinguishers and near-collisions for its full (3-round) compression function, and distinguishers for its full (6-round) finalization function, indicating that Hamsi's building blocks do not behave ideally.

Book ChapterDOI
30 May 2010
TL;DR: In this paper, a new double-piped mode of operation was proposed for multi-property-preserving domain extension of MACs (message authentication codes), PRFs (pseudorandom functions) and PROs (pseudorandom oracles).
Abstract: In this paper, we propose a new double-piped mode of operation for multi-property-preserving domain extension of MACs (message authentication codes), PRFs (pseudorandom functions) and PROs (pseudorandom oracles). Our mode of operation performs twice as fast as the original double-piped mode of operation of Lucks [15] while providing comparable security. Our construction, which uses a class of polynomial-based compression functions proposed by Stam [22,23], makes a single call to a 3n-bit to n-bit primitive at each iteration and uses a finalization function f2 at the last iteration, producing an n-bit hash function H[f1,f2] satisfying the following properties. H[f1,f2] is unforgeable up to O(2^n/n) query complexity as long as f1 and f2 are unforgeable. H[f1,f2] is pseudorandom up to O(2^n/n) query complexity as long as f1 is unforgeable and f2 is pseudorandom. H[f1,f2] is indifferentiable from a random oracle up to O(2^{2n/3}) query complexity as long as f1 and f2 are public random functions. To our knowledge, our result constitutes the first time O(2^n/n) unforgeability has been achieved using only an unforgeable primitive of n-bit output length. (Yasuda showed unforgeability of O(2^{5n/6}) for Lucks' construction assuming an unforgeable primitive, but the analysis is sub-optimal; in the appendix, we show how Yasuda's bound can be improved to O(2^n).) In related work, we strengthen Stam's collision resistance analysis of polynomial-based compression functions (showing that unforgeability of the primitive suffices) and discuss how to implement our mode by replacing f1 with a 2n-bit key blockcipher in Davies-Meyer mode or by replacing f1 with the cascade of two 2n-bit to n-bit compression functions.

Patent
20 Jan 2010
TL;DR: In this article, one or more blank rounds (iterations) of the quasi-group operation are concatenated to the EDON-R hash function operations, to overcome perceived security weaknesses.
Abstract: In the computer data security field, a cryptographic hash function process is embodied in a computer system or computer software or logic circuitry and is keyless, but highly secure. The process is based on (mathematical) quasi-group operations such as in the known “EDON-R” hash function. But here one or more blank rounds (iterations) of the quasi-group operation are concatenated to the EDON-R hash function operations, to overcome perceived security weaknesses in EDON-R.

Journal ArticleDOI
TL;DR: This Letter takes a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) as a sample to analyze its computational collision problem, generalizes the construction method of one kind of chaos-based hash function, and summarizes some precautions for avoiding the collision problem.

Journal ArticleDOI
TL;DR: This paper presents a survey of 17 extenders in the literature and considers the natural question whether these preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance and the pseudo-random oracle property.
Abstract: Cryptographic hash functions reduce inputs of arbitrary or very large length to a short string of fixed length. All hash function designs start from a compression function with fixed-length inputs. The compression function itself is designed from scratch, or derived from a block cipher or a permutation. The most common procedure to extend the domain of a compression function in order to obtain a hash function is a simple linear iteration; however, some variants use multiple iterations or a tree structure that allows for parallelism. This paper presents a survey of 17 extenders in the literature. It considers the natural question whether these preserve the security properties of the compression function, in particular collision resistance, second preimage resistance, preimage resistance and the pseudo-random oracle property.

Book ChapterDOI
05 Jul 2010
TL;DR: In this article, the authors show how to prove a hash construction secure and insecure at the same time in the indifferentiability setting, using well known examples like NMAC and the Mix-Compress-Mix (MCM) construction.
Abstract: At Crypto 2005, Coron et al. introduced a formalism to study the presence or absence of structural flaws in iterated hash functions. If one cannot differentiate a hash function using ideal primitives from a random oracle, it is considered structurally sound, while the ability to differentiate it from a random oracle indicates a structural weakness. This model was devised as a tool to see subtle real world weaknesses while in the random oracle world. In this paper we take in a practical point of view. We show, using well known examples like NMAC and the Mix-Compress-Mix (MCM) construction, how we can prove a hash construction secure and insecure at the same time in the indifferentiability setting. These constructions do not differ in their implementation but only on an abstract level. Naturally, this gives rise to the question what to conclude for the implemented hash function. Our results cast doubts about the notion of "indifferentiability from a random oracle" to be a mandatory, practically relevant criterion (as e.g., proposed by Knudsen [17] for the SHA-3 competition) to separate good hash structures from bad ones.

Book ChapterDOI
13 Oct 2010
TL;DR: The generic analysis gives a simpler proof than the FSE'09 analysis of TANDEM-DM while also tightening the security bound, and the collision resistance bound for CYCLIC-DM diminishes with an increasing cycle length c.
Abstract: We give collision resistance bounds for blockcipher based, double-call, double-length hash functions using (k, n)-bit blockciphers with k > n. Ozen and Stam recently proposed a framework [21] for such hash functions that use 3n-to-2n-bit compression functions and two parallel calls to two independent blockciphers with 2n-bit key and n-bit block size. We take their analysis one step further. We first relax the requirement of two distinct and independent blockciphers. We then extend this framework and also allow the ciphertext of the first call to the blockcipher to be used as an input to the second call of the blockcipher. As far as we know, our extended framework currently covers any double-length, double-call blockcipher based hash function known in the literature using a (2n, n)-bit blockcipher as, e.g., ABREAST-DM, TANDEM-DM [15], CYCLIC-DM [9] and Hirose's FSE'06 proposal [13]. Our generic analysis gives a simpler proof than the FSE'09 analysis of TANDEM-DM while also tightening the security bound. The collision resistance bound for CYCLIC-DM given in [9] diminishes with an increasing cycle length c. We improve this bound for cycle lengths larger than 26.
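Of the constructions covered by this framework, Hirose's FSE'06 scheme is the simplest to sketch: both blockcipher calls share the key h||m, and the second input is offset by a nonzero constant c. The blockcipher below is a SHA-256 stand-in so the sketch stays dependency-free; a real instantiation would use, e.g., AES-256 as the (2n, n)-bit cipher:

```python
import hashlib

N = 16  # n = 128 bits per half; the blockcipher key is 2n = 256 bits

def E(key, block):
    """Stand-in for a (2n, n)-bit blockcipher; SHA-256 truncation keeps the
    sketch dependency-free (a real instance would use, e.g., AES-256)."""
    return hashlib.sha256(key + block).digest()[:N]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

C = b"\x01" + b"\x00" * (N - 1)  # nonzero constant separating the two calls

def hirose_compress(g, h, m):
    """One step of a Hirose-style double-length compression function
    (3n bits of state+message in, 2n-bit chaining value out); both
    cipher calls reuse the single key h || m."""
    key = h + m
    g_new = xor(E(key, g), g)            # Davies-Meyer-style feed-forward
    gc = xor(g, C)
    h_new = xor(E(key, gc), gc)
    return g_new, h_new

g1, h1 = hirose_compress(bytes(N), bytes(N), b"message block 1!")
```

The double output length is the point: with an n-bit blockcipher block, a single chaining value would cap collision security at 2^{n/2}, while double-length designs aim close to 2^n.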

Posted Content
TL;DR: A new approach is presented that produces a 192-bit message digest and uses a modified message expansion mechanism which generates more bit difference in each working variable to make the algorithm more secure.
Abstract: Cryptographic hash functions play a central role in cryptography. Hash functions were introduced in cryptology to provide message integrity and authentication. MD5, SHA-1 and RIPEMD are among the most commonly used message digest algorithms. Recently proposed attacks on well-known and widely used hash functions motivate the design of a new, stronger hash function. In this paper a new approach is presented that produces a 192-bit message digest and uses a modified message expansion mechanism which generates more bit difference in each working variable to make the algorithm more secure. This hash function is collision resistant and assures good compression and preimage resistance.

Book ChapterDOI
12 Aug 2010
TL;DR: A new kind of attack based on a cancellation property in the round function is described, which allows one to efficiently use the available degrees of freedom to attack a hash function.
Abstract: In this paper we study the strength of two hash functions which are based on Generalized Feistels. We describe a new kind of attack based on a cancellation property in the round function. This new technique allows one to efficiently use the available degrees of freedom to attack a hash function. Using the cancellation property, we can avoid the non-linear parts of the round function, at the expense of some degrees of freedom. Our attacks are mostly independent of the round function in use, and can be applied to similar hash functions which share the same structure but have different round functions. We start with a 22-round generic attack on the structure of Lesamnta, and adapt it to the actual round function to attack 24-round Lesamnta (the full function has 32 rounds). We follow with an attack on 9-round SHAvite-3-512 which also works for the tweaked version of SHAvite-3-512.

Book ChapterDOI
12 Dec 2010
TL;DR: In this paper, the Fast Wide Pipe (FWP) scheme is proposed to hash messages of arbitrary length, which is shown to be (1) preimage-resistance preserving, (2) collision-resistance preserving and, most importantly, (3) indifferentiable from a random oracle up to O(2^{n/2}) compression function invocations.
Abstract: In this paper we propose a new sequential mode of operation – the Fast wide pipe or FWP for short – to hash messages of arbitrary length. The mode is shown to be (1) preimage-resistance preserving, (2) collision-resistance-preserving and, most importantly, (3) indifferentiable from a random oracle up to \(\mathcal{O}(2^{n/2})\) compression function invocations. In addition, our rigorous investigation suggests that any variants of Joux’s multi-collision, Kelsey-Schneier 2nd preimage and Herding attack are also ineffective on this mode. This fact leads us to conjecture that the indifferentiability security bound of FWP can be extended beyond the birthday barrier. From the point of view of efficiency, this new mode, for example, is always faster than the Wide-pipe mode when both modes use an identical compression function. In particular, it is nearly twice as fast as the Wide-pipe for a reasonable selection of the input and output size of the compression function. We also compare the FWP with several other modes of operation.

Dissertation
23 Mar 2010
TL;DR: This thesis addresses the question if there are security-amplifying combiners where the combined hash function provides a higher security level than the building blocks, thus going beyond the additive limit and proposes a solution that is essentially as efficient as the concatenated combiner.
Abstract: A hash function is an algorithm that compresses messages of arbitrary length into short digests of fixed length. If the function additionally satisfies certain security properties, it becomes a powerful tool in the design of cryptographic protocols. The most important property is collision-resistance, which requires that it should be hard to find two distinct messages that evaluate to the same hash value. When a hash function deploys secret keys, it can also be used as a pseudorandom function or message authentication code. However, recent attacks on collision-resistant hash functions caused a decrease of confidence that today's candidates really have this property and have raised the question how to devise constructions that are more tolerant to cryptanalytic results. Hence, approaches like robust combiners, which "merge" several candidate functions into a single failure-tolerant one, are of great interest and have triggered a series of research. In general, a hash combiner takes two hash functions H0, H1 and combines them in such a way that the resulting function remains secure as long as at least one of the underlying candidates H0 or H1 is secure. For example, the classical combiner for collision-resistance simply concatenates the outputs of both hash functions, Comb(M) = H0(M)||H1(M), in order to ensure collision-resistance as long as either of H0, H1 obeys the property. However, this classical approach is complemented by two negative results: On the one hand, the combiner requires twice the output length of an ordinary hash function, and this was even shown to be optimal for collision-resistance. On the other hand, the security of the combiner does not increase with the enlarged output length, i.e., the combiner is not significantly stronger than the sum of its components. In this thesis we address the question if there are security-amplifying combiners where the combined hash function provides a higher security level than the building blocks, thus going beyond the additive limit. We show that one can indeed have such combiners and propose a solution that is essentially as efficient as the concatenated combiner. Another issue is that, so far, hash function combiners only aim at preserving a single property such as collision-resistance or pseudorandomness. However, when hash functions are used in protocols like TLS to secure http and email communication, they are often required to provide several properties simultaneously. We therefore introduce the notion of robust multi-property combiners and clarify some aspects on different definitions for such combiners. We also propose constructions that are multi-property robust in the strongest sense and provably preserve important properties such as (target) collision-resistance, one-wayness, pseudorandomness, message authentication, and indifferentiability from a random oracle. Finally, we analyze the (ad-hoc) hash combiners that are deployed in the TLS and SSL protocols. Nowadays, both protocols are ubiquitous as they provide secure communication for a variety of applications in untrusted environments. Therein, hash function combiners are deployed to derive shared secret keys and to authenticate the final step in the key-agreement phase. As those established secret parameters are subsequently used to protect the communication, their security is of crucial importance. We therefore formally fortify the security guarantees of the TLS/SSL combiner constructions and provide the sufficient requirements on the underlying hash functions that make those combiners suitable for their respective purposes.
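The classical concatenation combiner discussed here is straightforward to write down; below with SHA-256 and SHA-512 as stand-ins for H0 and H1, which also makes the doubled output length visible:

```python
import hashlib

def concat_combiner(message):
    """Classical combiner Comb(M) = H0(M) || H1(M), with SHA-256 and SHA-512
    standing in for H0 and H1: a collision for Comb would force a collision
    for both underlying hash functions simultaneously."""
    return hashlib.sha256(message).digest() + hashlib.sha512(message).digest()
```

The drawback the thesis highlights is visible in the 32 + 64 = 96-byte output: the digest doubles in length, yet the combiner is not correspondingly stronger than its components.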

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A code for the broadcast channel is constructed based on the notion of a stronger version of the hash property for an ensemble of functions; since an ensemble of sparse matrices has a strong hash property, codes using sparse matrices can achieve an inner bound of the capacity region.
Abstract: The aim of this paper is to construct a code for the broadcast channel (independent messages and no common message) based on the notion of a stronger version of the hash property for an ensemble of functions. Since an ensemble of sparse matrices has a strong hash property, codes using sparse matrices can achieve an inner bound of the capacity region.

Journal ArticleDOI
TL;DR: The security of iterated hash functions that compute an input dependent checksum which is processed as part of the hash computation is analysed to show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kohno and the multicollision attack of Joux.
Abstract: We analyse the security of iterated hash functions that compute an input-dependent checksum which is processed as part of the hash computation. We show that a large class of such schemes, including those using non-linear or even one-way checksum functions, is not secure against the second preimage attack of Kelsey and Schneier, the herding attack of Kelsey and Kohno and the multicollision attack of Joux. Our attacks also apply to a large class of cascaded hash functions. Our second preimage attacks on the cascaded hash functions improve the results of Joux presented at Crypto'04. We also apply our attacks to the MD2 and GOST hash functions. Our second preimage attacks on the MD2 and GOST hash functions improve the previous best known short-cut second preimage attacks on these hash functions by factors of at least 2^26 and 2^54, respectively. Our herding and multicollision attacks on the hash functions based on generic checksum functions (e.g., one-way) are a special case of the attacks on the cascaded iterated hash functions previously analysed by Dunkelman and Preneel and are not better than their attacks. On hash functions with easily invertible checksums, our multicollision and herding attacks (if the hash value is short as in MD2) are more efficient than those of Dunkelman and Preneel.
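The class of designs analysed here can be sketched generically: an iterated hash that accumulates an input-dependent checksum over the message blocks and compresses it in a final call. The compression function and the XOR checksum below are illustrative stand-ins chosen for brevity (GOST, for example, uses a modular-addition checksum), not any concrete standard:

```python
MASK = (1 << 64) - 1  # work over 64-bit words

def toy_compress(state: int, block: int) -> int:
    # Toy, deliberately insecure compression function for illustration.
    return ((state * 0x100000001B3) ^ block) & MASK

def checksum_hash(blocks, iv: int = 0xCBF29CE484222325) -> int:
    """Iterated hash in the analysed class: the chaining value is
    iterated over the blocks, while a checksum of all blocks is
    accumulated on the side and processed in one final compression."""
    state, checksum = iv, 0
    for block in blocks:
        state = toy_compress(state, block)
        checksum ^= block  # linear (XOR) checksum over all message blocks
    return toy_compress(state, checksum)
```

Note that a linear checksum like this one is invariant under reordering the message blocks, which hints at why such checksums add far less security than one might hope against multicollision-style attacks.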

Book ChapterDOI
TL;DR: This paper presents the design principles of the popular Merkle–Damgård construction, which are followed in almost all widely used standard hash functions such as MD5 and SHA-1.
Abstract: Cryptographic hash functions are an important tool of cryptography and play a fundamental role in efficient and secure information processing. A hash function processes an arbitrary finite-length input message to a fixed-length output referred to as the hash value. As a security requirement, a hash value should not serve as an image for two distinct input messages, and it should be difficult to find the input message from a given hash value. Secure hash functions serve data integrity, non-repudiation and authenticity of the source in conjunction with digital signature schemes. Keyed hash functions, also called message authentication codes (MACs), serve data integrity and data origin authentication in the secret-key setting. The building blocks of hash functions can be designed using block ciphers, modular arithmetic or from scratch. The design principles of the popular Merkle–Damgård construction are followed in almost all widely used standard hash functions such as MD5 and SHA-1.
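The structure of the Merkle–Damgård construction mentioned above can be sketched in a few lines. The snippet below is a toy illustration with a deliberately insecure compression function and arbitrary illustrative parameters; it shows only the skeleton of the construction (length-strengthening padding, then chaining a compression function over fixed-size blocks):

```python
import struct

BLOCK = 16  # toy block size in bytes

def toy_compress(state: int, block: bytes) -> int:
    """Toy, insecure compression function mapping a 64-bit chaining
    value and one message block to a new 64-bit chaining value."""
    for byte in block:
        state = ((state ^ byte) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return state

def md_hash(message: bytes, iv: int = 0xCBF29CE484222325) -> int:
    # Merkle-Damgard strengthening: append 0x80, zero-pad, then encode
    # the original message length in bits into the final 8 bytes, so
    # messages of different lengths never pad to the same block sequence.
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
    padded += struct.pack(">Q", 8 * len(message))
    state = iv
    for i in range(0, len(padded), BLOCK):  # iterate over the blocks
        state = toy_compress(state, padded[i:i + BLOCK])
    return state  # final chaining value is the hash value
```

The security argument behind this design is that a collision for the iterated hash implies a collision for the compression function, which is why the construction reduces hashing arbitrary-length inputs to compressing fixed-size blocks.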

Book ChapterDOI
12 Aug 2010
TL;DR: First results for the hash function of ECHO are presented: a subspace distinguisher for 5 rounds and collisions for 4 out of 8 rounds of the ECHO-256 hash function, obtained by mounting a rebound attack with multiple inbound phases to efficiently find conforming message pairs.
Abstract: In this work we present first results for the hash function of ECHO. We provide a subspace distinguisher for 5 rounds and collisions for 4 out of 8 rounds of the ECHO-256 hash function. The complexities are 2^96 compression function calls for the distinguisher and 2^64 for the collision attack. The memory requirements are 2^64 for all attacks. To get these results, we consider new and sparse truncated differential paths through ECHO. We are able to construct these paths by analyzing the combined MixColumns and BigMixColumns transformation. Since in these sparse truncated differential paths at most one fourth of all bytes of each ECHO state are active, missing degrees of freedom are not a problem. Therefore, we are able to mount a rebound attack with multiple inbound phases to efficiently find conforming message pairs for ECHO.

Book ChapterDOI
12 Dec 2010
TL;DR: A brief outline of the state of the art of hash functions halfway through the SHA-3 competition is presented, together with an attempt to identify open research issues.
Abstract: Cryptographic hash functions are an essential building block for security applications. Until 2005, the amount of theoretical research and cryptanalysis invested in this topic was rather limited. Of the hundred designs published before 2005, about 80% were cryptanalyzed; this includes widely used hash functions such as MD4 and MD5. Moreover, serious shortcomings have been identified in the theoretical foundations of existing designs. In response to this hash function crisis, a large number of papers have been published with theoretical results and novel designs. In November 2007, NIST announced the start of the SHA-3 competition, with the goal of selecting a new hash function family by 2012. About half of the 64 submissions were broken within months. We present a brief outline of the state of the art of hash functions halfway through the competition and attempt to identify open research issues.