
Showing papers on "Collision attack published in 2016"


Proceedings ArticleDOI
21 Feb 2016
TL;DR: A new class of transcript collision attacks on key exchange protocols that rely on efficient collision-finding algorithms on the underlying hash constructions are identified, demonstrating the urgent need for disabling all uses of weak hash functions in mainstream protocols.
Abstract: In response to high-profile attacks that exploit hash function collisions, software vendors have started to phase out the use of MD5 and SHA-1 in third-party digital signature applications such as X.509 certificates. However, weak hash constructions continue to be used in various cryptographic constructions within mainstream protocols such as TLS, IKE, and SSH, because practitioners argue that their use in these protocols relies only on second preimage resistance, and hence is unaffected by collisions. This paper systematically investigates and debunks this argument. We identify a new class of transcript collision attacks on key exchange protocols that rely on efficient collision-finding algorithms on the underlying hash constructions. We implement and demonstrate concrete credential-forwarding attacks on TLS 1.2 client authentication, TLS 1.3 server authentication, and TLS channel bindings. We describe almost-practical impersonation and downgrade attacks in TLS 1.1, IKEv2 and SSH-2. As far as we know, these are the first collision-based attacks on the cryptographic constructions used in these popular protocols. Our practical attacks on TLS were responsibly disclosed (under the name SLOTH) and have resulted in security updates to several TLS libraries. Our analysis demonstrates the urgent need for disabling all uses of weak hash functions in mainstream protocols, and our recommendations have been incorporated in the upcoming Token Binding and TLS 1.3 protocols.
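The core observation above can be sketched at toy scale: if the hash over which a signature is computed is collision-weak, a signature over one transcript verifies for a different, attacker-chosen transcript. The sketch below is illustrative only (MD5 truncated to 24 bits stands in for a weak hash, and HMAC stands in for a signature scheme); it is not the SLOTH attack itself.

```python
import hashlib
import hmac

def weak_hash(data: bytes) -> bytes:
    # Stand-in for a weak hash: MD5 truncated to 24 bits, so that a
    # birthday-style collision search finishes in seconds in this toy.
    return hashlib.md5(data).digest()[:3]

# Find two *different* transcripts with the same weak hash: one from an
# honest session, one controlled by the attacker (a birthday search).
honest, attacker = {}, {}
i = 0
while True:
    ht = b"honest-transcript|nonce=%d" % i
    hh = weak_hash(ht)
    if hh in attacker:
        t_honest, t_attacker = ht, attacker[hh]
        break
    honest[hh] = ht
    at = b"attacker-transcript|nonce=%d" % i
    ah = weak_hash(at)
    if ah in honest:
        t_honest, t_attacker = honest[ah], at
        break
    attacker[ah] = at
    i += 1

# A signature over hash(t_honest) verifies equally for t_attacker:
# this is the core of a transcript collision attack.
key = b"server-signing-key"
sig = hmac.new(key, weak_hash(t_honest), hashlib.sha256).digest()
forged_ok = hmac.compare_digest(
    sig, hmac.new(key, weak_hash(t_attacker), hashlib.sha256).digest())
assert t_honest != t_attacker and forged_ok
```

Note the search needs full collisions, not second preimages, which is exactly why the "we only rely on second preimage resistance" argument debunked by this paper fails.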

114 citations


Proceedings ArticleDOI
24 Oct 2016
TL;DR: In this article, the authors demonstrate two concrete attacks that exploit collisions on short block ciphers, such as 3DES and Blowfish, and evaluate the impact of their attacks by measuring the use of 64-bit block ciphers in real-world protocols.
Abstract: While modern block ciphers, such as AES, have a block size of at least 128 bits, there are many 64-bit block ciphers, such as 3DES and Blowfish, that are still widely supported in Internet security protocols such as TLS, SSH, and IPsec. When used in CBC mode, these ciphers are known to be susceptible to collision attacks when they are used to encrypt around \(2^{32}\) blocks of data (the so-called birthday bound). This threat has traditionally been dismissed as impractical since it requires some prior knowledge of the plaintext and even then, it only leaks a few secret bits per gigabyte. Indeed, practical collision attacks have never been demonstrated against any mainstream security protocol, leading to the continued use of 64-bit ciphers on the Internet. In this work, we demonstrate two concrete attacks that exploit collisions on short block ciphers. First, we present an attack on the use of 3DES in HTTPS that can be used to recover a secret session cookie. Second, we show how a similar attack on Blowfish can be used to recover HTTP BasicAuth credentials sent over OpenVPN connections. In our proof-of-concept demos, the attacker needs to capture about 785 GB of data, which takes between 19 and 38 hours in our setting. This complexity is comparable to the recent RC4 attacks on TLS: the only fully implemented attack takes 75 hours. We evaluate the impact of our attacks by measuring the use of 64-bit block ciphers in real-world protocols. We discuss mitigations, such as disabling all 64-bit block ciphers, and report on the response of various software vendors to our responsible disclosure of these attacks.
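The birthday bound driving this attack is easy to check numerically: after n CBC-encrypted blocks under a b-bit block cipher, the chance of a ciphertext collision (which leaks the XOR of two plaintext blocks) is roughly 1 - exp(-n(n-1)/2^(b+1)). A quick sketch:

```python
import math

def cbc_collision_probability(num_blocks: float, block_bits: int) -> float:
    # Birthday approximation: p = 1 - exp(-n(n-1) / 2^(b+1));
    # expm1 keeps precision when the probability is tiny.
    n = num_blocks
    return -math.expm1(-n * (n - 1) / 2.0 ** (block_bits + 1))

# After 2^32 CBC-encrypted 64-bit blocks (~32 GB), a collision is likely:
p64 = cbc_collision_probability(2.0 ** 32, 64)
# The same traffic volume under a 128-bit block cipher such as AES is harmless:
p128 = cbc_collision_probability(2.0 ** 32, 128)
print(f"64-bit block: p = {p64:.3f}; 128-bit block: p = {p128:.1e}")
```

This is why the paper's mitigation advice targets the block size itself rather than the mode of operation.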

106 citations


Book ChapterDOI
08 May 2016
TL;DR: This is the first practical break of the full SHA-1, reaching all 80 out of 80 steps, and it further shows how GPUs can be used very efficiently for this kind of attack.
Abstract: This article presents an explicit freestart colliding pair for SHA-1, i.e. a collision for its internal compression function. This is the first practical break of the full SHA-1, reaching all 80 out of 80 steps. Only 10 days of computation on a 64-GPU cluster were necessary to perform this attack, for a runtime cost equivalent to approximately \(2^{57.5}\) calls to the compression function of SHA-1 on GPU. This work builds on a continuous series of cryptanalytic advancements on SHA-1 since the theoretical collision attack breakthrough of 2005. In particular, we reuse the recent work on 76-step SHA-1 of Karpman et al. from CRYPTO 2015 that introduced an efficient framework to implement freestart collisions on GPUs; we extend it by incorporating more sophisticated accelerating techniques such as boomerangs. We also rely on the results of Stevens from EUROCRYPT 2013 to obtain optimal attack conditions; using these techniques required further refinements for this work. Freestart collisions do not directly imply a collision for the full hash function. However, this work is an important milestone towards an actual SHA-1 collision and it further shows how GPUs can be used very efficiently for this kind of attack. Based on the state-of-the-art collision attack on SHA-1 by Stevens from EUROCRYPT 2013, we are able to present new projections on the computational and financial cost required for a SHA-1 collision computation. These projections are significantly lower than what was previously anticipated by the industry, due to the use of the more cost-efficient GPUs compared to regular CPUs. We therefore recommend the industry, in particular Internet browser vendors and Certification Authorities, to retract SHA-1 quickly. We hope the industry has learned from the events surrounding the cryptanalytic breaks of MD5 and will retract SHA-1 before concrete attacks such as signature forgeries appear in the near future.
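The stated figures can be sanity-checked with back-of-the-envelope arithmetic: 2^57.5 compression calls over 64 GPUs running for roughly 10 days implies a per-GPU throughput of a few billion SHA-1 compression evaluations per second, which is in line with what GPU hash rates make plausible.

```python
import math

# Rough sanity check of the abstract's numbers: total work divided by
# total GPU-seconds gives the implied per-GPU compression-call rate.
calls = 2.0 ** 57.5
gpu_seconds = 64 * 10 * 86_400  # 64 GPUs for ~10 days
rate_log2 = math.log2(calls / gpu_seconds)
print(f"~2^{rate_log2:.1f} compression calls per GPU-second")
```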

69 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe attacks on the Haraka hash functions and show how two colliding messages can be constructed in about \(2^{16}\) function evaluations, and invalidate the preimage security claim for Haraka-512/256 with an attack finding one preimage in about \(2^{192}\) function evaluations.
Abstract: In this paper, we describe attacks on the recently proposed Haraka hash functions. First, for the two hash functions Haraka-256/256 and Haraka-512/256 in the family, we show how two colliding messages can be constructed in about \(2^{16}\) function evaluations. Second, we invalidate the preimage security claim for Haraka-512/256 with an attack finding one preimage in about \(2^{192}\) function evaluations. These attacks are possible thanks to symmetries in the internal state that are preserved over several rounds.

52 citations


Journal ArticleDOI
TL;DR: A new chaotic system is proposed and employed to design a secure and fast hash function, which has a dynamic random array of functions and can be implemented by a parallel architecture; cryptanalysis proves the security of the proposed function.
Abstract: Hash functions play an important role in the information security era. Although there are different methods to design these functions, in recent years chaos theory has emerged as a strong solution in this area. Chaotic hash functions use one-dimensional maps such as logistic and tent, or employ complex multi-dimensional maps which are typically insecure or slow, and most of them have been successfully attacked. In this paper, we propose a new chaotic system and employ it to design a secure and fast hash function. The improved security factor has roots in the hypersensitivity of the proposed chaotic map, while properties like speed and security can be parameterized. On the other hand, the proposed hash function has a dynamic random array of functions and can be implemented by a parallel architecture. This data-level parallel architecture makes it fast to generate the hash value. Statistical simulations show the success of the proposed hashing scheme. Cryptanalysis of the proposed function, such as key sensitivity, meet-in-the-middle attack, collision, preimage resistance and high-level attacks, proves the security of the proposed function.

48 citations


Journal ArticleDOI
TL;DR: The kite generator is introduced as a new tool to attack any dithering sequence over a small alphabet and the second-preimage security of the basic tree hash construction is analysed.
Abstract: In this work, we present several new generic second-preimage attacks on hash functions. Our first attack is based on the herding attack and applies to various Merkle-Damgard-based iterative hash functions. Compared to the previously known long-message second-preimage attacks, our attack offers more flexibility in choosing the second-preimage message at the cost of a small computational overhead. More concretely, our attack allows the adversary to replace only a few blocks in the original target message to obtain the second preimage. As a result, our new attack is applicable to constructions previously believed to be immune to such second-preimage attacks. Among others, these include the dithered hash proposal of Rivest, Shoup's UOWHF, and the ROX constructions. In addition, we also suggest several time-memory-data tradeoff attack variants, allowing for a faster online phase, and even finding second preimages for shorter messages. We further extend our attack to sequences stronger than the ones suggested in Rivest's proposal. To this end we introduce the kite generator as a new tool to attack any dithering sequence over a small alphabet. Additionally, we analyse the second-preimage security of the basic tree hash construction. Here we also propose several second-preimage attacks and their time-memory-data tradeoff variants. Finally, we show how both our new and the previous second-preimage attacks can be applied even more efficiently when multiple short messages, rather than a single long target message, are available.
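The generic long-message second-preimage idea these attacks build on is visible at toy scale: record every intermediate chaining value of a long target message, then search for a single block that links the IV to one of them and reuse the tail. The sketch below uses a 16-bit truncated compression function and deliberately omits length padding (handling the padding is exactly what expandable messages in the Kelsey-Schneier attack are for).

```python
import hashlib

def f(state: bytes, block: bytes) -> bytes:
    # Toy 16-bit compression function (truncated SHA-256), small enough
    # that the generic attack succeeds in milliseconds.
    return hashlib.sha256(state + block).digest()[:2]

IV = b"\x00\x00"

def md_hash(blocks) -> bytes:
    # Plain Merkle-Damgard iteration (length padding omitted here).
    h = IV
    for b in blocks:
        h = f(h, b)
    return h

# Long target message: 4096 two-byte blocks, with every intermediate
# chaining value recorded along the way.
target = [i.to_bytes(2, "big") for i in range(4096)]
states, h = {}, IV
for idx, b in enumerate(target):
    h = f(h, b)
    states[h] = idx  # chaining value after processing block idx

# Find one linking block that maps the IV onto some recorded state,
# then reuse the tail of the target message.
i = 0
while True:
    cand = b"link-%d" % i
    if f(IV, cand) in states:
        link_idx = states[f(IV, cand)]
        break
    i += 1

second = [cand] + target[link_idx + 1:]
assert second != target and md_hash(second) == md_hash(target)
```

With a 16-bit state and 4096 recorded values, the expected number of linking attempts is only about 16, mirroring how a 2^k-block target message reduces the 2^n second-preimage cost to roughly 2^(n-k).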

26 citations


Proceedings ArticleDOI
22 May 2016
TL;DR: This paper performs the first systematic study of the underlying causes of this Man-in-the-Middle attack vector, defines and quantifies a candidate measure of attack surface in terms of "highly-vulnerable domains" (domains routinely exposing a large number of potential victims), and uses it to perform a systematic assessment of the vulnerability status.
Abstract: Recently, Man in the Middle (MitM) attacks on web browsing have become easier than they have ever been before because of a problem called "Name Collision" and a protocol called the Web Proxy Auto-Discovery (WPAD) protocol. This name collision attack can cause all web traffic of an Internet user to be redirected to a MitM proxy automatically right after the launching of a standard browser. The underlying problem of this attack is internal namespace WPAD query leakage, which itself is a known problem for years. However, it remains understudied since it was not easily exploitable before the recent new gTLD (generic Top-Level Domains) delegation. In this paper, we focus on this newly-exposed MitM attack vector and perform the first systematic study of the underlying problem causes and its vulnerability status in the wild. First, we show the severity of the problem by characterizing leaked WPAD query traffic to the DNS root servers, and find that a major cause of the leakage problem is actually a result of settings on the end user devices. More specifically, we find that under common settings, devices can mistakenly generate internal queries when used outside an internal network (e.g., used at home). Second, we define and quantify a candidate measure of attack surface by defining "highly-vulnerable domains", which are domains routinely exposing a large number of potential victims, and use it to perform a systematic assessment of the vulnerability status. We find that almost all leaked queries are for new gTLD domains we define to be highly-vulnerable, indirectly validating our attack surface definition. We further find that 10% of these highly-vulnerable domains have already been registered, making the corresponding users immediately vulnerable to the exploit at any time. Our results provide a strong and urgent message to deploy proactive protection. We discuss promising directions for remediation at the new gTLD registry, Autonomous System (AS), and end user levels, and use empirical data analysis to estimate and compare their effectiveness and deployment difficulties.
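The leakage mechanism can be illustrated with a small sketch of suffix devolution: a client configured with an internal DNS suffix probes WPAD hostnames by stripping leading labels, and once the trailing labels coincide with a delegated new gTLD, the query escapes to the public DNS. This is illustrative of the leak, not an exact reproduction of any particular operating system's devolution algorithm.

```python
def wpad_candidate_urls(search_suffix: str) -> list:
    # Candidate WPAD URLs a client may probe by successively stripping
    # leading labels from its configured DNS search suffix.
    labels = search_suffix.split(".")
    return ["http://wpad." + ".".join(labels[i:]) + "/wpad.dat"
            for i in range(len(labels) - 1)]

# A laptop configured with an internal suffix and used at home leaks a
# query whose suffix (".network") is now a delegated new gTLD; whoever
# registers wpad.example.network could serve a malicious proxy file.
urls = wpad_candidate_urls("corp.example.network")
print(urls)
```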

23 citations


Journal ArticleDOI
Yantao Li
01 May 2016 - Optik
TL;DR: This paper utilizes message extension to enhance the correlation of plaintexts in the message and an aggregation operation to improve the correlation of sequences of message blocks, which significantly increases the sensitivity between message and hash values, thereby greatly resisting collisions.

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new cryptanalysis method for double-branch hash functions and applied it on the standard RIPEMD-128, greatly improving over previously known results on this algorithm.
Abstract: In this article we propose a new cryptanalysis method for double-branch hash functions and we apply it on the standard RIPEMD-128, greatly improving over previously known results on this algorithm. Namely, we are able to build a very good differential path by placing one nonlinear differential part in each computation branch of the RIPEMD-128 compression function, but not necessarily in the early steps. In order to handle the low differential probability induced by the nonlinear part located in later steps, we propose a new method for using the available degrees of freedom, by attacking each branch separately and then merging them with free message blocks. Overall, we present the first collision attack on the full RIPEMD-128 compression function as well as the first distinguisher on the full RIPEMD-128 hash function. Experiments on a reduced number of rounds were conducted, confirming our reasoning and complexity analysis. Our results show that 16-year-old RIPEMD-128, one of the last unbroken primitives belonging to the MD-SHA family, might not be as secure as originally thought.

11 citations


Journal ArticleDOI
15 Jul 2016
TL;DR: This paper investigates the use of hash value truncation in preserving ID anonymity in WSNs and the impact of hash value truncation on four criteria attributes (security against brute force attacks, probability of pseudonym collisions, energy trade-off and end-to-end packet delivery delay), and reports the possible impacts of other factors.
Abstract: Hash functions have been used to address security requirements such as integrity, message authentication and non-repudiation. In WSNs, these functions are also used to preserve sensor nodes' identity (ID) anonymity, i.e., they are used to generate and verify dynamic pseudonyms that are used to identify sensor nodes in a communication session. In this latter application, there is an open issue as to how long the output of a hash function (i.e. hash value) we should use in pseudonym generation. The longer the hash value, the longer is the pseudonym, thus the harder it is to guess a pseudonym that is generated by using a hash function. On the other hand, the use of a longer hash value also means that the bandwidth and energy costs in transmitting the pseudonym will be higher. As sensor nodes typically have limited resources and are battery powered, the balance between the protection level of ID anonymity and performance and energy costs incurred in providing such a protection is an open issue. This paper investigates the use of hash value truncation in preserving ID anonymity in WSNs and the impact of hash value truncation on four criteria attributes (security against brute force attacks, probability of pseudonym collisions, energy trade-off and end-to-end packet delivery delay). It reports the possible impacts of other factors including the type and usage of hash functions, sensor node capabilities, adversary capabilities, ability to resolve pseudonym collisions, network density and data collection rate. The results show that the impacts of these factors may be contradictory. Therefore, the determination of an optimal level of hash value truncation should consider all trade-offs brought by these factors.
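The trade-off the paper studies can be made concrete with a small sketch: a truncated keyed hash serves as the pseudonym, and the truncation length simultaneously controls transmission cost, blind-guessing probability, and network-wide collision probability. SHA-256 and the key/ID layout below are illustrative choices, not the paper's scheme.

```python
import hashlib
import math

def pseudonym(session_key: bytes, node_id: bytes, trunc_bytes: int) -> bytes:
    # Dynamic pseudonym: keyed hash of the node ID, truncated to
    # trunc_bytes to save bandwidth and radio energy.
    return hashlib.sha256(session_key + node_id).digest()[:trunc_bytes]

# Shorter pseudonyms are cheaper to transmit but easier to hit by a
# blind guess, and more likely to collide across a 1000-node network:
for bits in (16, 32, 64):
    guess = 2.0 ** -bits                                     # one blind guess
    collide = -math.expm1(-1000 * 999 / 2.0 ** (bits + 1))   # any pair collides
    print(f"{bits:2d}-bit pseudonym: guess {guess:.1e}, collision {collide:.1e}")

p = pseudonym(b"session-key", b"node-7", 4)
```

At 16 bits a collision among 1000 nodes is near-certain, while at 64 bits it is negligible; the optimal truncation sits between these extremes, which is why the paper weighs the criteria jointly.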

8 citations


Journal ArticleDOI
TL;DR: A timestamp-defined hash algorithm is proposed in the present work for secure data dissemination among vehicles; it fulfils the basic properties of a one-way unkeyed hash function, such as preimage resistance and collision resistance.

Patent
08 Jan 2016
TL;DR: In this paper, a method of providing a hash value for a piece of data, where the hash value provides for a time-stamp for the piece-of-data upon verification, is provided.
Abstract: There is provided a method of providing a hash value for a piece of data, where the hash value provides for a time-stamp for the piece of data upon verification. The method comprises deriving one-time signing keys of a signer's one-time signing key hash chain by a one-way function of a secret key of the signer and a function of an index of the one-time signing key, and providing the hash value for the piece of data by a hash function including the piece of data and the derived one-time signing key. An electronic device comprising a processor arranged to implement a functional module for deriving a one-time signing key and providing a hash value for a piece of data by a hash function including the piece of data and the derived one-time signing key is also disclosed. The functional module is arranged to perform the method. A computer program for implementing the method on the electronic device is also disclosed.
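The derivation described above can be sketched in a few lines. SHA-256 as the one-way function and the concatenation layout are assumptions of this sketch, not details taken from the patent:

```python
import hashlib

def derive_otk(secret_key: bytes, index: int) -> bytes:
    # One-time signing key: a one-way function of the signer's secret
    # key and a function of the key index, as the abstract describes.
    return hashlib.sha256(secret_key + index.to_bytes(8, "big")).digest()

def timestamp_hash(data: bytes, secret_key: bytes, index: int) -> bytes:
    # Hash value over the piece of data and the derived one-time key.
    return hashlib.sha256(data + derive_otk(secret_key, index)).digest()

h7 = timestamp_hash(b"document", b"signer-secret", 7)
h8 = timestamp_hash(b"document", b"signer-secret", 8)
assert h7 != h8  # each index yields a fresh key, hence a fresh hash value
```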

Journal ArticleDOI
TL;DR: Compared with the Chaum–Heijst–Pfitzmann hash based on a discrete logarithm problem, the new hash is lightweight, and thus it opens a door to convenience for utilization of lightweight digital signing schemes.

Book ChapterDOI
01 Jan 2016
TL;DR: A cryptographic hash function H is a function which takes arbitrary length bit strings as input and produces a fixed-length bit string as output; the output is often called a digest, hashcode or hash value.
Abstract: A cryptographic hash function H is a function which takes arbitrary length bit strings as input and produces a fixed-length bit string as output; the output is often called a digest, hashcode or hash value. Hash functions are used a lot in computer science, but the crucial difference between a standard hash function and a cryptographic hash function is that a cryptographic hash function should at least have the property of being one-way.
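The two defining traits in the paragraph above, a fixed-length digest for arbitrary-length input and one-way behavior, are easy to observe directly with a standard library hash such as SHA-256:

```python
import hashlib

# Arbitrary-length input, fixed-length digest: the defining shape of a
# cryptographic hash function, illustrated with SHA-256.
for msg in (b"", b"a", b"x" * 100_000):
    assert len(hashlib.sha256(msg).digest()) == 32  # always 256 bits

# One-wayness is about behavior, not shape: flipping a single input bit
# yields an unrelated-looking digest (the avalanche effect).
d1 = hashlib.sha256(b"message-0").hexdigest()
d2 = hashlib.sha256(b"message-1").hexdigest()
print(d1[:16], d2[:16])
```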

Book ChapterDOI
10 Aug 2016
TL;DR: Full-round collision attacks on the proposed Simpira-4 Davies-Meyer hash construction are proposed, which violate the designers’ security claims that there are no structural distinguishers with complexity below \(2^{128}\).
Abstract: Simpira v1 is a recently proposed family of permutations, based on the AES round function. The design includes recommendations for using the Simpira permutations in block ciphers, hash functions, or authenticated ciphers. The designers’ security analysis is based on computer-aided bounds for the minimum number of active S-boxes. We show that the underlying assumptions of independence, and thus the derived bounds, are incorrect. For family member Simpira-4, we provide differential trails with only 40 (instead of 75) active S-boxes for the recommended 15 rounds. Based on these trails, we propose full-round collision attacks on the proposed Simpira-4 Davies-Meyer hash construction, with complexity \(2^{82.62}\) for the recommended full 15 rounds and a truncated 256-bit hash value, and complexity \(2^{110.16}\) for 16 rounds and the full 512-bit hash value. These attacks violate the designers’ security claims that there are no structural distinguishers with complexity below \(2^{128}\).

Book ChapterDOI
04 Dec 2016
TL;DR: Tweakable blockcipher is a powerful tool to design authenticated encryption schemes as illustrated by Minematsu’s Offset Two Rounds (OTR) construction.
Abstract: Tweakable blockcipher (TBC) is a powerful tool to design authenticated encryption schemes as illustrated by Minematsu’s Offset Two Rounds (OTR) construction. It considers an additional input, called tweak, to a standard blockcipher which adds some variability to this primitive. More specifically, each tweak is expected to define a different, independent pseudo-random permutation.

Book ChapterDOI
Changhai Ou, Zhu Wang, Degang Sun, Xinping Zhou, Juan Ai
29 Nov 2016
TL;DR: The authors' group verification chain is combined with MDCA to propose Group Verification based Multiple-Differential Collision Attack (GV-MDCA), which significantly improves the efficiency of fault tolerant chain.
Abstract: Bogdanov and Kizhvatov proposed the concept of test of chain, but they didn't give a practical scheme. Wang et al. proposed fault tolerant chain to enhance test of chain and gave a practical scheme. However, the attack efficiency of Correlation enhanced Collision Attack (CCA) is much lower than that of Correlation Power Analysis (CPA). A combination of CCA and CPA in fault tolerant chain proposed by Wang et al. may be unreasonable. Most importantly, when the threshold \(Thr_{\varDelta }\) introduced in Sect. 2.3 is large, the key recovery becomes very complex. Fault tolerant chain is inapplicable to this situation. In order to solve these problems, we propose a kind of new chain named group verification chain in this paper. We combine our group verification chain with MDCA and propose Group Verification based Multiple-Differential Collision Attack (GV-MDCA). Experiments on the power trace set downloaded from the DPA contest v4 website show that our group verification chain significantly improves the efficiency of fault tolerant chain.

Journal ArticleDOI
TL;DR: This paper introduces a new type of collision attack on first‐order masked Advanced Encryption Standard that requires significantly fewer power measurements than any second‐order differential power analysis or existing collision attacks.

Book ChapterDOI
08 May 2016
TL;DR: The concatenation combiner of hash functions with an n-bit internal state does not offer better collision and preimage resistance than a single strong n-bit hash function; the problem of devising second preimage attacks faster than \(2^n\) against this combiner had remained open since 2005, when Kelsey and Schneier showed that a single Merkle-Damgard hash function does not offer optimal second preimage resistance for long messages.
Abstract: We study the security of the concatenation combiner \(H_1(M) \Vert H_2(M)\) for two independent iterated hash functions with n-bit outputs that are built using the Merkle-Damgard construction. In 2004 Joux showed that the concatenation combiner of hash functions with an n-bit internal state does not offer better collision and preimage resistance compared to a single strong n-bit hash function. On the other hand, the problem of devising second preimage attacks faster than \(2^n\) against this combiner has remained open since 2005, when Kelsey and Schneier showed that a single Merkle-Damgard hash function does not offer optimal second preimage resistance for long messages. In this paper, we develop new algorithms for cryptanalysis of hash combiners and use them to devise the first second preimage attack on the concatenation combiner. The attack finds second preimages faster than \(2^n\) for messages longer than \(2^{2n/7}\) and has optimal complexity of \(2^{3n/4}\). This shows that the concatenation of two Merkle-Damgard hash functions is not as strong as a single ideal hash function. Our methods are also applicable to other well-studied combiners, and we use them to devise a new preimage attack with complexity of \(2^{2n/3}\) on the XOR combiner \(H_1(M) \oplus H_2(M)\) of two Merkle-Damgard hash functions. This improves upon the attack by Leurent and Wang presented at Eurocrypt 2015, whose complexity is \(2^{5n/6}\) but, unlike our attack, is also applicable to HAIFA hash functions. Our algorithms exploit properties of random mappings generated by fixing the message block input to the compression functions of \(H_1\) and \(H_2\). Such random mappings have been widely used in cryptanalysis, but we exploit them in new ways to attack hash function combiners.
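The two combiner shapes attacked in this paper are simple to write down. The instantiation below uses SHA-256 and SHA3-256 purely to show the constructions; note the paper's attacks specifically target two Merkle-Damgard functions, whereas SHA3-256 is a sponge, so this pairing only illustrates the interface, not the attack setting:

```python
import hashlib

def concat_combiner(msg: bytes) -> bytes:
    # H1(M) || H2(M): 2n bits of output, yet (per Joux 2004 and this
    # paper's second preimage attack) not much stronger than a single
    # ideal n-bit hash when both halves are Merkle-Damgard.
    return hashlib.sha256(msg).digest() + hashlib.sha3_256(msg).digest()

def xor_combiner(msg: bytes) -> bytes:
    # H1(M) XOR H2(M): the other combiner attacked in the paper.
    a = hashlib.sha256(msg).digest()
    b = hashlib.sha3_256(msg).digest()
    return bytes(x ^ y for x, y in zip(a, b))

m = b"example message"
assert len(concat_combiner(m)) == 64  # 2n = 512 bits of output
assert len(xor_combiner(m)) == 32     # n = 256 bits of output
```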


Book ChapterDOI
20 Mar 2016
TL;DR: It is shown that it is actually possible to mount rebound attacks, despite the presence of modular constant additions in the hash function Kupyna, and how to use the rebound attack for creating collisions for the round-reduced hash function itself.
Abstract: The hash function Kupyna was recently published as the Ukrainian standard DSTU 7564:2014. It is structurally very similar to the SHA-3 finalist Grøstl, but differs in details of the round transformations. Most notably, some of the round constants are added with a modular addition, rather than bitwise xor. This change prevents a straightforward application of some recent attacks, in particular of the rebound attacks on the compression function of similar AES-like hash constructions. However, we show that it is actually possible to mount rebound attacks, despite the presence of modular constant additions. More specifically, we describe collision attacks on the compression function for 6 out of 10 rounds of Kupyna-256 with an attack complexity of $$2^{70}$$, and for 7 rounds with complexity $$2^{125.8}$$. In addition, we can use the rebound attack for creating collisions for the round-reduced hash function itself. This is possible for 4 rounds of Kupyna-256 with complexity $$2^{67}$$ and for 5 rounds with complexity $$2^{120}$$.

Proceedings ArticleDOI
01 Sep 2016
TL;DR: A method for designing a one-way cryptographic hash function and a block ciphering scheme based on the proposed hash codes is put forward, and the experimental outcomes demonstrate the striking performance of the proposed chaotic hash method.
Abstract: Secure hashes have an indispensable purpose to play in modern multimedia image encryptions. Traditional block ciphering techniques are quite complex, command colossal processing time for key generation and sometimes are a source of redundancy. This paper proposes a method for designing a one-way cryptographic hash function and a block ciphering scheme based on the proposed hash codes. In the proposed work, we have divided the message into blocks, with each block individually processed by chaotic systems. The transitional hashes are created utilizing advanced control and input parameters. The two hash codes are utilized to create a final hash. The experimental outcomes demonstrate the striking performance of the proposed chaotic hash method. Moreover, the generated hash code is applied for realizing an image block ciphering technique. The encryption process is plain-image dependent and thereby exhibits a satisfactory encryption effect suitable for practical applications.

Journal ArticleDOI
TL;DR: It is proved that \(S^r\) achieves asymptotically optimal collision security against semi-adaptive adversaries up to almost \(2^{n/2}\) queries and that it can be made preimage secure up to \(2^n\) queries using a simple tweak.
Abstract: A well-established method of constructing hash functions is to base them on non-compressing primitives, such as one-way functions or permutations. In this work, we present \(S^r\), an \(rn\)-to-\(n\)-bit compression function (for \(r \ge 1\)) making \(2r-1\) calls to \(n\)-to-\(n\)-bit primitives (random functions or permutations). \(S^r\) compresses its inputs at a rate (the amount of message blocks per primitive call) up to almost 1/2, and it outperforms all existing schemes with respect to rate and/or the size of underlying primitives. For instance, instantiated with the 1600-bit permutation of NIST's SHA-3 hash function standard, it offers about 800-bit security at a rate of almost 1/2, while SHA-3-512 itself achieves only 512-bit security at a rate of about 1/3. We prove that \(S^r\) achieves asymptotically optimal collision security against semi-adaptive adversaries up to almost \(2^{n/2}\) queries and that it can be made preimage secure up to \(2^n\) queries using a simple tweak.

Journal ArticleDOI
TL;DR: The slow diffusion of the AES key schedule for 256-bit keys is observed and a weakness is found which can be used in a preimage attack on its Davies-Meyer mode, comparable with Bogdanov et al.'s biclique-based preimage attack.
Abstract: We observe the slow diffusion of the AES key schedule for 256-bit keys and find a weakness which can be used in the preimage attack on its Davies-Meyer mode. Our preimage attack works for 8 rounds of AES-256 with the computational complexity of \(2^{124.9}\). It is comparable with Bogdanov et al.'s biclique-based preimage attack on AES-256, which is applicable up to full rounds but has a computational complexity of more than \(2^{126.5}\). We also extend our result to the preimage attack on some well-known double-block-length hash modes assuming the underlying block cipher is 8-round AES-256, whose computational complexity is \(2^{252.9}\).
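The Davies-Meyer mode targeted here turns a block cipher into a compression function by keying the cipher with the message block and feeding the chaining value forward: h' = E_m(h) XOR h. A minimal sketch of that shape, using a hash-based toy permutation as a stand-in for AES-256 (the real attack of course depends on the actual AES key schedule):

```python
import hashlib

BLOCK = 16  # bytes of chaining value in this toy

def toy_cipher(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in keyed permutation (NOT AES-256); it only serves to show
    # the Davies-Meyer shape h' = E_m(h) XOR h used by the attack target.
    return hashlib.sha256(key + plaintext).digest()[:BLOCK]

def davies_meyer(message_blocks, iv: bytes = b"\x00" * BLOCK) -> bytes:
    h = iv
    for m in message_blocks:
        # The message block keys the cipher, the chaining value is the
        # plaintext, and the chaining value is fed forward with XOR.
        h = bytes(a ^ b for a, b in zip(toy_cipher(m, h), h))
    return h

d1 = davies_meyer([b"block-1", b"block-2"])
d2 = davies_meyer([b"block-1", b"block-x"])
assert len(d1) == BLOCK and d1 != d2
```

Because the attacker controls the cipher key (the message block), structural weaknesses in the key schedule, such as the slow diffusion noted above, translate directly into attacks on the hash mode.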

Book ChapterDOI
10 Aug 2016
TL;DR: New second preimage attacks on the dithered Merkle-Damgard construction are presented, which consume significantly less memory in the online phase; an essentially memoryless variant of Andreeva et al.'s attack is also presented.
Abstract: Dithered hash functions were proposed by Rivest as a method to mitigate second preimage attacks on Merkle-Damgard hash functions. Despite that, second preimage attacks against dithered hash functions were proposed by Andreeva et al. One issue with these second preimage attacks is their huge memory requirement in the precomputation and the online phases. In this paper, we present new second preimage attacks on the dithered Merkle-Damgard construction. These attacks consume significantly less memory in the online phase (with a negligible increase in the online time complexity) than previous attacks. For example, in the case of MD5 with the Keranen sequence, we reduce the memory complexity from about \(2^{51}\) blocks to about \(2^{26.7}\) blocks (about 545 MB). We also present an essentially memoryless variant of Andreeva et al.'s attack. In the case of MD5-Keranen or SHA1-Keranen, the offline and online memory complexity is \(2^{15.2}\) message blocks (about 188–235 KB), at the expense of increasing the offline time complexity.

Dissertation
01 Mar 2016
TL;DR: This thesis analyzes the security of two cryptographic hash functions and one block cipher used in the new Russian Federation cryptographic hashing and encryption suite GOST and investigates the one-wayness of Streebog and the preimage resistance of the AES-based Maelstrom-0 hash function.
Abstract: Current information security systems rely heavily on symmetric key cryptographic primitives as one of their basic building blocks. In order to boost the efficiency of the security systems, designers of the underlying primitives often tend to avoid the use of provably secure designs. In fact, they adopt ad hoc designs with claimed security assumptions in the hope that they resist known cryptanalytic attacks. Accordingly, the security evaluation of such primitives continually remains an open field. In this thesis, we analyze the security of two cryptographic hash functions and one block cipher. We primarily focus on the recent AES-based designs used in the new Russian Federation cryptographic hashing and encryption suite GOST because the majority of our work was carried out during the open research competition run by the Russian standardization body TC26 for the analysis of their new cryptographic hash function Streebog. Although there exist security proofs for the resistance of AES-based primitives against standard differential and linear attacks, other cryptanalytic techniques such as integral, rebound, and meet-in-the-middle attacks have proven to be effective. The results presented in this thesis can be summarized as follows: Initially, we analyze various security aspects of the Russian cryptographic hash function GOST R 34.11-2012, also known as Streebog or Stribog. In particular, our work investigates five security aspects of Streebog. Firstly, we present a collision analysis of the compression function and its internal cipher in the form of a series of modified rebound attacks. Secondly, we propose an integral distinguisher for the 7- and 8-round compression function.
Thirdly, we investigate the one-wayness of Streebog with respect to two approaches of the meet-in-the-middle attack, where we present a preimage analysis of the compression function and combine the results with a multicollision attack to generate a preimage of the hash function output. Fourthly, we investigate Streebog in the context of malicious hashing and, by utilizing a carefully tailored differential path, we present a backdoored version of the hash function where collisions can be generated with practical complexity. Lastly, we propose a fault analysis attack which retrieves the inputs of the compression function and utilizes them to recover the secret key when Streebog is used in the keyed simple prefix and secret-IV MACs, HMAC, or NMAC. All the presented results are on reduced-round variants of the function, except for our analysis of the malicious version of Streebog and our fault analysis attack, both of which cover the full-round hash function. Next, we examine the preimage resistance of the AES-based Maelstrom-0 hash function, which is designed to be a lightweight alternative to the ISO-standardized hash function Whirlpool. One of the distinguishing features of the Maelstrom-0 design is the proposal of a new chaining construction called 3CM, which is based on the 3C/3C+ family. In our analysis, we employ a 4-stage approach that uses a modified technique to defeat the 3CM chaining construction and generates preimages of the 6-round reduced Maelstrom-0 hash function. Finally, we provide a key recovery attack on the new Russian encryption standard GOST R 34.12-2015, also known as Kuznyechik. Although Kuznyechik adopts an AES-based design, it exhibits a faster diffusion rate as it employs an optimal diffusion transformation.
In our analysis, we propose a meet-in-the-middle attack using the idea of efficient differential enumeration, where we construct a three-round distinguisher and are consequently able to recover 16 bytes of the master key of the reduced 5-round cipher. We also present partial sequence matching, by which we generate, store, and match parts of the compared parameters while maintaining a negligible probability of matching error, thereby reducing the overall online time complexity of the attack.
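The multicollision technique that the thesis combines with its meet-in-the-middle preimage analysis follows Joux's observation: t successive collisions on an iterated compression function yield 2^t messages with the same digest. A minimal sketch, using a toy 16-bit truncated-SHA-256 compression function as a stand-in (not Streebog's actual compression function):

```python
import hashlib
from itertools import product

def compress(h: bytes, m: bytes) -> bytes:
    # Toy 16-bit compression function: truncated SHA-256 (illustration only).
    return hashlib.sha256(h + m).digest()[:2]

def find_collision(h: bytes):
    # Birthday search: two distinct blocks colliding under chaining value h.
    seen = {}
    for i in range(2**16):
        m = i.to_bytes(2, "big")
        d = compress(h, m)
        if d in seen:
            return seen[d], m, d
        seen[d] = m
    raise RuntimeError("no collision found")

def hash_chain(blocks, h=b"\x00\x00"):
    # Iterate the compression function over the message blocks.
    for b in blocks:
        h = compress(h, b)
    return h

# Joux's trick: t successive collisions give 2^t colliding messages.
h = b"\x00\x00"
pairs = []
for _ in range(3):
    m0, m1, h = find_collision(h)
    pairs.append((m0, m1))

# Every choice of one block per pair hashes to the same final value.
digests = {hash_chain(choice) for choice in product(*pairs)}
assert len(digests) == 1  # 2^3 = 8 messages, one digest
```

Against a full-width hash, each collision search costs about 2^(n/2), so a 2^t-multicollision costs only t * 2^(n/2) instead of the roughly 2^(n(2^t - 1)/2^t) expected for an ideal hash.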

Book ChapterDOI
31 Aug 2016
TL;DR: A more efficient construction of a blockcipher-based compression function is proposed, which provides a higher efficiency rate together with a satisfactory collision security bound.
Abstract: A cryptographic hash (CH) is an algorithm that takes a message of arbitrary length and returns a fixed-size output. The applications of cryptographic hashes are numerous, such as message integrity, password verification, and pseudorandom generation. Furthermore, the CH is an efficient security primitive for IoT end devices, constrained devices, and RFID. The construction of a CH depends on a compression function, where the compression function is built either from scratch or from a blockcipher. Generally, the blockcipher-based cryptographic hash is more applicable than the scratch-based hash because an existing blockcipher implementation can be reused directly. Though there are many (n, 2n) blockcipher-based compression functions, most of the prominent schemes such as MR, Weimar, Hirose, Tandem, Abreast, Nandi, and ISA09 are focused on a rigorous security bound rather than efficiency. Therefore, a more efficient construction of a blockcipher-based compression function is proposed, which provides a higher efficiency rate together with a satisfactory collision security bound. The efficiency rate (r) of the proposed scheme is r ≈ 1. Furthermore, the collision security is bounded by q = 2^125.84 (q = number of queries). Moreover, the proposed construction requires two blockcipher calls per iteration of encryption. Additionally, it has double key scheduling and its operational mode is parallel.
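For context, the Hirose scheme listed among the benchmark constructions also makes two parallel (n, 2n) blockcipher calls, but under a single key schedule. A minimal sketch of its data flow, using a truncated-SHA-256 stand-in for the blockcipher (a PRF rather than a real keyed permutation, which is enough to illustrate the structure):

```python
import hashlib

N = 16  # block size n in bytes; the cipher key is 2n bytes

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in for an (n, 2n) blockcipher: SHA-256 truncated to n bytes.
    # (Not an actual permutation -- illustration of the data flow only.)
    return hashlib.sha256(key + block).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

C = b"\x01" * N  # nonzero constant separating the two parallel calls

def hirose_compress(g: bytes, h: bytes, m: bytes):
    # One key schedule (h || m) feeds both cipher calls, which are
    # independent and can run in parallel.
    k = h + m
    g_new = xor(E(k, g), g)
    h_new = xor(E(k, xor(g, C)), xor(g, C))
    return g_new, h_new

# Iterate over two n-byte message blocks; the digest is the 2n-bit state.
g, h = bytes(N), bytes(N)
for block in [b"A" * N, b"B" * N]:
    g, h = hirose_compress(g, h, block)
digest = g + h
```

The rate-1 scheme in the abstract differs in that it processes more message bits per pair of cipher calls and uses two key schedules; the sketch above only fixes the double-call, parallel pattern the paper compares against.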

Journal ArticleDOI
TL;DR: This work proposes an (n, 2n) blockcipher compression function that is secure under the ideal cipher model, the weak cipher model, and the extended weak cipher model (ext.WCM), whereas the majority of the existing schemes need multiple key schedules.
Abstract: A cryptographic hash is an important tool in the area of modern cryptography. It comprises a compression function, where the compression function can be built from scratch or from a blockcipher. There are some familiar schemes of blockcipher compression functions, such as Weimar, Hirose, Tandem, Abreast, Nandi, and ISA-09. Interestingly, the security proofs of all the mentioned schemes are based on the ideal cipher model (ICM), which depends on an ideal environment. Therefore, it is desirable to use a proof model that is closer to the real world, such as the weak cipher model (WCM). Hence, we propose an (n, 2n) blockcipher compression function, which is secure under the ideal cipher model, the weak cipher model, and the extended weak cipher model (ext.WCM). Additionally, the majority of the existing schemes need multiple key schedules, whereas the proposed scheme and Hirose-DM follow the single key scheduling property. The efficiency rate of our scheme is r = 1/2. Moreover, the scheme makes two blockcipher calls, which run in parallel. key words: cryptographic hash, blockcipher, ideal cipher model, weak cipher model, collision and preimage resistance

Proceedings ArticleDOI
28 Dec 2016
TL;DR: A new, secure and efficient compression function based on a pseudorandom function, which takes two n-bit inputs and produces one n-bit output (2n-to-n bits), and which can be used as a candidate for developing security systems.
Abstract: Cryptographic hash functions are used to protect the integrity of information. Hash functions are often designed by using existing block ciphers as compression functions, owing to the challenges and difficulties encountered in constructing new hash functions from scratch. However, the key generation for the encryption process incurs a huge computational cost, which affects the efficiency of the hash function. This paper proposes a new, secure and efficient compression function based on a pseudorandom function that takes two n-bit inputs and produces one n-bit output (2n-to-n bits). In addition, a new keyed hash function with three variants is proposed (PinTar 128 bits, 256 bits, and 512 bits), which uses the proposed compression function as its underlying building block. Statistical analysis shows that the compression function is an efficient one-way random function. Similarly, statistical analysis of the keyed hash function shows that it has a strong avalanche property and is resistant to exhaustive key search. The proposed keyed hash function can be used as a candidate for developing security systems.
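The 2n-to-n interface and the avalanche property described above can be illustrated with a stand-in PRF-based compression function built from HMAC-SHA-256 (this is a generic sketch, not PinTar's actual construction): one n-bit input keys the PRF and the other is the message, and flipping a single input bit should change roughly half of the output bits.

```python
import hmac
import hashlib

N = 32  # n = 256 bits, i.e. 32 bytes

def compress(x: bytes, y: bytes) -> bytes:
    # 2n-to-n compression via a PRF: one n-bit half keys HMAC-SHA-256,
    # the other is the message (illustrative stand-in, not PinTar itself).
    assert len(x) == N and len(y) == N
    return hmac.new(x, y, hashlib.sha256).digest()

# Avalanche check: flip one input bit and count differing output bits.
x, y = bytes(N), bytes(N)
d0 = compress(x, y)
y_flipped = bytes([y[0] ^ 0x01]) + y[1:]
d1 = compress(x, y_flipped)
diff = sum(bin(a ^ b).count("1") for a, b in zip(d0, d1))
# diff should be close to 128 out of 256 bits for a strong avalanche
```

A statistical avalanche test, as mentioned in the abstract, would repeat this over many random inputs and bit positions and compare the distribution of `diff` against the expected binomial(256, 1/2).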

Journal ArticleDOI
TL;DR: Based on the analysis of the hash-based authentication protocols, some feasible suggestions are proposed to improve the security of RFID authentication protocols.
Abstract: Low-cost RFID tags have very limited computing and storage resources, which makes it difficult to completely solve their security and privacy problems. Lightweight authentication is considered one of the most effective methods to ensure security in an RFID system. Many lightweight authentication protocols use hash functions and pseudorandom generators to ensure the anonymity and confidentiality of communication in the RFID system. However, these protocols do not provide the security they claim. By analyzing some typical hash-based RFID authentication protocols, it is found that they are vulnerable to some common attacks: many protocols cannot resist tracing attacks and de-synchronization attacks, and some cannot provide forward security. Győző Godor and Sandor Imre proposed a hash-based authentication protocol and claimed that it could resist the well-known attacks. However, by constructing several attack scenarios, their protocol is shown to be vulnerable to tracing and de-synchronization attacks. Based on this analysis of hash-based authentication protocols, feasible suggestions are proposed to improve the security of RFID authentication protocols.
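The de-synchronization attack pattern discussed above can be made concrete with a toy hash-based protocol in which both sides roll a shared secret forward after each authentication (a hypothetical protocol for illustration, not the Godor-Imre protocol): if an attacker drops the final confirmation message, only one side updates its secret and all later authentications fail.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # Shorthand for SHA-256 over the concatenated inputs.
    return hashlib.sha256(b"".join(parts)).digest()

class Party:
    def __init__(self, secret: bytes):
        self.s = secret

shared = b"k0"
tag, server = Party(shared), Party(shared)

# Normal round: server challenges, tag responds, both roll the secret.
r = b"nonce1"
assert H(tag.s, r) == H(server.s, r)        # server accepts the response
tag.s, server.s = H(tag.s), H(server.s)     # synchronized update

# Attack round: the adversary drops the final confirmation message,
# so the server updates its secret but the tag never does.
r = b"nonce2"
assert H(tag.s, r) == H(server.s, r)        # authentication still succeeds
server.s = H(server.s)                      # server rolls forward alone

# Next round: the parties are desynchronized and authentication fails.
r = b"nonce3"
assert H(tag.s, r) != H(server.s, r)
```

Protocols typically mitigate this by having the server keep both the old and the new secret until a round completes, which is one of the standard fixes suggested for de-synchronization vulnerabilities.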