scispace - formally typeset

Showing papers on "Differential cryptanalysis published in 2016"


Journal ArticleDOI
TL;DR: It is proved that in all permutation-only image ciphers, regardless of the cipher structure, the correct permutation mapping can be recovered completely by a chosen-plaintext attack, which significantly outperforms the state-of-the-art cryptanalytic methods.
Abstract: Permutation is a commonly used primitive in multimedia (image/video) encryption schemes, and many permutation-only algorithms have been proposed in recent years for the protection of multimedia data. In permutation-only image ciphers, the entries of the image matrix are scrambled using a permutation mapping matrix built by a pseudo-random number generator. The literature on the cryptanalysis of image ciphers indicates that permutation-only image ciphers are insecure against ciphertext-only attacks and/or known/chosen-plaintext attacks. However, previous studies have not been able to ensure the correct retrieval of the complete plaintext elements. In this paper, we revisited the previous works on cryptanalysis of permutation-only image encryption schemes and made the chosen-plaintext cryptanalysis complete and more efficient. We proved that in all permutation-only image ciphers, regardless of the cipher structure, the correct permutation mapping is recovered completely by a chosen-plaintext attack. To the best of our knowledge, this paper gives the first chosen-plaintext attack that completely determines the correct plaintext elements using a deterministic method. When the plain-images are of size M×N with L different color intensities, the number n of required chosen plain-images to break the permutation-only image encryption algorithm is n = ⌈log_L(MN)⌉. The complexity of the proposed attack is O(n·MN), which indicates its feasibility in a polynomial amount of computation time. To validate the performance of the proposed chosen-plaintext attack, numerous experiments were performed on two recently proposed permutation-only image/video ciphers. Both theoretical and experimental results showed that the proposed attack outperforms the state-of-the-art cryptanalytic methods.
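The counting argument behind the attack can be illustrated with a toy implementation (a sketch under simplified assumptions, not the paper's code): encode each pixel position index in base L across n = ⌈log_L(MN)⌉ chosen images, so that after scrambling, every ciphertext position still carries a unique codeword identifying its plaintext origin.

```python
import math, random

def chosen_plaintexts(M, N, L):
    """Image j stores the j-th base-L digit of each pixel's position index,
    so every position carries a unique length-n codeword of intensities."""
    n = math.ceil(math.log(M * N, L))
    return [[(idx // L**j) % L for idx in range(M * N)] for j in range(n)]

def recover_permutation(encrypt, M, N, L):
    """Encrypt the chosen images, then read the mapping off the codewords."""
    images = chosen_plaintexts(M, N, L)
    encrypted = [encrypt(img) for img in images]
    return [sum(enc[pos] * L**j for j, enc in enumerate(encrypted))
            for pos in range(M * N)]

# Toy permutation-only "cipher": a secret scrambling of pixel positions.
M, N, L = 64, 64, 256
secret = list(range(M * N))
random.Random(42).shuffle(secret)
encrypt = lambda img: [img[secret[i]] for i in range(M * N)]

recovered = recover_permutation(encrypt, M, N, L)
assert recovered == secret   # permutation fully recovered
```

For a 64×64 image with 256 intensity levels, two chosen images suffice, matching n = ⌈log_256(4096)⌉ = 2.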

169 citations


Proceedings ArticleDOI
24 Oct 2016
TL;DR: In this article, the authors demonstrate two concrete attacks that exploit collisions on short block ciphers, such as 3DES and Blowfish, and evaluate the impact of their attacks by measuring the use of 64-bit block ciphers in real-world protocols.
Abstract: While modern block ciphers, such as AES, have a block size of at least 128 bits, there are many 64-bit block ciphers, such as 3DES and Blowfish, that are still widely supported in Internet security protocols such as TLS, SSH, and IPsec. When used in CBC mode, these ciphers are known to be susceptible to collision attacks when they are used to encrypt around 2^32 blocks of data (the so-called birthday bound). This threat has traditionally been dismissed as impractical since it requires some prior knowledge of the plaintext and even then, it only leaks a few secret bits per gigabyte. Indeed, practical collision attacks have never been demonstrated against any mainstream security protocol, leading to the continued use of 64-bit ciphers on the Internet. In this work, we demonstrate two concrete attacks that exploit collisions on short block ciphers. First, we present an attack on the use of 3DES in HTTPS that can be used to recover a secret session cookie. Second, we show how a similar attack on Blowfish can be used to recover HTTP BasicAuth credentials sent over OpenVPN connections. In our proof-of-concept demos, the attacker needs to capture about 785GB of data, which takes between 19 and 38 hours in our setting. This complexity is comparable to the recent RC4 attacks on TLS: the only fully implemented attack takes 75 hours. We evaluate the impact of our attacks by measuring the use of 64-bit block ciphers in real-world protocols. We discuss mitigations, such as disabling all 64-bit block ciphers, and report on the response of various software vendors to our responsible disclosure of these attacks.
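The birthday-bound arithmetic behind these numbers can be checked directly (a back-of-envelope estimate, not the paper's analysis):

```python
import math

def expected_collisions(n_blocks, block_bits):
    """Expected number of colliding ciphertext-block pairs in CBC mode:
    C(n, 2) / 2^b, i.e. roughly n^2 / 2^(b+1)."""
    return n_blocks * (n_blocks - 1) / 2 / 2**block_bits

# A 64-bit block cipher hits the birthday bound around 2^32 blocks:
print(expected_collisions(2**32, 64))    # ~0.5: collisions start appearing

# The demo's ~785 GB of traffic is about 2^36.5 eight-byte blocks,
# already yielding a few hundred exploitable collisions:
blocks = 785e9 / 8
print(math.log2(blocks))                 # ~36.5
print(expected_collisions(blocks, 64))   # a few hundred

# A 128-bit cipher at the same volume is nowhere near the bound:
print(expected_collisions(blocks, 128))  # essentially zero
```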

106 citations


Book ChapterDOI
08 May 2016
TL;DR: In this paper, the authors proposed a new stream cipher construction that allows constant and smaller noise by applying a Boolean filter function to a public bit permutation of a constant key register, so that the Boolean complexity of the stream cipher outputs is constant.
Abstract: Symmetric ciphers purposed for Fully Homomorphic Encryption (FHE) have recently been proposed for two main reasons. First, minimizing the implementation time and memory overheads that are inherent to current FHE schemes. Second, improving the homomorphic capacity, i.e. the amount of operations that one can perform on homomorphic ciphertexts before bootstrapping, which amounts to limiting their level of noise. Existing solutions for this purpose suggest a gap between block ciphers and stream ciphers. The first typically allow a constant but small homomorphic capacity, due to the iteration of rounds eventually leading to complex Boolean functions, hence large noise. The second typically allow a larger homomorphic capacity for the first ciphertext blocks, which decreases with the number of ciphertext blocks due to the increasing Boolean complexity of the stream ciphers' output. In this paper, we aim to combine the best of these two worlds, and propose a new stream cipher construction that allows constant and smaller noise. Its main idea is to apply a Boolean filter function to a public bit permutation of a constant key register, so that the Boolean complexity of the stream cipher outputs is constant. We also propose an instantiation of the filter function designed to exploit recent third-generation FHE schemes, where the error growth is quasi-additive when adequately multiplying ciphertexts with the same amount of noise. In order to stimulate further investigation, we then specify a few instances of this stream cipher, for which we provide a preliminary security analysis. We finally highlight the good properties of our stream cipher regarding the other goal of minimizing the time and memory complexity of calculus delegation for second-generation FHE schemes. We conclude the paper with open problems related to the large design space opened by these new constructions.
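The filter permutator idea can be sketched as follows; this is an illustrative toy (the permutation source, register size, and filter function are placeholders, not the paper's instantiation):

```python
import random

def keystream(key_bits, n_bits, seed=0):
    """Toy filter permutator: each output bit applies a fixed Boolean filter
    to a fresh *public* pseudo-random permutation of the constant key
    register, so every output bit has the same, constant Boolean complexity."""
    rng = random.Random(seed)   # public: both parties derive the same permutations
    out = []
    for _ in range(n_bits):
        perm = rng.sample(range(len(key_bits)), len(key_bits))
        x = [key_bits[i] for i in perm]
        # Hypothetical low-degree filter: a linear part plus one AND term.
        out.append(x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4]))
    return out

key = [random.Random(99).randrange(2) for _ in range(32)]
ks = keystream(key, 16, seed=5)
print(ks)
```

Because the filter always acts on a plain permutation of the same key register, each output bit is computed by one fixed-size Boolean circuit, which is what keeps the homomorphic noise constant across the whole keystream.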

99 citations


Book ChapterDOI
04 Dec 2016
TL;DR: In this article, the authors proposed the long trail design strategy (LTS), a dual of the wide-trail design strategy that is applicable (but not limited) to ARX constructions, which advocates the use of large S-boxes together with sparse linear layers.
Abstract: We present, for the first time, a general strategy for designing ARX symmetric-key primitives with provable resistance against single-trail differential and linear cryptanalysis. The latter has been a long-standing open problem in the area of ARX design. The wide-trail design strategy (WTS), which is the basis of many S-box-based ciphers, including the AES, is not suitable for ARX designs due to the lack of S-boxes in the latter. In this paper we address the mentioned limitation by proposing the long-trail design strategy (LTS), a dual of the WTS that is applicable (but not limited) to ARX constructions. In contrast to the WTS, which prescribes the use of small and efficient S-boxes at the expense of heavy linear layers with strong mixing properties, the LTS advocates the use of large (ARX-based) S-boxes together with sparse linear layers. With the help of the so-called long-trail argument, a designer can bound the maximum differential and linear probabilities for any number of rounds of a cipher built according to the LTS.

78 citations


Journal ArticleDOI
TL;DR: A comprehensive study of AFA on an ultra-lightweight block cipher called LBlock shows that a single fault injection is enough to recover the master key of LBlock within the affordable complexity in each scenario.
Abstract: Algebraic fault analysis (AFA), which combines algebraic cryptanalysis with fault attacks, has represented a serious threat to the security of lightweight block ciphers. Inspired by an earlier framework for the analysis of side-channel attacks presented at EUROCRYPT 2009, a new generic framework is proposed to analyze and evaluate algebraic fault attacks on lightweight block ciphers. We interpret AFA at three levels: 1) the target; 2) the adversary; and 3) the evaluator. We describe the capability of an adversary in four parts: 1) the fault injector; 2) the fault model describer; 3) the cipher describer; and 4) the machine solver. A formal fault model is provided to cover most current fault attacks. Different strategies for building an optimal equation set are also provided to accelerate the solving process. At the evaluator level, we consider the approximate information metric and the actual security metric. These metrics can be used to guide adversaries, cipher designers, and industrial engineers. To verify the feasibility of the proposed framework, we make a comprehensive study of AFA on an ultra-lightweight block cipher called LBlock. Three scenarios are exploited: injecting a fault into the encryption or the key schedule, or modifying the round number or counter. Our best results show that a single fault injection is enough to recover the master key of LBlock within affordable complexity in each scenario. To verify the generic feature of the proposed framework, we apply AFA to three other block ciphers, i.e., the Data Encryption Standard, PRESENT, and Twofish. The results demonstrate that our framework can be used for different ciphers with different structures.

62 citations


Book ChapterDOI
20 Mar 2016
TL;DR: The first adaptation of Matsui's algorithm for finding the best differential and linear trails is proposed for the class of ARX ciphers; based on a branch-and-bound search strategy, it uses no heuristics and returns optimal results.
Abstract: We propose the first adaptation of Matsui's algorithm for finding the best differential and linear trails to the class of ARX ciphers. It is based on a branch-and-bound search strategy, does not use any heuristics, and returns optimal results. The practical application of the new algorithm is demonstrated on reduced-round variants of block ciphers from the Speck family. More specifically, we report the probabilities of the best differential trails for up to 10, 9, 8, 7, and 7 rounds of Speck32, Speck48, Speck64, Speck96 and Speck128 respectively, together with the exact number of differential trails that have the best probability. The new results are used to compute bounds, under the Markov assumption, on the security of Speck against single-trail differential cryptanalysis. Finally, we propose two new ARX primitives with provable bounds against single-trail differential and linear cryptanalysis, a long-standing open problem in the area of ARX design.
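The branch-and-bound idea can be sketched on a toy cipher whose round transitions are given by a small table of weights (negative log2 probabilities); the table and pruning bound here are illustrative, not Speck's:

```python
# Matsui-style branch-and-bound over a toy round-transition table.
# trans[d] = list of (d_out, weight), where weight = -log2(probability).
trans = {
    0: [(1, 1.0), (2, 2.0)],
    1: [(0, 1.0), (2, 3.0)],
    2: [(1, 2.0), (2, 1.0)],
}
best_single = min(w for outs in trans.values() for _, w in outs)

def best_trail(rounds):
    """Weight of the best trail over `rounds` rounds, with Matsui pruning."""
    best = [float("inf")]
    def search(d, r, weight):
        # Prune: even perfect remaining rounds cannot beat the current best.
        if weight + (rounds - r) * best_single >= best[0]:
            return
        if r == rounds:
            best[0] = weight
            return
        for d_out, w in trans[d]:
            search(d_out, r + 1, weight + w)
    for d0 in trans:            # try every starting difference
        search(d0, 0, 0.0)
    return best[0]

print(best_trail(4))   # best 4-round trail has weight 4.0 (probability 2^-4)
```

Because the bound is exact rather than heuristic, the first weight that survives all pruning is provably optimal, which is the property the paper relies on.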

57 citations


Book ChapterDOI
08 May 2016
TL;DR: A partitioning technique recently proposed by Biham and Carmeli to improve the linear cryptanalysis of addition operations is refined and applied to the differential-linear attack against Chaskey, greatly reducing the data complexity and, in turn, the time complexity.
Abstract: In this work we study the security of Chaskey, a recent lightweight MAC designed by Mouha et al., currently being considered for standardization by ISO/IEC and ITU-T. Chaskey uses an ARX structure very similar to SipHash. We present the first cryptanalysis of Chaskey in the single-user setting, with a differential-linear attack against 6 and 7 rounds, hinting that the full version of Chaskey with 8 rounds has a rather small security margin. In response to these attacks, a 12-round version has been proposed by the designers. To improve the complexity of the differential-linear cryptanalysis, we refine a partitioning technique recently proposed by Biham and Carmeli to improve the linear cryptanalysis of addition operations. We also propose an analogous improvement of the differential cryptanalysis of addition operations. Roughly speaking, these techniques reduce the data complexity of linear and differential attacks at the cost of more processing time per data. They can be seen as the analogue, for ARX ciphers, of partial key guessing and partial decryption for S-box-based ciphers. When applied to the differential-linear attack against Chaskey, this partitioning technique greatly reduces the data complexity, which also results in a reduced time complexity. While a basic differential-linear attack on 7 rounds takes 2^78 data and time (respectively 2^35 for 6 rounds), the improved attack requires only 2^48 data and 2^67 time (respectively 2^25 data and 2^29 time for 6 rounds). We also show an application of the partitioning technique to FEAL-8X, and we hope that this technique will lead to a better understanding of the security of ARX designs.

50 citations


Book ChapterDOI
14 Aug 2016
TL;DR: An attack on the early version of FLIP is presented that exploits the structure of the filter function and the constant internal state of the cipher to allow a key recovery in 2^54 basic operations.
Abstract: At Eurocrypt 2016, Meaux et al. proposed FLIP, a new family of stream ciphers intended for use in Fully Homomorphic Encryption systems. Unlike its competitors, which either have a low initial noise that grows at each successive encryption or a high constant noise, the FLIP family of ciphers achieves a low constant noise thanks to a new construction called the filter permutator. In this paper, we present an attack on the early version of FLIP that exploits the structure of the filter function and the constant internal state of the cipher. Applying this attack to the two instantiations proposed by Meaux et al. allows for a key recovery in 2^54 (resp. 2^68) basic operations, compared to the claimed security of 2^80 (resp. 2^128).

43 citations


Posted Content
TL;DR: The framework for finding differential characteristics is developed by adding a new method to construct long characteristics from short ones, which significantly reduces the search time and makes it possible to search for differential characteristics of ARX block ciphers with large word sizes such as n = 48, 64.
Abstract: In this paper, we focus on the automatic differential cryptanalysis of ARX block ciphers with respect to XOR-differences, and develop Mouha et al.'s framework for finding differential characteristics by adding a new method to construct long characteristics from short ones. The new method significantly reduces the search time and makes it possible to search for differential characteristics of ARX block ciphers with large word sizes such as n = 48, 64. Moreover, we take the differential effect into consideration and find that the differential probability increases by a factor of 4 to 16 for SPECK and about 2 for LEA when multiple characteristics are counted. The efficiency of our method is demonstrated by improved attacks on SPECK and LEA, which attack 1, 1, 4 and 6 more rounds of SPECK48, SPECK64, SPECK96 and SPECK128, respectively, and 2 more rounds of LEA than previous works.

43 citations


DOI
01 Dec 2016
TL;DR: In this article, the authors examined the security of symmetric ciphers against quantum attacks and showed that the best attack in the classical world does not necessarily lead to the best quantum one.
Abstract: Quantum computers, which may become available one day, would impact many scientific fields, most notably cryptography, since many asymmetric primitives are insecure against an adversary with quantum capabilities. Cryptographers are already anticipating this threat by proposing and studying a number of potentially quantum-safe alternatives for those primitives. On the other hand, symmetric primitives seem less vulnerable to quantum computing: the main known applicable result is Grover's algorithm, which gives a quadratic speed-up for exhaustive search. In this work, we examine more closely the security of symmetric ciphers against quantum attacks. Since our trust in symmetric ciphers relies mostly on their ability to resist cryptanalysis techniques, we investigate quantum cryptanalysis techniques. More specifically, we consider quantum versions of differential and linear cryptanalysis. We show that it is usually possible to use quantum computations to obtain a quadratic speed-up for these attack techniques, but the situation must be nuanced: we don't get a quadratic speed-up for all variants of the attacks. This allows us to demonstrate the following non-intuitive result: the best attack in the classical world does not necessarily lead to the best quantum one. We give some examples of application to the ciphers LAC and KLEIN. We also discuss the important difference between an adversary that can only perform quantum computations, and an adversary that can also make quantum queries to a keyed primitive.
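The non-intuitive result can be made concrete with cost exponents (the specific numbers below are hypothetical, not taken from the LAC or KLEIN analyses):

```python
def grover_exponent(key_bits):
    """Quantum exhaustive search costs about 2^(k/2): a quadratic speed-up."""
    return key_bits / 2

k = 128
classical_attack = 100   # hypothetical classical attack beating 2^128 brute force
quantum_attack = 70      # its quantum version: less than a full quadratic speed-up

# The best classical attack need not stay the best quantum one:
print(classical_attack < k)                  # wins classically...
print(quantum_attack > grover_exponent(k))   # ...but loses to quantum brute force
```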

41 citations


Book ChapterDOI
04 Jul 2016
TL;DR: In this article, the authors focus on the automatic differential cryptanalysis of ARX block ciphers with respect to XOR-difference, and develop Mouha et al.'s framework for finding differential characteristics by adding a new method to construct long characteristics from short ones.
Abstract: In this paper, we focus on the automatic differential cryptanalysis of ARX block ciphers with respect to XOR-differences, and develop Mouha et al.'s framework for finding differential characteristics by adding a new method to construct long characteristics from short ones. The new method significantly reduces the search time and makes it possible to search for differential characteristics of ARX block ciphers with large word sizes such as n = 48, 64. Moreover, we take the differential effect into consideration and find that the differential probability increases by a factor of 4 to 16 for SPECK and more than 2^10 for LEA when multiple characteristics are counted. The efficiency of our method is demonstrated by improved attacks on SPECK and LEA, which attack 1, 1, 4 and 6 more rounds of SPECK48, SPECK64, SPECK96 and SPECK128, respectively, and 2 more rounds of LEA than previous works.

Book ChapterDOI
05 Sep 2016
TL;DR: This paper introduces Constraint Programming (CP) models to solve a cryptanalytic problem, the chosen-key differential attack against the standard block cipher AES, and shows that Model 2 is much more efficient than Model 1, and that Chuffed is faster than Choco, which is faster than Gecode, on the hardest instances of this problem.
Abstract: In this paper, we introduce Constraint Programming (CP) models to solve a cryptanalytic problem: the chosen key differential attack against the standard block cipher AES. The problem is solved in two steps: In Step 1, bytes are abstracted by binary values; In Step 2, byte values are searched. We introduce two CP models for Step 1: Model 1 is derived from AES rules in a straightforward way; Model 2 contains new constraints that remove invalid solutions filtered out in Step 2. We also introduce a CP model for Step 2. We evaluate scale-up properties of two classical CP solvers (Gecode and Choco) and a hybrid SAT/CP solver (Chuffed). We show that Model 2 is much more efficient than Model 1, and that Chuffed is faster than Choco which is faster than Gecode on the hardest instances of this problem. Furthermore, we prove that a solution claimed to be optimal in two recent cryptanalysis papers is not optimal by providing a better solution.

Book ChapterDOI
08 May 2016
TL;DR: It is proved that the methodology of standard differential cryptanalysis can unambiguously be extended and transferred to the polytopic case, including impossible differentials, and it is shown that impossible polytopic transitions have generic advantages over impossible differentials.
Abstract: Standard differential cryptanalysis uses statistical dependencies between the difference of two plaintexts and the difference of the respective two ciphertexts to attack a cipher. Here we introduce polytopic cryptanalysis which considers interdependencies between larger sets of texts as they traverse through the cipher. We prove that the methodology of standard differential cryptanalysis can unambiguously be extended and transferred to the polytopic case including impossible differentials. We show that impossible polytopic transitions have generic advantages over impossible differentials. To demonstrate the practical relevance of the generalization, we present new low-data attacks on round-reduced DES and AES using impossible polytopic transitions that are able to compete with existing attacks, partially outperforming these.

Journal ArticleDOI
TL;DR: The margin of safety for two-key triple DES is slim, and efforts to replace it, at least with its three-key variant, and preferably with a more modern cipher such as AES should be pursued with some urgency.
Abstract: This paper reconsiders the security offered by two-key triple DES, an encryption technique that remains widely used despite recently being de-standardised by NIST. A generalization of the 1990 van Oorschot–Wiener attack is described, constituting the first advance in cryptanalysis of two-key triple DES since 1990. We give further attack enhancements that together imply that the widely used estimate that two-key triple DES provides 80 bits of security can no longer be regarded as conservative; the widely stated assertion that the scheme is secure as long as the key is changed regularly is also challenged. The main conclusion is that, whilst not completely broken, the margin of safety for two-key triple DES is slim, and efforts to replace it, at least with its three-key variant, and preferably with a more modern cipher such as AES, should be pursued with some urgency.
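The known-plaintext trade-off behind this conclusion can be sketched numerically; the 2^(120 − t) figure is the commonly cited time complexity of the van Oorschot–Wiener style attack, used here only as a rough estimate:

```python
# Rough security estimate for two-key triple DES: with 2^t known
# plaintext-ciphertext pairs, the van Oorschot-Wiener style attack runs
# in about 2^(120 - t) single-DES operations.
def vow_time_exponent(t):
    return 120 - t

for t in (1, 20, 32, 40):
    print(f"2^{t} known pairs -> time about 2^{vow_time_exponent(t)}")
# With 2^40 known pairs the attack cost already drops to ~2^80, so the
# usual "80-bit security" estimate leaves essentially no margin.
```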

Book ChapterDOI
19 Feb 2016
TL;DR: The dynamic key-guessing techniques are converted into a program that automatically gives out the data in the dynamic key-guessing procedure; with this tool, the differential security evaluation of SIMON- and Simeck-like block ciphers becomes very convenient.
Abstract: In CHES 2015, a new lightweight block cipher Simeck was proposed that combines good design components of SIMON and SPECK, two lightweight ciphers designed by the NSA. As a great tool to improve differential attacks, dynamic key-guessing techniques were proposed by Wang et al., which work well on SIMON. In this paper, we convert the dynamic key-guessing techniques into a program that can automatically give out the data in the dynamic key-guessing procedure. With our tool, the differential security evaluation of SIMON- and Simeck-like block ciphers becomes very convenient. We apply the method to Simeck and four members of the SIMON family. Using a differential of lower Hamming weight that we find by the Mixed Integer Linear Programming method, together with differentials from Kolbl et al.'s work, we launch attacks on 21- and 22-round Simeck32, 28-round Simeck48, and 34- and 35-round Simeck64. Besides, using differentials newly proposed at CRYPTO 2015, we obtain new attack results on 22-round SIMON32/64, 24-round SIMON48/96, 28- and 29-round SIMON64/96, and 29- and 30-round SIMON64/128. To the best of our knowledge, our results on SIMON64 are currently the best.

Book ChapterDOI
04 Jul 2016
TL;DR: This paper searches for Simeck differentials with low Hamming weight and high probability using Kolbl's tool, then exploits the links between differentials and linear characteristics to construct linear hulls for Simeck, giving an improved linear hull attack with dynamic key-guessing techniques on Simeck based on a property of the round function.
Abstract: Simeck is a new family of lightweight block ciphers proposed by Yang et al. at CHES '15, which performs efficiently in hardware implementations. In this paper, we search for Simeck differentials with low Hamming weight and high probability using Kolbl's tool, then exploit the links between differentials and linear characteristics to construct linear hulls for Simeck. We give an improved linear hull attack with dynamic key-guessing techniques on Simeck based on a property of the round function. Our results cover Simeck 32/64 reduced to 23 rounds, Simeck 48/96 reduced to 30 rounds, and Simeck 64/128 reduced to 37 rounds, which are the best known results so far for any variant of Simeck.

Book ChapterDOI
01 Jan 2016
TL;DR: This book chapter shows that DFA against AES are practical, and can be prevented using suitable techniques, and concludes with the efficient Concurrent Error Detection (CED) schemes which have been developed utilizing the invariance properties in AES.
Abstract: Fault attacks exploit malicious or accidental faults injected during the computation of a cryptographic algorithm. Combining the seminal idea by Boneh, DeMillo and Lipton with differential cryptanalysis, a new field of Differential Fault Attacks (DFA) has emerged. DFA has shown that several ciphers can be compromised if the faults can be suitably controlled. DFA is not restricted to old ciphers, but can be a powerful attack vector even for modern ciphers, like the Advanced Encryption Standard (AES). In this book chapter, we present an overview of the history of fault attacks and their general principle. The chapter subsequently concentrates on the AES algorithm and explains the developed fault attacks. The chapter covers the entire range of attacks, finally showing that a single random byte fault can reduce the AES key to 2^8 values, with a time complexity of 2^30. Further extensions of the fault attack to multiple-byte fault models and attacks targeting the AES key schedule are also presented in the chapter. These attacks emphasize the requirement of countermeasures to detect the underlying faults and accordingly suppress the invalid output. The chapter then presents a survey of existing DFA countermeasures, concluding with the efficient Concurrent Error Detection (CED) schemes which have been developed utilizing the invariance properties in AES. Such a strategy provides near-100% fault coverage at low overhead. Overall, the chapter shows that DFA against AES is practical, and can be prevented using suitable techniques.
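How a fault differential collapses a key space can be shown on a toy single-S-box model (an illustration of the DFA principle, not the AES attack itself):

```python
import random

# Toy fault model: the last operation is c = S[p ^ k], and a fault XORs a
# known difference `delta` into the S-box input. The observable output
# difference then constrains the key byte k.
SBOX = list(range(256))
random.Random(1).shuffle(SBOX)           # a random 8-bit S-box

def candidates(p, c, c_faulty, delta):
    """All key bytes consistent with the observed output difference."""
    out_diff = c ^ c_faulty
    return {u ^ p for u in range(256) if SBOX[u] ^ SBOX[u ^ delta] == out_diff}

key, delta = 0x3A, 0x80
cands = set(range(256))
for p in (0x00, 0x55, 0xC3):             # three faulted encryptions
    c = SBOX[p ^ key]                    # correct ciphertext byte
    c_faulty = SBOX[p ^ key ^ delta]     # faulty ciphertext byte
    cands &= candidates(p, c, c_faulty, delta)

# The intersection typically shrinks to {key, key ^ delta}: a 256-value
# key byte cut down to a couple of candidates, mirroring how DFA reduces
# the full AES key to 2^8 values.
print(sorted(hex(x) for x in cands))
assert key in cands
```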

Journal ArticleDOI
TL;DR: This paper furnishes the complete security analysis of the ANU cipher design and shows that ANU can attain ample security level against linear and differential cryptanalysis, biclique attack, zero-correlation attack, and algebraic attack.
Abstract: This paper proposes an ultra-lightweight cipher, ANU. ANU is a balanced Feistel-based network. ANU supports a 64-bit plaintext and a 128/80-bit key length, and it has 25 rounds in total. It needs only 1015 gate equivalents for a 128-bit key length, which is less than all existing lightweight ciphers. Its memory size is minimal, and its power consumption is very low: it needs only 22 mW of dynamic power, while the PRESENT cipher consumes 39 mW. This paper furnishes a complete security analysis of the ANU cipher design. Our security analysis shows that ANU can attain an ample security level against linear and differential cryptanalysis, biclique attack, zero-correlation attack, and algebraic attack. Biclique cryptanalysis gives a maximal data complexity of 2^64. The ANU cipher not only needs fewer gate equivalents but also consumes very little power and has a small memory requirement. The ANU cipher is best suited for applications like the Internet of Things. The design of the ANU cipher will have a positive impact in the field of lightweight cryptography. Copyright © 2016 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Despite their generic nature, the attacks can be applied to improve the best known attacks on several concrete ciphers, including the full AES^2 and reduced-round LED-128, and one attack is shown to be faster than the benchmark meet-in-the-middle attack.
Abstract: Iterated Even–Mansour (EM) encryption schemes (also named "key-alternating ciphers") were extensively studied in recent years as an abstraction of commonly used block ciphers. A large amount of previous work on iterated EM concentrated on security in an information-theoretic model. A central question studied in these papers is: what is the minimal number of rounds for which the resulting cipher is indistinguishable from an ideal cipher? In this paper, we study a similar question in the computational model: what is the minimal number of rounds assuring that no attack can recover the secret key faster than trivial attacks (such as exhaustive search)? We study this question for the two natural key scheduling variants that were considered in most previous papers: the identical subkeys variant and the independent subkeys variant. In the identical subkeys variant, we improve the best known attack by an additional round and show that r = 3 rounds are insufficient for assuring security, by devising a key recovery attack whose running time is about n/log(n) times faster than exhaustive search for an n-bit key. In the independent subkeys variant, we also extend the known results by one round and show that for r = 2, there exists a key recovery attack whose running time is faster than the benchmark meet-in-the-middle attack. Despite their generic nature, we show that the attacks can be applied to improve the best known attacks on several concrete ciphers, including the full AES^2 (proposed at Eurocrypt 2012) and reduced-round LED-128 (proposed at CHES 2012).
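Why very few EM rounds are insufficient can be demonstrated on a toy one-round instance, using the classic chosen-plaintext difference-table attack (a sketch on a 12-bit block, not one of the paper's new attacks):

```python
import random

# Toy one-round Even-Mansour: E(x) = P(x ^ k1) ^ k2. The key XORs preserve
# differences, so E(x) ^ E(x ^ d) = P(y) ^ P(y ^ d) with y = x ^ k1, and an
# offline difference table for P reveals y, hence k1.
n = 12
N = 1 << n
rng = random.Random(7)
P = list(range(N)); rng.shuffle(P)       # the public permutation
k1, k2 = rng.randrange(N), rng.randrange(N)
E = lambda x: P[x ^ k1] ^ k2

d = 0x001
table = {}
for y in range(N):                       # offline phase: ~N work
    table.setdefault(P[y] ^ P[y ^ d], []).append(y)

x = 0x123                                # online phase: two chosen queries
diff = E(x) ^ E(x ^ d)
recovered = []
for y in table.get(diff, []):
    c1 = x ^ y                           # candidate k1
    c2 = E(x) ^ P[x ^ c1]                # implied k2
    if all(E(t) == P[t ^ c1] ^ c2 for t in (5, 99, 1234)):
        recovered.append((c1, c2))

assert (k1, k2) in recovered   # ~2N work instead of trying all N^2 key pairs
```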

Journal ArticleDOI
TL;DR: Four simple algorithms for generating key-dependent S-boxes are presented; taking the Matlab function "randperm" as the permutation standard and comparing it with the permutation possibilities of the proposed algorithms shows that the key-dependent S-boxes have good quality and may be applied in cipher systems.
Abstract: S-boxes are used in block ciphers as important nonlinear components. The nonlinearity provides important protection against linear and differential cryptanalysis. The S-boxes used in the encryption process can be chosen to be key-dependent. In this paper, we present four simple algorithms for generating key-dependent S-boxes. For quality analysis of the key-dependent S-boxes, we propose eight distance metrics. We take the Matlab function "randperm" as the permutation standard and compare it with the permutation possibilities of the proposed algorithms. In the second section we describe the four algorithms that generate key-dependent S-boxes. In the third section we analyze the eight normalized distance metrics used to evaluate the quality of the key-dependent generation algorithms. Afterwards, we experimentally investigate the quality of the generated key-dependent S-boxes. Comparison results show that the key-dependent S-boxes have good quality and may be applied in cipher systems.
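A minimal sketch of the idea, assuming a hash-seeded Fisher–Yates shuffle in place of Matlab's randperm (an illustrative construction, not one of the paper's four algorithms), together with one simple distance metric:

```python
import hashlib, random

def key_dependent_sbox(key: bytes, size: int = 256):
    """Derive an S-box (a permutation of 0..size-1) from a key by seeding
    a Fisher-Yates shuffle with a hash of the key."""
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    sbox = list(range(size))
    random.Random(seed).shuffle(sbox)
    return sbox

def normalized_hamming(s1, s2):
    """One simple distance metric: the fraction of positions where two
    S-boxes differ (close to 1 for unrelated permutations)."""
    return sum(a != b for a, b in zip(s1, s2)) / len(s1)

sbox1 = key_dependent_sbox(b"key-1")
sbox2 = key_dependent_sbox(b"key-2")
assert sorted(sbox1) == list(range(256))   # still a bijection
print(normalized_hamming(sbox1, sbox2))    # high for independent keys
```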

Journal ArticleDOI
TL;DR: The first biclique cryptanalysis of the MIBS block cipher and a new biclique attack on the PRESENT block cipher are presented, using the matching-without-matrix technique in the attack on MIBS and choosing a sub-key space of an internal round for key division.
Abstract: PRESENT and MIBS are two lightweight block ciphers that are suitable for low-resource devices such as radio-frequency identification tags. In this paper, we present the first biclique cryptanalysis of the MIBS block cipher and a new biclique cryptanalysis of the PRESENT block cipher. These attacks are performed on full-round MIBS-80 and full-round PRESENT-80. Using the matching-without-matrix technique in the attack on MIBS and choosing a sub-key space of an internal round for key division reduces the security of this cipher by 1 bit, with a data complexity of 2^52 chosen plaintexts. The attack on PRESENT-80 has a data complexity of at most 2^22 chosen plaintexts and a computational complexity of 2^79.34 encryptions; both complexities are lower than those of other cryptanalyses of full-round PRESENT-80 so far. Also, in this paper, we use the early-abort technique to efficiently filter out wrong keys in the matching phase of the biclique attack on PRESENT-80. Copyright © 2015 John Wiley & Sons, Ltd.

Book ChapterDOI
TL;DR: While the design allows a smaller and more efficient hardware implementation, its security margins are not well understood, and the lack of design rationale for its predecessors leaves some uncertainty about the security of Simeck.
Abstract: Simeck is a new lightweight block cipher design based on combining the design principles of the Simon and Speck block ciphers. While the design allows a smaller and more efficient hardware implementation, its security margins are not well understood. The lack of design rationale for its predecessors further leaves some uncertainty about the security of Simeck.

Journal ArticleDOI
TL;DR: An improved chosen-plaintext attack is presented to further reduce the number of chosen plaintexts required and is proved to be optimal; it is also found that an elaborately designed known-plaintext attack can efficiently compromise the image cipher under study.
Abstract: Recently, an image encryption algorithm based on scrambling and the Vigenère cipher was proposed. However, it was soon cryptanalyzed by Zhang et al. using a method combining chosen-plaintext and differential attacks. This paper briefly reviews the two attack approaches proposed by Zhang et al. and outlines their mathematical interpretations. Based on these approaches, we present an improved chosen-plaintext attack to further reduce the number of chosen plaintexts required, which is proved to be optimal. Moreover, it is found that an elaborately designed known-plaintext attack can efficiently compromise the image cipher under study. This finding is confirmed by both mathematical analysis and numerical simulations. The cryptanalyzing techniques developed in this paper provide some insights for designing secure and efficient multimedia ciphers.
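For the scrambling (permutation-only) component of such ciphers, the classical chosen-plaintext recovery can be sketched on a toy cipher (illustrative only; `make_toy_cipher` and all parameters are hypothetical stand-ins, not the cipher analyzed by Zhang et al.). Each pixel position receives a unique base-L codeword spread over the chosen plaintexts, so the ciphertexts reveal where every position was moved:

```python
import math
import random

def recover_permutation(encrypt, n_pixels, L=256):
    """Recover the secret permutation of a permutation-only cipher using
    n = ceil(log_L(n_pixels)) chosen plaintexts: plaintext t stores the
    t-th base-L digit of each position's index."""
    n = max(1, math.ceil(math.log(n_pixels, L)))
    plains = [[(pos // L**t) % L for pos in range(n_pixels)] for t in range(n)]
    ciphers = [encrypt(p) for p in plains]
    perm = [0] * n_pixels
    for dst in range(n_pixels):
        # Reassemble the index of the pixel that landed at position dst.
        src = sum(ciphers[t][dst] * L**t for t in range(n))
        perm[src] = dst            # pixel at src was moved to dst
    return perm

def make_toy_cipher(secret_perm):
    """Toy permutation-only 'image cipher': pixel p goes to secret_perm[p]."""
    def encrypt(img):
        out = [0] * len(img)
        for p, v in enumerate(img):
            out[secret_perm[p]] = v
        return out
    return encrypt
```

With L = 256 intensity levels and 1000 pixels, ceil(log_256(1000)) = 2 chosen plaintexts suffice, matching the n = ceil(log_L(MN)) bound stated for permutation-only image ciphers.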

Proceedings ArticleDOI
15 Sep 2016
TL;DR: A genetic algorithm is used in the key generation process, where key selection depends on the fitness function; the keys generated using the GA are unique and more secure for the encryption of data.
Abstract: Cryptography is essential to protect and secure data using a key. Different types of cryptographic techniques exist for data security. Genetic algorithms are essentially used for obtaining optimal solutions, and they can also be used efficiently for random number generation, which is very important in cryptography. This paper discusses the application of genetic algorithms to stream ciphers, where key generation is the most important factor. In this paper a genetic algorithm is used in the key generation process, with key selection depending on the fitness function. The genetic algorithm is run repeatedly for key selection: in each iteration, the key with the highest fitness value is selected and compared against a threshold value. The selected keys are unique and non-repeating, so encryption with them benefits from the greater randomness of the key material. This paper shows that the keys generated using the GA are unique and more secure for the encryption of data.
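A minimal sketch of such a GA-based key selection loop (the fitness function here, bit balance as a crude randomness proxy, and all parameters are illustrative assumptions; the paper's actual fitness function may differ):

```python
import random

def fitness(key_bits):
    # Hypothetical fitness: reward balanced 0/1 counts as a crude
    # randomness proxy (1.0 = perfectly balanced).
    ones = sum(key_bits)
    half = len(key_bits) / 2
    return 1.0 - abs(ones - half) / half

def select_key(key_len=64, pop_size=20, generations=30, threshold=0.9, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(key_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[0]
        if fitness(best) >= threshold:   # compare the best key to the threshold
            return best
        parents = pop[: pop_size // 2]   # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, key_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(key_len)] ^= 1   # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Each iteration keeps the fittest keys, breeds replacements by crossover and mutation, and stops as soon as a key clears the threshold, mirroring the select-compare-iterate loop described above.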

Journal ArticleDOI
TL;DR: The existence of such an attack disproves the claims made by the designers that their modified AES-128 cipher improves the security of the AES cipher and that it can subsequently be used to construct a secure image encryption scheme.
Abstract: Wadi and Zainal recently proposed a high definition image encryption algorithm based on a modified AES-128 block cipher in (Wirel Pers Commun 79(2):811-829, 2014). In this paper, we show that the core component of their image encryption algorithm, a modified AES-128 cipher, is insecure against impossible differential attack. The proposed impossible differential attack on the full rounds of the modified AES-128 cipher has a time complexity of around 2^88.74 encryptions with 2^114.06 chosen plaintexts and 2^99 bytes of memory, in contrast to the expected security of 2^128. The existence of such an attack disproves the claims made by the designers that their modified AES-128 cipher improves the security of the AES cipher and that it can subsequently be used to construct a secure image encryption scheme. The root cause of this attack, some other issues with the modified AES cipher, and possible solutions are described to serve as important remarks in designing a secure image encryption scheme.

Proceedings ArticleDOI
17 Mar 2016
TL;DR: This article analyzes how a single difference propagates from one round to the next and how a reduction in complexity can be achieved with some particular choices of keys, and examines the possibility of reducing the complexity of the existing attack.
Abstract: The eSTREAM project [5] was established to choose new stream ciphers that compare favorably with existing ciphers (e.g. AES) and provide a better alternative. The stream cipher Salsa20 [3], as a candidate of the eSTREAM project, was accepted for the final phase and again successfully reviewed with 12 rounds. ChaCha is a variant of Salsa20 aiming at better diffusion for similar performance. Significant effort has been made to analyze and explain reduced-round Salsa and ChaCha in [1] and [2], with some improvements. In this article, we first go through the work done in [1] (complexity 2^248) and [2] (complexity 2^243) to give a view of the existing attacks on ChaCha7, i.e., 7 rounds. For Salsa20/8, i.e., 8 rounds, the complexity is 2^247.2 in [1] and 2^245.5 in [2]. We then go through the method of differential cryptanalysis of Salsa20 and ChaCha, improved by correlation attacks and related to the concept of neutral bits, used to analyze Salsa20/9 and ChaCha with 8 rounds [1], and analyze the possibility of reducing the complexity of the existing attack. We determine how a single difference propagates from one round to the next and how the reduction can be achieved with some particular choices of keys.
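The effect of a single-bit input difference can be observed directly on the ChaCha quarter-round (the input words below are arbitrary sample values):

```python
MASK = 0xFFFFFFFF

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarterround(a, b, c, d):
    # ChaCha quarter-round (Bernstein's specification).
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

def diff_weight(s1, s2):
    """Hamming weight of the XOR difference between two 4-word states."""
    return sum(bin(x ^ y).count("1") for x, y in zip(s1, s2))

# Flip a single bit of word b and watch the difference spread.
s1 = (0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
s2 = (s1[0], s1[1] ^ 1, s1[2], s1[3])   # single-bit input difference
out1, out2 = quarterround(*s1), quarterround(*s2)
```

Already after one quarter-round the one-bit input difference has spread to several output bits; iterating rounds and measuring the biases of such output differences is the starting point of the attacks in [1] and [2].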

Journal ArticleDOI
TL;DR: This paper proposes a method that determines 9 Secret Key bits explicitly, and proves that a suitably introduced difference in the IV leads to a distinguisher for the output bit produced in the 105th round.
Abstract: In this paper we propose conditional differential cryptanalysis of 105-round Grain v1. This improves the attack proposed on 97-round Grain v1 by Knellwolf et al. at Asiacrypt 2010. We use the tool ΔGrain KSA to track the differential trails introduced into the internal state of Grain v1 by any difference in the IV bits. We prove that a suitably introduced difference in the IV leads to a distinguisher for the output bit produced in the 105th round. This helps determine the values of 6 expressions in the Secret Key bits. Using the above attack as a subroutine, we propose a method that determines 9 Secret Key bits explicitly. Thus, the complexity of the Key recovery is proportional to 2^71 operations, which is faster than exhaustive search by a factor of 2^9.

Posted Content
TL;DR: It is concluded that 12 rounds of Salsa and ChaCha should be considered sufficient for 256-bit keys under the current best known attack models.
Abstract: While Salsa and ChaCha are well-known software-oriented stream ciphers, since the work of Aumasson et al. at FSE 2008 there have not been many significant results against them. The basic model of their attack was to introduce differences in the IV bits, obtain biases after a few forward rounds, and look at the Probabilistic Neutral Bits (PNBs) while reverting back. In this paper we first consider the biases in the forward rounds and estimate an upper bound on the number of rounds up to which such biases can be observed. For this, we propose a hybrid model (under certain assumptions), where initially the nonlinear rounds as proposed by the designer are considered, and then we employ their linearized counterpart. The effect of reverting the rounds with the idea of PNBs is also considered. Based on the assumptions and analysis, we conclude that 12 rounds of Salsa and ChaCha should be considered sufficient for 256-bit keys under the current best known attack models.

Proceedings ArticleDOI
20 Jul 2016
TL;DR: This paper considers the use of parallel computation based on MPI and NVIDIA CUDA technologies for the cryptanalysis of Magma and Kuznyechik, and proposes a fast implementation of Kuznyechik data encryption based on precomputed tables.
Abstract: The new cryptographic standard GOST R 34.12-2015 "Information technology. Cryptographic Data Security. Block ciphers." [7] came into force on January 1st, 2016. The standard contains two encryption algorithms. One of them is the former standard encryption algorithm GOST 28147-89 (also known simply as GOST) with fixed S-boxes; this algorithm is denoted as Magma in the new standard. The second algorithm is a new symmetric block cipher based on an SP-network, which is denoted as Kuznyechik (also transliterated as "Kuznechik"). Nowadays, a lot of attention is paid to questions about the quality of the new cipher, namely its cryptographic strength, performance, portability, implementation, etc. In this paper we consider the use of parallel computation based on MPI and NVIDIA CUDA technologies for the cryptanalysis of Magma and Kuznyechik. We chose the slide attack for the implementation. The slide attack is applicable to the Magma and Kuznyechik ciphers only with significant weakening modifications to their original descriptions. However, research on the applicability of parallel implementations of cryptanalysis is important, because the parallel approach can be applied to other, more efficient methods of cryptanalysis. The proposed parallel algorithms, implemented for two different technologies, demonstrate close-to-linear growth of analysis speed with the number of processor cores involved. We also propose a fast implementation of Kuznyechik data encryption based on precomputed tables.
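The precomputed-table idea can be sketched on a toy 32-bit SP layer (illustrative stand-ins only: the real Kuznyechik has a 128-bit state, its own S-box pi, and a GF(2^8)-based linear transform). Because the linear layer L distributes over XOR, the combined substitute-then-L step collapses into one table lookup per input byte:

```python
# Toy S-box: (7*b + 3) mod 256 is a bijection since gcd(7, 256) = 1.
SBOX = [(7 * b + 3) % 256 for b in range(256)]

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def L(x):
    # Toy GF(2)-linear layer: XOR of rotations (linear, so L(a^b) = L(a)^L(b)).
    return x ^ rotl32(x, 8) ^ rotl32(x, 16)

def LS_direct(x):
    # Reference: apply the S-box to each byte, then the linear layer.
    y = 0
    for i in range(4):
        y |= SBOX[(x >> (8 * i)) & 0xFF] << (8 * i)
    return L(y)

# Precompute T[i][b] = L(S(b) placed in byte position i).  By linearity,
# LS(x) is then just the XOR of one table lookup per input byte.
T = [[L(SBOX[b] << (8 * i)) for b in range(256)] for i in range(4)]

def LS_tables(x):
    r = 0
    for i in range(4):
        r ^= T[i][(x >> (8 * i)) & 0xFF]
    return r
```

The same trade applies to the real cipher: 16 tables of 256 128-bit entries replace the per-byte substitution and the matrix-style linear transform with 16 lookups and XORs per round.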

Proceedings ArticleDOI
06 May 2016
TL;DR: With these integral distinguishers, Simeck32/48/64 reduced to 21/21/24 rounds, respectively, can be attacked with integral cryptanalysis.
Abstract: Since being proposed by the NSA in June 2013, the SIMON and SPECK families have attracted the attention of many cryptographers. More recently, at CHES 2015, the lightweight block cipher Simeck was proposed, which adopts merits from both SIMON and SPECK. However, the security level of Simeck against integral cryptanalysis had never been evaluated. This paper proposes some theoretical and experimental integral distinguishers on Simeck. More specifically, 12/14/16-round theoretical integral distinguishers on Simeck32/48/64 and some 15-round experimental integral distinguishers on Simeck32 are presented for the first time. With these integral distinguishers, Simeck32/48/64 reduced to 21/21/24 rounds, respectively, can be attacked with integral cryptanalysis.
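For reference, the Simeck round structure the distinguishers trace through can be sketched as follows (a minimal sketch for Simeck32 with externally supplied round keys; the real key schedule, which reuses the same round function, is omitted):

```python
W = 16                      # word size for Simeck32 (32-bit block)
MASK = (1 << W) - 1

def rotl(x, n):
    return ((x << n) | (x >> (W - n))) & MASK

def f(x):
    # Simeck's nonlinear function: AND of rotations XORed with a rotation,
    # with rotation amounts (0, 5, 1) instead of SIMON's (1, 8, 2).
    return (x & rotl(x, 5)) ^ rotl(x, 1)

def encrypt(l, r, round_keys):
    # Feistel step: (l, r) -> (r ^ f(l) ^ k, l)
    for k in round_keys:
        l, r = (r ^ f(l) ^ k) & MASK, l
    return l, r

def decrypt(l, r, round_keys):
    # Inverse step: (l, r) -> (r, l ^ f(r) ^ k)
    for k in reversed(round_keys):
        l, r = r, (l ^ f(r) ^ k) & MASK
    return l, r
```

An integral distinguisher fixes part of the input, lets the remaining bits take all values, and checks a balanced (zero-XOR-sum) property on the state after some rounds; the low algebraic degree of f is what lets such properties survive many rounds here.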