
Showing papers on "Block cipher published in 2012"


Book ChapterDOI
02 Dec 2012
TL;DR: In this paper, a block cipher called PRINCE is proposed that allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. It has the α-reflection property, which means that decryption for one key corresponds to encryption with a related key.
Abstract: This paper presents a block cipher that is optimized with respect to latency when implemented in hardware. Such ciphers are desirable for many future pervasive applications with real-time security needs. Our cipher, named PRINCE, allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. The fully unrolled fashion in which such algorithms need to be implemented calls for innovative design choices. The number of rounds must be moderate and rounds must have short delays in hardware. At the same time, the traditional need that a cipher has to be iterative with very similar round functions disappears, an observation that increases the design space for the algorithm. An important further requirement is that realizing decryption and encryption results in minimum additional costs. PRINCE is designed in such a way that the overhead for decryption on top of encryption is negligible. More precisely for our cipher it holds that decryption for one key corresponds to encryption with a related key. This property we refer to as α-reflection is of independent interest and we prove its soundness against generic attacks.
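Concretely, PRINCE is an FX-style construction with whitening keys k0 and k0' (the latter derived from k0) around a core cipher keyed by k1, and the α-reflection property can be written as follows for a fixed 64-bit constant α (a sketch of the relation; the k0-to-k0' derivation is specified in the paper):

```latex
% α-reflection: one circuit serves both directions, since decryption
% under (k_0, k_0', k_1) is encryption under a related key triple.
D_{(k_0,\,k_0',\,k_1)} \;=\; E_{(k_0',\,k_0,\,k_1 \oplus \alpha)}
```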

507 citations


Posted Content
01 Jan 2012
TL;DR: This paper presents a block cipher that is optimized with respect to latency when implemented in hardware; for this cipher, decryption for one key corresponds to encryption with a related key, a property of independent interest whose soundness against generic attacks is proven.
Abstract: This paper presents a block cipher that is optimized with respect to latency when implemented in hardware. Such ciphers are desirable for many future pervasive applications with real-time security needs. Our cipher, named PRINCE, allows encryption of data within one clock cycle with a very competitive chip area compared to known solutions. The fully unrolled fashion in which such algorithms need to be implemented calls for innovative design choices. The number of rounds must be moderate and rounds must have short delays in hardware. At the same time, the traditional need that a cipher has to be iterative with very similar round functions disappears, an observation that increases the design space for the algorithm. An important further requirement is that realizing decryption and encryption results in minimum additional costs. PRINCE is designed in such a way that the overhead for decryption on top of encryption is negligible. More precisely for our cipher it holds that decryption for one key corresponds to encryption with a related key. This property we refer to as α-reflection is of independent interest and we prove its soundness against generic attacks.

439 citations


01 Jan 2012
TL;DR: This paper presents TWINE, a 64-bit lightweight block cipher supporting 80- and 128-bit keys, obtained by the use of a generalized Feistel structure combined with an improved block shuffle.
Abstract: This paper presents a 64-bit lightweight block cipher TWINE supporting 80- and 128-bit keys. TWINE realizes quite small hardware implementation similar to the previous lightweight block cipher proposals, yet enables efficient software implementations on various platforms, from micro-controller to high-end CPU. This characteristic is obtained by the use of generalized Feistel structure combined with an improved block shuffle, introduced at FSE 2010. Keywords: lightweight block cipher, generalized Feistel structure, block shuffle
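To make the structure concrete, below is a minimal Python sketch of one round of such a Type-2 generalized Feistel network on sixteen 4-bit sub-blocks; the S-box and shuffle tables are illustrative stand-ins rather than TWINE's official specification:

```python
# One round of a Type-2 generalized Feistel structure (GFS) on sixteen
# 4-bit sub-blocks. SBOX and PI are placeholder tables (assumptions),
# not necessarily TWINE's actual ones.

SBOX = [0xC, 0x0, 0xF, 0xA, 0x2, 0xB, 0x9, 0x5,
        0x8, 0x3, 0xD, 0x7, 0x1, 0xE, 0x6, 0x4]

# A non-cyclic block shuffle; the FSE 2010 observation is that such a
# shuffle reaches full diffusion in fewer rounds than the cyclic shift
# of a classical Type-2 GFS.
PI = [5, 0, 1, 4, 7, 12, 3, 8, 13, 6, 9, 2, 15, 10, 11, 14]

def gfs_round(x, round_keys):
    """x: sixteen 4-bit values; round_keys: eight 4-bit subkeys."""
    y = list(x)
    for j in range(8):                    # keyed S-box into each odd sub-block
        y[2 * j + 1] ^= SBOX[x[2 * j] ^ round_keys[j]]
    return [y[PI[i]] for i in range(16)]  # block shuffle
```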

283 citations


BookDOI
21 Jun 2012
TL;DR: This book deals with side-channel analysis and its relevance to fault attacks, which is the first book on this topic and will be of interest to researchers and practitioners engaged with cryptographic engineering.
Abstract: In the 1970s researchers noticed that radioactive particles produced by elements naturally present in packaging material could cause bits to flip in sensitive areas of electronic chips. Research into the effect of cosmic rays on semiconductors, an area of particular interest in the aerospace industry, led to methods of hardening electronic devices designed for harsh environments. Ultimately various mechanisms for fault creation and propagation were discovered, and in particular it was noted that many cryptographic algorithms succumb to so-called fault attacks. Preventing fault attacks without sacrificing performance is nontrivial and this is the subject of this book. Part I deals with side-channel analysis and its relevance to fault attacks. The chapters in Part II cover fault analysis in secret key cryptography, with chapters on block ciphers, fault analysis of DES and AES, countermeasures for symmetric-key ciphers, and countermeasures against attacks on AES. Part III deals with fault analysis in public key cryptography, with chapters dedicated to classical RSA and RSA-CRT implementations, elliptic curve cryptosystems and countermeasures using fault detection, devices resilient to fault injection attacks, lattice-based fault attacks on signatures, and fault attacks on pairing-based cryptography. Part IV examines fault attacks on stream ciphers and how faults interact with countermeasures used to prevent power analysis attacks. Finally, Part V contains chapters that explain how fault attacks are implemented, with chapters on fault injection technologies for microprocessors, and fault injection and key retrieval experiments on a widely used evaluation board. This is the first book on this topic and will be of interest to researchers and practitioners engaged with cryptographic engineering.

179 citations


Book ChapterDOI
15 Aug 2012
TL;DR: A 64-bit lightweight block cipher supporting 80- and 128-bit keys is presented; its compactness and software efficiency are obtained by the use of a generalized Feistel structure combined with an improved block shuffle, introduced at FSE 2010.
Abstract: This paper presents a 64-bit lightweight block cipher TWINE supporting 80- and 128-bit keys. TWINE realizes quite small hardware implementation similar to the previous lightweight block cipher proposals, yet enables efficient software implementations on various CPUs, from micro-controllers to high-end CPUs. This characteristic is obtained by the use of generalized Feistel combined with an improved block shuffle, introduced at FSE 2010.

178 citations


Book ChapterDOI
15 Apr 2012
TL;DR: This paper shows that the original two-key Even-Mansour scheme is not minimal, in the sense that it can be simplified into a single-key scheme with half as many key bits which provides exactly the same security and can be argued to be the simplest conceivable provably secure block cipher.
Abstract: In this paper we consider the following fundamental problem: What is the simplest possible construction of a block cipher which is provably secure in some formal sense? This problem motivated Even and Mansour to develop their scheme in 1991, but its exact security remained open for more than 20 years in the sense that the lower bound proof considered known plaintexts, whereas the best published attack (which was based on differential cryptanalysis) required chosen plaintexts. In this paper we solve this open problem by describing the new Slidex attack which matches the T = Ω(2^n/D) lower bound on the time T for any number of known plaintexts D. Once we obtain this tight bound, we can show that the original two-key Even-Mansour scheme is not minimal in the sense that it can be simplified into a single-key scheme with half as many key bits which provides exactly the same security, and which can be argued to be the simplest conceivable provably secure block cipher. We then show that there can be no comparable lower bound on the memory requirements of such attacks, by developing a new memoryless attack which can be applied with the same time complexity but only in the special case of D = 2^{n/2}. In the last part of the paper we analyze the security of several other variants of the Even-Mansour scheme, showing that some of them provide the same level of security while in others the lower bound proof fails for very delicate reasons.
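For illustration, here is a minimal Python sketch of the single-key Even-Mansour scheme the paper arrives at, E_K(x) = P(x ⊕ K) ⊕ K; a seeded 16-bit random table stands in for the fixed public permutation P (a real instantiation would use a cryptographic permutation on 64 or 128 bits):

```python
import random

N = 16                                 # toy block size in bits
rng = random.Random(2012)
P = list(range(2 ** N))
rng.shuffle(P)                         # the fixed, publicly known permutation P
P_inv = [0] * (2 ** N)
for x, y in enumerate(P):
    P_inv[y] = x

def em_encrypt(x, k):
    return P[x ^ k] ^ k                # E_K(x) = P(x XOR K) XOR K

def em_decrypt(y, k):
    return P_inv[y ^ k] ^ k

k = rng.randrange(2 ** N)              # the single n-bit key
assert em_decrypt(em_encrypt(0x1234, k), k) == 0x1234
```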

156 citations


Journal ArticleDOI
01 Sep 2012
TL;DR: The capability of the proposed joint encryption/watermarking system to securely make available security attributes in both spatial and encrypted domains while minimizing image distortion is demonstrated.
Abstract: In this paper, we propose a joint encryption/watermarking system for the purpose of protecting medical images. This system is based on an approach which combines a substitutive watermarking algorithm, the quantization index modulation, with an encryption algorithm: a stream cipher algorithm (e.g., the RC4) or a block cipher algorithm (e.g., the AES in cipher block chaining (CBC) mode of operation). Our objective is to give access to the outcomes of the image integrity and of its origin even though the image is stored encrypted. If watermarking and encryption are conducted jointly at the protection stage, watermark extraction and decryption can be applied independently. The security analysis of our scheme and experimental results achieved on 8-bit depth ultrasound images as well as on 16-bit encoded positron emission tomography images demonstrate the capability of our system to securely make available security attributes in both spatial and encrypted domains while minimizing image distortion. Furthermore, by making use of the AES block cipher in CBC mode, the proposed system is compliant with or transparent to the DICOM standard.
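The encryption half of such a scheme can be sketched with AES in CBC mode via the pyca/cryptography package; the helper name and the naive padding below are illustrative simplifications, and the watermark embedding (quantization index modulation) would already have been applied to the pixel data:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_image_bytes(pixels: bytes, key: bytes) -> bytes:
    """AES-CBC encrypt watermarked pixel data; key must be 16, 24, or 32 bytes."""
    iv = os.urandom(16)                             # fresh IV per image
    padded = pixels + bytes(16 - len(pixels) % 16)  # naive padding, sketch only
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()
```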

137 citations


Book ChapterDOI
19 Mar 2012
TL;DR: This paper introduces the first masking schemes which can be applied in software to efficiently protect any s-box at any order, giving optimal methods for the set of power functions and efficient heuristics for the general case.
Abstract: Masking is a common countermeasure against side-channel attacks. The principle is to randomly split every sensitive intermediate variable occurring in the computation into d+1 shares, where d is called the masking order and plays the role of a security parameter. The main issue while applying masking to protect a block cipher implementation is to design an efficient scheme for the s-box computations. Actually, masking schemes with arbitrary order only exist for Boolean circuits and for the AES s-box. Although any s-box can be represented as a Boolean circuit, applying such a strategy leads to inefficient implementation in software. The design of an efficient and generic higher-order masking scheme was hence until now an open problem. In this paper, we introduce the first masking schemes which can be applied in software to efficiently protect any s-box at any order. We first describe a general masking method and we introduce a new criterion for an s-box that relates to the best efficiency achievable with this method. Then we propose concrete schemes that aim to approach the criterion. Specifically, we give optimal methods for the set of power functions, and we give efficient heuristics for the general case. As an illustration we apply the new schemes to the DES and PRESENT s-boxes and we provide implementation results.
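The share-splitting step itself is simple to sketch in Python: a sensitive byte x becomes d+1 Boolean shares whose XOR equals x, so any d of them are jointly uniform (the hard part the paper addresses, evaluating an s-box securely on the shares, is not shown here):

```python
import random
from functools import reduce

def mask(x: int, d: int) -> list:
    """Split byte x into d+1 shares with share_0 ^ ... ^ share_d == x."""
    shares = [random.randrange(256) for _ in range(d)]
    shares.append(x ^ reduce(lambda a, b: a ^ b, shares, 0))
    return shares

def unmask(shares) -> int:
    return reduce(lambda a, b: a ^ b, shares, 0)

assert unmask(mask(0x3A, 3)) == 0x3A   # third-order masking: 4 shares
```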

128 citations


Book ChapterDOI
19 Mar 2012
TL;DR: The concept of biclique as a tool for preimage attacks was introduced in this paper, which employs many powerful techniques from differential cryptanalysis of block ciphers and hash functions.
Abstract: We present a new concept of biclique as a tool for preimage attacks, which employs many powerful techniques from differential cryptanalysis of block ciphers and hash functions. The new tool has proved to be widely applicable by inspiring many authors to publish new results on the full versions of AES, KASUMI, IDEA, and Square. In this paper, we show how our concept leads to the first cryptanalysis of the round-reduced Skein hash function, and describe an attack on the SHA-2 hash function with more rounds than before.

128 citations


Book ChapterDOI
10 Jul 2012
TL;DR: This paper provides implementations of 12 block ciphers on an ATMEL AVR ATtiny45 8-bit microcontroller, makes the corresponding source code available on a web page, and evaluates performance figures of the implementations with respect to different metrics, including energy-consumption measurements, showing improvements compared to existing implementations.
Abstract: The design of lightweight block ciphers has been a very active research topic over the last years. However, the lack of comparative source codes generally makes it hard to evaluate the extent to which implementations of different ciphers actually reach their low-cost goals on various platforms. This paper reports on an initiative aiming to address this issue. First, we provide implementations of 12 block ciphers on an ATMEL AVR ATtiny45 8-bit microcontroller, and make the corresponding source code available on a web page. All implementations are made public under an open-source license. Common interfaces and design goals are followed by all designers to achieve comparable implementation results. Second, we evaluate performance figures of our implementations with respect to different metrics, including energy-consumption measurements, and show our improvements compared to existing implementations.

128 citations


Book ChapterDOI
19 Mar 2012
TL;DR: McOE, an efficient design for On-Line Authenticated Encryption schemes, is introduced, and one of the family members, McOEx, a design based solely on a standard block cipher, is presented in detail; it provably guarantees reasonable security against general adversaries as well as standard security against nonce-respecting adversaries.
Abstract: On-Line Authenticated Encryption (OAE) combines privacy with data integrity and is on-line computable. Most block cipher-based schemes for Authenticated Encryption can be run on-line and are provably secure against nonce-respecting adversaries, but they fail badly for more general adversaries. This is not a theoretical observation only: in practice, the reuse of nonces is a frequent issue. In recent years, cryptographers developed misuse-resistant schemes for Authenticated Encryption. These guarantee excellent security even against general adversaries which are allowed to reuse nonces. Their disadvantage is that encryption can be performed in an off-line way only. This paper considers OAE schemes dealing both with nonce-respecting and with general adversaries. It introduces McOE, an efficient design for OAE schemes. For this we present in detail one of the family members, McOEx, which is a design solely based on a standard block cipher. Like all the other members of the McOE family, it provably guarantees reasonable security against general adversaries as well as standard security against nonce-respecting adversaries.

Book ChapterDOI
09 Sep 2012
TL;DR: Implementation results show that the most significant differences between lightweight ciphers are observed when considering both encryption and decryption architectures, and the impact of key scheduling algorithms.
Abstract: We provide a comprehensive evaluation of several lightweight block ciphers with respect to various hardware performance metrics, with a particular focus on the energy cost. This case study serves as a background for discussing general issues related to the relative nature of hardware implementations comparisons. We also use it to extract intuitive observations for new algorithm designs. Implementation results show that the most significant differences between lightweight ciphers are observed when considering both encryption and decryption architectures, and the impact of key scheduling algorithms. Yet, these differences are moderated when looking at their amplitude, and comparing them with the impact of physical parameter tuning, e.g. frequency/voltage scaling.

DOI
23 Jan 2012
TL;DR: This National Institute of Standards and Technology Special Publication 800-67, Revision 2: Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher specifies the TDEA, including its primary component cryptographic engine, the Data Encryption Algorithm (DEA).
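TDEA's encrypt operation is the encrypt-decrypt-encrypt (EDE) composition of the DES engine under a key bundle (K1, K2, K3). The sketch below expresses that composition generically in Python; a toy XOR cipher is plugged in only so the round trip runs, and is emphatically not the DEA:

```python
# Generic EDE composition; enc/dec stand in for the single-key DES engine.

def tdea_encrypt(block, keys, enc, dec):
    k1, k2, k3 = keys
    return enc(dec(enc(block, k1), k2), k3)

def tdea_decrypt(block, keys, enc, dec):
    k1, k2, k3 = keys
    return dec(enc(dec(block, k3), k2), k1)

toy_enc = lambda b, k: b ^ k          # placeholder engine (assumption)
toy_dec = lambda b, k: b ^ k

keys = (0x1111, 0x2222, 0x3333)
ct = tdea_encrypt(0xBEEF, keys, toy_enc, toy_dec)
assert tdea_decrypt(ct, keys, toy_enc, toy_dec) == 0xBEEF
```

Note how a keying option with K1 = K2 = K3 collapses EDE to single DES, which is what makes TDEA backward compatible.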

Book ChapterDOI
19 Mar 2012
TL;DR: In this paper, a statistical technique is proposed to significantly reduce the data complexity of zero correlation linear cryptanalysis by exploiting the high number of zero correlation linear approximations available.
Abstract: Zero correlation linear cryptanalysis is a novel key recovery technique for block ciphers proposed in [5]. It is based on linear approximations with probability of exactly 1/2 (which corresponds to the zero correlation). Some block ciphers turn out to have multiple linear approximations with correlation zero for each key over a considerable number of rounds. Zero correlation linear cryptanalysis is the counterpart of impossible differential cryptanalysis in the domain of linear cryptanalysis, though having many technical distinctions and sometimes resulting in stronger attacks. In this paper, we propose a statistical technique to significantly reduce the data complexity using the high number of zero correlation linear approximations available. We also identify zero correlation linear approximations for 14 and 15 rounds of TEA and XTEA. Those result in key-recovery attacks for 21-round TEA and 25-round XTEA, while requiring less data than the full code book. In the single secret key setting, these are structural attacks breaking the highest number of rounds for both ciphers. The findings of this paper demonstrate that the prohibitive data complexity requirements are not inherent in the zero correlation linear cryptanalysis and can be overcome. Moreover, our results suggest that zero correlation linear cryptanalysis can actually break more rounds than the best known impossible differential cryptanalysis does for relevant block ciphers. This might make a security re-evaluation of some ciphers necessary in view of the new attack.
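For reference, the XTEA routine these attacks target, in a standard Python rendering of the public-domain code with a configurable round count so that round-reduced variants such as the 25 rounds attacked above can be experimented with:

```python
# XTEA encryption: v is a pair of 32-bit words, key is four 32-bit words.
M = 0xFFFFFFFF
DELTA = 0x9E3779B9

def xtea_encrypt(v, key, rounds=32):
    v0, v1 = v
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & M
        s = (s + DELTA) & M
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & M
    return v0, v1
```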

Book ChapterDOI
15 Apr 2012
TL;DR: This paper extends the Even-Mansour construction to t permutations P1,…,Pt and t+1 keys in a provable security setting, showing that an attacker needs to make at least 2^{2n/3} queries to the underlying permutations to distinguish the construction from random.
Abstract: This paper considers--for the first time--the concept of key-alternating ciphers in a provable security setting. Key-alternating ciphers can be seen as a generalization of a construction proposed by Even and Mansour in 1991. This construction builds a block cipher PX from an n-bit permutation P and two n-bit keys k0 and k1, setting PX_{k0,k1}(x) = k1 ⊕ P(x ⊕ k0). Here we consider a (natural) extension of the Even-Mansour construction with t permutations P1,…,Pt and t+1 keys, k0,…, kt. We demonstrate in a formal model that such a cipher is secure in the sense that an attacker needs to make at least 2^{2n/3} queries to the underlying permutations to be able to distinguish the construction from random. We argue further that the bound is tight for t=2 but there is a gap in the bounds for t>2, which is left as an open and interesting problem. Additionally, in terms of statistical attacks, we show that the distribution of Fourier coefficients for the cipher over all keys is close to ideal. Lastly, we define a practical instance of the construction with t=2 using AES referred to as AES2. Any attack on AES2 with complexity below 2^{85} will have to make use of AES with a fixed known key in a non-black box manner. However, we conjecture its security is 2^{128}.
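A minimal Python sketch of the construction for t = 2 on a toy 16-bit block, with seeded random tables standing in for the public permutations; AES2 instantiates the same shape with AES under two fixed, publicly known keys:

```python
import random

N = 16                                   # toy block size; the paper's n is 64/128
rng = random.Random(42)

def public_perm():
    p = list(range(2 ** N))
    rng.shuffle(p)
    return p

P1, P2 = public_perm(), public_perm()    # fixed, publicly known permutations

def ka_encrypt(x, k0, k1, k2):
    """Key-alternating cipher, t = 2: k2 XOR P2(k1 XOR P1(x XOR k0))."""
    return P2[P1[x ^ k0] ^ k1] ^ k2
```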

Proceedings ArticleDOI
09 Sep 2012
TL;DR: A new way to classify fault attacks against block ciphers is proposed, exhibiting their capacity to be combined with observation attacks, together with a set of common protections against side-channel and fault attacks, namely higher-order masking schemes and detection and infection countermeasures, and how they can be combined.
Abstract: Recent works show that a combination of perturbation and observation attacks on symmetric ciphers thwarts state-of-the-art countermeasures. In this paper, we first propose a new way - to our knowledge - to classify fault attacks against block ciphers, allowing us to exhibit their capacity to be combined with observation attacks. We then present a set of common protections against side-channel and fault attacks, namely higher-order masking schemes, detection and infection countermeasures, and how they can be combined. We show that the combination of a higher-order masking scheme and a detection countermeasure can actually be defeated by a slight variant of the combined attack of Roche et al., even if one applies their patch. Furthermore, we also demonstrate that none of the published infection countermeasures is robust against fault attacks. Finally, using randomness, we propose a set of enhanced countermeasures that thwart considered threats.

Book ChapterDOI
15 Apr 2012
TL;DR: For the first time, an approach is described to noticeably speed up key recovery for the full 8.5-round IDEA, and it is shown that the biclique approach to block cipher cryptanalysis not only obtains results on more rounds, but also improves time and data complexities over existing attacks.
Abstract: We apply and extend the recently introduced biclique framework to IDEA and for the first time describe an approach to noticeably speed-up key-recovery for the full 8.5 round IDEA. We also show that the biclique approach to block cipher cryptanalysis not only obtains results on more rounds, but also improves time and data complexities over existing attacks. We consider the first 7.5 rounds of IDEA and demonstrate a variant of the approach that works with practical data complexity. The conceptual contribution is the narrow-bicliques technique: the recently introduced independent-biclique approach extended with ways to allow for a significantly reduced data complexity with everything else being equal. For this we use available degrees of freedom as known from hash cryptanalysis to narrow the relevant differential trails. Our cryptanalysis is of high computational complexity, and does not threaten the practical use of IDEA in any way, yet the techniques are practically verified to a large extent.

Book ChapterDOI
19 Aug 2012
TL;DR: In this article, the authors show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms.
Abstract: In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with r independent n-bit keys. All the previous error-free attacks required time T and memory M satisfying TM = 2^{rn}, and even if "false negatives" are allowed, no attack could achieve TM < 2^{3rn/4}. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of TM, such as T = 2^{4n} time and M = 2^{n} memory for breaking the sequential execution of r = 7 block ciphers. The improvement ratio we obtain increases in an unbounded way as r increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to attack hash functions with a rebound attack, to solve hard knapsack problems, and to find the shortest solution to a generalized version of Rubik's cube with better time complexities for small memory complexities than the best previously known algorithms.
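The TM = 2^{rn} baseline that dissection improves on is the classical meet-in-the-middle attack; for r = 2 it already finds all candidate key pairs in about 2^n work and 2^n memory from a single plaintext/ciphertext pair. A toy Python demo with seeded 8-bit random permutations standing in for the keyed block cipher:

```python
import random

n = 8                                          # toy block/key size in bits
rng = random.Random(7)
# One random permutation per key: a stand-in keyed block cipher E_k.
E = {k: rng.sample(range(2 ** n), 2 ** n) for k in range(2 ** n)}
D = {k: {c: m for m, c in enumerate(perm)} for k, perm in E.items()}

def mitm_double(p, c):
    """All (k1, k2) with E_k2(E_k1(p)) == c, in ~2^n work and 2^n memory."""
    middle = {}
    for k1 in range(2 ** n):                   # forward half: store E_k1(p)
        middle.setdefault(E[k1][p], []).append(k1)
    return [(k1, k2) for k2 in range(2 ** n)   # backward half: meet on D_k2(c)
            for k1 in middle.get(D[k2][c], [])]

k1, k2, p = 0x3C, 0xA5, 0x42
c = E[k2][E[k1][p]]
assert (k1, k2) in mitm_double(p, c)           # true pair among the candidates
```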

Book ChapterDOI
19 Mar 2012
TL;DR: It is shown that GHASH has much wider classes of weak keys in its 512 multiplicative subgroups; some of their properties are analyzed, and experimental results on AES-GCM weak key search are given.
Abstract: The Galois/Counter Mode (GCM) of operation has been standardized by NIST to provide single-pass authenticated encryption. The GHASH authentication component of GCM belongs to a class of Wegman-Carter polynomial hashes that operate in the field GF(2^128). We present message forgery attacks that are made possible by its extremely smooth-order multiplicative group which splits into 512 subgroups. GCM uses the same block cipher key K to both encrypt data and to derive the generator H of the authentication polynomial for GHASH. In present literature, only the trivial weak key H=0 has been considered. We show that GHASH has much wider classes of weak keys in its 512 multiplicative subgroups, analyze some of their properties, and give experimental results on AES-GCM weak key search. Our attacks can be used not only to bypass message authentication with garbage but also to target specific plaintext bits if a polynomial MAC is used in conjunction with a stream cipher. These attacks can also be applied with varying efficiency to other polynomial hashes and MACs, depending on their field properties. Our findings show that especially the use of short polynomial-evaluation MACs should be avoided if the underlying field has a smooth multiplicative order.
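The weak-key effect is easy to reproduce in a toy prime field instead of GF(2^128): if the hash key H has small multiplicative order t, then H^i = H^{i+t}, so swapping message blocks whose indices differ by t leaves the authenticator unchanged, yielding a forgery. A Python sketch (GF(257) and the values are illustrative; GHASH itself works in GF(2^128)):

```python
p = 257                                   # toy prime field; group order 256 is smooth

def poly_mac(blocks, H):
    """Toy polynomial-evaluation hash: sum of m_i * H^i over GF(p)."""
    return sum(m * pow(H, i + 1, p) for i, m in enumerate(blocks)) % p

H = 256                                   # H = -1 mod 257, multiplicative order t = 2
msg     = [10, 20, 30, 40]
forgery = [30, 20, 10, 40]                # blocks at indices i and i+t swapped
assert msg != forgery and poly_mac(msg, H) == poly_mac(forgery, H)
```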

Journal ArticleDOI
TL;DR: New attacks are introduced that find the AES-128 key with two faults in a one-byte fault model without exhaustive search, and the AES-192 and AES-256 keys with six and four faults, respectively.
Abstract: Differential fault analysis (DFA) finds the key of a block cipher using differential information between correct and faulty ciphertexts obtained by inducing faults during the computation of ciphertexts. Among many ciphers, advanced encryption standard (AES) has been the main target of DFA due to its popularity. The naive implementation of AES is known to be vulnerable to DFA, which can be split into two categories depending on the fault location: the DFA on the State and the DFA on the Key Schedule. For the first category, much research has been done and very efficient methods were devised. However, there is still a lack of research in the second category. The advantage of DFA on the Key Schedule is that it can even defeat some fault-protected AES implementations. Research on DFA has been diversified into several directions: reducing the number of required faults, changing fault models (from one-byte fault to multibyte fault and vice versa), extending to AES-192 and AES-256, and exploiting faults induced at an earlier round. This paper deals with all these directions together in DFA on AES Key Schedule. We introduce new attacks that find the AES-128 key with two faults in a one-byte fault model without exhaustive search and the AES-192 and the AES-256 keys with six and four faults, respectively.

01 Jan 2012
TL;DR: This talk will present a perspective on the current state of play in the field of encryption algorithms, in particular on private key block ciphers which are widely used for bulk data and link encryption.
Abstract: This talk will present a perspective on the current state of play in the field of encryption algorithms, in particular on private key block ciphers which are widely used for bulk data and link encryption. We initially survey some of the more popular and interesting algorithms currently in use. This paper focuses mainly on the different kinds of encryption techniques that exist, comparing the techniques as a literature survey and aiming at an extensive experimental study of implementations of various available encryption techniques. It also covers image and information encryption techniques, the performance parameters used in encryption processes, and an analysis of their security issues.

Journal Article
TL;DR: In this paper, the Even-Mansour construction is extended to t permutations P1, …, Pt and t+1 keys, k0, …, kt, in a provable security setting.
Abstract: This paper considers—for the first time—the concept of key-alternating ciphers in a provable security setting. Key-alternating ciphers can be seen as a generalization of a construction proposed by Even and Mansour in 1991. This construction builds a block cipher PX from an n-bit permutation P and two n-bit keys k0 and k1, setting PX_{k0,k1}(x) = k1 ⊕ P(x ⊕ k0). Here we consider a (natural) extension of the Even-Mansour construction with t permutations P1, …, Pt and t+1 keys, k0, …, kt. We demonstrate in a formal model that such a cipher is secure in the sense that an attacker needs to make at least 2^{2n/3} queries to the underlying permutations to be able to distinguish the construction from random. We argue further that the bound is tight for t = 2 but there is a gap in the bounds for t > 2, which is left as an open and interesting problem. Additionally, in terms of statistical attacks, we show that the distribution of Fourier coefficients for the cipher over all keys is close to ideal. Lastly, we define a practical instance of the construction with t = 2 using AES referred to as AES2. Any attack on AES2 with complexity below 2^{85} will have to make use of AES with a fixed known key in a non-black box manner. However, we conjecture its security is 2^{128}.

Journal ArticleDOI
TL;DR: This study analyzes the security weaknesses of the “C.

Book ChapterDOI
09 Jul 2012
TL;DR: It is demonstrated that the MITM attack is the most powerful attack in the single-key setting on those ciphers with respect to the number of attacked rounds, and the possibility of applying the recent speed-up of key search based on the MITM attack to those ciphers is considered.
Abstract: In this paper, we investigate the security of the lightweight block ciphers against the meet-in-the-middle (MITM) attack. Since the MITM attack mainly exploits low key-dependency in a key expanding function, the block ciphers having a simple key expanding function are likely to be vulnerable to the MITM attack. On the other hand, such a simple key expanding function leads to a compact implementation, and thus is utilized in several lightweight block ciphers. However, the security of such lightweight block ciphers against the MITM attack has not been studied well so far. We apply the MITM attack to the ciphers, then give more accurate security analysis for them. Specifically, combining thorough analysis with new techniques, we present the MITM attacks on 29, 8, 16, 14 and 21 rounds of XTEA, LED-64, LED-128, Piccolo-80 and Piccolo-128, respectively. Consequently, it is demonstrated that the MITM attack is the most powerful attack in the single-key setting on those ciphers with respect to the number of attacked rounds. Moreover, we consider the possibility of applying the recent speed-up of key search based on the MITM attack to those ciphers.

Book ChapterDOI
09 Sep 2012
TL;DR: The number of rounds, their complexity, and the similarity of encryption and decryption procedures have a strong impact on the results, and this paper concludes with a set of recommendations for aspiring low-latency block cipher designers.
Abstract: The processing time required by a cryptographic primitive implemented in hardware is an important metric for its performance but it has not received much attention in recent publications on lightweight cryptography. Nevertheless, there are important applications for cost effective low-latency encryption. As the first step in the field, this paper explores the low-latency behavior of hardware implementations of a set of block ciphers. The latency of the implementations is investigated as well as the trade-offs with other metrics such as circuit area, time-area product, power, and energy consumption. The obtained results are related back to the properties of the underlying cipher algorithm and, as it turns out, the number of rounds, their complexity, and the similarity of encryption and decryption procedures have a strong impact on the results. We provide a qualitative description and conclude with a set of recommendations for aspiring low-latency block cipher designers.

Journal ArticleDOI
TL;DR: This paper presents attacks on up to four rounds of AES that require at most three known/chosen plaintexts, then applies these attacks to cryptanalyze an AES-based stream cipher and to mount the best known plaintext attack on six-round AES.
Abstract: The majority of current attacks on reduced-round variants of block ciphers seeks to maximize the number of rounds that can be broken, using less data than the entire codebook and less time than exhaustive key search. In this paper, we pursue a different approach, restricting the data available to the adversary to a few plaintext/ciphertext pairs. We argue that consideration of such attacks (which received little attention in recent years) improves our understanding of the security of block ciphers and of other cryptographic primitives based on block ciphers. In particular, these attacks can be leveraged to more complex attacks, either on the block cipher itself or on other primitives (e.g., stream ciphers, MACs, or hash functions) that use a small number of rounds of the block cipher as one of their components. As a case study, we consider the Advanced Encryption Standard (AES), the most widely used block cipher. The AES round function is used in many cryptographic primitives, such as the hash functions Lane, SHAvite-3, and Vortex or the message authentication codes ALPHA-MAC, Pelican, and Marvin. We present attacks on up to four rounds of AES that require at most three known/chosen plaintexts. We then apply these attacks to cryptanalyze an AES-based stream cipher (which follows the leak extraction methodology), and to mount the best known plaintext attack on six-round AES.

Book ChapterDOI
09 Sep 2012
TL;DR: It is shown that for simpler constructions leakage-resilience can indeed be obtained when one aims for relaxed security notions where the leakage functions and/or the inputs to the primitive are chosen non-adaptively.
Abstract: Leakage resilient cryptography attempts to incorporate side-channel leakage into the black-box security model and designs cryptographic schemes that are provably secure within it. Informally, a scheme is leakage-resilient if it remains secure even if an adversary learns a bounded amount of arbitrary information about the scheme's internal state. Unfortunately, most leakage resilient schemes are unnecessarily complicated in order to achieve strong provable security guarantees. As advocated by Yu et al. [CCS'10], this mostly is an artefact of the security proof and in practice much simpler constructions may already suffice to protect against realistic side-channel attacks. In this paper, we show that indeed for simpler constructions leakage-resilience can be obtained when we aim for relaxed security notions where the leakage functions and/or the inputs to the primitive are chosen non-adaptively. For example, we show that a three-round Feistel network instantiated with a leakage resilient PRF yields a leakage resilient PRP if the inputs are chosen non-adaptively. (This complements the result of Dodis and Pietrzak [CRYPTO'10] who show that if adaptive queries are allowed, a superlogarithmic number of rounds is necessary.) We also show that a minor variation of the classical GGM construction gives a leakage resilient PRF if both the leakage function and the inputs are chosen non-adaptively.
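A Python sketch of the three-round Feistel construction from this result, with an HMAC-SHA256-based PRF standing in for the leakage-resilient PRF (the leakage model itself is not modeled here; this only shows the structure and its invertibility):

```python
import hmac, hashlib

def F(key: bytes, x: bytes) -> bytes:
    """Round PRF; HMAC-SHA256 truncated to 16 bytes as a stand-in."""
    return hmac.new(key, x, hashlib.sha256).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def feistel3_encrypt(block: bytes, keys) -> bytes:
    L, R = block[:16], block[16:]          # 32-byte block
    for k in keys:                         # three Feistel rounds
        L, R = R, xor(L, F(k, R))
    return L + R

def feistel3_decrypt(block: bytes, keys) -> bytes:
    L, R = block[:16], block[16:]
    for k in reversed(keys):
        L, R = xor(R, F(k, L)), L
    return L + R

keys = [b"k1" * 8, b"k2" * 8, b"k3" * 8]   # three independent round keys
pt = bytes(range(32))
assert feistel3_decrypt(feistel3_encrypt(pt, keys), keys) == pt
```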

Proceedings ArticleDOI
19 Aug 2012
TL;DR: This paper provides the first comprehensive hardware architecture comparison between Clefia and Present, as well as a comparison with the current National Institute of Standards and Technology (NIST) standard, the Advanced Encryption Standard.
Abstract: As ubiquitous computing becomes a reality, sensitive information is increasingly processed and transmitted by smart cards, mobile devices and various types of embedded systems. This has led to the requirement for a new class of lightweight cryptographic algorithms to ensure security in these resource constrained environments. The International Organization for Standardization (ISO) has recently standardized two low-cost block ciphers for this purpose, Clefia and Present. In this paper we provide the first comprehensive hardware architecture comparison between these ciphers, as well as a comparison with the current National Institute of Standards and Technology (NIST) standard, the Advanced Encryption Standard.

Book ChapterDOI
07 Oct 2012
TL;DR: In this article, the idea of infective computation is used to counter fault attacks on block ciphers: a fault injected into a cipher, dummy, or redundant round infects the ciphertext such that an attacker cannot derive any information on the secret key being used.
Abstract: Implementation attacks pose a serious threat for the security of cryptographic devices and there are a multitude of countermeasures that are used to prevent them. Two countermeasures used in implementations of block ciphers to increase the complexity of such attacks are the use of dummy rounds and redundant computation with consistency checks to prevent fault attacks. In this paper we present several countermeasures based on the idea of infective computation. Our countermeasures ensure that a fault injected into a cipher, dummy, or redundant round will infect the ciphertext such that an attacker cannot derive any information on the secret key being used. This has one clear advantage: the propagation of faults prevents an attacker from being able to conduct any fault analysis on any corrupted ciphertexts. As a consequence, there is no need for any test at the end of an implementation to determine if a fault has been injected and a ciphertext can always be returned.
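The idea can be sketched in Python as follows: run the cipher twice and blind the output with a value that is zero exactly when the two computations agree, so no explicit pass/fail test is exposed. Here enc is a placeholder cipher, and published infective schemes realize the infection step without the data-dependent branch used below for readability:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def infect(diff: bytes) -> bytes:
    # An all-zero difference passes through unchanged; any fault-induced
    # difference is expanded to fresh randomness, destroying the structure
    # differential fault analysis needs.
    return os.urandom(len(diff)) if any(diff) else diff

def protected_encrypt(enc, pt, key):
    c1 = enc(pt, key)                       # main computation
    c2 = enc(pt, key)                       # redundant computation
    return xor(c1, infect(xor(c1, c2)))     # no consistency test, no abort path

toy = lambda p, k: bytes((x + k[0]) & 0xFF for x in p)   # placeholder cipher
assert protected_encrypt(toy, b"block", b"\x07") == toy(b"block", b"\x07")
```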

Book ChapterDOI
26 Jun 2012
TL;DR: This work designs a block cipher that fits the masking constraints of a proven masking scheme well, choosing an adequate S-box, which is non-bijective.
Abstract: Many papers deal with the problem of constructing an efficient masking scheme for existing block ciphers. We take the reverse approach: that is, given a proven masking scheme (Rivain and Prouff, CHES 2010) we design a block cipher that fits the masking constraints well. The difficulty of implementing efficient masking for a block cipher comes mainly from the S-boxes. Therefore the choice of an adequate S-box is the first and most critical step of our work. The S-box we selected is non-bijective; we discuss the resulting design and security problems. A complete design of the cipher is given, as well as some implementation results.