
Showing papers presented at "International Conference on Cryptology in Africa in 2014"


Book ChapterDOI
28 May 2014
TL;DR: A theoretical and practical comparison of two Ring-LWE-based, scale-invariant, leveled homomorphic encryption schemes: Fan and Vercauteren’s adaptation of BGV and the YASHE scheme of Bos, Lauter, Loftus and Naehrig.
Abstract: We conduct a theoretical and practical comparison of two Ring-LWE-based, scale-invariant, leveled homomorphic encryption schemes – Fan and Vercauteren’s adaptation of BGV and the YASHE scheme proposed by Bos, Lauter, Loftus and Naehrig. In particular, we explain how to choose parameters to ensure correctness and security against lattice attacks. Our parameter selection improves the approach of van de Pol and Smart to choose parameters for schemes based on the Ring-LWE problem by using the BKZ-2.0 simulation algorithm.

143 citations


Book ChapterDOI
28 May 2014
TL;DR: In this article, the authors present a new threshold implementation of AES-128 encryption that is 18% smaller, 7.5% faster and requires 8% less random bits than the implementation from Eurocrypt 2011.
Abstract: Threshold Implementations provide provable security against first-order power analysis attacks for hardware and software implementations. Like masking, the approach relies on secret sharing, but it differs in the implementation of logic functions. At Eurocrypt 2011, Moradi et al. published the most compact Threshold Implementation of AES-128 encryption to date. Their work shows that the number of required random bits may be an additional evaluation criterion, next to area and speed. We present a new Threshold Implementation of AES-128 encryption that is 18% smaller, 7.5% faster and that requires 8% less random bits than the implementation from Eurocrypt 2011. In addition, we provide results of a practical security evaluation based on real power traces in adversary-friendly conditions. They confirm the first-order attack resistance of our implementation and show good resistance against higher-order attacks.

95 citations
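
To make the Threshold Implementation idea concrete, here is a minimal sketch of a first-order TI of a single AND gate with three shares. This is the textbook construction the approach rests on, not the paper's AES-128 design; all names are illustrative.

```python
# A minimal sketch of the threshold-implementation (TI) idea on one AND
# gate with 3 shares; a textbook illustration, not the paper's AES-128.
import secrets

def share3(bit):
    """Split a bit into 3 random shares XOR-summing to the bit."""
    s1, s2 = secrets.randbits(1), secrets.randbits(1)
    return s1, s2, s1 ^ s2 ^ bit

def ti_and(x, y):
    """3-share TI of z = x & y. Each output share omits one input index
    (non-completeness), so no single share sees the unshared inputs."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # independent of share 1
    z2 = (x3 & y3) ^ (x1 & y3) ^ (x3 & y1)   # independent of share 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # independent of share 3
    return z1, z2, z3

for a in (0, 1):
    for b in (0, 1):
        z = ti_and(share3(a), share3(b))
        assert z[0] ^ z[1] ^ z[2] == a & b   # shares recombine to a & b
```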


Book ChapterDOI
28 May 2014
TL;DR: New results for rank-based cryptography are surveyed: cryptosystems which are based on error-correcting codes embedded with the rank metric, together with a zero-knowledge authentication scheme and a new signature scheme based on a mixed errors-erasures decoding of LRPC codes.
Abstract: In this paper we survey new results for rank-based cryptography: cryptosystems which are based on error-correcting codes embedded with the rank metric. These new results first concern the LRPC cryptosystem, a cryptosystem based on a new class of decodable rank codes: the LRPC codes (Low Rank Parity Check codes), which can be seen as an analog of the classical LDPC codes but for the rank metric. The LRPC cryptosystem can benefit from very small public keys of less than 2,000 bits and is moreover very fast. We also present new optimized attacks for solving the general case of the rank syndrome decoding problem, together with a zero-knowledge authentication scheme and a new signature scheme based on a mixed errors-erasures decoding of LRPC codes, both systems having public keys of a few thousand bits. These recent results highlight that rank-based cryptography has many good features that can be used for practical cryptosystems.

62 citations
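
As background for the rank metric these cryptosystems rely on, the sketch below computes the rank weight of a word over GF(2^m): expand each coordinate into an m-bit vector and take the GF(2)-rank of the resulting matrix. A toy illustration only; it does not attempt LRPC decoding.

```python
# Rank weight over GF(2^m): coordinates are m-bit ints; the weight is the
# GF(2)-rank of the matrix whose columns are those coordinates.

def gf2_rank(vectors):
    """Rank over GF(2) of a list of integer bitmasks."""
    rank = 0
    rows = list(vectors)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot              # lowest set bit as pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def rank_weight(word):
    return gf2_rank(word)

# Hamming weight 3, but rank weight 1: all coordinates span one dimension.
assert rank_weight([0b101, 0b101, 0b101, 0]) == 1
# Coordinates 001, 010, 011 span two dimensions: rank weight 2.
assert rank_weight([0b001, 0b010, 0b011]) == 2
```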


Book ChapterDOI
28 May 2014
TL;DR: This paper focuses on the scheme proposed by Carlet et al at FSE 2012, and improved by Roy and Vivek at CHES 2013, and shows that this scheme is today the most efficient one to secure a generic S-box at any order.
Abstract: To defeat side-channel attacks, the implementation of block cipher algorithms in embedded devices must include dedicated countermeasures. To this end, security designers usually apply secret sharing techniques and build masking schemes to securely operate on shared data. The popularity of this approach can be explained by the fact that it enables formal security proofs. The construction of masking schemes thwarting higher-order side-channel attacks, which correspond to a powerful adversary able to exploit the leakage of the different shares, has been a hot topic during the last decade. Several solutions have been proposed, usually at the cost of significant performance overheads. As a result, the quest for efficient masked S-box implementations is still ongoing. In this paper, we focus on the scheme proposed by Carlet et al. at FSE 2012, and later improved by Roy and Vivek at CHES 2013. This scheme is today the most efficient one to secure a generic S-box at any order. By exploiting an idea introduced by Coron et al. at FSE 2013, we show that Carlet et al.’s scheme can still be improved for S-boxes with input dimension larger than four. We obtain this result thanks to a new definition for the addition-chain exponentiation used during the masked S-box processing. For the AES and DES S-boxes, we show that our improvement leads to significant efficiency gains.

43 citations
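
For context, the addition-chain exponentiation being masked is the AES S-box inversion x^254 in GF(2^8). The sketch below shows a standard 4-multiplication chain, not the paper's improved chain; in a masked implementation the squarings are GF(2)-linear and cheap to protect, so only the multiplications need a costly secure gadget.

```python
# Addition-chain exponentiation for x^254 in GF(2^8): 7 squarings (linear)
# and 4 multiplications. Illustrates the object being masked, not the
# paper's new chain.

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Carry-less multiply modulo the AES polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def gf_sq(a):                              # GF(2)-linear
    return gf_mul(a, a)

def inv_254(x):
    x2   = gf_sq(x)                        # x^2
    x3   = gf_mul(x2, x)                   # x^3    (mult 1)
    x12  = gf_sq(gf_sq(x3))                # x^12
    x15  = gf_mul(x12, x3)                 # x^15   (mult 2)
    x240 = gf_sq(gf_sq(gf_sq(gf_sq(x15)))) # x^240
    x252 = gf_mul(x240, x12)               # x^252  (mult 3)
    return gf_mul(x252, x2)                # x^254  (mult 4)

# x^254 = x^(-1) for x != 0, and 0 maps to 0:
for x in range(256):
    y = inv_254(x)
    assert (x == 0 and y == 0) or gf_mul(x, y) == 1
```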


Book ChapterDOI
28 May 2014
TL;DR: In this article, the authors describe basic software techniques to improve the performance of Montgomery modular multiplication on 8-bit AVR-based microcontrollers and present a new variant of the widely-used hybrid method for multiple-precision multiplication that is 10.6% faster than the original hybrid technique.
Abstract: Modular multiplication of large integers is a performance-critical arithmetic operation of many public-key cryptosystems such as RSA, DSA, Diffie-Hellman (DH) and their elliptic curve-based variants ECDSA and ECDH. The computational cost of modular multiplication and related operations (e.g. exponentiation) poses a practical challenge to the widespread deployment of public-key cryptography, especially on embedded devices equipped with 8-bit processors (smart cards, wireless sensor nodes, etc.). In this paper, we describe basic software techniques to improve the performance of Montgomery modular multiplication on 8-bit AVR-based microcontrollers. First, we present a new variant of the widely-used hybrid method for multiple-precision multiplication that is 10.6% faster than the original hybrid technique of Gura et al. Then, we discuss different hybrid Montgomery multiplication algorithms, including Hybrid Finely Integrated Product Scanning (HFIPS), and introduce a novel approach for Montgomery multiplication, which we call Hybrid Separated Product Scanning (HSPS). Finally, we show how to perform the modular subtraction of Montgomery reduction in a regular fashion without execution of conditional statements so as to counteract Simple Power Analysis (SPA) attacks. Our AVR implementation of the HFIPS and HSPS method outperforms the Montgomery multiplication of the MIRACL Crypto SDK by up to 21.58% and 14.24%, respectively, and is twice as fast as the modular multiplication of the TinyECC library.

35 citations
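
The regular final subtraction of Montgomery reduction mentioned in the abstract can be modeled as a branch-free select. Below is a minimal sketch using Python integers with an assumed toy modulus; a real AVR implementation would perform the same idea word-by-word in assembly.

```python
# Branch-free final subtraction for Montgomery reduction: reduce t < 2*m
# to t mod m without a data-dependent branch (an SPA countermeasure).

def cond_sub_ct(t, m, n_bits):
    """Return t mod m for 0 <= t < 2*m, with no conditional statement."""
    diff = t - m
    borrow = (diff >> n_bits) & 1   # 1 iff t < m (sign bit of diff)
    mask = -borrow                  # all-ones if borrow, else zero
    return diff + (m & mask)        # add m back exactly when t < m

m = 0xFFFFFFFB                      # toy 32-bit modulus (assumption)
for t in (0, m - 1, m, m + 1, 2 * m - 1):
    assert cond_sub_ct(t, m, 32) == t % m
```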


Book ChapterDOI
28 May 2014
TL;DR: This paper proposes the security notion of Key Unlinkability for IBE, which leads to strong guarantees of trapdoor privacy in PEKS, and constructs a scheme that achieves this security notion.
Abstract: Asymmetric searchable encryption allows searches to be carried out over ciphertexts, through delegation, and by means of trapdoors issued by the owner of the data. Public Key Encryption with Keyword Search (PEKS) is a primitive with such functionality that provides delegation of exact-match searches. As it is important that ciphertexts preserve data privacy, it is also important that trapdoors do not expose the user’s search criteria. The difficulty of formalizing a security model for trapdoor privacy lies in the verification functionality, which gives the adversary the power of verifying if a trapdoor encodes a particular keyword. In this paper, we provide a broader view on what can be achieved regarding trapdoor privacy in asymmetric searchable encryption schemes, and bridge the gap between previous definitions, which give limited privacy guarantees in practice against search patterns. Since it is well-known that PEKS schemes can be trivially constructed from any Anonymous IBE scheme, we propose the security notion of Key Unlinkability for IBE, which leads to strong guarantees of trapdoor privacy in PEKS, and we construct a scheme that achieves this security notion.

33 citations


Book ChapterDOI
28 May 2014
TL;DR: The present paper proposes a scheme called DRECON to construct a block cipher with innate protection against differential power attacks (DPA); the scheme is motivated by tweakable block ciphers and is shown to be secure against first-order DPA using information-theoretic metrics.
Abstract: Side-channel attacks are considered as one of the biggest threats against modern crypto-systems. This motivates the design of ciphers which are naturally resistant against side-channel attacks. The present paper proposes a scheme called DRECON to construct a block cipher with innate protection against differential power attacks (DPA). The scheme is motivated by tweakable block ciphers and is shown to be secure against first-order DPA using information theoretic metrics. DRECON is shown to be less expensive than masking and re-keying countermeasures from the implementation perspective and can be efficiently realized on both hardware and software platforms. On FPGAs especially, DRECON can optimally utilize the abundant block RAMs available and therefore has minimal overheads. We estimate the cost overhead of DRECON in microcontrollers and FPGAs, two common targets for cryptographic applications. Finally we demonstrate the practical side-channel resistance of a DRECON implementation on a Xilinx Virtex-5 FPGA (SASEBO-GII board).

25 citations


Book ChapterDOI
28 May 2014
TL;DR: The preimage resistance of the Stribog hash function was investigated in this paper, where a meet-in-the-middle preimage attack was applied to obtain a 5-round pseudo preimage with time complexity of 2^448 and memory complexity of 2^64.
Abstract: In August 2012, the Stribog hash function was selected as the new Russian cryptographic hash standard (GOST R 34.11-2012). Stribog employs twelve rounds of an AES-based compression function operating in Miyaguchi-Preneel mode. In this paper, we investigate the preimage resistance of the Stribog hash function. In particular, we apply a meet-in-the-middle preimage attack on the compression function which allows us to obtain a 5-round pseudo preimage for a given compression function output with time complexity of 2^448 and memory complexity of 2^64. Additionally, we adopt a guess-and-determine approach to obtain a 6-round chunk separation that balances the available degrees of freedom and the guess size. The proposed chunk separation allows us to attack 6 out of 12 rounds with time and memory complexities of 2^496 and 2^112, respectively. Finally, by employing a multicollision attack, we show that preimages of the 5 and 6-round reduced hash function can be generated with time complexity of 2^481 and 2^505, respectively. The two preimage attacks have equal memory complexity of 2^256.

22 citations


Book ChapterDOI
28 May 2014
TL;DR: A new attribute-based signcryption (ABSC) scheme for linear secret-sharing scheme (LSSS)-realizable monotone access structures that is significantly more efficient than existing ABSC schemes in terms of computation cost and ciphertext size.
Abstract: In this paper, we propose a new attribute-based signcryption (ABSC) scheme for linear secret-sharing scheme (LSSS)-realizable monotone access structures that is significantly more efficient than existing ABSC schemes in terms of computation cost and ciphertext size. This new scheme utilizes only 6 pairing operations and the size of the ciphertext is constant, i.e., independent of the number of attributes used to signcrypt a message. While the secret key size increases by a factor of the number of attributes used in the system, the number of pairing evaluations is reduced to a constant. Our protocol is proven to provide ciphertext indistinguishability under adaptive chosen ciphertext attacks assuming the hardness of the decisional Bilinear Diffie-Hellman Exponent problem, and achieves existential unforgeability under adaptive chosen message attack assuming the hardness of the computational Diffie-Hellman Exponent problem. The proposed scheme achieves public verifiability of the ciphertext, enabling any party to verify the integrity and validity of the ciphertext.

18 citations


Book ChapterDOI
28 May 2014
TL;DR: This paper proposes the first lattice-based sequential aggregate signature (SAS) scheme that is provably secure in the random oracle model, shows how to instantiate it with the NTRUSign signature scheme, and shows how to generate aggregate signatures resulting in one single signature.
Abstract: We propose the first lattice-based sequential aggregate signature (SAS) scheme that is provably secure in the random oracle model. As opposed to factoring and number theory based systems, the security of our construction relies on worst-case lattice problems. Generally speaking, SAS schemes enable any group of signers ordered in a chain to sequentially combine their signatures such that the size of the aggregate signature is much smaller than the total size of all individual signatures. This paper shows how to instantiate our construction with trapdoor function families and how to generate aggregate signatures resulting in one single signature. In particular, we instantiate our construction with the provably secure NTRUSign signature scheme presented by Stehlé and Steinfeld at Eurocrypt 2011. This setting allows us to generate aggregate signatures that are asymptotically as large as individual ones, and thus provides optimal compression rates, as known from RSA-based SAS schemes.

17 citations


Book ChapterDOI
28 May 2014
TL;DR: E2 is a block cipher designed by NTT and a first-round AES candidate; since zero-correlation linear cryptanalysis has recently improved upon the impossible differential cryptanalysis of the structurally similar Camellia, E2 may be susceptible to the same type of analysis.
Abstract: E2 is a block cipher designed by NTT and was a first-round AES candidate. E2’s design principles influenced several more recent block ciphers including Camellia, an ISO/IEC standard cipher. So far, the cryptanalytic results for round-reduced E2 have concentrated on truncated and impossible differentials. At the same time, rather recently at SAC’13, it has been shown how to improve upon the impossible differential cryptanalysis of Camellia with zero-correlation linear cryptanalysis. Due to some similarities between E2 and Camellia, E2 might also be susceptible to this type of cryptanalysis.

Book ChapterDOI
28 May 2014
Abstract: Identity Based Encryption (IBE) has been constructed from bilinear pairings, lattices and quadratic residuosity. The latter is an attractive basis for an IBE owing to the fact that it is a well-understood hard problem from number theory. Cocks constructed the first such scheme, and subsequent improvements have been made to achieve anonymity and improve space efficiency. However, the anonymous variants of Cocks’ scheme thus far are all less efficient than the original. In this paper, we present a new universally-anonymous IBE scheme based on the quadratic residuosity problem. Our scheme has better performance than the universally anonymous scheme from Ateniese and Gasti (CT-RSA 2009) at the expense of more ciphertext expansion.
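
For reference, below is a minimal sketch of Cocks' original quadratic-residuosity IBE that this line of work builds on, with toy parameters; the identity "hash" is fixed to a square so that decryption exercises the r^2 = a branch (the full scheme also sends a companion ciphertext for -a). The paper's universally anonymous variant differs.

```python
# Toy Cocks IBE: encrypts one bit m in {+1, -1} under identity value a.
import secrets
from math import gcd

def jacobi(a, n):
    """Jacobi symbol (a/n), n odd and positive."""
    a %= n
    t = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

# Toy Blum primes (both 3 mod 4); real deployments use 1024+ bit primes.
p, q = 10007, 10039
N = p * q

# "Hashed identity", fixed to a known square so a is a QR mod N.
a = pow(42, 2, N)

# Key extraction (by the authority knowing p, q): r with r^2 = a mod N.
rp = pow(a, (p + 1) // 4, p)
rq = pow(a, (q + 1) // 4, q)
r = (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % N
assert pow(r, 2, N) == a

def encrypt_bit(m):
    """Encrypt m in {+1, -1} (the 'a is a QR' component only)."""
    while True:
        t = secrets.randbelow(N - 2) + 2
        if gcd(t, N) == 1 and jacobi(t, N) == m:
            return (t + a * pow(t, -1, N)) % N

def decrypt_bit(c):
    # c + 2r = t * (1 + r/t)^2, so its Jacobi symbol equals that of t.
    return jacobi((c + 2 * r) % N, N)

for m in (1, -1):
    assert decrypt_bit(encrypt_bit(m)) == m
```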

Book ChapterDOI
Sourav Das, Willi Meier
28 May 2014
TL;DR: It is discovered that low-weight differentials produce a number of biased and fixed difference bits in the state after two rounds and a theoretical explanation for the existence of such a bias is provided.
Abstract: The Keccak hash function is the winner of the SHA-3 competition. In this paper, we examine differential propagation properties of Keccak constituent functions. We discover that low-weight differentials produce a number of biased and fixed difference bits in the state after two rounds and provide a theoretical explanation for the existence of such a bias. We also describe several other propagation properties of Keccak with respect to differential cryptanalysis. Combining our propagation analysis with results from the existing literature, we find distinguishers on six rounds of the Keccak hash function with complexity 2^52 for the first time in this paper.

Book ChapterDOI
28 May 2014
TL;DR: A simple twist on the Kurosawa-Desmedt KEM turns it into an IND-CCA-secure scheme under the decisional Diffie-Hellman (DDH) assumption; the resulting KEM beats the standardized Cramer-Shoup KEM of ISO/IEC 18033-2 by margins of at least 20% in encapsulation speed and 20%–60% in decapsulation speed.
Abstract: While the hybrid public key encryption scheme of Kurosawa and Desmedt (CRYPTO 2004) is provably secure against chosen ciphertext attacks (namely, IND-CCA-secure), its associated key encapsulation mechanism (KEM) is not IND-CCA-secure (Herranz et al. 2006, Choi et al. 2009). In this paper, we show a simple twist on the Kurosawa-Desmedt KEM turning it into a scheme with IND-CCA security under the decisional Diffie-Hellman assumption. Our KEM beats the standardized version of the Cramer-Shoup KEM in ISO/IEC 18033-2 by margins of at least 20% in encapsulation speed, and 20%–60% in decapsulation speed. Moreover, the public and secret keys in our schemes are at least 160 bits smaller than those of the Cramer-Shoup KEM. We then generalize the technique into hash proof systems, proposing several KEM schemes with IND-CCA security under the decision linear and decisional composite residuosity assumptions, respectively. All the KEMs are in the standard model, and use standard, computationally secure symmetric building blocks.

Book ChapterDOI
28 May 2014
TL;DR: Fischlin’s transformation is an alternative to the standard Fiat-Shamir transform to turn a certain class of public key identification schemes into digital signatures (in the random oracle model).

Abstract: Fischlin’s transformation is an alternative to the standard Fiat-Shamir transform to turn a certain class of public key identification schemes into digital signatures (in the random oracle model).
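
For contrast, here is a minimal sketch of the standard Fiat-Shamir transform applied to Schnorr identification, with toy group constants assumed purely for illustration. Fischlin's transformation instead has the prover search over many small hash-derived challenges so that the security proof avoids rewinding, at extra prover cost.

```python
# Fiat-Shamir applied to Schnorr identification: the verifier's random
# challenge is replaced by a hash of the commitment and the message.
import hashlib
import secrets

# Toy Schnorr group (assumed constants): p = 607 prime, q = 101 divides
# p - 1 = 606, and g = 2^((p-1)/q) mod p = 64 has order q.
p, q, g = 607, 101, 64

def H(R, msg):
    """Random-oracle challenge: hash of commitment and message, mod q."""
    data = f"{R}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # secret key
    return x, pow(g, x, p)                  # (sk, pk)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)                        # prover's commitment
    c = H(R, msg)                           # Fiat-Shamir challenge
    return R, (k + c * x) % q               # response s = k + c*x

def verify(y, msg, sig):
    R, s = sig
    return pow(g, s, p) == (R * pow(y, H(R, msg), p)) % p

x, y = keygen()
assert verify(y, "hello", sign(x, "hello"))
```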

Book ChapterDOI
28 May 2014
TL;DR: In this article, the authors present three new attacks on the RSA cryptosystem; the first two work when k RSA public keys (N_i, e_i) satisfy k relations of the shape e_i x − y_i φ(N_i) = z_i or of the shape e_i x_i − y φ(N_i) = z_i, where N_i = p_i q_i and φ(N_i) = (p_i − 1)(q_i − 1).
Abstract: This paper presents three new attacks on the RSA cryptosystem. The first two attacks work when k RSA public keys (N_i, e_i) are such that there exist k relations of the shape e_i x − y_i φ(N_i) = z_i or of the shape e_i x_i − y φ(N_i) = z_i, where N_i = p_i q_i, φ(N_i) = (p_i − 1)(q_i − 1) and the parameters x, x_i, y, y_i, z_i are suitably small in terms of the prime factors of the moduli. We show that our attacks enable us to simultaneously factor the k RSA moduli N_i. The third attack works when the prime factors p and q of the modulus N = pq share an amount of their least significant bits (LSBs) in the presence of two decryption exponents d_1 and d_2 sharing an amount of their most significant bits (MSBs). The three attacks improve the bounds of some former attacks that make RSA insecure.
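
To see why relations of this shape are dangerous: they let the attacker recover φ(N_i), and knowing φ(N) factors N immediately. A small sketch of that final step, with toy primes:

```python
# Factoring N = p*q from phi(N): p + q = N - phi(N) + 1 and p*q = N,
# so p and q are the roots of a quadratic in integers.
from math import isqrt

def factor_from_phi(N, phi):
    s = N - phi + 1                 # p + q
    d = isqrt(s * s - 4 * N)        # sqrt((p+q)^2 - 4pq) = |p - q|
    assert d * d == s * s - 4 * N
    return (s - d) // 2, (s + d) // 2

p, q = 10007, 10039                 # toy primes
N, phi = p * q, (p - 1) * (q - 1)
assert factor_from_phi(N, phi) == (p, q)
```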

Book ChapterDOI
Liqiang Peng1, Lei Hu1, Jun Xu1, Zhangjie Huang1, Yonghong Xie1 
28 May 2014
TL;DR: This paper presents a method to deal with the case when the number of shared LSBs or MSBs is not large enough to satisfy the bound proposed by May et al. and makes use of a result from Herrmann and May for solving linear equations modulo unknown divisors to get a better lower bound.
Abstract: We investigate the problem of factoring RSA moduli with implicit hint, which was first proposed by May and Ritzenhofen in 2009, where unknown prime factors of several RSA moduli shared some number of least significant bits (LSBs), and was considered by Faugère et al. in 2010, where some most significant bits (MSBs) were shared between the primes. In this paper, we further consider this factorization with implicit hint problem, present a method to deal with the case when the number of shared LSBs or MSBs is not large enough to satisfy the bounds proposed by May et al. and Faugère et al., by making use of a result from Herrmann and May for solving linear equations modulo unknown divisors, and finally get a better lower bound on the number of shared LSBs or MSBs. To the best of our knowledge, our lower bound is better than all known results, and we can theoretically deal with implicit factorization for the case of balanced RSA moduli.

Book ChapterDOI
28 May 2014
TL;DR: This paper presents new distinguishers against the Keccak-f[1600] permutation reaching up to 6 rounds; formal analysis shows that the proposed distinguisher can penetrate up to 3 rounds, where the penetration depends only on the Hamming weight of the round constant of the initial round.
Abstract: This paper presents new distinguishers against the Keccak-f[1600] permutation reaching up to 6 rounds. The main intuition is to exploit the self-symmetry of the internal state of Keccak. Formal analysis reveals that the proposed distinguisher can penetrate up to 3 rounds and the penetration depends only on the Hamming weight of the round constant of the initial round. New strategies developed in this work, when combined, are shown to distinguish up to 5 rounds with probability 1 using a single query. Finally, the extension to 6 rounds with a complexity of 2^11 gives us the most efficient 6-round distinguisher reported in the literature. All claims and formal arguments conform to the results obtained by extensive experimentation.

Book ChapterDOI
28 May 2014
TL;DR: The Counter-bDM family of multi-block-length compression functions is introduced, the first provably secure block-cipher-based compression function with freely scalable output size, together with generic collision- and preimage-security proofs.
Abstract: Block-cipher-based compression functions serve an important purpose in cryptography since they allow to turn a given block cipher into a one-way hash function. While there are a number of secure double-block-length compression functions, there is little research on generalized constructions. This paper introduces the Counter-bDM family of multi-block-length compression functions, which, to the best of our knowledge, is the first provably secure block-cipher-based compression function with freely scalable output size. We present generic collision- and preimage-security proofs for it, and compare our results with those of existing double-block-length constructions. Our security bounds show that our construction is competitive with the best collision-security bound and equal to the best preimage-security bound of existing double-block-length constructions.
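
For background, the sketch below shows the classic single-block-length Davies-Meyer construction that Counter-bDM generalizes, h' = E_m(h) XOR h, instantiated with AES-256 via pycryptodome and a toy padding rule. Roughly, the paper's scheme runs several such calls distinguished by counter values to obtain a freely scalable output; this sketch shows only the single-block ancestor.

```python
# Davies-Meyer: the message block keys the cipher, and the chaining value
# is encrypted and fed forward. Requires pycryptodome.
from Crypto.Cipher import AES

BLOCK = 16            # AES block / chaining-value size
KEYLEN = 32           # one 32-byte message block = one AES-256 key

def davies_meyer(h, m):
    """One compression call: h' = AES-256_m(h) XOR h."""
    e = AES.new(m, AES.MODE_ECB).encrypt(h)
    return bytes(a ^ b for a, b in zip(e, h))

def hash_msg(data, iv=b"\x00" * BLOCK):
    # Toy padding: 0x80 byte, then zeros up to a 32-byte boundary.
    data += b"\x80" + b"\x00" * (-(len(data) + 1) % KEYLEN)
    h = iv
    for i in range(0, len(data), KEYLEN):
        h = davies_meyer(h, data[i:i + KEYLEN])
    return h

print(hash_msg(b"abc").hex())
```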

Book ChapterDOI
28 May 2014
TL;DR: The goal of such protocols is to enable a party P to convince a set of verifiers about P’s location in space, using information about the time it takes P to respond to queries sent from different points.
Abstract: We study the problem of constructing secure positioning protocols (Sastry et al., 2003). Informally, the goal of such protocols is to enable a party P to convince a set of verifiers about P’s location in space, using information about the time it takes P to respond to queries sent from different points. It has been shown by Chandran et al. (2009) that in general such a task is impossible to achieve if the adversary can position his stations in multiple points in space. Chandran et al. proposed to overcome this impossibility result by moving to Maurer’s bounded-storage model. Namely, they construct schemes that are secure under the assumption that the memory of the adversary is bounded. Later, Buhrman et al. (2010) considered secure positioning protocols in quantum settings.

Book ChapterDOI
28 May 2014
TL;DR: Uchida and Uchiyama’s algorithm for computing the Tate pairing via a generalization of elliptic nets to hyperelliptic curves is extended from genus 2 to curves of arbitrary genus, and the optimality of these algorithms is studied.
Abstract: Stange showed how to compute the Tate pairing on an elliptic curve using elliptic nets. After that, Uchida and Uchiyama gave a generalization of elliptic nets to hyperelliptic curves. They also gave an algorithm to compute the Tate pairing on a hyperelliptic curve of genus 2. In this paper, we extend their algorithm to curves of arbitrary genus. From a computational point of view, we also study the optimality of these algorithms.

Book ChapterDOI
28 May 2014
TL;DR: A further optimization of decomposing 4-bit S-boxes by exploiting affine transformations and a single shared quadratic permutation to construct a merged masked S-box, which can be used for both encryption and decryption.
Abstract: Countermeasures against side-channel analysis attacks are increasingly considered already during the design/implementation step of cryptographic algorithms for embedded devices. An important challenge is to reduce the overhead (area, time) introduced by the countermeasures, and, consequently, a lot of progress has been achieved in this direction in the past years. In this contribution we propose a further optimization of decomposing 4-bit S-boxes by exploiting affine transformations and a single shared quadratic permutation. In this way, many different S-boxes can be merged into one component, reducing the resource overhead. We applied our proposed scheme to a Threshold Implementation masked PRESENT S-box and its inverse in order to construct a merged masked S-box, which can be used for both encryption and decryption. This design saves up to 24% of resources on a Virtex-5 FPGA platform and up to 28% for an ASIC implementation compared to previously published designs. It is noteworthy that our technique is not restricted to the TI countermeasure, but also allows reducing the resource requirements of the non-linear layer of cryptographic algorithms with a set of different S-boxes, such as Serpent or DES, amongst others.

Book ChapterDOI
28 May 2014
TL;DR: By combining the time-memory-data tradeoff (TMDTO) attack independently proposed by Babbage and Golic (BG) with the BSW sampling technique, this paper mounts a new TMDTO attack on stream ciphers.
Abstract: By combining the time-memory-data tradeoff (TMDTO) attack independently proposed by Babbage and Golic (BG) with the BSW sampling technique, this paper mounts a new TMDTO attack on stream ciphers. The new attack gives a wider variety of trade-offs compared with the original BG-TMDTO attack. It is efficient when multiple data is allowed for the attacker from the same key with different IVs, even though the internal state size is twice the key size. We apply the new attack to the MICKEY and Grain stream ciphers, improving the existing TMDTO attacks on them. Our attacks on the Grain v1 and Grain-128 stream ciphers are rather attractive in that the online time, offline time and memory complexities are all better than an exhaustive key search, while the amount of keystream needed remains within the allowed limits. Finally, we generalize the new attack to a guess-and-determine TMDTO attack on stream ciphers, and mount a guess-and-determine TMDTO attack on the SOSEMANUK stream cipher with online and offline time complexities both equal to 2^128, which achieves the best time complexity level compared with all existing attacks on SOSEMANUK so far.
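
A toy illustration of the Babbage-Golic tradeoff that the paper refines (without the BSW sampling step): precompute a table mapping random internal states to their keystream prefixes, then slide over observed keystream until a window hits the table. The 20-bit "cipher" below is a stand-in, not MICKEY or Grain.

```python
# Toy BG time-memory-data tradeoff: a hit is expected once M * D
# approaches the state-space size 2^NBITS (here M*D = 2^22 > 2^20).
import secrets

NBITS = 20
MASK = (1 << NBITS) - 1

def step(s):
    """Toy invertible state update (odd-multiplier LCG, a permutation)."""
    return (s * 1103515245 + 12345) & MASK

def prefix(s, n=NBITS):
    """n keystream bits (as an int) generated from internal state s."""
    out = 0
    for _ in range(n):
        s = step(s)
        out = (out << 1) | (s & 1)
    return out

# Offline phase: M random states, indexed by the keystream they produce.
M = 1 << 12
table = {}
while len(table) < M:
    s0 = secrets.randbits(NBITS)
    table[prefix(s0)] = s0

# Online phase: D sliding windows of keystream from an unknown state.
D = 1 << 10
s = secrets.randbits(NBITS)
bits = []
for _ in range(D + NBITS):
    s = step(s)
    bits.append(s & 1)

for i in range(D):
    window = 0
    for b in bits[i:i + NBITS]:
        window = (window << 1) | b
    if window in table:
        cand = table[window]
        assert prefix(cand) == window   # sanity: consistent with keystream
        print(f"state recovered at keystream offset {i}: {cand:#07x}")
        break
else:
    print("no table hit this run (the tradeoff is probabilistic)")
```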

Book ChapterDOI
28 May 2014
TL;DR: An existential forgery attack against IOC is presented which makes only one chosen message query, runs in a small constant time, and succeeds with an overwhelming probability 1 − 3 × 2^−n, where n is the block length of the underlying block cipher.
Abstract: In this paper we cryptanalyse a block cipher mode of operation, called Input Output Chaining (IOC), designed by Recacha and submitted to NIST in 2013 for consideration as a lightweight authenticated encryption mode. We present an existential forgery attack against IOC which makes only one chosen message query, runs in a small constant time, and succeeds with an overwhelming probability 1 − 3 × 2^−n, where n is the block length of the underlying block cipher. Therefore, this attack fully breaks the integrity of IOC.

Book ChapterDOI
28 May 2014
TL;DR: This paper proposes the first MPC protocol for prefix sum in general semigroups, with constant 2d + 2dc rounds and almost-linear O(l log*(c) l) communication complexity, and uses it to construct the first bit addition protocol with constant rounds and almost-linear communication complexity.
Abstract: One of the research goals on multi-party computation (MPC) is to achieve protocols that are both perfectly secure and efficient for basic functions or operations (e.g., equality, comparison, bit decomposition, and modular exponentiation). Recently, MPC protocols with constant rounds and linear communication cost (in the input size) have been proposed for many basic operations. In this paper, we propose the first MPC protocol for prefix sum in general semigroups, with constant 2d + 2dc rounds and almost-linear O(l log*(c) l) communication complexity, where c is a constant, d is the round complexity of the subroutine protocol used in the MPC protocol, l is the input size, and log*(c) is the iterated logarithm function. The prefix sum protocol can be seen as a generalization of the postfix comparison protocol proposed by Toft. Moreover, as an application of the prefix sum protocol, we construct the first bit addition protocol with constant rounds and almost-linear communication complexity.
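
To ground the functionality: for additively secret-shared inputs over an abelian group, prefix sums need no interaction at all, since each party can scan its own shares locally, as the sketch below shows. The paper's subject is the much harder general-semigroup case, where the absence of inverses and commutativity rules this local trick out.

```python
# Prefix sums on additively shared values over Z_Q: purely local scans.
import secrets
from itertools import accumulate

Q = 2**61 - 1                      # additive sharing modulus (abelian)
xs = [3, 1, 4, 1, 5]               # the inputs, secret-shared below

def share(v, parties=3):
    """Additive sharing: random shares summing to v mod Q."""
    r = [secrets.randbelow(Q) for _ in range(parties - 1)]
    return r + [(v - sum(r)) % Q]

columns = list(zip(*[share(x) for x in xs]))        # party j holds column j
local = [list(accumulate(col)) for col in columns]  # local prefix scans
opened = [sum(col[i] for col in local) % Q for i in range(len(xs))]
assert opened == list(accumulate(xs))               # [3, 4, 8, 9, 14]
```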

Book ChapterDOI
28 May 2014
TL;DR: The primitive of proxy re-encryption is adapted to allow a sender to choose who among the potential delegatees will be able to decrypt his messages, and a simple and efficient scheme is proposed which is secure under chosen plaintext attack under standard algorithmic assumptions in a bilinear setting.
Abstract: Proxy re-encryption is a cryptographic primitive proposed by Blaze, Bleumer and Strauss in 1998. It allows a user, Alice, to decide that in case of unavailability, one (or several) particular user, the delegatee, Bob, will be able to read her confidential messages. This is made possible thanks to a semi-trusted third party, the proxy, to which Alice gives a re-encryption key, computed from Alice’s secret key and Bob’s public key. This information allows the proxy to transform a ciphertext intended for Alice into a ciphertext intended for Bob. Very few constructions of proxy re-encryption schemes actually handle the concern that the original sender may not want his message to be read by Bob instead of Alice. In this article, we adapt the primitive of proxy re-encryption to allow a sender to choose who among the potential delegatees will be able to decrypt his messages, and propose a simple and efficient scheme which is secure under chosen plaintext attack under standard algorithmic assumptions in a bilinear setting. We also add to our scheme a traceability mechanism so that Alice can detect if the proxy has leaked some re-encryption keys.
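
For reference, here is a minimal sketch of the original Blaze-Bleumer-Strauss proxy re-encryption cited above (ElGamal-based, with assumed toy group constants): the proxy re-keys a ciphertext from Alice to Bob without learning the plaintext. Note that the re-encryption key requires both secret keys and works in both directions, limitations that the later literature, including this paper, addresses.

```python
# Toy BBS proxy re-encryption over a prime-order subgroup.
import secrets

# Toy constants (assumptions): q = 101 divides p - 1, g = 64 has order q.
p, q, g = 607, 101, 64

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def enc(pk, m):
    """ElGamal-style ciphertext (m * g^r, pk^r)."""
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p)) % p, pow(pk, r, p)

def rekey(sk_alice, sk_bob):
    """rk = b / a mod q. Needs BOTH secrets, and is bidirectional."""
    return (sk_bob * pow(sk_alice, -1, q)) % q

def reenc(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)      # g^(a*r) -> g^(b*r); c1 untouched

def dec(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)          # recover g^r
    return (c1 * pow(g_r, -1, p)) % p

a, pk_a = keygen()
b, pk_b = keygen()
m = pow(g, 7, p)                   # toy message encoded in the subgroup
ct_a = enc(pk_a, m)
ct_b = reenc(rekey(a, b), ct_a)
assert dec(a, ct_a) == m and dec(b, ct_b) == m
```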

Book ChapterDOI
28 May 2014
TL;DR: It is shown that, while independent universal hash families provide the desired unforgeability independently of the used encryption algorithm, the security of MACs based on dependent universalHash families is not guaranteed for all choices of encryption algorithms.
Abstract: Due to their potential use as building blocks for constructing highly efficient message authentication codes (MACs), universal hash-function families have been attracting increasing research attention, both from the design and analysis points of view. In MACs based on universal hash-function families, the message to be authenticated is first compressed using a universal hash function and, then, the compressed image is encrypted to produce the authentication tag. Many definitions of universal hash families have appeared in the literature. The main focus of earlier definitions is to classify universal hash functions based on their message collision properties. In this paper, we introduce a different classification of universal hash families. As opposed to classifying universal hash families based on message collision probabilities, our classification aims to give a direct relation between universal hash families used as building blocks to design MACs and the encryption algorithm used to process their hashed images. We give two examples of universal hash families with equivalent collision resiliency. We show that, while one constructs secure MACs, the other can lead to insecure MAC constructions even when coupled with an encryption algorithm that provides perfect secrecy (in Shannon’s sense). We formally define two classes of universal hash families: independent and dependent universal hash families. We show that, while independent universal hash families provide the desired unforgeability independently of the used encryption algorithm, the security of MACs based on dependent universal hash families is not guaranteed for all choices of encryption algorithms. We conclude by giving a sufficient condition on the encryption algorithm that guarantees the construction of secure MACs, even when combined with a dependent hash family.
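
The hash-then-encrypt structure described above, in a minimal Carter-Wegman-style sketch: a polynomial universal hash over a prime field, whose image is encrypted by adding a fresh one-time key (perfect secrecy in Shannon's sense). This is a generic textbook construction, not the paper's specific examples or classification.

```python
# Carter-Wegman MAC sketch: universal hash, then one-time-pad encryption.
import secrets

P = (1 << 127) - 1                 # prime field for the polynomial hash

def poly_hash(k, blocks):
    """Universal hash h(m) = m_0 + m_1*k + ... + m_t*k^t mod P (Horner)."""
    h = 0
    for m in reversed(blocks):
        h = (h * k + m) % P
    return h

def tag(hash_key, otp_key, blocks):
    """Encrypt the hash image by adding a fresh one-time key mod P."""
    return (poly_hash(hash_key, blocks) + otp_key) % P

hash_key = secrets.randbelow(P)    # long-term universal-hash key
otp_key = secrets.randbelow(P)     # must be fresh for every message
msg = [314159, 271828, 161803]     # message blocks, already in [0, P)
t = tag(hash_key, otp_key, msg)
assert t == tag(hash_key, otp_key, msg)
# Two distinct equal-length messages collide under the hash with
# probability at most (len(msg) - 1) / P, which bounds forgery odds.
```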