
Showing papers presented at the "International Conference on Cryptology in Africa" in 2010


Book ChapterDOI
03 May 2010
TL;DR: This paper shows that the unidirectional proxy re-encryption scheme of Shao and Cao (PKC 2009) is vulnerable to chosen-ciphertext attack (CCA) and proposes an efficient unidirectional PRE scheme that does not resort to pairings.
Abstract: Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one encrypting the same plaintext for Bob. The proxy only needs a re-encryption key given by Alice, and cannot learn anything about the encrypted plaintext. This adds flexibility in various applications, such as confidential email, digital rights management and distributed storage. In this paper, we study unidirectional PRE, in which the re-encryption key only enables delegation in one direction but not the opposite. In PKC 2009, Shao and Cao proposed a unidirectional PRE assuming the random oracle. However, we show that it is vulnerable to chosen-ciphertext attack (CCA). We then propose an efficient unidirectional PRE scheme (without resorting to pairings). We gain high efficiency and CCA-security using the “token-controlled encryption” technique, under the computational Diffie-Hellman assumption, in the random oracle model and a relaxed but reasonable definition.
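To make the workflow concrete, here is a minimal, hedged sketch of an ElGamal-style proxy re-encryption in the spirit of Blaze, Bleumer and Strauss: it is bidirectional and not CCA-secure, so it illustrates only how a proxy transforms ciphertexts with a re-encryption key, not the unidirectional scheme proposed in this paper. All parameters are toy-sized.

```python
# Toy ElGamal-style proxy re-encryption in the spirit of Blaze-Bleumer-Strauss:
# bidirectional and not CCA-secure, shown only to illustrate the PRE workflow,
# NOT the unidirectional scheme proposed in this paper. Toy-sized parameters.
import secrets

q = 233                      # prime order of the subgroup
p = 2 * q + 1                # 467, a safe prime
g = pow(2, 2, p)             # generator of the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1
    return sk, pow(g, sk, p)

def encrypt(pk, m):          # m must be an element of the order-q subgroup
    r = secrets.randbelow(q - 1) + 1
    return m * pow(g, r, p) % p, pow(pk, r, p)        # (m*g^r, pk^r)

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)                  # recover g^r
    return c1 * pow(g_r, -1, p) % p

def rekey(sk_a, sk_b):       # re-encryption key rk = b/a mod q
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):       # proxy: (m*g^r, g^{a r}) -> (m*g^r, g^{b r})
    c1, c2 = ct
    return c1, pow(c2, rk, p)

a, pk_a = keygen()
b, pk_b = keygen()
m = pow(g, 42, p)            # encode the message as a subgroup element
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(a, b), ct_a)
assert decrypt(a, ct_a) == m == decrypt(b, ct_b)
```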

299 citations


Book ChapterDOI
03 May 2010
TL;DR: This paper proposes a fresh re-keying scheme that is especially suited for challenge-response protocols such as those used to authenticate tags, and estimates the cost in terms of area and execution time for various security/performance trade-offs.
Abstract: The market for RFID technology has grown rapidly over the past few years. Going along with the proliferation of RFID technology is an increasing demand for secure and privacy-preserving applications. In this context, RFID tags need to be protected against physical attacks such as Differential Power Analysis (DPA) and fault attacks. The main obstacles towards secure RFID are the extreme constraints of passive tags in terms of power consumption and silicon area, which make the integration of countermeasures against physical attacks even more difficult than for other types of embedded systems. In this paper we propose a fresh re-keying scheme that is especially suited for challenge-response protocols such as those used to authenticate tags. We evaluate the resistance of our scheme against fault and side-channel analysis, and introduce a simple architecture for VLSI implementation. In addition, we estimate the cost of our scheme in terms of area and execution time for various security/performance trade-offs. Our experimental results show that the proposed re-keying scheme provides better security (and does so at less cost) than state-of-the-art countermeasures.
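As a rough illustration of the protocol flow (not the paper's construction), the sketch below shows fresh re-keying inside a challenge-response authentication: the tag never uses its master key directly, but derives a one-time session key from a fresh nonce. The re-keying function g and the cipher are HMAC stand-ins chosen for brevity, not the low-cost, side-channel-friendly g analyzed in the paper.

```python
# Fresh re-keying in a challenge-response authentication: the tag never uses its
# master key k directly; every run derives a session key k* = g(k, r) from a
# fresh public nonce r, so each key is used only once. Here g and the cipher are
# HMAC stand-ins, not the low-cost re-keying function of the paper.
import hmac, hashlib, secrets

MASTER_KEY = secrets.token_bytes(16)          # shared by tag and reader

def rekey(k: bytes, r: bytes) -> bytes:       # the re-keying function g(k, r)
    return hmac.new(k, b"rekey|" + r, hashlib.sha256).digest()[:16]

def tag_respond(challenge: bytes):
    r = secrets.token_bytes(16)               # fresh nonce chosen by the tag
    k_star = rekey(MASTER_KEY, r)
    resp = hmac.new(k_star, challenge, hashlib.sha256).digest()
    return r, resp                            # r is sent in the clear

def reader_verify(challenge: bytes, r: bytes, resp: bytes) -> bool:
    k_star = rekey(MASTER_KEY, r)
    expected = hmac.new(k_star, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, resp)

c = secrets.token_bytes(16)
assert reader_verify(c, *tag_respond(c))
```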

147 citations


Book ChapterDOI
03 May 2010
TL;DR: This paper introduces a general technique for constructing proofs of shuffles which restrict the permutation to a group that is characterized by a public polynomial, and gives a new efficient proof of an unrestricted shuffle that is conceptually simpler and allows a simpler analysis than all previous proofs of shuffles.
Abstract: A proof of a shuffle is a zero-knowledge proof that one list of ciphertexts is a permutation and re-encryption of another list of ciphertexts. We call a shuffle restricted if the permutation is chosen from a public subset of all permutations. In this paper, we introduce a general technique for constructing proofs of shuffles which restrict the permutation to a group that is characterized by a public polynomial. This generalizes previous work by Reiter and Wang [22], and de Hoogh et al. [7]. Our approach also gives a new efficient proof of an unrestricted shuffle that we think is conceptually simpler and allows a simpler analysis than all previous proofs of shuffles.
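For context, the object such proofs attest to can be sketched in a few lines: a shuffle permutes a list of ElGamal ciphertexts and re-encrypts every entry. The zero-knowledge proof itself, which is the paper's contribution, is omitted here, and the group parameters are toy-sized.

```python
# A shuffle of ElGamal ciphertexts: permute the list and re-encrypt every entry.
# This is only the operation whose correctness a proof of a shuffle attests to;
# the zero-knowledge proof itself is omitted. Toy-sized parameters.
import secrets

q, p = 233, 467                  # prime-order subgroup of a safe-prime group
g = pow(2, 2, p)
sk = secrets.randbelow(q - 1) + 1
pk = pow(g, sk, p)

def enc(m):
    r = secrets.randbelow(q)
    return pow(g, r, p), m * pow(pk, r, p) % p         # (g^r, m*pk^r)

def dec(ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), -1, p) % p

def reencrypt(ct):               # multiply by a fresh encryption of 1
    c1, c2 = ct
    s = secrets.randbelow(q)
    return c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p

def shuffle(cts):
    perm = list(range(len(cts)))
    for i in range(len(perm) - 1, 0, -1):               # Fisher-Yates
        j = secrets.randbelow(i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return [reencrypt(cts[i]) for i in perm]

msgs = [pow(g, k, p) for k in (3, 17, 99, 123)]          # messages in the subgroup
shuffled = shuffle([enc(m) for m in msgs])
assert sorted(dec(c) for c in shuffled) == sorted(msgs)  # same multiset of plaintexts
```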

86 citations


BookDOI
01 Jan 2010
TL;DR: The Third International Conference on Cryptology in Africa (Africacrypt 2010) as discussed by the authors was held in Stellenbosch, South Africa, from May 3 to 6, 2010.
Abstract: Africacrypt 2010, the Third International Conference on Cryptology in Africa, took place May 3–6, 2010 in Stellenbosch, South Africa. The General Chairs, Riaal Domingues from the South African Communications and Security Agency and Christine Swart from the University of Cape Town, were always a pleasure to work with and did an outstanding job with the local arrangements. We are deeply thankful that they agreed to host Africacrypt 2010 with only four months' notice after unanticipated events forced a change of location. The Africacrypt 2010 submission deadline was split into two. Authors submitting papers were required to register titles and abstracts by the first deadline, January 5. A total of 121 submissions had been received by this deadline, although some were withdrawn before review. Authors were allowed to continue working on their papers until the second deadline, January 10. Submissions were evaluated in three phases over a period of nearly two months. The selection phase started on January 5: Program Committee members began evaluating abstracts and volunteering to handle various papers. We assigned a team of people to each paper. The review phase started on January 11: Program Committee members were given access to the full papers and began in-depth reviews of 82 submissions. Most of the reviews were completed by February 7, the beginning of the discussion phase. Program Committee members were given access to other reviews and built consensus in their evaluations of the submissions. In the end the discussions included 285 full reports and 203 additional comments. The submissions, reviews, and subsequent discussions were handled smoothly by iChair. On February 21 we sent out 2 notifications of conditional acceptance and 24 notifications of unconditional acceptance. The next day we sent out comments from the reviewers. One paper eventually met its acceptance conditions; the final program contained 25 contributed papers and 3 invited talks. The authors prepared final versions of the 25 contributed papers by February 28. It is our pleasure to thank the other 53 Program Committee members for lending their expertise to Africacrypt 2010 and for putting tremendous effort into detailed reviews and discussions. We would also like to thank Thomas Baignères and Matthieu Finiasz for writing the iChair software; Springer for agreeing to an accelerated schedule for printing the proceedings; 70 external referees who reviewed individual papers upon request from the Program Committee; and, most importantly, all authors for submitting interesting new research papers to Africacrypt 2010.

66 citations


Book ChapterDOI
03 May 2010
TL;DR: This paper describes an implementation of Pollard's rho algorithm to compute the elliptic curve discrete logarithm on the Synergistic Processor Elements of the Cell Broadband Engine Architecture, and shows that the ECC2K-130 challenge can be solved in one year using the Synergistic Processor Units of less than 2700 Sony Playstation 3 gaming consoles.
Abstract: This paper describes an implementation of Pollard's rho algorithm to compute the elliptic curve discrete logarithm for the Synergistic Processor Elements of the Cell Broadband Engine Architecture. Our implementation targets the elliptic curve discrete logarithm problem defined in the Certicom ECC2K-130 challenge. We compare a bitsliced implementation to a non-bitsliced implementation and describe several optimization techniques for both approaches. In particular, we address the question whether normal-basis or polynomial-basis representation of field elements leads to better performance. We show that using our software the ECC2K-130 challenge can be solved in one year using the Synergistic Processor Units of less than 2700 Sony Playstation 3 gaming consoles.
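For orientation, here is a minimal sketch of Pollard's rho for discrete logarithms, hedged as follows: it works in a small prime-order multiplicative group rather than on the ECC2K-130 Koblitz curve, uses Floyd cycle finding instead of a parallel distinguished-point walk, and includes none of the bitslicing or SPE-specific optimizations the paper is about.

```python
# Pollard's rho for discrete logarithms, sketched in a small prime-order
# multiplicative group -- not the ECC2K-130 elliptic-curve setting of the
# paper, and without bitslicing or a distinguished-point parallelisation.
import secrets

q = 233                                # order of the subgroup generated by g
p = 2 * q + 1                          # 467
g = pow(2, 2, p)

def step(y, a, b, h):
    """One pseudo-random walk step; the partition by y % 3 mimics Pollard's rule."""
    s = y % 3
    if s == 0:
        return y * h % p, a, (b + 1) % q
    if s == 1:
        return y * y % p, 2 * a % q, 2 * b % q
    return y * g % p, (a + 1) % q, b

def rho_dlog(h):
    """Return x with g^x = h (mod p), using Floyd cycle detection on the walk."""
    while True:
        a1 = secrets.randbelow(q)
        b1 = secrets.randbelow(q)
        y1 = pow(g, a1, p) * pow(h, b1, p) % p         # random start point
        y2, a2, b2 = y1, a1, b1                        # hare starts with tortoise
        while True:
            y1, a1, b1 = step(y1, a1, b1, h)
            y2, a2, b2 = step(*step(y2, a2, b2, h), h)
            if y1 == y2:
                break
        db = (b2 - b1) % q
        if db:                                         # otherwise retry a new walk
            return (a1 - a2) * pow(db, -1, q) % q

x = 101
assert rho_dlog(pow(g, x, p)) == x
```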

37 citations


Book ChapterDOI
03 May 2010
TL;DR: It is proved that PKENO can alternatively be built from robust non-interactive threshold public-key cryptosystems, a primitive that differs from identity-based encryption, and new definitions of proof soundness that are stronger than those considered so far are put forward.
Abstract: Public-key encryption schemes with non-interactive opening (PKENO) allow a receiver to non-interactively convince third parties that a ciphertext decrypts to a given plaintext or, alternatively, that such a ciphertext is invalid. Two practical generic constructions for PKENO have been proposed so far, starting from either identity-based encryption or public-key encryption with witness-recovering decryption (PKEWR). We show that the known transformation from PKEWR to PKENO fails to provide chosen-ciphertext security; only the transformation from identity-based encryption remains thus valid. Next, we prove that PKENO can alternatively be built out of robust non-interactive threshold public-key cryptosystems, a primitive that differs from identity-based encryption. Using the new transformation, we construct two efficient PKENO schemes: one based on the Decisional Diffie-Hellman assumption (in the Random-Oracle Model) and one based on the Decisional Linear assumption (in the standard model). Last but not least, we propose new applications of PKENO in protocol design. Motivated by these applications, we reconsider proof soundness for PKENO and put forward new definitions that are stronger than those considered so far. We give a taxonomy of all definitions and demonstrate them to be satisfiable.

33 citations


Book ChapterDOI
03 May 2010
TL;DR: This paper presents a differential fault analysis attack on HC-128 and recovers the complete internal state of HC-128 by solving a set of 32 systems of linear equations over Z2 in 1024 variables.
Abstract: HC-128 is a high speed stream cipher with a 128-bit secret key and a 128-bit initialization vector. It has passed all the three stages of the ECRYPT stream cipher project and is a member of the eSTREAM software portfolio. In this paper, we present a differential fault analysis attack on HC-128. The fault model in which we analyze the cipher is one in which the attacker is able to fault a random word of the inner state of the cipher but can control neither its exact location nor its new faulted value. To perform the attack, we exploit the fact that some of the inner state words in HC-128 may be utilized several times without being updated. Our attack requires about 7968 faults and recovers the complete internal state of HC-128 by solving a set of 32 systems of linear equations over Z2 in 1024 variables.
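The final step of such an attack is plain linear algebra over Z2. As a hedged illustration of that step only (the HC-128-specific equations are not derived here), below is a small Gaussian-elimination solver for linear systems over GF(2), with each equation's coefficients packed into the bits of an integer.

```python
# Gaussian elimination over GF(2), the kind of solver the final step of the
# attack relies on (the HC-128-specific equations themselves are not derived here).
# Each row packs the coefficients of one equation into the low bits of an int,
# with the right-hand side carried separately.

def solve_gf2(rows, rhs, nvars):
    """Solve rows[i] . x = rhs[i] over GF(2); return one solution (as an int) or None."""
    rows, rhs = list(rows), list(rhs)
    pivot_of = {}                                  # column -> pivot row index
    r = 0
    for col in range(nvars):
        piv = next((i for i in range(r, len(rows)) if (rows[i] >> col) & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        for i in range(len(rows)):                 # clear the column everywhere else
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivot_of[col] = r
        r += 1
    if any(rows[i] == 0 and rhs[i] for i in range(len(rows))):
        return None                                # inconsistent system
    x = 0
    for col, i in pivot_of.items():                # free variables default to 0
        if rhs[i]:
            x |= 1 << col
    return x

# Tiny demo: x0^x1 = 1, x1^x2 = 0, x0^x2 = 1  ->  one solution is (x0,x1,x2) = (1,0,0)
rows, rhs = [0b011, 0b110, 0b101], [1, 0, 1]
sol = solve_gf2(rows, rhs, 3)
assert sol is not None
for row, b in zip(rows, rhs):
    assert bin(row & sol).count("1") % 2 == b
```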

30 citations


Book ChapterDOI
03 May 2010
TL;DR: This paper shows how the QR-PUF authentication can be interwoven with Quantum Key Exchange (QKE), leading to an authenticated QKE protocol between two parties with the special property that it requires no a priori secret shared by the two parties, and that the quantum channel is the authenticated channel, allowing for an unauthenticated classical channel.
Abstract: Physical Unclonable Functions (PUFs) are physical structures that are hard to clone and have a unique challenge-response behaviour. In this paper we propose a new security primitive, the quantum-readout PUF (QR-PUF): a classical PUF which is challenged using a quantum state, and whose response is also a quantum state. By the no-cloning property of unknown quantum states, attackers cannot intercept challenges or responses without noticeably disturbing the readout process. Thus, a verifier who sends quantum states as challenges and receives the correct quantum states back can be certain that he is probing a specific QR-PUF without disturbances, even if the QR-PUF is far away 'in the field' and under hostile control. For PUFs whose information content is not exceedingly large, all currently known PUF-based authentication and anti-counterfeiting schemes require trusted readout devices in the field. Our quantum readout scheme has no such requirement. We show how the QR-PUF authentication can be interwoven with Quantum Key Exchange (QKE), leading to an authenticated QKE protocol between two parties with the special property that it requires no a priori secret shared by the two parties, and that the quantum channel is the authenticated channel, allowing for an unauthenticated classical channel.

29 citations


Book ChapterDOI
03 May 2010
TL;DR: In this paper, an algorithm for parallel exhaustive search for short vectors in lattices is presented, which can be applied to a wide range of parallel computing systems and can be implemented on graphics cards using CUDA.
Abstract: In this paper we present an algorithm for parallel exhaustive search for short vectors in lattices. This algorithm can be applied to a wide range of parallel computing systems. To illustrate the algorithm, it was implemented on graphics cards using CUDA, a programming framework for NVIDIA graphics cards. We gain large speedups compared to previous serial CPU implementations. Our implementation is almost 5 times faster in high lattice dimensions. Exhaustive search is one of the main building blocks for lattice basis reduction in cryptanalysis. Our work results in an advance in practical lattice reduction.
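As a hedged, serial toy of what exhaustive search for short vectors means (the paper's contribution is the parallel CUDA enumeration, not reproduced here), the sketch below brute-forces small integer combinations of a two-dimensional basis and reports the shortest nonzero lattice vector it finds.

```python
# Exhaustive search for a short lattice vector: try all small integer
# combinations of the basis vectors and keep the shortest nonzero one.
# A serial toy illustration -- the paper's contribution is a parallel (CUDA)
# enumeration with carefully chosen coefficient bounds, not shown here.
from itertools import product

def short_vector(basis, bound):
    """Search coefficients in [-bound, bound]^n; return (squared norm, vector)."""
    n = len(basis)
    best = None
    for coeffs in product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue                                   # skip the zero vector
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(n)]
        norm2 = sum(x * x for x in v)
        if best is None or norm2 < best[0]:
            best = (norm2, v)
    return best

basis = [[201, 37], [1648, 297]]        # a deliberately "skewed" toy basis
print(short_vector(basis, bound=20))    # a much shorter vector hides inside the lattice
```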

29 citations


Book ChapterDOI
03 May 2010
TL;DR: This paper revisits the security model for fair blind signatures given by Hufschmitt and Traore in 2007 and gives the first practical fair blind signature scheme with a security proof in the standard model.
Abstract: A fair blind signature is a blind signature with revocable anonymity and unlinkability, i.e., an authority can link an issuing session to the resulting signature and trace a signature to the user who requested it. In this paper we first revisit the security model for fair blind signatures given by Hufschmitt and Traore in 2007. We then give the first practical fair blind signature scheme with a security proof in the standard model. Our scheme satisfies a stronger variant of the Hufschmitt-Traore model.

29 citations


Book ChapterDOI
03 May 2010
TL;DR: In this paper, the authors investigated the relevance of the theoretical framework on profiled side-channel attacks presented by F.-X. Standaert et al. at Eurocrypt 2009 and showed that with an engineer's mindset, two techniques can greatly improve both the off-line profiling and the on-line attack.
Abstract: This article investigates the relevance of the theoretical framework on profiled side-channel attacks presented by F.-X. Standaert et al. at Eurocrypt 2009. The analyses consist of a case study based on side-channel measurements acquired experimentally from a hardwired cryptographic accelerator. Therefore, with respect to previous formal analyses carried out on software measurements or on simulated data, the investigations we describe are more complex, due to the underlying chip's architecture and to the large amount of algorithmic noise. In this difficult context, we show however that with an engineer's mindset, two techniques can greatly improve both the off-line profiling and the on-line attack. First, we explore the appropriateness of different choices for the sensitive variables. We show that a skilled attacker aware of the register transfers occurring during the cryptographic operations can select the most adequate distinguisher, thus increasing its success rate. Second, we introduce a method based on the thresholding of leakage data to accelerate the profiling or the matching stages. Indeed, leveraging on an engineer's common sense, it is possible to visually foresee the shape of some eigenvectors, thereby anticipating their estimation towards their asymptotic value by authoritatively zeroing weak components containing mainly non-informational noise. This method empowers an attacker, in that it saves traces when converging towards correct values of the secret. Concretely, we demonstrate a fivefold speed-up in the on-line phase of the attack.
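The thresholding idea can be pictured with a few lines of numpy on synthetic traces (everything below is illustrative, not the paper's case study: the trace dimensions, the leakage model, and the threshold are assumptions): estimate the dominant leakage direction from the trace covariance and zero its weak components so that only the informative samples remain.

```python
# Thresholding of an estimated leakage eigenvector: components whose magnitude
# falls below a threshold are forced to zero, anticipating the asymptotic shape
# of the eigenvector and discarding mostly-noise samples. Synthetic traces;
# dimensions and threshold are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 200
leak_points = [50, 51, 52]                      # where the sensitive variable leaks

sensitive = rng.integers(0, 2, n_traces)        # toy sensitive bit per trace
traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
traces[:, leak_points] += 4.0 * sensitive[:, None]   # data-dependent leakage

cov = np.cov(traces, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
leading = eigvecs[:, -1]                        # direction of largest variance

threshold = 0.2 * np.abs(leading).max()
cleaned = np.where(np.abs(leading) >= threshold, leading, 0.0)

print("nonzero components kept:", np.flatnonzero(cleaned))
```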

Book ChapterDOI
03 May 2010
TL;DR: This paper presents a strategy for computing Gröbner bases that challenges complexity estimates and uses a flexible partial enlargement technique together with reduced row echelon forms to generate lower-degree elements (mutants).
Abstract: Recent developments in multivariate polynomial solving algorithms have made algebraic cryptanalysis a plausible threat to many cryptosystems. However, theoretical complexity estimates have shown this kind of attack to be infeasible for most realistic applications. In this paper we present a strategy for computing Gröbner bases that challenges those complexity estimates. It uses a flexible partial enlargement technique together with reduced row echelon forms to generate lower-degree elements (mutants). This new strategy surpasses old boundaries and obligates us to think of new paradigms for estimating the complexity of Gröbner basis computation. The newly proposed algorithm computed a Gröbner basis of a degree-2 random system with 32 variables and 32 equations using 30 GB of memory, which had never been done before by any known Gröbner basis solver.

Book ChapterDOI
03 May 2010
TL;DR: In this article, a group of users agree on a secret group key while obtaining some additional information that they can use on-demand to efficiently compute independent secret keys for any possible subgroup.
Abstract: Modern multi-user communication systems, including popular instant messaging tools, social network platforms, and cooperative-work applications, offer flexible forms of communication and exchange of data. At any point in time, concurrent communication sessions involving different subsets of users can be invoked. The traditional tool for achieving security in a multi-party communication environment is a group key exchange (GKE) protocol, which provides participants with a secure group key for their subsequent communication. Yet, in communication scenarios where various user subsets may be involved in different sessions, the deployment of classical GKE protocols has clear performance and scalability limitations, as each new session must be preceded by a separate execution of the protocol. The motivation of this work is to study the possibility of designing more flexible GKE protocols allowing not only the computation of a group key for some initial set of users but also efficient derivation of independent secret keys for all potential subsets. In particular we improve and generalize the recently introduced GKE protocols enabling on-demand derivation of peer-to-peer keys (so-called GKE+P protocols). We show how a group of users can agree on a secret group key while obtaining some additional information that they can use on-demand to efficiently compute independent secret keys for any possible subgroup. Our security analysis relies on the Gap Diffie-Hellman assumption and uses random oracles.
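A minimal sketch of the on-demand idea follows, as hedged intuition only: each user publishes an ephemeral Diffie-Hellman value during the group key exchange, and any pair can later derive an independent key by hashing their shared DH value, without further interaction. This is plain hashed Diffie-Hellman with toy parameters, not the paper's GKE+P protocol or its subgroup generalization.

```python
# On-demand peer-to-peer keys from values already published during the group
# key exchange: user i publishes y_i = g^{x_i}; later, any two users can derive
# a pairwise key K_ij = H(g^{x_i x_j}) without a new interaction. Plain hashed
# Diffie-Hellman with toy parameters -- not the paper's GKE+P protocol.
import hashlib, secrets

q = 233
p = 2 * q + 1
g = pow(2, 2, p)

class User:
    def __init__(self):
        self.x = secrets.randbelow(q - 1) + 1
        self.y = pow(g, self.x, p)            # published during the GKE run

    def p2p_key(self, other_y):
        shared = pow(other_y, self.x, p)      # g^{x_i x_j}
        return hashlib.sha256(f"p2p|{shared}".encode()).hexdigest()

alice, bob, carol = User(), User(), User()
assert alice.p2p_key(bob.y) == bob.p2p_key(alice.y)    # same key, no extra round
print("Alice-Bob   key:", alice.p2p_key(bob.y)[:16], "...")
print("Alice-Carol key:", alice.p2p_key(carol.y)[:16], "...")
```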

Book ChapterDOI
03 May 2010
TL;DR: In this article, the authors define the concept of optimistic fair priced oblivious transfer and propose a generic construction, based on verifiably encrypted signatures, that extends secure POT schemes to realize this functionality, and show that disputes can be resolved without the buyer losing her privacy.
Abstract: Priced oblivious transfer (POT) is a two-party protocol between a vendor and a buyer in which the buyer purchases digital goods without the vendor learning what is bought. Although privacy properties are guaranteed, current schemes do not offer fair exchange. A malicious vendor can, e.g., prevent the buyer from retrieving the goods after receiving the payment, and a malicious buyer can also accuse an honest vendor of misbehavior without the vendor being able to prove this untrue. In order to address these problems, we define the concept of optimistic fair priced oblivious transfer and propose a generic construction that extends secure POT schemes to realize this functionality. Our construction, based on verifiably encrypted signatures, employs a neutral adjudicator that is only involved in case of dispute; we show that disputes can be resolved without the buyer losing her privacy, i.e., the buyer does not need to disclose which digital goods she is interested in. We show that our construction can be instantiated with an existing universally composable POT scheme, and furthermore we propose a novel full-simulation secure POT scheme that is much more efficient.

Book ChapterDOI
03 May 2010
TL;DR: The new range proof technique is especially suited to practical small ranges, where it is more efficient than the existing solutions, and it achieves stronger security and stronger privacy than most of the existing range proof schemes.
Abstract: A batch proof and verification technique by Chida and Yamamoto is extended to work in a more general scenario. The new batch proof and verification technique is more useful and can save more cost than the original technique. An application of the new batch proof and verification technique is range proof, which proves that a secret integer is in an interval range. Like the most recent and advanced range proof protocol by Camenisch, Chaabouni and Shelat in Asiacrypt 2008, the new range proof technique is especially suitable for practical small ranges, but more efficient and stronger in security than the former. The new range proof technique is more efficient than the existing solutions in practical small ranges. Moreover, it achieves stronger security and stronger privacy (perfect honest-verifier zero knowledge) than most of the existing range proof schemes.

Book ChapterDOI
03 May 2010
TL;DR: This paper defines a model and security notions of information-theoretically secure key-insulated multireceiver authentication codes (KI-MRA), and shows lower bounds of sizes of entities' secret-keys and shows that the direct construction is optimal.
Abstract: Exposing a secret key is one of the most disastrous threats in cryptographic protocols. Key-insulated security was proposed with the aim of realizing protection against such key-exposure problems. In this paper, we study key-insulated authentication schemes with information-theoretic security. More specifically, we focus on one type of information-theoretically secure authentication, called multireceiver authentication codes, and we newly define a model and security notions for information-theoretically secure key-insulated multireceiver authentication codes (KI-MRA for short), based on the ideas of both computationally secure key-insulated signature schemes and multireceiver authentication codes in the information-theoretic setting. In addition, we show lower bounds on the sizes of entities' secret keys. We also provide two kinds of constructions of KI-MRA: direct and generic constructions which are provably secure under our security definitions. It is shown that the direct construction meets the lower bounds on key sizes with equality. Therefore, it turns out that our lower bounds are tight, and that the direct construction is optimal.

Book ChapterDOI
03 May 2010
TL;DR: In this paper, pairing-friendly groups are exploited to obtain zero-knowledge proofs for distributed-password public-key cryptography (DPwPKC), whose security relies on the Decisional Linear assumption.
Abstract: Distributed-password public-key cryptography (DPwPKC) allows the members of a group of people, each one holding a small secret password only, to help a leader to perform the private operation associated to a public-key cryptosystem. Abdalla et al. recently defined this tool [1], with a practical construction. Unfortunately, the latter applied to ElGamal decryption only, and relied on the DDH assumption, excluding any recent pairing-based cryptosystems. In this paper, we extend their techniques to support, and exploit, pairing-based properties: we take advantage of pairing-friendly groups to obtain efficient (simulation-sound) zero-knowledge proofs, whose security relies on the Decisional Linear assumption. As a consequence, we provide efficient protocols, secure in the standard model, for ElGamal decryption as in [1], but also for Linear decryption, as well as extraction of several identity-based cryptosystems [6,4]. Furthermore, we strengthen their security model by suppressing the useless testPwd queries in the functionality.

Book ChapterDOI
03 May 2010
TL;DR: This paper presents a protocol achieving perfectly secure message transmission of a single message with O(n^2) communication complexity in polynomial time, which is an improvement on previous protocols which achieve perfectly secure message transmission of a single message with a communication complexity of O(n^3).
Abstract: Recently Kurosawa and Suzuki considered almost secure (1-phase n-channel) message transmission when n=(2t+1). The authors gave a lower bound on the communication complexity and presented an exponential time algorithm achieving this bound. In this paper we present a polynomial time protocol achieving the same security properties for the same network conditions. Additionally, we introduce and formalize new security parameters to message transmission protocols which we feel are missing and necessary in the literature. We also focus on 2-phase protocols. We present a protocol achieving perfectly secure message transmission of a single message with O(n^2) communication complexity in polynomial time. This is an improvement on previous protocols which achieve perfectly secure message transmission of a single message with a communication complexity of O(n^3).

Book ChapterDOI
03 May 2010
TL;DR: This paper revisits the work of Heninger and Shacham in Crypto 2009 and provides a combinatorial model for the search where some random bits of the primes are known and shows how one can factorize N given the knowledge of random bits in the least significant halves of the primes.
Abstract: This paper discusses the factorization of the RSA modulus N (i.e., N=pq, where p, q are primes of the same bit size) by reconstructing the primes from randomly known bits. The reconstruction method is a modified brute-force search exploiting the known bits to prune wrong branches of the search tree, thereby reducing the total search space towards possible factorization. Here we revisit the work of Heninger and Shacham in Crypto 2009 and provide a combinatorial model for the search where some random bits of the primes are known. This shows how one can factorize N given the knowledge of random bits in the least significant halves of the primes. We also explain a lattice based strategy in this direction. More importantly, we study how N can be factored given the knowledge of some blocks of bits in the most significant halves of the primes. We present improved theoretical results and experimental evidence in this direction.
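A minimal sketch of the branch-and-prune search that the combinatorial model captures: grow candidate low-order bits of p and q one position at a time, keeping only candidates consistent with N modulo 2^i and with the randomly revealed bits. Parameters are toy-sized, the revealed-bit model is simplified (both primes revealed at the same positions), and neither the lattice step nor the most-significant-half variant is shown.

```python
# Branch-and-prune reconstruction of p and q from N and randomly known bits
# (the Heninger-Shacham-style search this paper models combinatorially).
# Toy parameters; the lattice-based and MSB-half variants are not shown.
import random

p_true, q_true = 60107, 53993             # small odd primes of equal bit size
N = p_true * q_true
nbits = p_true.bit_length()

random.seed(1)
known = {}                                # bit position -> (bit of p, bit of q), ~50% revealed
for i in range(nbits):
    if random.random() < 0.5:
        known[i] = ((p_true >> i) & 1, (q_true >> i) & 1)

def consistent(pp, qq, i):
    """Partial candidates (i low-order bits) must agree with N mod 2^i and the known bits."""
    if (pp * qq - N) % (1 << i):
        return False
    if i - 1 in known:
        kp, kq = known[i - 1]
        if (pp >> (i - 1)) & 1 != kp or (qq >> (i - 1)) & 1 != kq:
            return False
    return True

candidates = [(1, 1)]                     # p and q are odd, so bit 0 is 1
for i in range(2, nbits + 1):             # extend to i low-order bits
    nxt = []
    for pp, qq in candidates:
        for bp in (0, 1):
            for bq in (0, 1):
                cand = (pp | bp << (i - 1), qq | bq << (i - 1))
                if consistent(*cand, i):
                    nxt.append(cand)
    candidates = nxt

factors = [c for c in candidates if c[0] * c[1] == N]
assert (p_true, q_true) in factors or (q_true, p_true) in factors
```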

Book ChapterDOI
03 May 2010
TL;DR: In this article, the authors presented an optimally resilient, perfectly secure Asynchronous Verifiable Secret Sharing (AVSS) protocol that can generate d-sharing of a secret for any d, where t ≤ d ≤ 2t.
Abstract: Verifiable Secret Sharing (VSS) is a fundamental primitive used in many distributed cryptographic tasks, such as Multiparty Computation (MPC) and Byzantine Agreement (BA). It is a two-phase (sharing, reconstruction) protocol. The VSS and MPC protocols are carried out among n parties, where t out of n parties can be under the influence of a Byzantine (active) adversary, having unbounded computing power. It is well known that protocols for perfectly secure VSS and perfectly secure MPC exist in an asynchronous network iff n≥4t+1. Hence, we call any perfectly secure VSS (MPC) protocol designed over an asynchronous network with n=4t+1 an optimally resilient VSS (MPC) protocol. A secret is d-shared among the parties if there exists a random degree-d polynomial whose constant term is the secret and each honest party possesses a distinct point on the degree-d polynomial. Typically VSS is used as a primary tool to generate t-sharing of secret(s). In this paper, we present an optimally resilient, perfectly secure Asynchronous VSS (AVSS) protocol that can generate d-sharing of a secret for any d, where t≤d≤2t. This is the first optimally resilient, perfectly secure AVSS of its kind in the literature. Specifically, our AVSS can generate d-sharing of l≥1 secrets from ${\mathbb F}$ concurrently, with a communication cost of ${\cal O}(\ell n^2 \log{|{\mathbb F}|})$ bits, where ${\mathbb F}$ is a finite field. In terms of communication complexity, the best known optimally resilient, perfectly secure AVSS is reported in [2]. The protocol of [2] can generate t-sharing of l secrets concurrently, with the same communication complexity as our AVSS. However, the AVSS protocols of [2] and [4] (the only known optimally resilient perfectly secure AVSS, other than [2]) do not generate d-sharing, for any d>t. Interpreting in a different way, we may also say that our AVSS shares l(d+1−t) secrets simultaneously with a communication cost of ${\cal O}(\ell n^2 \log{|{\mathbb F}|})$ bits. Putting d=2t (the maximum value of d), we notice that the amortized cost of sharing a single secret using our AVSS is only ${\cal O}(n \log{|{\mathbb F}|})$ bits. This is a clear improvement over the AVSS of [2] whose amortized cost of sharing a single secret is ${\cal O}(n^2 \log{|{\mathbb F}|})$ bits. As an interesting application of our AVSS, we propose a new optimally resilient, perfectly secure Asynchronous Multiparty Computation (AMPC) protocol that communicates ${\cal O}(n^2 \log|{\mathbb F}|)$ bits per multiplication gate. The best known optimally resilient perfectly secure AMPC is due to [2], which communicates ${\cal O}(n^3 \log|{\mathbb F}|)$ bits per multiplication gate. Thus our AMPC improves the communication complexity of the best known AMPC of [2] by a factor of Ω(n).
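The d-sharing object itself is easy to picture; the sketch below is plain Shamir sharing with a degree-d polynomial and Lagrange reconstruction from d+1 shares, with a toy field size. None of the AVSS machinery from the paper (verification, asynchrony, Byzantine parties) is modeled.

```python
# d-sharing in the Shamir sense: a random degree-d polynomial with the secret as
# constant term, one evaluation point per party, recovered by Lagrange
# interpolation from any d+1 shares. The AVSS machinery of the paper is not modeled.
import secrets

P = 2**61 - 1                       # a Mersenne prime standing in for the field F

def share(secret, n, d):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(d)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):            # Lagrange interpolation at 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

n, t = 13, 3                        # n = 4t + 1 parties; d can range over t..2t
d = 2 * t
shares = share(123456789, n, d)
assert reconstruct(shares[: d + 1]) == 123456789
```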

Book ChapterDOI
03 May 2010
TL;DR: In this article, the authors propose a new way of carrying out Miller's algorithm that involves new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speed ups in most practical situations of between 5 and 30 percent.
Abstract: The most costly operations encountered in pairing computations are those that take place in the full extension field $\mathbb{F}_{p^k}$. At high levels of security, the complexity of operations in $\mathbb{F}_{p^k}$ dominates the complexity of the operations that occur in the lower degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller's algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller's algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller's algorithm that involves new explicit formulas which reduce the number of full extension field operations that occur in an iteration of the Miller loop, resulting in significant speed ups in most practical situations of between 5 and 30 percent.

Book ChapterDOI
03 May 2010
TL;DR: In this paper, the SHAvite-3-512 hash function, as tweaked for round 2 of the SHA-3 competition, is analyzed, and a chosen counter, chosen salt preimage attack and a collision attack on its compression function are presented.
Abstract: In this paper, we analyze the SHAvite-3-512 hash function, as proposed and tweaked for round 2 of the SHA-3 competition. We present cryptanalytic results on 10 out of 14 rounds of the hash function SHAvite-3-512, and on the full 14-round compression function of SHAvite-3-512. We show a second preimage attack on the hash function reduced to 10 rounds with a complexity of 2^497 compression function evaluations and 2^16 memory. For the full 14-round compression function, we give a chosen counter, chosen salt preimage attack with 2^384 compression function evaluations and 2^128 memory (or complexity 2^448 without memory), and a collision attack with 2^192 compression function evaluations and 2^128 memory.

Book ChapterDOI
03 May 2010
TL;DR: This work unifies the previous well-studied models into a generalization, called fair partially blind signatures, and proposes an instantiation that is secure in the standard model without any setup assumptions, giving a positive answer to the open question of whether fair blind signature schemes in the standard model exist.
Abstract: It is well-known that blind signature schemes provide full anonymity for the receiving user. For many real-world applications, however, this leaves too much room for fraud. There are two generalizations of blind signature schemes that compensate this weakness: fair blind signatures and partially blind signatures. Fair blind signature schemes allow a trusted third party to revoke blindness in case of a dispute. In partially blind signature schemes, the signer retains a certain control over the signed message because signer and user have to agree on a specific part of the signed message. In this work, we unify the previous well-studied models into a generalization, called fair partially blind signatures. We propose an instantiation that is secure in the standard model without any setup assumptions. With this construction, we also give a positive answer to the open question of whether fair blind signature schemes in the standard model exist.

Book ChapterDOI
03 May 2010
TL;DR: N-cell GF-NLFSR structures offer similar proofs of security against differential cryptanalysis as conventional n-cell Feistel structures and are resistant against other block cipher attacks such as linear, boomerang, integral, impossible differential, higher order differential, interpolation, slide, XSL and related-key differential attacks.
Abstract: The n-cell GF-NLFSR (Generalized Feistel-NonLinear Feedback Shift Register) structure [8] is a generalized unbalanced Feistel network that can be considered as a generalization of the outer function FO of the KASUMI block cipher. An advantage of this cipher over other n-cell generalized Feistel networks, e.g. SMS4 [11] and Camellia [5], is that it is parallelizable for up to n rounds. In hardware implementations, the benefits translate to speeding up encryption by up to n times while consuming less area and power. At the same time n-cell GF-NLFSR structures offer similar proofs of security against differential cryptanalysis as conventional n-cell Feistel structures. We also ensure that parallelized versions of Camellia and SMS4 are resistant against other block cipher attacks such as linear, boomerang, integral, impossible differential, higher order differential, interpolation, slide, XSL and related-key differential attacks.

Book ChapterDOI
03 May 2010
TL;DR: Two methods for finding linear differential trails that lead to lower estimated attack complexities when used within the framework introduced by Brier, Khazaei, Meier and Peyrin at ASIACRYPT 2009 are applied.
Abstract: This paper presents improved collision attacks on round-reduced variants of the hash function CubeHash, one of the SHA-3 second round candidates. We apply two methods for finding linear differential trails that lead to lower estimated attack complexities when used within the framework introduced by Brier, Khazaei, Meier and Peyrin at ASIACRYPT 2009. The first method yields trails that are relatively dense at the beginning and sparse towards the end. In combination with the condition function concept, such trails lead to much faster collision attacks. We demonstrate this by providing a real collision for CubeHash-5/96. The second method randomizes the search for highly probable linear differential trails and leads to significantly better attacks for up to eight rounds.

Book ChapterDOI
03 May 2010
TL;DR: This work presents a new and efficient hash-and-sign signature scheme in the standard model that is based on the RSA assumption, and adapts the new proof techniques used to prove the recent RSA scheme by Hohenberger and Waters.
Abstract: In this work we present a new and efficient hash-and-sign signature scheme in the standard model that is based on the RSA assumption. Technically it adapts the new proof techniques that are used to prove the recent RSA scheme by Hohenberger and Waters. In contrast to the Hohenberger-Waters scheme, our scheme allows signing blocks of messages and issuing signatures on committed values, two key properties required for building privacy-preserving systems.
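For orientation only, here is the textbook hash-and-sign RSA flow in the full-domain-hash style: hash the message into Z_N and apply the RSA trapdoor. This is the random-oracle construction, not the standard-model scheme presented in the paper, and the parameters below are toy-sized assumptions.

```python
# Hash-and-sign RSA in the textbook full-domain-hash style: sign by hashing the
# message into Z_N and applying the RSA trapdoor. This is the random-oracle
# construction, NOT the standard-model scheme of the paper; parameters are toy
# and the "full-domain" hash is just a digest reduced mod N.
import hashlib

p, q = 60107, 53993                  # toy primes; a real key uses ~1024-bit primes
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def H(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(H(msg), d, N)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, N) == H(msg)

sig = sign(b"hello Africacrypt")
assert verify(b"hello Africacrypt", sig)
assert not verify(b"tampered", sig)
```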