
Showing papers in "Journal of Cryptology in 2011"


Journal ArticleDOI
TL;DR: A recently introduced masking method based on secret sharing and multi-party computation techniques is discussed; it yields implementations that are provably resistant against a wide range of attacks while making only minimal assumptions on the hardware.
Abstract: Hardware implementations of cryptographic algorithms are vulnerable to side-channel attacks. Side-channel attacks that are based on multiple measurements of the same operation can be countered by employing masking techniques. Many protection measures depart from an idealized hardware model that is very expensive to meet with real hardware. In particular, the presence of glitches causes many masking techniques to leak information during the computation of nonlinear functions. We discuss a recently introduced masking method which is based on secret sharing and multi-party computation methods. The approach results in implementations that are provably resistant against a wide range of attacks, while making only minimal assumptions on the hardware. We show how to use this method to derive secure implementations of some nonlinear building blocks for cryptographic algorithms. Finally, we provide a provably secure implementation of the block cipher Noekeon and verify the results by means of low-level simulations.
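The core idea of the secret-sharing countermeasure can be illustrated with the classic three-share AND gate from threshold implementations: each input bit is split into three random shares, and each output share is computed without ever touching all shares of any input (the "non-completeness" property that keeps glitches from leaking). A minimal Python sketch of one standard choice of output shares, for illustration only (not a hardware model):

```python
import secrets

def share3(x):
    """Split bit x into three random shares with x = s1 ^ s2 ^ s3."""
    s1, s2 = secrets.randbits(1), secrets.randbits(1)
    return (s1, s2, s1 ^ s2 ^ x)

def unshare(shares):
    """Recombine shares by XOR."""
    out = 0
    for s in shares:
        out ^= s
    return out

def shared_and(a, b):
    """Three-share AND gate: output share i is computed without any share
    of index i, so no single (possibly glitching) circuit ever sees all
    shares of an input -- the 'non-completeness' property."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)
    c2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)
    return (c1, c2, c3)
```

XOR-ing the three output shares collects every cross term a_i & b_j exactly once, so the sharing of the AND is correct while each share function omits one index.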

311 citations


Journal ArticleDOI
TL;DR: This work formalizes a cryptographic primitive, the “tweakable block cipher,” and suggests that tweakable block ciphers are easy to design, that the extra cost of making a block cipher “tweakable” is small, and that it is easier to design and prove the security of applications of block ciphers that need this variability using tweakable block ciphers.
Abstract: A common trend in applications of block ciphers over the past decades has been to employ block ciphers as one piece of a “mode of operation”—possibly, a way to make a secure symmetric-key cryptosystem, but more generally, any cryptographic application. Most of the time, these modes of operation use a wide variety of techniques to achieve a subgoal necessary for their main goal: instantiation of “essentially different” instances of the block cipher. We formalize a cryptographic primitive, the “tweakable block cipher.” Such a cipher has not only the usual inputs—message and cryptographic key—but also a third input, the “tweak.” The tweak serves much the same purpose that an initialization vector does for CBC mode or that a nonce does for OCB mode. Our abstraction brings this feature down to the primitive block-cipher level, instead of incorporating it only at the higher modes-of-operation levels. We suggest that (1) tweakable block ciphers are easy to design, (2) the extra cost of making a block cipher “tweakable” is small, and (3) it is easier to design and prove the security of applications of block ciphers that need this variability using tweakable block ciphers.
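The tweakable interface, and the kind of cheap XOR-masking construction the paper argues for, can be sketched as follows. Everything here is an illustrative toy: the "block cipher" is a deliberately insecure Feistel network built from SHA-256, and deriving the tweak mask by hashing stands in for the keyed universal hashing of real constructions; none of this is the paper's exact scheme.

```python
import hashlib

BLOCK = 16  # bytes; toy 128-bit block

def _xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def _f(key, half, rnd):
    """Feistel round function derived from SHA-256 (toy, NOT secure)."""
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BLOCK // 2]

def encrypt_block(key, block):
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in range(4):
        l, r = r, _xor(l, _f(key, r, rnd))
    return l + r

def decrypt_block(key, block):
    l, r = block[:BLOCK // 2], block[BLOCK // 2:]
    for rnd in reversed(range(4)):
        l, r = _xor(r, _f(key, l, rnd)), l
    return l + r

def _mask(key, tweak):
    """Derive a per-tweak mask (stand-in for a keyed universal hash)."""
    return hashlib.sha256(b"tweak" + key + tweak).digest()[:BLOCK]

def tweakable_encrypt(key, tweak, block):
    """XOR the tweak mask before and after the core cipher, so each tweak
    value selects an 'essentially different' permutation under one key."""
    m = _mask(key, tweak)
    return _xor(encrypt_block(key, _xor(block, m)), m)

def tweakable_decrypt(key, tweak, block):
    m = _mask(key, tweak)
    return _xor(decrypt_block(key, _xor(block, m)), m)
```

The point of the interface is visible in the signatures: the tweak is a cheap third input, changed per message or per block, while the expensive key schedule stays fixed.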

288 citations


Journal ArticleDOI
TL;DR: This paper shows that the systematic process variation adversely affects the ability of the RO-PUF to generate unique chip-signatures, and proposes a compensation method to mitigate it.
Abstract: In this paper, we analyze ring oscillator (RO) based physical unclonable function (PUF) on FPGAs. We show that the systematic process variation adversely affects the ability of the RO-PUF to generate unique chip-signatures, and propose a compensation method to mitigate it. Moreover, a configurable ring oscillator (CRO) technique is proposed to reduce noise in PUF responses. Our compensation method could improve the uniqueness of the PUF by an amount as high as 18%. The CRO technique could produce nearly 100% error-free PUF outputs over varying environmental conditions without post-processing while consuming minimum area.
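A toy simulation shows why systematic variation hurts RO-PUF uniqueness and why a compensation strategy helps. The model below (all parameters invented for illustration) gives each ring oscillator a frequency composed of a chip-wide systematic gradient plus chip-unique random variation; comparing distant ROs is dominated by the shared gradient, while comparing physical neighbours largely cancels it. This mirrors the spirit, not the details, of the paper's compensation method:

```python
import random

def make_chip(n_ros, gradient=0.05, variation=0.01, seed=0):
    """Toy RO-PUF model: frequency = nominal + systematic across-die
    gradient + chip-unique random process variation. Only the random
    part should contribute to a unique signature."""
    rng = random.Random(seed)
    return [1.0 + gradient * i / n_ros + rng.gauss(0, variation)
            for i in range(n_ros)]

def signature_naive(freqs):
    """Compare RO i against RO i + n/2: the frequency difference is
    dominated by the gradient, so different chips give similar bits."""
    half = len(freqs) // 2
    return [int(freqs[i] > freqs[i + half]) for i in range(half)]

def signature_neighbor(freqs):
    """Compare physically adjacent ROs: the shared gradient largely
    cancels, leaving the chip-unique variation to decide each bit."""
    return [int(freqs[i] > freqs[i + 1]) for i in range(0, len(freqs) - 1, 2)]
```

Running both signature schemes on two simulated chips, the naive signatures agree on far more bits across chips (poor uniqueness) than the neighbour-compensated ones.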

283 citations


Journal ArticleDOI
TL;DR: Recent contributions and applications of MIA are compiled into a comprehensive study; the strengths and weaknesses of this new distinguisher are put forward, and it is compared with standard power analysis attacks using the correlation coefficient.
Abstract: Mutual Information Analysis is a generic side-channel distinguisher that has been introduced at CHES 2008. It aims to allow successful attacks requiring minimum assumptions and knowledge of the target device by the adversary. In this paper, we compile recent contributions and applications of MIA in a comprehensive study. From a theoretical point of view, we carefully discuss its statistical properties and relationship with probability density estimation tools. From a practical point of view, we apply MIA in two of the most investigated contexts for side-channel attacks. Namely, we consider first-order attacks against an unprotected implementation of the DES in a full custom IC and second-order attacks against a masked implementation of the DES in an 8-bit microcontroller. These experiments allow us to put forward the strengths and weaknesses of this new distinguisher and to compare it with standard power analysis attacks using the correlation coefficient.
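The MIA distinguisher itself is simple to sketch: estimate the mutual information between a key-dependent leakage prediction and the measured traces, and rank key guesses by it. The simulation below is a toy setup, not the paper's DES experiments: it attacks a single 4-bit S-box lookup (PRESENT's S-box is used purely as a convenient example) with simulated Hamming-weight leakage and a simple histogram-based MI estimator.

```python
import math, random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT S-box, as example
HW = [bin(x).count("1") for x in range(16)]

def mutual_information(xs, ys, n_bins=8):
    """Histogram-based MI estimate between discrete xs and continuous ys."""
    lo, hi = min(ys), max(ys)
    width = (hi - lo) / n_bins or 1.0
    bins = [min(int((y - lo) / width), n_bins - 1) for y in ys]
    n = len(xs)
    px, py, pxy = {}, {}, {}
    for x, b in zip(xs, bins):
        px[x] = px.get(x, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(x, b)] = pxy.get((x, b), 0) + 1
    # I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * math.log2(c * n / (px[x] * py[b]))
               for (x, b), c in pxy.items())

rng = random.Random(42)
true_key = 0xA
plaintexts = [rng.randrange(16) for _ in range(3000)]
# Simulated power traces: Hamming weight of the S-box output plus noise.
traces = [HW[SBOX[p ^ true_key]] + rng.gauss(0, 0.5) for p in plaintexts]

# Rank every key guess by the MI between its leakage prediction and traces.
scores = {k: mutual_information([HW[SBOX[p ^ k]] for p in plaintexts], traces)
          for k in range(16)}
best_guess = max(scores, key=scores.get)
```

Because MI only measures statistical dependence, the attack needs no assumption that the leakage is a linear function of the predicted value, which is the generic-distinguisher property the paper studies.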

261 citations


Journal ArticleDOI
TL;DR: Experimental results on real-world power traces show that a provably secure countermeasure against first-order side-channel attacks, applied to the lightweight block cipher PRESENT, provides additional security.
Abstract: A provably secure countermeasure against first order side-channel attacks was proposed by Nikova et al. (P. Ning, S. Qing, N. Li (eds.) International conference in information and communications security. Lecture notes in computer science, vol. 4307, pp. 529–545, Springer, Berlin, 2006). We have implemented the lightweight block cipher PRESENT using the proposed countermeasure. For this purpose we had to decompose the S-box used in PRESENT and split it into three shares that fulfill the properties of the scheme presented by Nikova et al. (P. Lee, J. Cheon (eds.) International conference in information security and cryptology. Lecture notes in computer science, vol. 5461, pp. 218–234, Springer, Berlin, 2008). Our experimental results on real-world power traces show that this countermeasure provides additional security. Post-synthesis figures for an ASIC implementation require only 2,300 GE, which makes this implementation suitable for low-cost passive RFID-tags.

215 citations


Journal ArticleDOI
TL;DR: Two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map are constructed.
Abstract: We construct two efficient Identity-Based Encryption (IBE) systems that admit selective-identity security reductions without random oracles in groups equipped with a bilinear map. Selective-identity secure IBE is a slightly weaker security model than the standard security model for IBE. In this model the adversary must commit ahead of time to the identity that it intends to attack, whereas in an adaptive-identity attack the adversary is allowed to choose this identity adaptively. Our first system—BB1—is based on the well studied decisional bilinear Diffie–Hellman assumption, and extends naturally to systems with hierarchical identities, or HIBE. Our second system—BB2—is based on a stronger assumption which we call the Bilinear Diffie–Hellman Inversion assumption and provides another approach to building IBE systems. Our first system, BB1, is very versatile and well suited for practical applications: the basic hierarchical construction can be efficiently secured against chosen-ciphertext attacks, and further extended to support efficient non-interactive threshold decryption, among others, all without using random oracles. Both systems, BB1 and BB2, can be modified generically to provide “full” IBE security (i.e., against adaptive-identity attacks), either using random oracles, or in the standard model at the expense of a non-polynomial but easy-to-compensate security reduction.

176 citations


Journal ArticleDOI
TL;DR: This work gives detailed implementation results which show that the Gallant–Lambert–Vanstone method for elliptic curve point multiplication on general curves runs in between 0.70 and 0.83 the time of the previous best methods.
Abstract: Efficiently computable homomorphisms allow elliptic curve point multiplication to be accelerated using the Gallant–Lambert–Vanstone (GLV) method. Iijima, Matsuo, Chao and Tsujii gave such homomorphisms for a large class of elliptic curves by working over ${\mathbb{F}}_{p^{2}}$. We extend their results and demonstrate that they can be applied to the GLV method. In general we expect our method to require about 0.75 the time of previous best methods (except for subfield curves, for which Frobenius expansions can be used). We give detailed implementation results which show that the method runs in between 0.70 and 0.83 the time of the previous best methods for elliptic curve point multiplication on general curves.
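The heart of the GLV method is decomposing the scalar k into two half-length components with k ≡ k1 + k2·λ (mod n), so that kP can be computed as a multi-scalar multiplication k1·P + k2·φ(P). A sketch of the standard extended-Euclidean decomposition follows; curve arithmetic is omitted, and this is a generic illustration of the lattice trick rather than the paper's implementation.

```python
def _round_div(a, b):
    """Nearest-integer division for (possibly huge) Python ints."""
    if b < 0:
        a, b = -a, -b
    return (2 * a + b) // (2 * b)

def glv_decompose(k, lam, n):
    """Return (k1, k2) with k ≡ k1 + k2*lam (mod n) and both components
    of size roughly sqrt(n), via the extended-Euclidean lattice trick."""
    # Extended Euclid on (n, lam); invariant: r ≡ t*lam (mod n).
    rs, ts = [n, lam % n], [0, 1]
    while rs[-1] * rs[-1] >= n:
        q = rs[-2] // rs[-1]
        rs.append(rs[-2] - q * rs[-1])
        ts.append(ts[-2] - q * ts[-1])
    # Short vectors (a, b) of the lattice { (a, b) : a + b*lam ≡ 0 mod n }.
    v1 = (rs[-1], -ts[-1])
    q = rs[-2] // rs[-1]
    nxt = (rs[-2] - q * rs[-1], -(ts[-2] - q * ts[-1]))
    prv = (rs[-2], -ts[-2])
    v2 = prv if prv[0] ** 2 + prv[1] ** 2 <= nxt[0] ** 2 + nxt[1] ** 2 else nxt
    # Express (k, 0) ≈ b1*v1 + b2*v2 and keep the (short) rounding error.
    det = v1[0] * v2[1] - v1[1] * v2[0]
    b1 = _round_div(k * v2[1], det)
    b2 = _round_div(-k * v1[1], det)
    return (k - b1 * v1[0] - b2 * v2[0], -b1 * v1[1] - b2 * v2[1])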
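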

140 citations


Journal ArticleDOI
TL;DR: A positive obfuscation result is presented for a traditional cryptographic functionality: re-encryption, which takes a ciphertext for a message m encrypted under Alice’s public key and transforms it into a ciphertext for the same message m under Bob’s public key. The construction satisfies a definition of obfuscation that incorporates more security-aware provisions.
Abstract: We present a positive obfuscation result for a traditional cryptographic functionality. This positive result stands in contrast to well-known impossibility results (Barak et al. in Advances in Cryptology—CRYPTO’01, 2002), for general obfuscation and recent impossibility and implausibility (Goldwasser and Kalai in 46th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 553–562, 2005) results for obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions (Canetti in Advances in Cryptology—CRYPTO’97, 1997; Wee in 37th ACM Symposium on Theory of Computing (STOC), pp. 523–532, 2005), our obfuscation result applies to the significantly more complex and widely-used re-encryption functionality. This functionality takes a ciphertext for message m encrypted under Alice’s public key and transforms it into a ciphertext for the same message m under Bob’s public key. To overcome impossibility results and to make our results meaningful for cryptographic functionalities, our scheme satisfies a definition of obfuscation which incorporates more security-aware provisions.

137 citations


Journal ArticleDOI
TL;DR: It is shown that fault attacks on SRAM-based FPGAs may behave differently from attacks against ASICs and therefore need to be addressed by specific countermeasures, which are also discussed in this paper.
Abstract: Programmable devices are an interesting alternative when implementing embedded systems on a low-volume scale. In particular, the affordability and the versatility of SRAM-based FPGAs make them attractive with respect to ASIC implementations. FPGAs have thus been used extensively and successfully in many fields, such as implementing cryptographic accelerators. Hardware implementations, however, must be protected against malicious attacks, e.g. those based on fault injections. Protections have usually been evaluated on ASICs, but FPGAs can be vulnerable as well. This work therefore presents fault injection attacks against a secured AES architecture implemented on an SRAM-based FPGA. The errors are injected during the computation by means of voltage glitches and laser attacks. To our knowledge, this is one of the first works dealing with dynamic laser fault injections. We show that fault attacks on SRAM-based FPGAs may behave differently from attacks against ASICs and therefore need to be addressed by specific countermeasures, which are also discussed in this paper. In addition, we discuss the different effects obtained by the two types of attacks.

77 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive statistical study of TRNGs based on the sampling of an oscillator subject to phase noise (a.k.a. phase jitters).
Abstract: Physical random number generators (a.k.a. TRNGs) appear to be critical components of many cryptographic systems. Yet, such building blocks are still too seldom provided with a formal assessment of security, in comparison to what is achieved for conventional cryptography. In this work, we present a comprehensive statistical study of TRNGs based on the sampling of an oscillator subject to phase noise (a.k.a. phase jitters). This classical layout, typically instantiated with a ring oscillator, provides a simple and attractive way to implement a TRNG on a chip. Our mathematical study allows one to evaluate and control the main security parameters of such a random source, including its entropy rate and the biases of certain bit patterns, provided that a small number of physical parameters of the oscillator are known. In order to evaluate these parameters in a secure way, we also provide an experimental method for filtering out the global perturbations affecting a chip and possibly visible to an attacker. Finally, from our mathematical model, we deduce specific statistical tests applicable to the bitstream of a TRNG. In particular, in the case of an insecure configuration, we show how to recover the parameters of the underlying oscillator.
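A toy model makes the paper's central trade-off concrete: sample the phase of a jittered oscillator at a lower frequency, and the quality of the output bits depends on how much jitter accumulates between samples. All parameters below are invented for illustration; the paper's actual contribution is a rigorous stochastic analysis, not a simulation like this one.

```python
import math, random

def sample_trng(n_bits, jitter_rel, ratio=100.0, seed=1):
    """Toy RO-based TRNG: a fast oscillator is sampled once every `ratio`
    of its periods; each elapsed period adds Gaussian jitter of standard
    deviation jitter_rel (in periods), so the jitter accumulated between
    two samples has standard deviation jitter_rel * sqrt(ratio)."""
    rng = random.Random(seed)
    phase = 0.25  # start mid-half-period: a jitter-free oscillator is stuck here
    bits = []
    for _ in range(n_bits):
        phase += ratio + rng.gauss(0, jitter_rel * math.sqrt(ratio))
        bits.append(int(phase % 1.0 < 0.5))  # which half-period we sampled in
    return bits

def bias(bits):
    """Distance of the ones-ratio from the ideal 1/2."""
    return abs(sum(bits) / len(bits) - 0.5)
```

With ample jitter the wrapped phase is close to uniform and the bias is small; with negligible jitter the sampler keeps reading the same half-period and the output is essentially constant, which is the insecure configuration the paper's tests are designed to detect.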

74 citations


Journal ArticleDOI
TL;DR: It is proved that in the multiparty case it is possible to construct a single mechanism for rational secret sharing that works for all (polynomial) utility functions. It is also shown that the known protocols for rational secret sharing that do not assume simultaneous channels all suffer from the problem that one of the parties can cause the others to output an incorrect value.
Abstract: The problem of carrying out cryptographic computations when the participating parties are rational in a game-theoretic sense has recently gained much attention. One problem that has been studied considerably is that of rational secret sharing. In this setting, the aim is to construct a mechanism (protocol) so that parties behaving rationally have incentive to cooperate and provide their shares in the reconstruction phase, even if each party prefers to be the only one to learn the secret. Although this question was only recently asked by Halpern and Teague (STOC 2004), a number of works with beautiful ideas have been presented to solve this problem. However, they all have the property that the protocols constructed need to know the actual utility values of the parties (or at least a bound on them). This assumption is very problematic because the utilities of parties are not public knowledge. We ask whether this dependence on the actual utility values is really necessary and prove that in the case of two parties, rational secret sharing cannot be achieved without it. On the positive side, we show that in the multiparty case it is possible to construct a single mechanism that works for all (polynomial) utility functions. Our protocol has an expected number of rounds that is constant, and is optimally resilient to coalitions. In addition to the above, we observe that the known protocols for rational secret sharing that do not assume simultaneous channels all suffer from the problem that one of the parties can cause the others to output an incorrect value. (This problem arises when a party gains higher utility by having another output an incorrect value than by learning the secret itself; we argue that such a scenario needs to be considered.) We show that this problem is inherent in the non-simultaneous channels model, unless the actual values of the parties’ utilities from this attack are known, in which case it is possible to prevent this from happening.

Journal ArticleDOI
TL;DR: The selective decommitment problem is studied: simulation-based security is shown to be unprovable for noninteractive or perfectly binding commitment schemes via black-box reductions, yet achievable with interaction and a non-black-box reduction, with applications including zero-knowledge under parallel composition for the graph 3-coloring proof system.
Abstract: The selective decommitment problem can be described as follows: assume that an adversary receives a number of commitments and then may request openings of, say, half of them. Do the unopened commitments remain secure? Although this question arose more than twenty years ago, no satisfactory answer could be presented so far. We answer the question in several ways: If simulation-based security is desired (i.e., if we demand that the adversary’s output can be simulated by a machine that does not see the unopened commitments), then security is not provable for noninteractive or perfectly binding commitment schemes via black-box reductions to standard cryptographic assumptions. However, we show how to achieve security in this sense with interaction and a non-black-box reduction to one-way permutations. If only indistinguishability of the unopened commitments from random commitments is desired, then security is not provable for (interactive or noninteractive) perfectly binding commitment schemes via black-box reductions to standard cryptographic assumptions. However, any statistically hiding scheme does achieve security in this sense. Our results give an almost complete picture of when and how security under selective openings can be achieved. Applications of our results include the following: essentially, an encryption scheme must be non-committing in order to achieve provable security against an adaptive adversary; and, when implemented with our secure commitment scheme, the interactive proof for graph 3-coloring becomes zero-knowledge under parallel composition. On the technical side, we develop a technique for showing very general impossibility results for black-box proofs.

Journal ArticleDOI
TL;DR: A new primitive called identity-based encryption with wildcards, or WIBE for short, is introduced that allows a sender to encrypt messages to a whole range of receivers whose identities match a certain pattern.
Abstract: In this paper, we introduce a new primitive called identity-based encryption with wildcards, or WIBE for short. It allows a sender to encrypt messages to a whole range of receivers whose identities match a certain pattern. This pattern is defined through a sequence of fixed strings and wildcards, where any string can take the place of a wildcard in a matching identity. Our primitive can be applied to provide an intuitive way to send encrypted email to groups of users in a corporate hierarchy. We propose a full security notion and give efficient implementations meeting this notion under different pairing-related assumptions, both in the random oracle model and in the standard model.

Journal ArticleDOI
TL;DR: In this paper, the authors presented an algorithm for solving the discrete logarithm problem in Jacobians of families of plane curves whose degrees in X and Y are low with respect to their genera.
Abstract: We present an algorithm for solving the discrete logarithm problem in Jacobians of families of plane curves whose degrees in X and Y are low with respect to their genera. The finite base fields $\mathbb{F}_{q}$ are arbitrary, but their sizes should not grow too fast compared to the genus. For such families, the group structure and discrete logarithms can be computed in subexponential time of $L_{q^{g}}(1/3,O(1))$. The runtime bounds rely on heuristics similar to the ones used in the number field sieve or the function field sieve.

Journal ArticleDOI
TL;DR: In this article, the authors show how to construct collisions between palindromic bit strings of length 2n+2 for Tillich and Zemor's construction, and also yield collisions for related proposals by Petit et al.
Abstract: At CRYPTO ’94, Tillich and Zemor proposed a family of hash functions, based on computing a suitable matrix product in groups of the form $SL_{2}(\mathbb{F}_{2^{n}})$. We show how to construct collisions between palindromic bit strings of length 2n+2 for Tillich and Zemor’s construction. The approach also yields collisions for related proposals by Petit et al. from ICECS ’08 and CT-RSA ’09. It seems fair to consider our attack as practical: for parameters of interest, the colliding bit strings have a length of a few hundred bits and can be found on a standard PC within seconds.
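The construction under attack is easy to state: each message bit selects one of two fixed 2×2 matrices over $\mathbb{F}_{2^n}$, and the hash is their ordered product, so hashing a concatenation equals multiplying the hashes — exactly the algebraic structure a collision attack can exploit. A sketch using a deliberately tiny field GF(2^7) (the original proposal uses much larger n), with the commonly cited generator matrices:

```python
N = 7
IRRED = 0b10000011  # x^7 + x + 1, irreducible over GF(2) -- toy field GF(2^7)

def gf_mul(a, b):
    """Carry-less multiplication modulo the irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> N) & 1:
            a ^= IRRED
    return r

def mat_mul(m1, m2):
    """2x2 matrix product over GF(2^7) (addition is XOR)."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((gf_mul(a, e) ^ gf_mul(b, g), gf_mul(a, f) ^ gf_mul(b, h)),
            (gf_mul(c, e) ^ gf_mul(d, g), gf_mul(c, f) ^ gf_mul(d, h)))

X = 0b10                   # the field element x
A = ((X, 1), (1, 0))       # matrix absorbed for a 0 bit
B = ((X, X ^ 1), (1, 1))   # matrix absorbed for a 1 bit

def tz_hash(bits):
    """Tillich-Zemor-style hash: ordered product of one matrix per bit."""
    m = ((1, 0), (0, 1))
    for bit in bits:
        m = mat_mul(m, B if bit else A)
    return m
```

Both generators have determinant 1, so every digest lies in $SL_2$, and `tz_hash(u + v) == mat_mul(tz_hash(u), tz_hash(v))` holds by construction; that homomorphic property is what makes the palindrome collisions of the paper possible.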

Journal ArticleDOI
TL;DR: It is demonstrated that the traditional symbolic secrecy criterion for key exchange provides an inadequate security guarantee and a new symbolic criterion is proposed that guarantees composable concrete security within the Universally Composable security framework.
Abstract: In light of the growing complexity of cryptographic protocols and applications, it becomes highly desirable to mechanize—and eventually automate—the security analysis of protocols. A natural step towards automation is to allow for symbolic security analysis. However, the complexity of mechanized symbolic analysis is typically exponential in the space and time complexities of the analyzed system. Thus, full automation via direct analysis of the entire given system has so far been impractical even for systems of modest complexity. We propose an alternative route to fully automated and efficient security analysis of systems with no a priori bound on the complexity. We concentrate on systems that have an unbounded number of components, where each component is of small size. The idea is to perform symbolic analysis that guarantees composable security. This allows applying the automated analysis only to individual components, while still guaranteeing security of the overall system. We exemplify the approach in the case of authentication and key-exchange protocols of a specific format. Specifically, we formulate and mechanically assert symbolic properties that correspond to concrete security properties formulated within the Universally Composable security framework. As an additional contribution, we demonstrate that the traditional symbolic secrecy criterion for key exchange provides an inadequate security guarantee (regardless of the complexity of verification) and propose a new symbolic criterion that guarantees composable concrete security.

Journal Article
TL;DR: This paper analyzes the security problem in virtualized environments and presents the virtualized environment security evaluation criteria, and further research its support technology and authentication methods.
Abstract: With the development of virtualization technology, new security problems have appeared. Researchers have begun to question the security of virtualized environments and to consider how to evaluate their degree of security, but no uniform security evaluation criteria for virtualized environments exist at present. In this paper, we analyze the security problems in virtualized environments, present our evaluation criteria, and further research the supporting technology and authentication methods. First, we introduce the background of the problem. Next, we summarize the related work. After that, we present our virtualized environment security evaluation criteria and introduce our work on supporting technology and authentication methods: for the former, we propose safety-guarantee technology for each security level; for the latter, we give some ideas for the evaluation of each security level.

Journal ArticleDOI
TL;DR: A new and severe cryptanalytic attack on the F-FCSR stream cipher family is presented, and the complexity is low enough to allow the attack to be performed on a single PC within seconds.
Abstract: The F-FCSR stream cipher family was presented a few years ago. Apart from some flaws in the initial propositions, corrected in a later stage, there are no known weaknesses of the core of these algorithms. Two variants, F-FCSR-H and F-FCSR-16, were proposed in the eSTREAM project, and F-FCSR-H v2 is one of the ciphers selected for the eSTREAM portfolio. In this paper we present a new and severe cryptanalytic attack on the F-FCSR stream cipher family. We give the details of the attack when applied to F-FCSR-H v2 and F-FCSR-16. The attack requires a few Mbytes of received sequence, and the complexity is low enough to allow the attack to be performed on a single PC within seconds.

Journal ArticleDOI
TL;DR: In this article, the authors extend these impossibility results for universal composability to the case of no honest majority and show for which models the impossibility results hold and for which they do not, and also consider a setting where the inputs to the protocols running in the network are fixed before any execution.
Abstract: Universal composability and concurrent general composition consider a setting where secure protocols are run concurrently with each other and with arbitrary other possibly insecure protocols. Protocols that meet the definition of universal composability are guaranteed to remain secure even when run in this strongly adversarial setting. In the case of an honest majority, or where there is a trusted setup phase of some kind (like a common reference string or the key-registration public-key infrastructure of Barak et al. in FOCS 2004), it has been shown that any functionality can be securely computed in a universally composable way. On the negative side, it has also been shown that in the plain model where there is no trusted setup at all, there are large classes of functionalities which cannot be securely computed in a universally composable way without an honest majority. In this paper, we extend these impossibility results for universal composability. We study a number of public-key models and show for which models the impossibility results of universal composability hold and for which they do not. We also consider a setting where the inputs to the protocols running in the network are fixed before any execution begins. The majority of our results are negative and we show that the known impossibility results for universal composability in the case of no honest majority extend to many other settings.

Journal ArticleDOI
TL;DR: It is shown that assuming the existence of one-way functions only, there exist adaptive zero-knowledge proofs for all languages in $\mathcal{NP}$, and a black-box separation is derived between adaptively and statically secure oblivious transfer.
Abstract: In the setting of secure computation, a set of parties wish to securely compute some function of their inputs, in the presence of an adversary. The adversary in question may be static (meaning that it controls a predetermined subset of the parties) or adaptive (meaning that it can choose to corrupt parties during the protocol execution and based on what it sees). In this paper, we study two fundamental questions relating to the basic zero-knowledge and oblivious transfer protocol problems: Adaptive zero-knowledge proofs: We ask whether it is possible to construct adaptive zero-knowledge proofs (with unconditional soundness) for all of $\mathcal{NP}$. Beaver (STOC 1996) showed that known zero-knowledge proofs are not adaptively secure, and in addition showed how to construct zero-knowledge arguments (with computational soundness). Adaptively secure oblivious transfer: All known protocols for adaptively secure oblivious transfer rely on seemingly stronger hardness assumptions than for the case of static adversaries. We ask whether this is inherent, and in particular, whether it is possible to construct adaptively secure oblivious transfer from enhanced trapdoor permutations alone. We provide surprising answers to the above questions, showing that achieving adaptive security is sometimes harder than achieving static security, and sometimes not. First, we show that assuming the existence of one-way functions only, there exist adaptive zero-knowledge proofs for all languages in $\mathcal{NP}$. In order to prove this, we overcome the problem that all adaptive zero-knowledge protocols known until now used equivocal commitments (which would enable an all-powerful prover to cheat). Second, we prove a black-box separation between adaptively secure oblivious transfer and enhanced trapdoor permutations. As a corollary, we derive a black-box separation between adaptively and statically secure oblivious transfer.
This is the first black-box separation to relate to adaptive security and thus the first evidence that it is indeed harder to achieve security in the presence of adaptive adversaries than in the presence of static adversaries.

Journal ArticleDOI
TL;DR: In this article, the notion of resource-fair protocols is introduced, which is similar to the security definition in the universal composability (UC) framework, but works in a model that allows any party to request additional resources from the environment to deal with dishonest parties that may prematurely abort.
Abstract: We introduce the notion of resource-fair protocols. Informally, this property states that if one party learns the output of the protocol, then so can all other parties, as long as they expend roughly the same amount of resources. As opposed to previously proposed definitions related to fairness, our definition follows the standard simulation paradigm and enjoys strong composability properties. In particular, our definition is similar to the security definition in the universal composability (UC) framework, but works in a model that allows any party to request additional resources from the environment to deal with dishonest parties that may prematurely abort. In this model we specify the ideally fair functionality as allowing parties to “invest resources” in return for outputs, but in such an event offering all other parties a fair deal. (The formulation of fair dealings is kept independent of any particular functionality, by defining it using a “wrapper.”) Thus, by relaxing the notion of fairness, we avoid a well-known impossibility result for fair multi-party computation with corrupted majority; in particular, our definition admits constructions that tolerate an arbitrary number of corruptions. We also show that, as in the UC framework, protocols in our framework may be arbitrarily and concurrently composed. Turning to constructions, we define a “commit-prove-fair-open” functionality and design an efficient resource-fair protocol that securely realizes it, using a new variant of a cryptographic primitive known as “time-lines.” With (the fairly wrapped version of) this functionality we show that some of the existing secure multi-party computation protocols can be easily transformed into resource-fair protocols while preserving their security.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of secure multiparty computation in the completely unauthenticated setting, where all messages sent by the parties may be tampered with and modified by the adversary without the uncorrupted parties being able to detect this fact.
Abstract: Research on secure multiparty computation has mainly concentrated on the case where the parties can authenticate each other and the communication between them. This work addresses the question of what security can be guaranteed when authentication is not available. We consider a completely unauthenticated setting, where all messages sent by the parties may be tampered with and modified by the adversary without the uncorrupted parties being able to detect this fact. In this model, it is not possible to achieve the same level of security as in the authenticated-channel setting. Nevertheless, we show that meaningful security guarantees can be provided: Essentially, all the adversary can do is to partition the network into disjoint sets, where in each set the computation is secure in and of itself, and also independent of the computation in the other sets. In this setting we provide, for the first time, nontrivial security guarantees in a model with no setup assumptions whatsoever. We also obtain similar results while guaranteeing universal composability, in some variants of the common reference string model. Finally, our protocols can be used to provide conceptually simple and unified solutions to a number of problems that were studied separately in the past, including password-based authenticated key exchange and nonmalleable commitments. As an application of our results, we study the question of constructing secure protocols in partially authenticated networks, where some of the links are authenticated, and some are not (as is the case in most networks today).

Journal ArticleDOI
TL;DR: A general framework based on the interpolation of group homomorphisms leads to the design of a generic undeniable signature scheme called MOVA with batch verification and featuring nontransferability, which makes it possible to consider signatures of about 50 bits.
Abstract: This paper is devoted to the design and analysis of short undeniable signatures based on a random oracle. Exploiting their online property, we can achieve signatures with a fully scalable size depending on the security level. To this end, we develop a general framework based on the interpolation of group homomorphisms, leading to the design of a generic undeniable signature scheme called MOVA with batch verification and featuring nontransferability. By selecting group homomorphisms with a small group range, we obtain very short signatures. We also minimize the number of moves of the verification protocols by proposing some variants with only two moves in the random oracle model. We provide a formal security analysis of MOVA and assess the security in terms of the signature length. Under reasonable assumptions and with some carefully selected parameters, the MOVA scheme makes it possible to consider signatures of about 50 bits.

Journal ArticleDOI
TL;DR: The EnRUPT hash functions were proposed by O’Neil, Nohl and Henzen as candidates for the SHA-3 competition, organised by NIST; it is demonstrated that the attack is practical by giving an actual collision example for EnRUPT-256.
Abstract: The EnRUPT hash functions were proposed by O’Neil, Nohl and Henzen as candidates for the SHA-3 competition, organised by NIST. The proposal contains seven concrete hash functions, each with a different digest length. We present a practical collision attack on each of these seven EnRUPT variants. The time complexity of our attack varies from 2^36 to 2^40 round computations, depending on the EnRUPT variant, and the memory requirements are negligible. We demonstrate that our attack is practical by giving an actual collision example for EnRUPT-256.

Journal Article
TL;DR: In this paper, a configurable approach is proposed to balance system reliability and performance to tolerate soft errors via partial software protection, which is implemented by modification of the compiler and evaluated with different configurations.
Abstract: Compared with hardware-based methods, software-based methods, which incur no additional hardware cost, are regarded as efficient methods for tolerating soft errors. Software-based methods implemented through software protection imply a performance sacrifice. This paper proposes a new configurable approach whose purpose is to balance system reliability and performance by tolerating soft errors via partial software protection. The unprotected software regions, motivated by soft-error masking at the software level, correspond to statically dead code, code with a low probability of being executed, and some partially dead code. For protected code, we copy all data and perform every operation twice to ensure that the data stored in memory are correct. Additionally, we ensure that every branch instruction jumps to the right address by checking both the condition and the destination address. Finally, our approach is implemented by modifying the compiler. System reliability and performance are evaluated with different configurations. Experimental results demonstrate our aim of balancing system reliability and performance.
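The protection scheme described in the abstract (duplicate all data, perform every operation twice, cross-check branch conditions and targets) can be illustrated with a minimal sketch. This is not the paper's compiler pass; the function names `protected_add` and `protected_branch` are hypothetical, and the sketch only shows the detection idea at the source level, under the assumption that a soft error corrupts at most one of the two copies.

```python
def protected_add(a, b):
    """Perform the addition on two copies of the operands and
    cross-check the results before committing them."""
    # Duplicate the operands (the approach copies all protected data).
    a1, a2 = a, a
    b1, b2 = b, b
    r1 = a1 + b1
    r2 = a2 + b2
    # If a transient fault flipped a bit in one copy, the results differ.
    if r1 != r2:
        raise RuntimeError("soft error detected: duplicated results disagree")
    return r1

def protected_branch(cond, cond_copy, target, target_copy):
    """Guard a branch by checking both the duplicated condition and
    the duplicated destination address, as the abstract describes."""
    if cond != cond_copy or target != target_copy:
        raise RuntimeError("soft error detected in control flow")
    return target if cond else None
```

In the paper this duplication is inserted by the compiler only into the regions selected for protection; statically dead and rarely executed code is left unprotected to limit the performance cost.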

Journal ArticleDOI
Abstract: In this paper we present invalid-curve attacks that apply to the Montgomery ladder elliptic curve scalar multiplication (ECSM) algorithm. An elliptic curve over the binary field is defined using two parameters, a and b. We show that with a different “value” for curve parameter a, there exists a cryptographically weaker group in nine of the ten NIST-recommended elliptic curves over $\mathbb{F}_{2^{m}}$. Thereafter, we present two attacks that are based on the observation that parameter a is not utilized for the Montgomery ladder algorithms proposed by Lopez and Dahab (CHES 1999: Cryptographic Hardware and Embedded Systems, LNCS, vol. 1717, pp. 316–327, Springer, Berlin, 1999). We also present the probability of success of such attacks for general and NIST-recommended elliptic curves. In addition we give some countermeasures to resist these attacks.
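The attacks rest on the observation that the López–Dahab x-only Montgomery ladder formulas involve only the curve parameter b and the base point's x-coordinate, never a. A minimal sketch over a toy field GF(2^5) makes this visible: `gf_mul` and `ladder_step` are illustrative helpers (not from the paper), and the reduction polynomial x^5 + x^2 + 1 is chosen only for brevity.

```python
# Toy binary field GF(2^5); elements are ints, reduction polynomial
# x^5 + x^2 + 1 encoded as 0b100101.
M = 5
POLY = 0b100101

def gf_mul(x, y):
    """Carry-less multiplication in GF(2^M) with on-the-fly reduction."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        x <<= 1
        if x & (1 << M):
            x ^= POLY  # clears bit M and adds the low-degree terms
        y >>= 1
    return r

def ladder_step(X1, Z1, X2, Z2, x, b):
    """One López–Dahab ladder step: differential addition of (X1:Z1)
    and (X2:Z2) whose difference has x-coordinate x, plus a doubling
    of (X1:Z1).  Curve parameter a never appears below -- only b and
    x are used, which is the observation the invalid-curve attacks
    exploit."""
    t = gf_mul(X1, Z2)
    u = gf_mul(X2, Z1)
    Z3 = gf_mul(t ^ u, t ^ u)           # (X1*Z2 + X2*Z1)^2
    X3 = gf_mul(x, Z3) ^ gf_mul(t, u)   # x*Z3 + (X1*Z2)*(X2*Z1)
    x1sq = gf_mul(X1, X1)
    z1sq = gf_mul(Z1, Z1)
    X4 = gf_mul(x1sq, x1sq) ^ gf_mul(b, gf_mul(z1sq, z1sq))  # X1^4 + b*Z1^4
    Z4 = gf_mul(x1sq, z1sq)             # (X1*Z1)^2
    return X3, Z3, X4, Z4
```

Because a is absent from these formulas, feeding the ladder a point on a different curve (same b, different a) goes undetected, and if that curve's group is cryptographically weak, the scalar can leak.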

Journal ArticleDOI
TL;DR: The Modified Zero Cross Correlation Code (MZCC) has minimum length and can be constructed quite simply for any number of users and for any code weight, and it has better spectrum-slicing properties and noise performance in terms of Bit Error Rate.
Abstract: The paper presents a method for the development of a new class of zero-cross-correlation optical code for Optical Code Division Multiple Access (OCDMA) systems using Spectral Amplitude Coding. The proposed code is called the Modified Zero Cross Correlation Code (MZCC). The code has minimum length and can be constructed quite simply for any number of users and for any code weight. The code has better spectrum-slicing properties and noise performance in terms of Bit Error Rate. The Modified Zero Cross Correlation Code is demonstrated in simulation using OptiSys 6.0, where its noise performance is shown to be better than that of the existing Zero Cross Correlation Code.
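The defining property of a zero-cross-correlation SAC-OCDMA code is that any two users' codewords share no '1' position, so their cross correlation is exactly zero and multiple-access interference vanishes. A minimal sketch, assuming a trivial disjoint-chip construction (not the MZCC construction itself; `zcc_codewords` is a hypothetical helper), demonstrates the property:

```python
def zcc_codewords(num_users, weight):
    """Build trivial zero-cross-correlation codewords by assigning
    each user `weight` disjoint spectral chips.  The resulting code
    length is num_users * weight."""
    length = num_users * weight
    codes = []
    for u in range(num_users):
        word = [0] * length
        for k in range(weight):
            word[u * weight + k] = 1  # chips owned exclusively by user u
        codes.append(word)
    return codes

def cross_correlation(a, b):
    """In-phase cross correlation of two binary codewords."""
    return sum(x * y for x, y in zip(a, b))

codes = zcc_codewords(num_users=4, weight=2)
# Every distinct pair of users correlates to exactly zero:
assert all(cross_correlation(codes[i], codes[j]) == 0
           for i in range(4) for j in range(4) if i != j)
```

The MZCC construction in the paper achieves the same zero-correlation property while targeting minimum code length and simple construction for any number of users and any code weight.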