
Showing papers in "IACR Cryptology ePrint Archive in 2005"


Posted Content
TL;DR: In this article, the authors describe side-channel attacks based on inter-process leakage through the state of the CPU's memory cache, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups.
Abstract: We describe several software side-channel attacks based on inter-process leakage through the state of the CPU’s memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts, and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several such attacks on AES, and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux’s dm-crypt encrypted partitions (in the latter case, the full key can be recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we describe several countermeasures for mitigating such attacks.
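The core leak the paper exploits — data-dependent table lookups revealing key material — can be illustrated with a toy simulation. This is not the paper's OS-level attack: all names are hypothetical, no cache or timing is involved, and real attacks observe cache lines shared by several table entries rather than exact indices. The sketch only shows why learning which entry a first-round lookup touched immediately yields the key byte.

```python
# Toy illustration of a cache side-channel on a data-dependent table lookup.
# Assumption: the attacker can observe which table index was touched (in a
# real attack, only the cache line, and only via timing).
import random

TABLE_SIZE = 256
secret_key_byte = random.randrange(TABLE_SIZE)

def leaky_lookup(plaintext_byte, touched):
    """First-round AES-style index: p XOR k. Recording the index stands in
    for observing which cache line the lookup warmed."""
    index = plaintext_byte ^ secret_key_byte
    touched.add(index)
    return index

def recover_key_byte():
    # Trigger one encryption of a known byte p, observe the touched index,
    # and compute p XOR index = k.
    p = random.randrange(TABLE_SIZE)
    touched = set()
    leaky_lookup(p, touched)
    (index,) = touched
    return p ^ index

recovered = recover_key_byte()
assert recovered == secret_key_byte
```

The countermeasures the paper discusses (e.g., avoiding data-dependent lookups) attack exactly this correspondence between key and observed index.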

1,109 citations


Posted Content
TL;DR: In this paper, a Hierarchical Identity Based Encryption (HIBE) scheme is presented, where the ciphertext consists of just three group elements and decryption requires only two bilinear map computations, regardless of the hierarchy depth.
Abstract: We present a Hierarchical Identity Based Encryption (HIBE) system where the ciphertext consists of just three group elements and decryption requires only two bilinear map computations, regardless of the hierarchy depth. Encryption is as efficient as in other HIBE systems. We prove that the scheme is selective-ID secure in the standard model and fully secure in the random oracle model. Our system has a number of applications: it gives very efficient forward secure public key and identity based cryptosystems (with short ciphertexts), it converts the NNL broadcast encryption system into an efficient public key broadcast system, and it provides an efficient mechanism for encrypting to the future. The system also supports limited delegation where users can be given restricted private keys that only allow delegation to bounded depth. The HIBE system can be modified to support sublinear size private keys at the cost of some ciphertext expansion.

1,076 citations


Posted Content
TL;DR: In this paper, the authors address three important issues of a PEKS scheme, "refreshing keywords", "removing the secure channel", and "processing multiple keywords", which have not been considered in Boneh et al.'s paper.
Abstract: The public key encryption with keyword search (PEKS) scheme recently proposed by Boneh, Di Crescenzo, Ostrovsky, and Persiano enables one to search encrypted keywords without compromising the security of the original data. In this paper, we address three important issues of a PEKS scheme, "refreshing keywords", "removing secure channel", and "processing multiple keywords", which have not been considered in Boneh et al.'s paper. We argue that care must be taken when keywords are used frequently in the PEKS scheme as this situation might contradict the security of PEKS. We then point out the inefficiency of the original PEKS scheme due to the use of the secure channel. We resolve this problem by constructing an efficient PEKS scheme that removes the secure channel. Finally, we propose a PEKS scheme that encrypts multiple keywords efficiently.

438 citations


Posted Content
TL;DR: This paper examines the implications of heightened security needs for pairing-based cryptosystems and describes three different reasons why high-security users might have concerns about the long-term viability of these systems.
Abstract: In recent years cryptographic protocols based on the Weil and Tate pairings on elliptic curves have attracted much attention. A notable success in this area was the elegant solution by Boneh and Franklin [8] of the problem of efficient identity-based encryption. At the same time, the security standards for public key cryptosystems are expected to increase, so that in the future they will be capable of providing security equivalent to 128-, 192-, or 256-bit AES keys. In this paper we examine the implications of heightened security needs for pairing-based cryptosystems. We first describe three different reasons why high-security users might have concerns about the long-term viability of these systems. However, in our view none of the risks inherent in pairing-based systems are sufficiently serious to warrant pulling them from the shelves. We next discuss two families of elliptic curves E for use in pairing-based cryptosystems. The first has the property that the pairing takes values in the prime field $\mathbb{F}_p$ over which the curve is defined; the second family consists of supersingular curves with embedding degree k = 2. Finally, we examine the efficiency of the Weil pairing as opposed to the Tate pairing and compare a range of choices of embedding degree k, including k = 1 and k = 24.

264 citations


Posted Content
TL;DR: In this article, the authors show that a low-tech attacker can build a pick-pocket system that can remotely use a victim contactless smartcard, without the victim's knowledge.
Abstract: A contactless smartcard is a smartcard that can communicate with other devices without any physical connection, using Radio-Frequency Identifier (RFID) technology. Contactless smartcards are becoming increasingly popular, with applications such as credit cards, national ID, passports, and physical access control. The security of such applications is clearly critical. A key feature of RFID-based systems is their very short range: typical systems are designed to operate at a range of ≈ 10cm. In this study we show that contactless smartcard technology is vulnerable to relay attacks: an attacker can trick the reader into communicating with a victim smartcard that is very far away. A "low-tech" attacker can build a pick-pocket system that can remotely use a victim contactless smartcard, without the victim's knowledge. The attack system consists of two devices, which we call the "ghost" and the "leech". We discuss basic designs for the attacker's equipment, and explore their possible operating ranges. We show that the ghost can be up to 50m away from the card reader — 3 orders of magnitude higher than the nominal range. We also show that the leech can be up to 50cm away from the victim card. The main characteristics of the attack are: orthogonality to any security protocol, unlimited distance between the attacker and the victim, and low cost of the attack system.
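The "orthogonality to any security protocol" claim can be illustrated with a toy message relay: because the ghost and leech forward challenge and response verbatim, any challenge-response protocol that does not bound round-trip time authenticates successfully even though the attacker never learns the key. The HMAC-based card protocol below is hypothetical and models no radio behavior at all.

```python
# Toy ghost-and-leech relay: the attacker is a pure message forwarder.
# Hypothetical card protocol (HMAC challenge-response); purely illustrative.
import hmac, hashlib, os

CARD_KEY = os.urandom(16)            # shared by the card and the reader backend

def card_respond(challenge):         # the victim card, next to the "leech"
    return hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()

def reader_verify(challenge, response):   # the reader, talking to the "ghost"
    expected = hmac.new(CARD_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

def relay(challenge):
    # ghost -> leech -> card and back; no key material needed by the attacker
    return card_respond(challenge)

challenge = os.urandom(8)
assert reader_verify(challenge, relay(challenge))   # relayed auth succeeds
```

Distance-bounding protocols defeat this only by measuring the extra round-trip delay the relay introduces, which this sketch deliberately ignores.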

260 citations


Posted Content
TL;DR: This paper surveys the methods and techniques employed in side-channel attacks, the destructive effects of such attacks, and the countermeasures against them, evaluates their feasibility and applicability, and explores the necessity and feasibility of adopting this kind of physical security testing and evaluation in the development of the FIPS 140-3 standard.
Abstract: Side-channel attacks are easy-to-implement yet powerful attacks against cryptographic implementations, and their targets range from primitives, protocols, modules, and devices to even systems. These attacks pose a serious threat to the security of cryptographic modules. In consequence, cryptographic implementations have to be evaluated for their resistance against such attacks, and the incorporation of different countermeasures has to be considered. This paper surveys the methods and techniques employed in these attacks, the destructive effects of such attacks, the countermeasures against such attacks, and an evaluation of their feasibility and applicability. Finally, the necessity and feasibility of adopting this kind of physical security testing and evaluation in the development of the FIPS 140-3 standard are explored. This paper is not only a survey paper but also a position paper.

223 citations


Posted Content
TL;DR: In this article, the authors present a mechanized prover for secrecy properties of security protocols, which does not rely on the Dolev-Yao model, but on the computational model.
Abstract: We present a new mechanized prover for secrecy properties of security protocols. In contrast to most previous provers, our tool does not rely on the Dolev-Yao model, but on the computational model. It produces proofs presented as sequences of games; these games are formalized in a probabilistic polynomial-time process calculus. Our tool provides a generic method for specifying security properties of the cryptographic primitives, which can handle shared-key and public-key encryption, signatures, message authentication codes, and hash functions. Our tool produces proofs valid for a number of sessions polynomial in the security parameter, in the presence of an active adversary. We have implemented our tool and tested it on a number of examples of protocols from the literature.

175 citations



Posted Content
TL;DR: In this article, a group signature scheme that is provably secure in a universally composable framework, within the standard model with trusted parameters, is presented, which is based on the simulatability of real protocol executions in an ideal setting.
Abstract: We provide a construction for a group signature scheme that is provably secure in a universally composable framework, within the standard model with trusted parameters. Our proposed scheme is fairly simple and its efficiency falls within small factors of the most efficient group signature schemes with provable security in any model (including random oracles). Security of our constructions requires new cryptographic assumptions, namely the Strong LRSW, EDH, and Strong SXDH assumptions. Evidence for each assumption we introduce is provided by proving hardness in the generic group model. Our second contribution is the first definition of security for group signatures based on the simulatability of real protocol executions in an ideal setting that captures the basic properties of unforgeability, anonymity, unlinkability, and exculpability for group signature schemes.

150 citations


Posted Content
TL;DR: The notions of existential and universal untraceability are defined, access to the communication channels is modeled by a set of oracles, and it is shown that this formalisation fits the problem being considered and allows a formal analysis of the protocols in terms of traceability.
Abstract: Radio Frequency Identification (RFID) systems aim to identify objects in open environments with neither physical nor visual contact. They consist of transponders inserted into objects, of readers, and usually of a database which contains information about the objects. The key point is that authorised readers must be able to identify tags without an adversary being able to trace them. Traceability is often underestimated by advocates of the technology and sometimes exaggerated by its detractors. Whatever the true picture, this problem is a reality when it blocks the deployment of this technology and some companies, faced with being boycotted, have already abandoned its use. Using cryptographic primitives to thwart the traceability issues is an approach which has been explored for several years. However, the research carried out up to now has not provided satisfactory results as no universal formalism has been defined. In this paper, we propose an adversarial model suitable for RFID environments. We define the notions of existential and universal untraceability and we model the access to the communication channels from a set of oracles. We show that our formalisation fits the problem being considered and allows a formal analysis of the protocols in terms of traceability. We use our model on several well-known RFID protocols and we show that most of them have weaknesses and are vulnerable to traceability.
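The untraceability notion the paper formalizes can be pictured as a distinguishing game: the adversary observes two tags, then gets one response from a tag chosen at random, and must say which tag produced it. The toy below (hypothetical, far simpler than the paper's oracle model, and ignoring reader authentication entirely) shows that a tag answering with a fixed identifier is trivially traceable while a tag answering with fresh randomness is not.

```python
# Toy traceability game: can the adversary link a challenge response to
# one of two previously observed tags? Illustrative only.
import os, random

def fixed_tag(identifier):
    return lambda: identifier            # always the same answer: linkable

def randomized_tag(identifier):
    return lambda: os.urandom(8)         # fresh randomness: unlinkable here

def traceable(make_tag, trials=200):
    wins = 0
    for _ in range(trials):
        t0, t1 = make_tag(b"tag-0"), make_tag(b"tag-1")
        obs0, obs1 = t0(), t1()          # adversary first observes both tags
        b = random.randrange(2)
        challenge = (t0, t1)[b]()        # one response from the chosen tag
        if challenge == obs0:
            guess = 0
        elif challenge == obs1:
            guess = 1
        else:
            guess = random.randrange(2)
        wins += (guess == b)
    return wins / trials > 0.75          # clearly better than guessing

assert traceable(fixed_tag)
assert not traceable(randomized_tag)
```

Real protocols sit between these extremes, which is why the paper's oracle-based model is needed to analyze them precisely.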

143 citations


Posted Content
TL;DR: The concept of key encapsulation is extended to the primitives of identity-based and certificateless encryption, and generic constructions of ID-KEMs and CL-KEMs that are provably secure in the random oracle model are given.
Abstract: We extend the concept of key encapsulation mechanisms to the primitives of ID-based and certificateless encryption. We show that the natural combination of ID-KEMs or CL-KEMs with data encapsulation mechanisms results in encryption schemes which are secure in a strong sense. In addition, we give generic constructions of ID-KEMs and CL-KEMs, as well as specific instantiations, which are provably secure.
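The KEM/DEM composition the abstract builds on can be sketched generically. The toy below uses a plain Diffie-Hellman-style KEM (not an ID-based or certificateless one) over a toy group, and a hash-derived one-time pad as the DEM; all names and parameters are illustrative, and a CCA-secure DEM would additionally authenticate the ciphertext.

```python
# KEM/DEM hybrid encryption sketch with a toy DH-style KEM.
# The group (p = 2^255 - 19, g = 2) is illustrative, not a vetted DH group.
import hashlib, secrets

P = 2**255 - 19
G = 2

def kem_keygen():
    a = secrets.randbelow(P - 2) + 1
    return a, pow(G, a, P)                  # (private, public)

def kem_encap(pk):
    r = secrets.randbelow(P - 2) + 1
    shared = pow(pk, r, P)
    key = hashlib.sha256(shared.to_bytes(32, "big")).digest()
    return pow(G, r, P), key                # (encapsulation, session key)

def kem_decap(sk, encap):
    shared = pow(encap, sk, P)
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()

def dem(key, data):
    # one-time XOR stream derived from the session key via SHAKE-256
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

sk, pk = kem_keygen()
encap, k = kem_encap(pk)
ct = dem(k, b"hybrid encryption demo")
assert dem(kem_decap(sk, encap), ct) == b"hybrid encryption demo"
```

In the ID-based setting of the paper, `kem_encap` would take an identity string instead of a conventional public key, but the composition pattern is the same.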

Posted Content
TL;DR: A variant of Waters' identity-based encryption scheme with a much smaller system parameters size (only a few kilobytes) is presented and it is shown that this variant is semantically secure against passive adversaries in the standard model.
Abstract: In this paper, we present a variant of Waters' Identity-Based Encryption scheme with a much smaller public-key size (only a few kilobytes). We show that this variant is semantically secure against passive adversaries in the standard model. In essence, the new scheme divides Waters' public key size by a factor ℓ at the cost of (negligibly) reducing security by ℓ bits. Therefore, our construction settles an open question asked by Waters and constitutes the first fully secure practical Identity-Based Encryption scheme.

Posted Content
TL;DR: A set of lowest degree annihilators for symmetric functions is identified and an efficient algorithm for computing the algebraic immunity of a symmetric function is proposed.
Abstract: In this paper, we analyze the algebraic immunity of symmetric Boolean functions. The algebraic immunity is a property which measures the resistance against the algebraic attacks on symmetric ciphers. We identify a set of lowest degree annihilators for symmetric functions and propose an efficient algorithm for computing the algebraic immunity of a symmetric function. The existence of several symmetric functions with maximum algebraic immunity is proven. In this way, we have found a new class of functions which have good implementation properties and maximum algebraic immunity.
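The quantity the paper studies can be computed directly for tiny functions: the algebraic immunity of f is the least degree d such that f or f+1 has a nonzero annihilator g (f·g = 0) of degree at most d. The brute-force sketch below (exponential in n, so only for toy sizes; not the paper's efficient algorithm for symmetric functions) searches for such a g by Gaussian elimination over GF(2).

```python
# Brute-force algebraic immunity of a small Boolean function.
# An annihilator g of f must vanish on the support of f; we look for a
# nonzero g of degree <= d via the rank of a GF(2) linear system.
from itertools import combinations, product

def monomials_upto(n, d):
    return [m for k in range(d + 1) for m in combinations(range(n), k)]

def rank_gf2(rows):
    basis, rank = {}, 0
    for row in rows:                      # rows are monomial-indicator bitmasks
        while row:
            msb = row.bit_length() - 1
            if msb in basis:
                row ^= basis[msb]
            else:
                basis[msb] = row
                rank += 1
                break
    return rank

def algebraic_immunity(f, n):
    points = list(product((0, 1), repeat=n))
    supp = [x for x in points if f(x)]
    cosupp = [x for x in points if not f(x)]    # support of f + 1
    for d in range(n + 1):
        mons = monomials_upto(n, d)
        for s in (supp, cosupp):
            # one GF(2) equation per point of s: g must vanish there
            rows = [sum(all(x[i] for i in m) << j for j, m in enumerate(mons))
                    for x in s]
            if rank_gf2(rows) < len(mons):      # nontrivial kernel: g exists
                return d

majority3 = lambda x: sum(x) >= 2
assert algebraic_immunity(majority3, 3) == 2    # maximum AI = ceil(3/2)
assert algebraic_immunity(lambda x: all(x), 3) == 1
```

The majority function reaching the maximum value ⌈n/2⌉ is consistent with the paper's result that symmetric functions with maximum algebraic immunity exist.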

Posted Content
TL;DR: In this article, the authors introduce the necessary mathematical tools to deal with multivariate quadratic systems, present an overview of important schemes known so far, and outline how they fit into a taxonomy of only four basic schemes and some generic modifiers.
Abstract: Multivariate quadratic systems can be used to construct both secure and efficient public key schemes. In this article, we introduce the necessary mathematical tools to deal with multivariate quadratic systems, present an overview of important schemes known so far, and outline how they fit into a taxonomy of only four basic schemes and some generic modifiers. Moreover, we suggest new constructions not previously considered. In this context, we propose some open problems and new research directions in the field of multivariate quadratic schemes.

Posted Content
TL;DR: It is shown that for a reduced version of SHA-1, with 53 rounds instead of 80, it is possible to find collisions in fewer than 2^80 operations.
Abstract: We report on the experiments we performed in order to assess the security of SHA-1 against the attack by Chabaud and Joux [5]. We present some ideas for optimizations of the attack and some properties of the message expansion routine. Finally, we show that for a reduced version of SHA-1, with 53 rounds instead of 80, it is possible to find collisions in fewer than 2^80 operations.
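The message expansion routine the abstract mentions is short enough to state exactly (per FIPS 180-2): sixteen 32-bit input words are expanded to eighty via `W[t] = ROTL1(W[t-3] ^ W[t-8] ^ W[t-14] ^ W[t-16])`. Its linearity over GF(2) — the expansion of an XOR difference equals the XOR of the expansions — is what Chabaud-Joux-style differential collision searches exploit, and the sketch below checks that property directly.

```python
# SHA-1 message expansion (FIPS 180-2) and a check of its GF(2)-linearity.
import random

def sha1_expand(words16):
    w = list(words16)                        # sixteen 32-bit words
    for t in range(16, 80):
        x = w[t - 3] ^ w[t - 8] ^ w[t - 14] ^ w[t - 16]
        w.append(((x << 1) | (x >> 31)) & 0xFFFFFFFF)   # rotate left by 1
    return w

m1 = [random.getrandbits(32) for _ in range(16)]
m2 = [random.getrandbits(32) for _ in range(16)]
diff = [a ^ b for a, b in zip(m1, m2)]

# expansion commutes with XOR: differences propagate deterministically
assert [a ^ b for a, b in zip(sha1_expand(m1), sha1_expand(m2))] == sha1_expand(diff)
assert len(sha1_expand(m1)) == 80
```

The rotation by one bit is the only difference from SHA-0's expansion and is precisely what complicates extending the Chabaud-Joux attack to full SHA-1.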

Posted Content
TL;DR: This paper advises creating an automated tool to help with the mundane parts of writing and checking common arguments in the authors' proofs and explains why it is thought that such a tool would be useful, by considering two very different proofs of security from the literature and showing the places where having this tool would have been useful.
Abstract: This paper tries to sell a potential approach to making the process of writing and verifying our cryptographic proofs less prone to errors. Specifically, I advocate creating an automated tool to help us with the mundane parts of writing and checking common arguments in our proofs. On a high level, this tool should help us verify that two pieces of code induce the same probability distribution on some of their common variables. In this paper I explain why I think that such a tool would be useful, by considering two very different proofs of security from the literature and showing the places in those proofs where having this tool would have been useful. I also explain how I believe that this tool can be built. Perhaps surprisingly, it seems to me that the functionality of such a tool can be implemented using only "static code analysis" (i.e., things that compilers do). I plan to keep updated versions of this document along with other update reports on the web at http://www.research.ibm.com/people/s/shaih/CAV/

Posted Content
TL;DR: In this article, fast formulae for scalar multiplication in the Kummer surface associated to a genus 2 curve, using a Montgomery ladder, were derived for the arithmetic in Jacobians of genus 2 curves.
Abstract: In 1986, D. V. Chudnovsky and G. V. Chudnovsky proposed to use formulae coming from Theta functions for the arithmetic in Jacobians of genus 2 curves. We follow this idea and derive fast formulae for the scalar multiplication in the Kummer surface associated to a genus 2 curve, using a Montgomery ladder. Our formulae can be used to design very efficient genus 2 cryptosystems that should be faster than elliptic curve cryptosystems in some hardware configurations.
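The Montgomery ladder the abstract uses for Kummer-surface scalar multiplication follows a fixed pattern that is easiest to see on the simplest possible group: at every bit of the scalar the same two operations run, which is attractive both for the efficiency analysis in the paper and for side-channel uniformity. The sketch below shows the ladder on modular exponentiation rather than on a Kummer surface.

```python
# Montgomery ladder for scalar multiplication, instantiated as modular
# exponentiation (the Kummer-surface version follows the same skeleton).
def ladder(g, k, p):
    r0, r1 = 1, g                    # invariant: r1 = r0 * g (mod p)
    for bit in bin(k)[2:]:           # scan scalar bits, most significant first
        if bit == "1":
            r0, r1 = (r0 * r1) % p, (r1 * r1) % p
        else:
            r0, r1 = (r0 * r0) % p, (r0 * r1) % p
    return r0                        # g^k mod p

p = 1000003
assert ladder(7, 123456, p) == pow(7, 123456, p)
```

On the Kummer surface the two per-bit operations become the combined pseudo-addition and doubling derived from the Theta-function formulae, but the bit-by-bit structure above is unchanged.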

Posted Content
TL;DR: A chosen-ciphertext secure, searchable public key encryption scheme which allows for dynamic re-encryption of ciphertexts, and provides for node-targeted searches based on keywords or other identifiers.
Abstract: We consider the problem of using untrusted components to build correlation-resistant survivable storage systems that protect file replica locations, while allowing nodes to continuously re-distribute files throughout the network. The principal contribution is a chosen-ciphertext secure, searchable public key encryption scheme which allows for dynamic re-encryption of ciphertexts, and provides for node-targeted searches based on keywords or other identifiers. The scheme is provably secure under the SXDH assumption which holds in certain subgroups of elliptic curves, and a closely related assumption that we introduce.

Posted Content
TL;DR: A novel framework for the generic construction of hybrid encryption schemes which produces more efficient schemes than the ones known before and allows immediate conversion from a class of threshold public-key encryption to a threshold hybrid one without considerable overhead, which may not be possible in the previous approach.
Abstract: This paper presents a novel framework for the generic construction of hybrid encryption schemes which produces more efficient schemes than the ones known before. A previous framework introduced by Shoup combines a key encapsulation mechanism (KEM) and a data encryption mechanism (DEM). While it is sufficient to require both components to be secure against chosen ciphertext attacks (CCA-secure), Kurosawa and Desmedt showed a particular example of a KEM that is not CCA-secure but can be securely combined with a specific type of CCA-secure DEM to obtain a more efficient, CCA-secure hybrid encryption scheme. There are also many other efficient hybrid encryption schemes in the literature that do not fit Shoup's framework. These facts serve as motivation to seek another framework. The framework we propose yields more efficient hybrid schemes, and in addition provides an insightful explanation of existing schemes that do not fit into the previous framework. Moreover, it allows immediate conversion from a class of threshold public-key encryption to a hybrid one without considerable overhead, which may not be possible in the previous approach.

Posted Content
TL;DR: In 1998, Blaze, Bleumer and Strauss (BBS) proposed an application called atomic proxy re-encryption, in which a semi-trusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext.
Abstract: In 1998, Blaze, Bleumer, and Strauss (BBS) proposed an application called atomic proxy re-encryption, in which a semi-trusted proxy converts a ciphertext for Alice into a ciphertext for Bob without seeing the underlying plaintext. We predict that fast and secure re-encryption will become increasingly popular as a method for managing encrypted file systems. Although efficiently computable, the widespread adoption of BBS re-encryption has been hindered by considerable security risks. Following recent work of Dodis and Ivan, we present new re-encryption schemes that realize a stronger notion of security, and we demonstrate the usefulness of proxy re-encryption as a method of adding access control to a secure file system. Performance measurements of our experimental file system demonstrate that proxy re-encryption can work effectively in practice.
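The original BBS construction the abstract starts from is simple enough to sketch: ElGamal-style encryption (m·g^k, pk^k) with a re-encryption key b/a that the proxy applies as an exponent. The toy parameters below are far too small to be secure, and the closing check hints at the kind of risk the paper addresses — the re-encryption key together with one party's secret reveals the other's.

```python
# Sketch of BBS atomic proxy re-encryption over a tiny prime-order subgroup.
# Toy parameters only; requires Python 3.8+ for pow(x, -1, m).
from secrets import randbelow

P = 467                    # safe prime: P = 2*Q + 1
Q = 233
G = 4                      # 2^2 mod P generates the order-Q subgroup

def keygen():
    x = randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def enc(pk, m):            # m is a group element in [1, P-1]
    k = randbelow(Q - 1) + 1
    return (m * pow(G, k, P)) % P, pow(pk, k, P)     # (m*g^k, g^{ak})

def rekey(sk_from, sk_to):
    # rk = b/a mod Q; note: anyone holding rk and b can compute a,
    # one of the "considerable security risks" of the BBS scheme
    return (sk_to * pow(sk_from, -1, Q)) % Q

def reenc(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, P)             # (g^{ak})^{b/a} = g^{bk}

def dec(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, Q), P)       # recover g^k in the subgroup
    return (c1 * pow(gk, -1, P)) % P

a, pka = keygen()
b, pkb = keygen()
ct = enc(pka, 42)
assert dec(a, ct) == 42
assert dec(b, reenc(rekey(a, b), ct)) == 42
```

The stronger schemes of the paper are built on bilinear maps precisely to make the re-encryption key unidirectional and useless for recovering either secret.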

Posted Content
TL;DR: In this note, the authors propose a linear-time active attack against the HB authentication protocol, which had been claimed to be secure against both passive and active attacks.
Abstract: Much research has focused on providing RFID tags with lightweight cryptographic functionality. The HB authentication protocol was recently proposed [1] and claimed to be secure against both passive and active attacks. In this note we propose a linear-time active attack against HB.

Posted Content
TL;DR: In this paper, a ring signature scheme based on bilinear pairings is proposed, which is proven to be secure against adaptive chosen message attack without using the random oracle model.
Abstract: Since the formalization of ring signatures by Rivest, Shamir and Tauman in 2001, many variations have appeared in the literature. Almost all of them rely on the random oracle model for their security proofs. In this paper, we propose a ring signature scheme based on bilinear pairings, which is proven to be secure against adaptive chosen message attack without using the random oracle model. It is one of the first in the literature to achieve this security level.

Posted Content
TL;DR: This work presents the first efficient group signature scheme with a simple Joining protocol that is based on a “single message and signature response” interaction between the prospective user and the Group Manager (GM).
Abstract: A group signature is a basic privacy mechanism. The group joining operation is a critical component of such a scheme. To date all secure group signature schemes either employed a trusted-party-aided join operation or a complex joining protocol requiring many interactions between the prospective user and the Group Manager (GM). In addition, no efficient scheme employed a join protocol proven secure against adversaries that have the capability to dynamically initiate multiple concurrent join sessions during an attack. This work presents the first efficient group signature scheme with a simple joining protocol that is based on a "single message and signature response" interaction between the prospective user and the GM. This single-message and signature-response registration paradigm, where no other actions are taken, is the most efficient possible join interaction and was originally alluded to in 1997 by Camenisch and Stadler, but its efficient instantiation remained open till now. The fact that joining has two short communication flows and does not require secure channels is highly advantageous: for example, it allows users to easily join by a proxy (i.e., a security officer of a company can send a file with all registration requests in his company and get back their certificates for distribution back to members of the company). It further allows an easy and non-interactive global system re-keying operation as well as straightforward treatment of multi-group signatures. We present a strong security model for group signatures (the first explicitly taking into account concurrent join attacks) and an efficient scheme with a single-message and signature-response join protocol. The present manuscript is a full version containing proofs, minor corrections as well as a more flexible and efficient protocol construction compared to the proceedings version [28].
∗ Computer Science and Engineering Dept., University of Connecticut, Storrs, CT, USA, aggelos@cse.uconn.edu. Research partly supported by NSF CAREER Award CNS-0447808.
† RSA Laboratories, Bedford, MA, USA and Computer Science Dept., Columbia University, NY, USA, moti@cs.columbia.edu.

Posted Content
TL;DR: In this paper, a unified framework with a new adversary ability (the Coin query) is used to prove all the commonly required security attributes of key agreement protocols with key confirmation, and the Coin query is also used to define a model for heuristically evaluating the security of a large category of authenticated protocols without key confirmation.
Abstract: Since Bellare and Rogaway’s work in 1994, the indistinguishability-based security models of authenticated key agreement protocols in simple cases have been evolving for more than ten years. In this paper, we review and organize the models under a unified framework with some new extensions. By providing a new ability (the Coin query) to adversaries and redefining two key security notions, the framework fully exploits an adversary’s capacity and can be used to prove all the commonly required security attributes of key agreement protocols with key confirmation. At the same time, the Coin query is also used to define a model which can be used to heuristically evaluate the security of a large category of authenticated protocols without key confirmation. We use the models to analyze a few identity-based authenticated key agreement protocols with pairings.

Posted Content
TL;DR: This paper introduces a new cryptographic object called a key regression scheme, and proposes three constructions that are provably secure under standard cryptographic assumptions and empirically shows that key regression can significantly reduce the bandwidth requirements of a content publisher under realistic workloads using lazy revocation.
Abstract: The Plutus file system introduced the notion of key rotation as a means to derive a sequence of temporally-related keys from the most recent key. In this paper we show that, despite natural intuition to the contrary, key rotation schemes cannot generically be used to key other cryptographic objects; in fact, keying an encryption scheme with the output of a key rotation scheme can yield a composite system that is insecure. To address these shortcomings, we introduce a new cryptographic object called a key regression scheme, and we propose three constructions that are provably secure under standard cryptographic assumptions. We implement key regression in a secure file system and empirically show that key regression can significantly reduce the bandwidth requirements of a content publisher under realistic workloads using lazy revocation. Our experiments also serve as the first empirical evaluation of either a key rotation or key regression scheme.
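The key rotation idea from Plutus that the paper analyzes can be sketched as a hash chain: the owner generates the chain once and publishes keys in reverse-chain order, so a member holding the key for period i can derive every earlier key (supporting lazy revocation) but not later ones. This is an illustrative sketch, not the paper's key regression constructions, and the paper's central point is that such derived keys are mutually related, so feeding them directly into other primitives can be unsafe.

```python
# Hash-chain key rotation sketch (Plutus-style interface). Illustrative only;
# MAX_ROTATIONS caps the number of rotations fixed at setup.
import hashlib, os

MAX_ROTATIONS = 100

def owner_setup():
    seed = os.urandom(32)
    chain = [seed]
    for _ in range(MAX_ROTATIONS - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()                      # chain[i] is the key for period i
    return chain

def derive_earlier(k_i, steps):
    # any member can walk the chain backward in time, but not forward
    for _ in range(steps):
        k_i = hashlib.sha256(k_i).digest()
    return k_i

chain = owner_setup()
# a member given the period-7 key can compute the period-3 key on its own
assert derive_earlier(chain[7], 4) == chain[3]
```

Key regression, as the paper defines it, keeps this interface while ensuring the derived keys are safe to use as keying material for other cryptographic objects.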

Posted Content
TL;DR: In this article, the authors revisited the formulation of certificateless public key encryption and constructed a more efficient scheme and then extended it to an authenticated encryption, and presented an instantiation.
Abstract: In [3] Al-Riyami and Paterson introduced the notion of “Certificateless Public Key Cryptography” and presented an instantiation. In this paper, we revisit the formulation of certificateless public key encryption and construct a more efficient scheme and then extend it to an authenticated encryption.

Posted Content
TL;DR: In this short memo, the results achieved during a two and a half month long research on the discovery of collisions for a set of hash functions (MD4, MD5, HAVAL-128, RIPEMD) are summarized.
Abstract: In this short memo, we summarize the results achieved during a two and a half month long research. Further details will be provided in a forthcoming paper. One of the major cryptographic "break-throughs" of recent years was the discovery of collisions for a set of hash functions (MD4, MD5, HAVAL-128, RIPEMD) by the Chinese cryptographers in August 2004 [1]. Their authors (Wang et al.) kept the algorithm secret, however. During October 2004, the Australian team (Hawkes et al.) tried to reconstruct the methodology in their great work [3]. The most important "Chinese trick" was not discovered, although they succeeded in describing a differential scheme of conditions that hold for the published collisions. Nevertheless, fulfilling the conditions of this scheme has still been more computationally difficult in comparison to what the results of [1] showed. During our research, we also analyzed the available data using differential cryptanalysis. We have found a way to generate the first message block of the collision about 1000–2000 times faster than the Chinese team, which corresponds to reaching the first colliding block in 2 minutes using a common notebook (PC platform). The same computation phase took the Chinese team about an hour using an IBM P690 supercomputer. On the other hand, the Chinese team was 2–80 times faster when computing the second message block of their collisions. Therefore, our method and the Chinese method probably differ in several details in both parts of the computation. Overall, our method is about 3–6 times faster. More specifically, finding the first (complete) collision took 8 hours using a notebook PC (Intel Pentium, 1.6 GHz). Note that our method works for any initialization vector. It can be abused to forge signatures of software packages and digital certificates, as some papers show ([4], [5], [6]). We have shown that it is possible to find MD5 collisions using an ordinary home PC.
This should be a warning against the continued use of MD5. In the appendix, we show new examples of collisions for standard and chosen initialization vectors. (This research was done during Christmas vacation and during January and February 2005. At that time the author was working for the company LEC, s.r.o., Prague, Czech Republic, which supported the project by material and financial means.)

Posted Content
TL;DR: In this paper, the authors summarize the results achieved during a brief three months long research on collisions of the MD5 hash function and propose several multi-message modification methods, which are more effective than methods described in [1, 8].
Abstract: In this paper, we summarize the results achieved during our brief three month long research on collisions of the MD5 hash function. Being inspired by the results announced by Wang et al. [1], we independently developed methods for finding collisions which work for any initialization value and which are quicker than the methods presented in [1, 8]. This enables us to find an MD5 collision on a standard notebook PC in roughly 8 hours [7]. Independently of [1, 8], we discovered and propose several multi-message modification methods, which are more effective than the methods described in [1, 8]. We show their principle.

Posted Content
TL;DR: In this article, the authors give two applications of Nisan–Wigderson-type pseudorandom generators in cryptography: assuming the existence of an appropriate NW-type generator, they construct a one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation, and a noninteractive bit commitment scheme based on any one-way function.
Abstract: We give two applications of Nisan–Wigderson-type ("non-cryptographic") pseudorandom generators in cryptography. Specifically, assuming the existence of an appropriate NW-type generator, we construct: 1. A one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation. This proof system does not assume a shared random string or any setup assumption, so it is actually an "NP proof system." 2. A noninteractive bit commitment scheme based on any one-way function. The specific NW-type generator we need is a hitting set generator fooling nondeterministic circuits. It is known how to construct such a generator if E = DTIME(2^{O(n)}) has a function of nondeterministic circuit complexity 2^{Ω(n)} (Miltersen and Vinodchandran, FOCS '99). Our witness-indistinguishable proofs are obtained by using the NW-type generator to derandomize the ZAPs of Dwork and Naor (FOCS '00). To our knowledge, this is the first construction of an NP proof system achieving a secrecy property. Our commitment scheme is obtained by derandomizing the interactive commitment scheme of Naor (J. Cryptology, 1991). Previous constructions of noninteractive commitment schemes were only known under incomparable assumptions.

Posted Content
TL;DR: In this article, the authors propose an identity-based signature scheme which allows aggregation when the signatures to be aggregated all come from the same signer, and they prove that the scheme is unforgeable, in the random oracle model, assuming that the Computational Diffie-Hellman problem is hard to solve.
Abstract: Aggregate signatures are a useful primitive which allows many signatures on different messages, computed by different users, to be aggregated into a single constant-length signature. Specific proposals of aggregate signature schemes exist only for PKI-based scenarios. For identity-based scenarios, where public keys of the users are directly derived from their identities, the signature schemes proposed up to now do not seem to allow constant-length aggregation. We provide an intermediate solution to this problem by designing a new identity-based signature scheme which allows aggregation when the signatures to be aggregated all come from the same signer. The new scheme is deterministic and enjoys some better properties than the previous proposals; for example, it allows detection of a possible corruption of the master entity. We formally prove that the scheme is unforgeable, in the random oracle model, assuming that the Computational Diffie-Hellman problem is hard to solve.