
Showing papers on "Encryption" published in 2002


BookDOI
01 Jan 2002
TL;DR: This volume is the authoritative guide to the Rijndael algorithm and AES; professionals, researchers, and students active or interested in data encryption will find it a valuable source of information and reference.
Abstract: From the Publisher: In October 2000, the US National Institute of Standards and Technology selected the block cipher Rijndael as the Advanced Encryption Standard (AES). AES is expected to gradually replace the present Data Encryption Standard (DES) as the most widely applied data encryption technology. This book by the designers of the block cipher presents Rijndael from scratch. The underlying mathematics and the wide trail strategy as the basic design idea are explained in detail and the basics of differential and linear cryptanalysis are reworked. Subsequent chapters review all known attacks against the Rijndael structure and deal with implementation and optimization issues. Finally, other ciphers related to Rijndael are presented. This volume is THE authoritative guide to the Rijndael algorithm and AES. Professionals, researchers, and students active or interested in data encryption will find it a valuable source of information and reference.

2,140 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the noise characteristics of the power signals and developed an approach to model the signal-to-noise ratio (SNR), showing how this SNR can be significantly improved using a multiple-bit attack.
Abstract: This paper examines how monitoring power consumption signals might breach smart-card security. Both simple power analysis and differential power analysis attacks are investigated. The theory behind these attacks is reviewed. Then, we concentrate on showing how power analysis theory can be applied to attack an actual smart card. We examine the noise characteristics of the power signals and develop an approach to model the signal-to-noise ratio (SNR). We show how this SNR can be significantly improved using a multiple-bit attack. Experimental results against a smart-card implementation of the Data Encryption Standard demonstrate the effectiveness of our multiple-bit attack. Potential countermeasures to these attacks are also discussed.

1,554 citations
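
For readers who want to experiment with the difference-of-means idea behind differential power analysis, the following minimal Python sketch recovers a key byte from simulated traces using the multiple-bit variant discussed above. The Hamming-weight-plus-Gaussian-noise leakage model, the stand-in S-box, and all parameters are illustrative assumptions, not the smart-card measurement setup studied in the paper.

# Toy differential power analysis (difference of means) on simulated traces.
# Leakage model (Hamming weight + Gaussian noise), the stand-in S-box and all
# parameters are illustrative assumptions, not the paper's measured setup.
import random

rng = random.Random(1)
SBOX = list(range(256))
rng.shuffle(SBOX)                      # stand-in nonlinear S-box (not DES)

def hw(v):
    return bin(v).count("1")

def leak(pt_byte, key_byte, noise=2.0):
    # one simulated power sample: Hamming weight of the S-box output plus noise
    return hw(SBOX[pt_byte ^ key_byte]) + rng.gauss(0, noise)

def dpa(plaintexts, traces):
    # multiple-bit attack: sum, over all 8 target bits, the absolute difference
    # of mean traces between the two partitions induced by each key guess
    best, best_score = None, -1.0
    for guess in range(256):
        predicted = [SBOX[p ^ guess] for p in plaintexts]
        score = 0.0
        for bit in range(8):
            ones  = [t for v, t in zip(predicted, traces) if (v >> bit) & 1]
            zeros = [t for v, t in zip(predicted, traces) if not (v >> bit) & 1]
            score += abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))
        if score > best_score:
            best, best_score = guess, score
    return best

secret = 0x3C
pts = [rng.randrange(256) for _ in range(3000)]
trs = [leak(p, secret) for p in pts]
print("recovered key byte:", hex(dpa(pts, trs)))   # should recover 0x3c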


Book ChapterDOI
01 Dec 2002
TL;DR: In this article, the authors presented hierarchical identity-based encryption schemes and signature schemes that have total collusion resistance on an arbitrary number of levels and that have chosen ciphertext security in the random oracle model assuming the difficulty of the Bilinear Diffie-Hellman problem.
Abstract: We present hierarchical identity-based encryption schemes and signature schemes that have total collusion resistance on an arbitrary number of levels and that have chosen ciphertext security in the random oracle model assuming the difficulty of the Bilinear Diffie-Hellman problem.

1,334 citations


Patent
25 Mar 2002
TL;DR: In this paper, the authors propose a system and method for communicating information between a first party and a second party, comprising the steps of receiving, by an intermediary, an identifier of desired information and accounting information for a transaction involving the information from the first party, transmitting an identifier of the first party to the second party, and negotiating, by the intermediary, a comprehension function for obscuring at least a portion of the information communicated between the first party and the second party.
Abstract: A system and method for communicating information between a first party and a second party, comprising the steps of receiving, by an intermediary, an identifier of desired information and accounting information for a transaction involving the information from the first party, transmitting an identifier of the first party to the second party, and negotiating, by the intermediary, a comprehension function for obscuring at least a portion of the information communicated between the first party and the second party. The data transmission may be made secure with respect to the intermediary by providing an asymmetric key or direct key exchange for encryption of the communication between the first and second party. The data transmission may be made secure with respect to the second party by maintaining the information in encrypted format at the second party, with the decryption key held only by the intermediary, and transmitting a secure composite of the decryption key and a new encryption key to the second party for transcoding of the data record, and providing the new decryption key to the first party, so that the information transmitted to the first party can be comprehended by it.

1,193 citations


Posted Content
TL;DR: In this article, an identity-based signature scheme using gap Diffie-Hellman (GDH) groups was proposed and proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model.
Abstract: In this paper we propose an identity(ID)-based signature scheme using gap Diffie-Hellman (GDH) groups. Our scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the random oracle model. Using GDH groups obtained from bilinear pairings, as a special case of our scheme, we obtain an ID-based signature scheme that shares the same system parameters with the ID-based encryption scheme (BF-IBE) by Boneh and Franklin [BF01], and is as efficient as the BF-IBE. Combining our signature scheme with the BF-IBE yields a complete solution of an ID-based public key system. It can be an alternative for certificate-based public key infrastructures, especially when efficient key management and moderate security are required.

858 citations


Book ChapterDOI
02 May 2002
TL;DR: In this article, the authors present several new and fairly practical public-key encryption schemes and prove them secure against adaptive chosen ciphertext attack, and introduce a general framework that allows one to construct secure encryption schemes from language membership problems.
Abstract: We present several new and fairly practical public-key encryption schemes and prove them secure against adaptive chosen ciphertext attack. One scheme is based on Paillier's Decision Composite Residuosity assumption, while another is based on the classical Quadratic Residuosity assumption. The analysis is in the standard cryptographic model, i.e., the security of our schemes does not rely on the Random Oracle model. Moreover, we introduce a general framework that allows one to construct secure encryption schemes in a generic fashion from language membership problems that satisfy certain technical requirements. Our new schemes fit into this framework, as does the Cramer-Shoup scheme based on the Decision Diffie-Hellman assumption.

770 citations


Book ChapterDOI
02 May 2002
TL;DR: This paper presents the first public-key traitor tracing schemes with constant transmission rate, building on the notion of "copyrighted function" introduced by Naccache, Shamir, and Stern; the first scheme achieves the same expansion efficiency as regular ElGamal encryption.
Abstract: An important open problem in the area of Traitor Tracing is designing a scheme with constant expansion of the size of keys (users' keys and the encryption key) and of the size of ciphertexts with respect to the size of the plaintext. This problem is known from the introduction of Traitor Tracing by Chor, Fiat and Naor. We refer to such schemes as traitor tracing with constant transmission rate. Here we present a general methodology and two protocol constructions that result in the first two public-key traitor tracing schemes with constant transmission rate in settings where plaintexts can be calibrated to be sufficiently large. Our starting point is the notion of "copyrighted function" which was presented by Naccache, Shamir and Stern. We first solve the open problem of discrete-log-based and public-key-based "copyrighted function." Then, we observe the simple yet crucial relation between (public-key) copyrighted encryption and (public-key) traitor tracing, which we exploit by introducing a generic design paradigm for designing constant transmission rate traitor tracing schemes based on copyrighted encryption functions. Our first scheme achieves the same expansion efficiency as regular ElGamal encryption. The second scheme introduces only a slightly larger (constant) overhead, however, it additionally achieves efficient black-box traitor tracing (against any pirate construction).

667 citations


Journal Article
TL;DR: This work presents a general methodology and two protocol constructions that result in the first two public-key traitor tracing schemes with constant transmission rate in settings where plaintexts can be calibrated to be sufficiently large.
Abstract: An important open problem in the area of Traitor Tracing is designing a scheme with constant expansion of the size of keys (users' keys and the encryption key) and of the size of ciphertexts with respect to the size of the plaintext. This problem is known from the introduction of Traitor Tracing by Chor, Fiat and Naor. We refer to such schemes as traitor tracing with constant transmission rate. Here we present a general methodology and two protocol constructions that result in the first two public-key traitor tracing schemes with constant transmission rate in settings where plaintexts can be calibrated to be sufficiently large. Our starting point is the notion of copyrighted function which was presented by Naccache, Shamir and Stern. We first solve the open problem of discrete-log-based and public-key-based copyrighted function. Then, we observe the simple yet crucial relation between (public-key) copyrighted encryption and (public-key) traitor tracing, which we exploit by introducing a generic design paradigm for designing constant transmission rate traitor tracing schemes based on copyrighted encryption functions. Our first scheme achieves the same expansion efficiency as regular ElGamal encryption. The second scheme introduces only a slightly larger (constant) overhead, however, it additionally achieves efficient black-box traitor tracing (against any pirate construction).

649 citations


Journal Article
TL;DR: A general framework is introduced that allows one to construct secure encryption schemes in a generic fashion from language membership problems that satisfy certain technical requirements; the new schemes fit into this framework, as does the Cramer-Shoup scheme based on the Decision Diffie-Hellman assumption.
Abstract: We present several new and fairly practical public-key encryption schemes and prove them secure against adaptive chosen ciphertext attack. One scheme is based on Paillier's Decision Composite Residuosity assumption, while another is based on the classical Quadratic Residuosity assumption. The analysis is in the standard cryptographic model, i.e., the security of our schemes does not rely on the Random Oracle model. Moreover, we introduce a general framework that allows one to construct secure encryption schemes in a generic fashion from language membership problems that satisfy certain technical requirements. Our new schemes fit into this framework, as does the Cramer-Shoup scheme based on the Decision Diffie-Hellman assumption.

636 citations


Book ChapterDOI
02 May 2002
TL;DR: A two-level HIBE system with total collusion resistance at the upper (domain) level and partial collusion resistance at the lower (user) level, which has chosen-ciphertext security in the random-oracle model, is presented.
Abstract: We introduce the concept of hierarchical identity-based encryption (HIBE) schemes, give precise definitions of their security and mention some applications. A two-level HIBE (2-HIBE) scheme consists of a root private key generator (PKG), domain PKGs and users, all of which are associated with primitive IDs (PIDs) that are arbitrary strings. A user's public key consists of their PID and their domain's PID (in whole called an address). In a regular IBE (which corresponds to a 1-HIBE) scheme, there is only one PKG that distributes private keys to each user (whose public keys are their PID). In a 2-HIBE, users retrieve their private key from their domain PKG. Domain PKGs can compute the private key of any user in their domain, provided they have previously requested their domain secret key from the root PKG (who possesses a master secret). We can go beyond two levels by adding subdomains, subsubdomains, and so on. We present a two-level system with total collusion resistance at the upper (domain) level and partial collusion resistance at the lower (user) level, which has chosen-ciphertext security in the random-oracle model.

633 citations
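
The pairing-based construction itself is not reproduced here, but the delegation structure (the root PKG issues a domain secret once, after which the domain PKG derives any of its users' keys locally) can be sketched with symmetric key derivation as a stand-in. The HMAC-based derivation and the identifiers below are illustrative assumptions only; they do not provide identity-based public-key encryption.

# Structural sketch of two-level hierarchical key delegation. The real 2-HIBE
# scheme is pairing-based; HMAC key derivation is used here only to show the
# root PKG -> domain PKG -> user key flow. Identifiers are hypothetical.
import hmac, hashlib, os

def derive(parent_key: bytes, identity: str) -> bytes:
    return hmac.new(parent_key, identity.encode(), hashlib.sha256).digest()

master_secret = os.urandom(32)                          # held only by the root PKG
domain_secret = derive(master_secret, "example.com")    # issued once to the domain PKG

# The domain PKG can now compute any of its users' keys without the root PKG.
user_key = derive(domain_secret, "alice@example.com")

# Deriving along the full path from the root yields the same key.
assert user_key == derive(derive(master_secret, "example.com"), "alice@example.com")
print(user_key.hex())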


Book ChapterDOI
02 May 2002
TL;DR: It is shown that gCCA2-security suffices for all known uses of CCA2-secure encryption, while no longer suffering from the definitional shortcomings of the latter.
Abstract: We formally study the notion of a joint signature and encryption in the public-key setting. We refer to this primitive as signcryption, adapting the terminology of [35]. We present two definitions for the security of signcryption depending on whether the adversary is an outsider or a legal user of the system. We then examine generic sequential composition methods of building signcryption from a signature and encryption scheme. Contrary to what recent results in the symmetric setting [5, 22] might lead one to expect, we show that the classical "encrypt-then-sign" (EtS) and "sign-then-encrypt" (StE) methods are both secure composition methods in the public-key setting. We also present a new composition method which we call "commit-then-encrypt-and-sign" (CtE&S). Unlike the generic sequential composition methods, CtE&S applies the expensive signature and encryption operations in parallel, which could imply a gain in efficiency over the StE and EtS schemes. We also show that the new CtE&S method elegantly combines with the recent "hash-sign-switch" technique of [30], leading to efficient on-line/off-line signcryption. Finally and of independent interest, we discuss the definitional inadequacy of the standard notion of chosen ciphertext (CCA2) security. We suggest a natural and very slight relaxation of CCA2-security, which we call generalized CCA2-security (gCCA2). We show that gCCA2-security suffices for all known uses of CCA2-secure encryption, while no longer suffering from the definitional shortcomings of the latter.

Book ChapterDOI
18 Feb 2002
TL;DR: In this article, the authors introduce basic definitions of security for homomorphic signature systems, motivate the inquiry with example applications, and describe several schemes that are homomorphic with respect to useful binary operations.
Abstract: Privacy homomorphisms, encryption schemes that are also homomorphisms relative to some binary operation, have been studied for some time, but one may also consider the analogous problem of homomorphic signature schemes. In this paper we introduce basic definitions of security for homomorphic signature systems, motivate the inquiry with example applications, and describe several schemes that are homomorphic with respect to useful binary operations. In particular, we describe a scheme that allows a signature holder to construct the signature on an arbitrarily redacted submessage of the originally signed message. We present another scheme for signing sets that is homomorphic with respect to both union and taking subsets. Finally, we show that any signature scheme that is homomorphic with respect to integer addition must be insecure.

Proceedings ArticleDOI
12 May 2002
TL;DR: This work describes an algorithm whereby a community of users can compute a public "aggregate" of their data that does not expose individual users' data, and uses homomorphic encryption to allow sums of encrypted vectors to be computed and decrypted without exposing individual data.
Abstract: Server-based collaborative filtering systems have been very successful in e-commerce and in direct recommendation applications. In future, they have many potential applications in ubiquitous computing settings. But today's schemes have problems such as loss of privacy, favoring retail monopolies, and with hampering diffusion of innovations. We propose an alternative model in which users control all of their log data. We describe an algorithm whereby a community of users can compute a public "aggregate" of their data that does not expose individual users' data. The aggregate allows personalized recommendations to be computed by members of the community, or by outsiders. The numerical algorithm is fast, robust and accurate. Our method reduces the collaborative filtering task to an iterative calculation of the aggregate requiring only addition of vectors of user data. Then we use homomorphic encryption to allow sums of encrypted vectors to be computed and decrypted without exposing individual data. We give verification schemes for all parties in the computation. Our system can be implemented with untrusted servers, or with additional infrastructure, as a fully peer-to-peer (P2P) system.
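
The aggregation step relies on an additively homomorphic cryptosystem so that encrypted user vectors can be summed before a single decryption. A minimal Paillier sketch shows that property; the toy primes below are far too small to be secure, and the paper's actual protocol, key sizes, and verification machinery are not represented.

# Minimal Paillier sketch (toy primes, NOT secure) showing the additive
# homomorphism used to sum encrypted rating vectors: Enc(a)*Enc(b) mod n^2
# decrypts to a+b. Parameters are illustrative only. Requires Python 3.9+.
import math, secrets

p, q = 999983, 1000003            # well-known small primes, far too small in practice
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid because the generator g is n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Two users' toy rating vectors, summed entirely on ciphertexts.
alice, bob = [5, 0, 3], [1, 4, 0]
encrypted_sum = [encrypt(a) * encrypt(b) % n2 for a, b in zip(alice, bob)]
print([decrypt(c) for c in encrypted_sum])    # -> [6, 4, 3]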

Book
15 Jan 2002
TL;DR: This book provides a flexible organization, as each chapter is modular and can be covered in any order, and a full chapter on error correcting codes introduces the basic elements of coding theory.
Abstract: From the Publisher: This book assumes a minimal background in programming and a level of math sophistication equivalent to a course in linear algebra. It provides a flexible organization, as each chapter is modular and can be covered in any order. Using Mathematica, Maple, and MATLAB, computer examples included in an Appendix explain how to do computation and demonstrate important concepts. A full chapter on error correcting codes introduces the basic elements of coding theory. Other topics covered: Classical cryptosystems, basic number theory, the data encryption standard, AES: Rijndael, the RSA algorithm, discrete logarithms, digital signatures, e-commerce and digital cash, secret sharing schemes, games, zero knowledge techniques, key establishment protocols, information theory, elliptic curves, error correcting codes, quantum cryptography. For professionals in cryptography and network security.

Proceedings ArticleDOI
18 Nov 2002
TL;DR: This paper formalizes and investigates the authenticated-encryption with associated-data (AEAD) problem, and studies two simple ways to turn an authenticated-Encryption scheme that does not support associated- data into one that does: nonce stealing and ciphertext translation.
Abstract: When a message is transformed into a ciphertext in a way designed to protect both its privacy and authenticity, there may be additional information, such as a packet header, that travels alongside the ciphertext (at least conceptually) and must get authenticated with it. We formalize and investigate this authenticated-encryption with associated-data (AEAD) problem. Though the problem has long been addressed in cryptographic practice, it was never provided a definition or even a name. We do this, and go on to look at efficient solutions for AEAD, both in general and for the authenticated-encryption scheme OCB. For the general setting we study two simple ways to turn an authenticated-encryption scheme that does not support associated-data into one that does: nonce stealing and ciphertext translation. For the case of OCB we construct an AEAD-scheme by combining OCB and the pseudorandom function PMAC, using the same key for both algorithms. We prove that, despite "interaction" between the two schemes when using a common key, the combination is sound. We also consider achieving AEAD by the generic composition of a nonce-based, privacy-only encryption scheme and a pseudorandom function.
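
The OCB/PMAC construction analyzed in the paper is not reproduced here; the sketch below only illustrates the generic-composition route the authors also consider, combining a nonce-based, privacy-only encryption scheme with a pseudorandom function applied to the nonce, the associated data, and the ciphertext. The HMAC-SHA256 counter-mode keystream standing in for the encryption scheme and the length-prefixed encoding are assumptions of this sketch.

# Sketch of generic AEAD composition: a nonce-based privacy-only cipher plus a
# PRF over (nonce, associated data, ciphertext). The keystream "cipher" is an
# HMAC-SHA256 counter-mode stand-in; it is not OCB/PMAC.
import hmac, hashlib, os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, associated_data, plaintext):
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # unambiguous encoding of (nonce, AD, ciphertext) before applying the PRF
    msg = nonce + len(associated_data).to_bytes(8, "big") + associated_data + ct
    return ct, hmac.new(mac_key, msg, hashlib.sha256).digest()

def unseal(enc_key, mac_key, nonce, associated_data, ct, tag):
    msg = nonce + len(associated_data).to_bytes(8, "big") + associated_data + ct
    if not hmac.compare_digest(tag, hmac.new(mac_key, msg, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk, nonce = os.urandom(32), os.urandom(32), os.urandom(12)
header = b"packet-header-v1"          # associated data: authenticated, not encrypted
c, t = seal(ek, mk, nonce, header, b"secret payload")
print(unseal(ek, mk, nonce, header, c, t))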

Book ChapterDOI
02 May 2002
TL;DR: The notion of key-insulated public-key encryption was introduced in this article, where the secret key(s) stored on the insecure device are refreshed at discrete time periods via interaction with a physically-secure but computationally-limited device which stores a "master key".
Abstract: Cryptographic computations (decryption, signature generation, etc.) are often performed on a relatively insecure device (e.g., a mobile device or an Internet-connected host) which cannot be trusted to maintain secrecy of the private key. We propose and investigate the notion of key-insulated security whose goal is to minimize the damage caused by secret-key exposures. In our model, the secret key(s) stored on the insecure device are refreshed at discrete time periods via interaction with a physically-secure - but computationally-limited - device which stores a "master key". All cryptographic computations are still done on the insecure device, and the public key remains unchanged. In a (t, N)-key-insulated scheme, an adversary who compromises the insecure device and obtains secret keys for up to t periods of his choice is unable to violate the security of the cryptosystem for any of the remaining N-t periods. Furthermore, the scheme remains secure (for all time periods) against an adversary who compromises only the physically-secure device. We focus primarily on key-insulated public-key encryption. We construct a (t, N)-key-insulated encryption scheme based on any (standard) public-key encryption scheme, and give a more efficient construction based on the DDH assumption. The latter construction is then extended to achieve chosen-ciphertext security.

Book ChapterDOI
18 Aug 2002
TL;DR: The "Layered Subset Difference" (LSD) technique is described, which reduces the number of keys given to each user by almost a square root factor without affecting the other parameters, and makes it possible to address any subset defined by a nested combination of inclusion and exclusion conditions by sending less than 4t short messages, and the scheme remains secure even if all the other users form an adversarial coalition.
Abstract: Broadcast Encryption schemes enable a center to broadcast encrypted programs so that only designated subsets of users can decrypt each program. The stateless variant of this problem provides each user with a fixed set of keys which is never updated. The best scheme published so far for this problem is the "subset difference" (SD) technique of Naor, Naor and Lotspiech, in which each one of the n users is initially given O(log^2 n) symmetric encryption keys. This allows the broadcaster to define at a later stage any subset of up to r users as "revoked", and to make the program accessible only to their complement by sending O(r) short messages before the encrypted program, and asking each user to perform an O(log n) computation. In this paper we describe the "Layered Subset Difference" (LSD) technique, which achieves the same goal with O(log^{1+ε} n) keys, O(r) messages, and O(log n) computation. This reduces the number of keys given to each user by almost a square root factor without affecting the other parameters. In addition, we show how to use the same LSD keys in order to address any subset defined by a nested combination of inclusion and exclusion conditions with a number of messages which is proportional to the complexity of the description rather than to the size of the subset. The LSD scheme is truly practical, and makes it possible to broadcast an unlimited number of programs to 256,000,000 possible customers by giving each new customer a smart card with one kilobyte of tamper-resistant memory. It is then possible to address any subset defined by t nested inclusion and exclusion conditions by sending less than 4t short messages, and the scheme remains secure even if all the other users form an adversarial coalition.

Proceedings ArticleDOI
12 May 2002
TL;DR: This work investigates the identifiability of World Wide Web traffic based on this unconcealed information in a large sample of Web pages, and shows that it suffices to identify a significant fraction of them quite reliably.
Abstract: Encryption is often proposed as a tool for protecting the privacy of World Wide Web browsing. However, encryption, particularly as typically implemented in, or in concert with, popular Web browsers, does not hide all information about the encrypted plaintext. Specifically, HTTP object count and sizes are often revealed (or at least incompletely concealed). We investigate the identifiability of World Wide Web traffic based on this unconcealed information in a large sample of Web pages, and show that it suffices to identify a significant fraction of them quite reliably. We also suggest some possible countermeasures against the exposure of this kind of information and experimentally evaluate their effectiveness.
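
A toy illustration of the identification idea: even under encryption, the number and approximate sizes of HTTP objects remain visible, so an observed page load can be matched against a library of size profiles. The profiles and the observed trace below are invented data.

# Toy traffic-analysis identification: a page fingerprint is the multiset of
# (approximate) object sizes it fetches, which encryption leaves visible.
from collections import Counter

profiles = {
    "news-frontpage": Counter([14200, 3300, 3300, 870, 52000]),
    "webmail-login":  Counter([6100, 870, 2400]),
    "search-home":    Counter([9800, 1500, 430]),
}

def similarity(a: Counter, b: Counter) -> float:
    # multiset Jaccard similarity over object sizes
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return inter / union if union else 0.0

observed = Counter([14200, 3300, 3300, 870, 52000])     # sizes seen on the wire
best = max(profiles, key=lambda name: similarity(observed, profiles[name]))
print("best match:", best)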

Proceedings Article
05 Aug 2002
TL;DR: Randomized Partial Checking (RPC) as mentioned in this paper is a new technique for making mix nets robust, which is particularly well suited for voting systems, as it ensures voter privacy and provides assurance of correct operation.
Abstract: We propose a new technique for making mix nets robust, called randomized partial checking (RPC). The basic idea is that rather than providing a proof of completely correct operation, each server provides strong evidence of its correct operation by revealing a pseudo-randomly selected subset of its input/output relations. Randomized partial checking is exceptionally efficient compared to previous proposals for providing robustness; the evidence provided at each layer is shorter than the output of that layer, and producing the evidence is easier than doing the mixing. It works with mix nets based on any encryption scheme (i.e., on public-key alone, and on hybrid schemes using public-key/symmetric-key combinations). It also works both with Chaumian mix nets where the messages are successively encrypted with each server's key, and with mix nets based on a single public key with randomized re-encryption at each layer. Randomized partial checking is particularly well suited for voting systems, as it ensures voter privacy and provides assurance of correct operation. Voter privacy is ensured (either probabilistically or cryptographically) with appropriate design and parameter selection. Unlike previous work, our work provides voter privacy as a global property of the mix net rather than as a property ensured by a single honest server. RPC-based mix nets also provide very high assurance of a correct election result, since a corrupt server is very likely to be caught if it attempts to tamper with even a couple of ballots.
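
A toy sketch of the checking step may help: a mix server commits to its inputs and outputs, and a pseudo-random half of its input/output links is opened and verified. In a real re-encryption mix the opened links are checked against a re-encryption relation, and reveals are coordinated across paired servers to preserve privacy; here items pass through unchanged, so simple equality stands in for that check, and all data are made up.

# Toy randomized partial checking: open a random half of one server's links.
import secrets

def mix(inputs, tamper_positions=()):
    perm = list(range(len(inputs)))
    secrets.SystemRandom().shuffle(perm)
    outputs = [None] * len(inputs)
    for i, j in enumerate(perm):
        outputs[j] = b"FORGED" if i in tamper_positions else inputs[i]
    return outputs, perm                      # perm stays secret until challenged

def audit(inputs, outputs, perm):
    # challenge: each input->output link is opened with probability 1/2
    opened = [i for i in range(len(inputs)) if secrets.randbelow(2)]
    for i in opened:
        if outputs[perm[i]] != inputs[i]:     # stand-in for the re-encryption check
            return False, opened
    return True, opened

ballots = [f"ballot-{k}".encode() for k in range(10)]
out, perm = mix(ballots, tamper_positions={3})     # the server cheats on one ballot
ok, opened = audit(ballots, out, perm)
print("audit passed:", ok, "- opened links:", opened)
# Each tampered link is opened with probability 1/2, so altering k ballots
# escapes detection with probability only 2**-k.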

Book ChapterDOI
18 Aug 2002
TL;DR: In this paper, it was shown that non-interactive non-committing encryption has no solution in the complexity-theoretic (CT) model, and that in this model there does not exist noninteractive NICE.
Abstract: We show that there exists a natural protocol problem which has a simple solution in the random-oracle (RO) model and which has no solution in the complexity-theoretic (CT) model, namely the problem of constructing a non-interactive communication protocol secure against adaptive adversaries, a.k.a. non-interactive non-committing encryption. This separation between the models is due to the so-called programmability of the random oracle. We show this by providing a formulation of the RO model in which the oracle is not programmable, and showing that in this model, there does not exist non-interactive non-committing encryption.

Journal Article
TL;DR: The Layered Subset Difference (LSD) scheme as discussed by the authors reduces the number of keys given to each user by almost a square root factor without affecting the other parameters, achieving the same goal as the subset difference technique with O(log^{1+ε} n) keys, O(r) messages, and O(log n) computation.
Abstract: Broadcast Encryption schemes enable a center to broadcast encrypted programs so that only designated subsets of users can decrypt each program. The stateless variant of this problem provides each user with a fixed set of keys which is never updated. The best scheme published so far for this problem is the subset difference (SD) technique of Naor, Naor and Lotspiech, in which each one of the n users is initially given O(log^2 n) symmetric encryption keys. This allows the broadcaster to define at a later stage any subset of up to r users as revoked, and to make the program accessible only to their complement by sending O(r) short messages before the encrypted program, and asking each user to perform an O(log n) computation. In this paper we describe the Layered Subset Difference (LSD) technique, which achieves the same goal with O(log^{1+ε} n) keys, O(r) messages, and O(log n) computation. This reduces the number of keys given to each user by almost a square root factor without affecting the other parameters. In addition, we show how to use the same LSD keys in order to address any subset defined by a nested combination of inclusion and exclusion conditions with a number of messages which is proportional to the complexity of the description rather than to the size of the subset. The LSD scheme is truly practical, and makes it possible to broadcast an unlimited number of programs to 256,000,000 possible customers by giving each new customer a smart card with one kilobyte of tamper-resistant memory. It is then possible to address any subset defined by t nested inclusion and exclusion conditions by sending less than 4t short messages, and the scheme remains secure even if all the other users form an adversarial coalition.

Posted Content
TL;DR: This work proposes randomized partial checking for making mix nets robust; it ensures voter privacy as a global property of the mix net, rather than as a property ensured by a single honest server, and provides assurance of correct operation.
Abstract: We propose a new technique for making mix nets robust, called randomized partial checking (RPC). The basic idea is that rather than providing a proof of completely correct operation, each server provides strong evidence of its correct operation by revealing a pseudo-randomly selected subset of its input/output relations. Randomized partial checking is exceptionally efficient compared to previous proposals for providing robustness; the evidence provided at each layer is shorter than the output of that layer, and producing the evidence is easier than doing the mixing. It works with mix nets based on any encryption scheme (i.e., on public-key alone, and on hybrid schemes using public-key/symmetric-key combinations). It also works both with Chaumian mix nets where the messages are successively encrypted with each server's key, and with mix nets based on a single public key with randomized re-encryption at each layer. Randomized partial checking is particularly well suited for voting systems, as it ensures voter privacy and provides assurance of correct operation. Voter privacy is ensured (either probabilistically or cryptographically) with appropriate design and parameter selection. Unlike previous work, our work provides voter privacy as a global property of the mix net rather than as a property ensured by a single honest server. RPC-based mix nets also provide very high assurance of a correct election result, since a corrupt server is very likely to be caught if it attempts to tamper with even a couple of ballots.

Book ChapterDOI
18 Feb 2002
TL;DR: This article presents a hardware implementation of the S-Boxes from the Advanced Encryption Standard (AES), and shows that a calculation of this function and its inverse can be done efficiently with combinational logic.
Abstract: This article presents a hardware implementation of the S-Boxes from the Advanced Encryption Standard (AES). The S-Boxes substitute an 8-bit input for an 8-bit output and are based on arithmetic operations in the finite field GF(2^8). We show that a calculation of this function and its inverse can be done efficiently with combinational logic. This approach has advantages over a straightforward implementation using read-only memories for table lookups. Most of the functionality is used for both encryption and decryption. The resulting circuit offers low transistor count, has low die-size, is convenient for pipelining, and can be realized easily within a semi-custom design methodology like a standard-cell design. Our standard cell implementation on a 0.6 µm CMOS process requires an area of only 0.108 mm² and has delay below 15 ns, which equals a maximum clock frequency of 70 MHz. These results were achieved without applying any speed optimization techniques like pipelining.
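
For readers who want to reproduce the function the circuit computes, the short Python sketch below builds the AES S-box the same way: inversion in GF(2^8) modulo the AES polynomial, followed by the affine transformation. It mirrors the arithmetic the hardware realizes, not the gate-level decomposition described in the article.

# Build the AES S-box from GF(2^8) inversion plus the affine map.
def gf_mul(a: int, b: int) -> int:
    # multiplication in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf_inv(a: int) -> int:
    if a == 0:
        return 0                       # AES maps 0 to 0 before the affine step
    r = 1
    for _ in range(254):               # a^254 = a^-1 in the 255-element group
        r = gf_mul(r, a)
    return r

def affine(b: int) -> int:
    rot = lambda x, n: ((x << n) | (x >> (8 - n))) & 0xFF
    return b ^ rot(b, 1) ^ rot(b, 2) ^ rot(b, 3) ^ rot(b, 4) ^ 0x63

SBOX = [affine(gf_inv(x)) for x in range(256)]
print(hex(SBOX[0x00]), hex(SBOX[0x53]))    # expected 0x63 and 0xed (FIPS-197 values)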

Journal ArticleDOI
TL;DR: This work presents a novel approach to improving the security of passwords that automatically adapts to gradual changes in a user’s typing patterns while maintaining the same hardened password across multiple logins, for use in file encryption or other applications requiring a long-term secret key.
Abstract: We present a novel approach to improving the security of passwords. In our approach, the legitimate user’s typing patterns (e.g., durations of keystrokes and latencies between keystrokes) are combined with the user’s password to generate a hardened password that is convincingly more secure than conventional passwords alone. In addition, our scheme automatically adapts to gradual changes in a user’s typing patterns while maintaining the same hardened password across multiple logins, for use in file encryption or other applications requiring a long-term secret key. Using empirical data and a prototype implementation of our scheme, we give evidence that our approach is viable in practice, in terms of ease of use, improved security, and performance.
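
A greatly simplified sketch of the underlying idea follows: distinguishing keystroke-timing features contribute extra secret bits that are mixed with the password before key derivation. The real scheme uses secret sharing and per-user instruction tables so that login still succeeds when some features are indecisive and so the hardened password survives gradual drift; that machinery is omitted here, and the thresholds and timings below are invented.

# Simplified hardened-password sketch: timing features add bits to the password.
import hashlib

THRESHOLDS_MS = [95, 110, 80, 130]       # assumed population medians for 4 features

def feature_bits(timings_ms):
    # one bit per feature: faster or slower than the population median
    return "".join("1" if t < thr else "0" for t, thr in zip(timings_ms, THRESHOLDS_MS))

def hardened_key(password: str, timings_ms) -> bytes:
    bits = feature_bits(timings_ms)
    return hashlib.pbkdf2_hmac("sha256", (password + bits).encode(),
                               b"per-user-salt", 100_000)

key = hardened_key("correct horse", [82, 125, 70, 140])
print(key.hex()[:16], "...")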

Patent
13 Aug 2002
TL;DR: In this paper, a method and system for encrypting a first piece of information M to be sent by a sender (100) to a receiver (110) allows both sender and receiver to compute a secret message key using identity-based information and a bilinear map.
Abstract: A method and system for encrypting a first piece of information M to be sent by a sender (100) to a receiver (110) allows both sender and receiver to compute a secret message key using identity-based information and a bilinear map. In one embodiment, the sender (100) computes an identity-based encryption key from an identifier ID associated with the receiver (110). The identifier ID may include various types of information such as the receiver's e-mail address, a receiver credential, a message identifier, or a date. The sender uses a bilinear map and the encryption key to compute a secret message key g_ID^r, which is then used to encrypt a message M, producing ciphertext V to be sent from the sender (100) to the receiver (110) together with an element rP. An identity-based decryption key d_ID is computed by a private key generator (120) based on the ID associated with the receiver and a secret master key s. After obtaining the private decryption key from the key generator (120), the receiver (110) uses it together with the element rP and the bilinear map to compute the secret message key g_ID^r, which is then used to decrypt V and recover the original message M. According to one embodiment, the bilinear map is based on a Weil pairing or a Tate pairing defined on a subgroup of an elliptic curve. Also described are several applications of the techniques, including key revocation, credential management, and return receipt notification.

Patent
12 Mar 2002
TL;DR: In this article, a domain-based digital rights management (DRM) method and system is described, where a domain has one or more communication devices, such as user devices that share a common cryptographic key of the domain.
Abstract: A domain-based digital rights management (DRM) method and system. A domain has one or more communication devices, such as user devices that share a common cryptographic key of the domain. There may be a plurality of domains in a digital rights management environment and the domains may additionally be overlapping. A domain authority, in combination with a digital rights management module of a communication device, operates to selectively register and unregister the communication device to the one or more domains and to control access to encrypted digital content information.

Patent
28 Mar 2002
TL;DR: In this patent, each user is provided a registration key; a long-time updated broadcast key is encrypted using the registration key and provided periodically to the user, a short-time updated key is encrypted using the broadcast key and also provided periodically, and broadcasts are encrypted using the short-time key, which the user uses to decrypt the broadcast message.
Abstract: Method and apparatus for secure transmissions. Each user is provided a registration key. A long-time updated broadcast key is encrypted using the registration key and provided periodically to a user. A short-time updated key is encrypted using the broadcast key and provided periodically to a user. Broadcasts are then encrypted using the short-time key, wherein the user decrypts the broadcast message using the short-time key.
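
The three-layer key hierarchy in the claim can be sketched structurally: a per-user registration key wraps a long-lived broadcast key, which wraps a short-time key, which in turn encrypts the broadcast itself. The XOR-with-HMAC-keystream wrapping below is a stand-in cipher chosen for brevity, not the patent's algorithm, and all labels and key values are invented.

# Structural sketch of registration key -> broadcast key -> short-time key -> content.
import hmac, hashlib, os

def wrap(key: bytes, label: bytes, data: bytes) -> bytes:
    stream = hmac.new(key, label, hashlib.sha256).digest()
    return bytes(d ^ s for d, s in zip(data, stream))     # XOR, so it also unwraps

registration_key = os.urandom(32)                  # provisioned per user
broadcast_key    = os.urandom(32)                  # updated on a long period
short_time_key   = os.urandom(32)                  # updated frequently

# What the server periodically sends:
bk_msg  = wrap(registration_key, b"bk-epoch-7", broadcast_key)
stk_msg = wrap(broadcast_key, b"stk-slot-1234", short_time_key)
payload = wrap(short_time_key, b"content-1234", b"pay-tv frame ...")

# What a receiver does with only its registration key:
bk  = wrap(registration_key, b"bk-epoch-7", bk_msg)
stk = wrap(bk, b"stk-slot-1234", stk_msg)
print(wrap(stk, b"content-1234", payload))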

Book ChapterDOI
30 Sep 2002
TL;DR: This work presents one such PH (none was known so far) which can be proven secure against known-cleartext attacks, as long as the ciphertext space is much larger than the cleartext space.
Abstract: Privacy homomorphisms (PHs) are encryption transformations mapping a set of operations on cleartext to another set of operations on ciphertext. If addition is one of the ciphertext operations, then it has been shown that a PH is insecure against a chosen-cleartext attack. Thus, a PH allowing full arithmetic on encrypted data can be at best secure against known-cleartext attacks. We present one such PH (none was known so far) which can be proven secure against known-cleartext attacks, as long as the ciphertext space is much larger than the cleartext space. Some applications to delegation of sensitive computing and data and to e-gambling are briefly outlined.

Journal Article
TL;DR: Various ways to perform an efficient side channel attack are shown and potential applications, extensions to other padding schemes and various ways to fix the problem are discussed.
Abstract: In many standards, e.g. SSL/TLS, IPSEC, WTLS, messages are first pre-formatted, then encrypted in CBC mode with a block cipher. Decryption needs to check if the format is valid. Validity of the format is easily leaked from communication protocols in a chosen ciphertext attack since the receiver usually sends an acknowledgment or an error message. This is a side channel. In this paper we show various ways to perform an efficient side channel attack. We discuss potential applications, extensions to other padding schemes and various ways to fix the problem.
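
To make the side channel concrete, here is a toy padding-oracle interaction that recovers the last plaintext byte of a CBC block using only the receiver's valid/invalid answer. An 8-byte XOR with a fixed key stands in for the block cipher so the example stays self-contained; the attack code itself uses only the oracle interface, which is the point made above.

# Toy CBC padding-oracle demo: recover the last plaintext byte of a block.
import os

BLOCK = 8
KEY = os.urandom(BLOCK)

def enc_block(b):                      # stand-in "block cipher" (XOR with a fixed key)
    return bytes(x ^ k for x, k in zip(b, KEY))
dec_block = enc_block

def padding_oracle(prev: bytes, block: bytes) -> bool:
    # the receiver's accept/reject answer: is the PKCS#5 padding valid?
    plain = bytes(x ^ y for x, y in zip(dec_block(block), prev))
    n = plain[-1]
    return 1 <= n <= BLOCK and plain.endswith(bytes([n]) * n)

def recover_last_byte(prev: bytes, block: bytes) -> int:
    base = bytearray(os.urandom(BLOCK))
    for g in range(256):
        base[-1] = g
        if padding_oracle(bytes(base), block):
            # confirm the accepted padding was 0x01 (not 0x02 0x02 ...): flipping
            # the second-to-last forged byte must leave the padding valid
            base[-2] ^= 0xFF
            ok = padding_oracle(bytes(base), block)
            base[-2] ^= 0xFF
            if ok:
                return g ^ 0x01 ^ prev[-1]
    raise RuntimeError("oracle never accepted")

# Build one valid CBC pair (prev, block) for an 8-byte message.
prev = os.urandom(BLOCK)
plain = b"ATTACK!!"
block = enc_block(bytes(x ^ y for x, y in zip(plain, prev)))
print(chr(recover_last_byte(prev, block)))   # -> "!"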

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This paper points out that CKBA is very weak against a chosen/known-plaintext attack requiring only one plain-image, and that its security against brute-force ciphertext-only attack is overestimated by its authors.
Abstract: The security of digital images has attracted much attention recently, and many image encryption methods have been proposed. In ISCAS 2000, a new chaotic key-based algorithm (CKBA) for image encryption was proposed. This paper points out that CKBA is very weak against the chosen/known-plaintext attack with only one plain-image, and that its security against the brute-force ciphertext-only attack is overestimated by the authors. That is to say, CKBA is not secure at all from a cryptographic viewpoint. Some experiments are made to show the feasibility of the chosen/known-plaintext attack. We also discuss some remedies to the original scheme and their performance, and we find that none of them can essentially improve the security of CKBA.
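
The essence of the known-plaintext break can be shown in a few lines: because every CKBA pixel is masked by XOR or XNOR with one of two key bytes selected by a key-dependent chaotic sequence, a single known image reveals an equivalent per-pixel mask that decrypts any other image protected with the same key. The toy keystream and tiny "images" below are invented stand-ins for the original chaotic construction.

# Known-plaintext attack on a CKBA-like per-pixel XOR/XNOR masking scheme.
import random

def ckba_like_encrypt(pixels, key1, key2, seed):
    # stand-in for CKBA: per pixel, a (pseudo-)chaotic bit picks key1 or key2,
    # and a second bit picks XOR or XNOR (XNOR(a, b) == a ^ b ^ 0xFF)
    rng = random.Random(seed)
    out = []
    for p in pixels:
        k = key1 if rng.getrandbits(1) else key2
        x = 0xFF if rng.getrandbits(1) else 0x00
        out.append(p ^ k ^ x)
    return out

key1, key2, seed = 0x5A, 0xC3, 2002
known_plain  = [10, 200, 33, 97, 145, 7]
secret_plain = [88, 17, 250, 64, 3, 129]

known_cipher  = ckba_like_encrypt(known_plain, key1, key2, seed)
secret_cipher = ckba_like_encrypt(secret_plain, key1, key2, seed)

# Attack: no knowledge of key1/key2/seed needed, just one plain/cipher image pair.
mask = [p ^ c for p, c in zip(known_plain, known_cipher)]
recovered = [c ^ m for c, m in zip(secret_cipher, mask)]
print(recovered == secret_plain)       # -> True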