
Showing papers in "International Journal of Information Security in 2006"


Journal ArticleDOI
TL;DR: A flexible and robust framework is proposed to permit the continuous and transparent authentication of the user, thereby maximising security and minimising user inconvenience, to service the needs of the insecure and ever more functional mobile handset.
Abstract: Mobile handsets have found an important place in modern society, with hundreds of millions currently in use. The majority of these devices use inherently weak authentication mechanisms, based upon passwords and PINs. This paper presents a feasibility study into a biometric-based technique, known as keystroke analysis, which authenticates the user based upon their typing characteristics. In particular, this paper identifies two typical handset interactions, entering telephone numbers and typing text messages, and seeks to authenticate the user during their normal handset interaction. It was found that neural network classifiers were able to perform classification with average equal error rates of 12.8%. Based upon these results, the paper concludes by proposing a flexible and robust framework to permit the continuous and transparent authentication of the user, thereby maximising security and minimising user inconvenience, to service the needs of the insecure and ever more functional mobile handset.

338 citations
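The 12.8% average equal error rate above is the operating point at which the false-accept and false-reject rates coincide. As a rough illustration of how an EER is read off classifier output (illustrative only; the paper's classifiers are neural networks, and the function name here is hypothetical):

```python
def equal_error_rate(genuine, impostor):
    """Return the rate at the threshold where the false-reject rate on
    genuine (legitimate-user) scores is closest to the false-accept
    rate on impostor scores. Higher score = more likely genuine."""
    best_gap, best_rate = None, None
    for t in sorted(set(genuine + impostor)):
        frr = sum(s < t for s in genuine) / len(genuine)    # false rejects
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        gap = abs(frr - far)
        if best_gap is None or gap < best_gap:
            best_gap, best_rate = gap, (frr + far) / 2
    return best_rate
```

With perfectly separated score distributions the EER is 0; overlapping distributions push it up, and 12.8% indicates modest but usable separation.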


Journal ArticleDOI
TL;DR: A set of concepts founded on the notions of ownership, permission, and trust and intended for requirements modeling are proposed and shown to support the automatic verification of security and trust requirements using Datalog.
Abstract: A number of recent proposals aim to incorporate security engineering into mainstream software engineering. Yet, capturing trust and security requirements at an organizational level, as opposed to an IT system level, and mapping these into security and trust management policies is still an open problem. This paper proposes a set of concepts founded on the notions of ownership, permission, and trust and intended for requirements modeling. It also extends Tropos, an agent-oriented software engineering methodology, to support security requirements engineering. These concepts are formalized and are shown to support the automatic verification of security and trust requirements using Datalog. To make the discussion more concrete, we illustrate the proposal with a Health Care case study.

117 citations


Journal ArticleDOI
TL;DR: In this paper, combinatorial design followed by a randomized merging strategy is applied to key pre-distribution in sensor nodes, and a heuristic in which properly chosen blocks are merged provides a slight improvement in certain parameters over the basic random merging strategy.
Abstract: In this paper, combinatorial design followed by a randomized merging strategy is applied to key pre-distribution in sensor nodes. A transversal design is used to construct a (v, b, r, k) configuration and then randomly selected blocks are merged to form the sensor nodes. We present a detailed mathematical analysis of the number of nodes, the number of keys per node, and the probability that a link gets affected if a certain number of nodes are compromised. The technique is tunable to user requirements and it also compares favourably with state-of-the-art design strategies. An important feature of our design is the presence of a larger number of common keys between any two nodes. Further, we study the situation when properly chosen blocks are merged to form sensor nodes such that the number of intra-node common keys is minimized. We present a basic heuristic for this approach and show that it provides a slight improvement in certain parameters over our basic random merging strategy.

98 citations
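The merge-and-intersect idea can be illustrated with a toy model. The blocks here are drawn at random rather than from a transversal design, and `build_nodes`/`connectivity` are hypothetical names for the sketch, not the paper's constructions:

```python
import random

def build_nodes(num_keys, block_size, blocks_per_node, num_nodes, seed=0):
    """Merge `blocks_per_node` key blocks into each node's key ring.
    Random blocks stand in for the paper's transversal-design blocks."""
    rng = random.Random(seed)
    nodes = []
    for _ in range(num_nodes):
        ring = set()
        for _ in range(blocks_per_node):
            ring.update(rng.sample(range(num_keys), block_size))
        nodes.append(ring)
    return nodes

def connectivity(nodes):
    """Fraction of node pairs sharing at least one key, i.e. able to
    establish a secure link directly."""
    pairs = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
    return sum(bool(a & b) for a, b in pairs) / len(pairs)
```

Two nodes set up a link from the intersection of their key rings; the design question the paper analyses is how the block structure trades connectivity against the damage done when compromised nodes expose their keys.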


Journal ArticleDOI
TL;DR: This work presents an efficient implementation of the proposed techniques based on El Gamal encryption whose security only relies on the intractability of the decisional Diffie-Hellman problem and the resulting protocols require just three rounds of bidder broadcasting in the random oracle model.
Abstract: Privacy has become a factor of increasing importance in auction design. We propose general techniques for cryptographic first-price and (M+1)st-price auction protocols that only yield the winners' identities and the selling price. Moreover, if desired, losing bidders learn no information at all, except that they lost. Our security model is merely based on computational intractability. In particular, our approach does not rely on trusted third parties, e.g., auctioneers. We present an efficient implementation of the proposed techniques based on El Gamal encryption whose security only relies on the intractability of the decisional Diffie-Hellman problem. The resulting protocols require just three rounds of bidder broadcasting in the random oracle model. Communication complexity is linear in the number of possible bids.

84 citations
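The underlying El Gamal primitive can be sketched over a toy group. The tiny parameters are for illustration only; a real instantiation needs a large prime-order group for the decisional Diffie-Hellman assumption to be plausible, and the bid encoding and three-round protocol structure are not modelled here:

```python
import random

def elgamal_keypair(p, g, seed=0):
    """Textbook El Gamal over Z_p* with toy parameters."""
    rng = random.Random(seed)
    x = rng.randrange(1, p - 1)          # secret key
    y = pow(g, x, p)                     # public key

    def encrypt(m):
        r = rng.randrange(1, p - 1)      # fresh randomness per ciphertext
        return pow(g, r, p), m * pow(y, r, p) % p

    def decrypt(c1, c2):
        return c2 * pow(c1, p - 1 - x, p) % p   # c2 / c1^x via Fermat

    return encrypt, decrypt
```

The scheme is multiplicatively homomorphic (the componentwise product of two ciphertexts decrypts to the product of the plaintexts), the kind of property auction protocols exploit to compute on encrypted bids without decrypting them individually.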


Journal ArticleDOI
TL;DR: Two generations of novel firewall analysis tools are designed and implemented, which allow the administrator to easily discover and test the global firewall policy, and they operate on a more understandable level of abstraction.
Abstract: Practically every corporation that is connected to the Internet has at least one firewall, and often many more. However, the protection that these firewalls provide is only as good as the policy they are configured to implement. Therefore, testing, auditing, or reverse-engineering existing firewall configurations are important components of every corporation’s network security practice. Unfortunately, this is easier said than done. Firewall configuration files are written in notoriously hard-to-read languages, using vendor-specific GUIs. A tool that is sorely missing in the arsenal of firewall administrators and auditors is one that allows them to analyze the policy on a firewall. To alleviate some of these difficulties, we designed and implemented two generations of novel firewall analysis tools, which allow the administrator to easily discover and test the global firewall policy. Our tools use a minimal description of the network topology, and directly parse the various vendor-specific low-level configuration files. A key feature of our tools is that they are passive: no packets are sent, and the analysis is performed offline, on a machine that is separate from the firewall itself. A typical question our tools can answer is “From which machines can our DMZ be reached, and with which services?” Thus, our tools complement existing vulnerability analyzers and port scanners, as they can be used before a policy is actually deployed, and they operate on a more understandable level of abstraction. This paper describes the design and architecture of these tools, their evolution from a research prototype to a commercial product, and the lessons we have learned along the way.

75 citations
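The kind of reachability query quoted in the abstract reduces to evaluating first-match rule semantics offline against a parsed rule base. A minimal sketch with a hypothetical rule format (the actual tools parse vendor-specific configuration files and model topology, neither of which is shown here):

```python
def first_match(rules, src, dst, svc):
    """Firewall-style first-match evaluation; '*' is a wildcard and the
    implicit default is deny."""
    for r in rules:
        if (r["src"] in (src, "*") and r["dst"] in (dst, "*")
                and r["svc"] in (svc, "*")):
            return r["action"]
    return "deny"

def who_can_reach(rules, sources, dst, svc):
    """Answer queries like 'from which machines can our DMZ be reached,
    and with which services?' without sending a single packet."""
    return [s for s in sources if first_match(rules, s, dst, svc) == "allow"]
```

Note how a catch-all deny placed before a more specific allow silently shadows it; surfacing exactly this kind of anomaly before deployment is the point of offline policy analysis.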


Journal ArticleDOI
TL;DR: An encryption scheme is presented that is escrow free in that no credential-issuing authority (or colluding set of credential-issuing authorities) is able to decrypt ciphertexts itself, provided the users' public keys are properly certified.
Abstract: Since Boneh and Franklin published their seminal paper on identity-based encryption (IBE) using the Weil pairing, there has been a great deal of interest in cryptographic primitives based on elliptic-curve pairings. One particularly interesting application has been to control access to data, via possibly complex policies. In this paper we continue the research in this vein. We present an encryption scheme such that the receiver of an encrypted message can only decrypt if it satisfies a particular policy chosen by the sender at the time of encryption. Unlike standard IBE, our encryption scheme is escrow free in that no credential-issuing authority (or colluding set of credential-issuing authorities) is able to decrypt ciphertexts itself, provided the users' public keys are properly certified. In addition we describe a security model for the scenario in question and provide proofs of security for our scheme (in the random oracle model).

40 citations


Journal ArticleDOI
TL;DR: A novel countermeasure against cryptoviral extortion attacks is shown that forces the API caller to demonstrate that an authorized party can recover the asymmetrically encrypted data.
Abstract: This paper presents the experimental results that were obtained by implementing the payload of a cryptovirus on the Microsoft Windows platform. The attack is based entirely on the Microsoft Cryptographic API and the needed API calls are covered in detail. More specifically, it is shown that by using eight types of API calls and 72 lines of C code, the payload can hybrid-encrypt sensitive data and hold it hostage. Benchmarks are also given. A novel countermeasure against cryptoviral extortion attacks is shown that forces the API caller to demonstrate that an authorized party can recover the asymmetrically encrypted data.

33 citations
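The hybrid-encryption structure of such a payload — bulk data under a fresh symmetric key, with only that key wrapped under the attacker's public key — can be sketched as follows. The hash-counter keystream and textbook RSA with toy-sized parameters are stand-ins for the Microsoft Cryptographic API primitives the paper actually uses, and the 4-byte key is deliberately tiny so the toy modulus can wrap it:

```python
import hashlib, random

def keystream(key, length):
    # hash-counter keystream standing in for a real symmetric cipher
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def hybrid_encrypt(data, e, n, seed=0):
    """Encrypt bulk data under a fresh symmetric key; wrap only that
    key under the RSA public key (e, n). Toy-sized throughout."""
    rng = random.Random(seed)
    sym = bytes(rng.randrange(256) for _ in range(4))
    ct = bytes(a ^ b for a, b in zip(data, keystream(sym, len(data))))
    wrapped = pow(int.from_bytes(sym, "big"), e, n)
    return ct, wrapped
```

Because the symmetric key exists only in wrapped form, the victim cannot recover the data without the private key — which is exactly why the paper's countermeasure forces the API caller to prove an authorized party can decrypt.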


Journal ArticleDOI
TL;DR: Some design and management issues in running an open PKI are discussed, based on the experience gained in the day-by-day operation of the EuroPKI infrastructure, to identify problems that hamper large-scale adoption of public-key certificates.
Abstract: This paper discusses some design and management issues in running an open PKI, based on the experience gained in the day-by-day operation of the EuroPKI infrastructure. The problems are discussed from an historical perspective that includes real-life lessons learnt in EuroPKI about certification practices, services, and applications. User-reported problems are also discussed to identify issues that hamper large-scale adoption of public-key certificates. The article closes with a general outlook for the field and a description of future EuroPKI plans.

30 citations


Journal ArticleDOI
Yukiyasu Tsunoo1, Etsuko Tsujihara, Maki Shigeri, Hiroyasu Kubo, Kazuhiko Minematsu1 
TL;DR: Taking the structure of the cipher into account, this paper presents a cache attack that embodies the average method and provides improved key estimation; it also includes a study of an attack that exploits internal collisions.
Abstract: A concrete attack using side channel information from cache memory behaviour was proposed for the first time at ISITA 2002. The attack uses the difference between execution times associated with S-box cache-hits and cache-misses to recover the intermediate key. Recently, a theoretical estimation of the number of messages needed for the attack was proposed, and it was reported that the average method obtains key information with fewer messages than the maximum-threshold or intermediate-threshold method. Taking the structure of the cipher into account, this paper presents a cache attack that embodies the average method and provides improved key estimation. The paper also includes a study of an attack that exploits internal collisions.

29 citations


Journal ArticleDOI
TL;DR: In this paper, a first-order logic (FOL) semantics for SPKI/SDSI has been proposed, which is equivalent to the string rewriting semantics used by SDSI designers for all queries associated with the rewriting semantics.
Abstract: SPKI/SDSI is a language for expressing distributed access control policy, derived from SPKI and SDSI. We provide a first-order logic (FOL) semantics for SDSI, and show that it has several advantages over previous semantics. For example, the FOL semantics is easily extended to additional policy concepts and gives meaning to a larger class of access control and other policy analysis queries. We prove that the FOL semantics is equivalent to the string rewriting semantics used by SDSI designers, for all queries associated with the rewriting semantics. We also provide a FOL semantics for SPKI/SDSI and use it to analyze the design of SPKI/SDSI. This reveals some problems. For example, the standard proof procedure in RFC 2693 is semantically incomplete. In addition, as noted before by other authors, authorization tags in SPKI/SDSI are algorithmically problematic, making a complete proof procedure unlikely. We compare SPKI/SDSI with RT1C, a language in the RT (Role-based Trust-management) framework that can be viewed as an extension of SDSI. The constraint feature of RT1C, based on Constraint Datalog, provides an alternative mechanism that is expressively similar to SPKI/SDSI tags, semantically natural, and algorithmically tractable.

27 citations


Journal ArticleDOI
TL;DR: A monitoring system is presented that detects repeated packets in network traffic using Bloom filters with counters, with applications including detecting computer worms; simulations confirm that this approach can detect worms at early stages of propagation.
Abstract: We present a monitoring system which detects repeated packets in network traffic, and has applications including detecting computer worms. It uses Bloom filters with counters. The system analyzes traffic in routers of a network. Our preliminary evaluation of the system involved traffic from our internal lab and a well known historical data set. After appropriate configuration, no false alarms are obtained under these data sets and we expect low false alarm rates are possible in many network environments. We also conduct simulations using real Internet Service Provider topologies with realistic link delays and simulated traffic. These simulations confirm that this approach can detect worms at early stages of propagation. We believe our approach, with minor adaptations, is of independent interest for use in a number of network applications which benefit from detecting repeated packets, beyond detecting worm propagation. These include detecting network anomalies such as dangerous traffic fluctuations, abusive use of certain services, and some distributed denial-of-service attacks.
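A Bloom filter with counters replaces the bit array of a plain Bloom filter with small counters, so membership queries become approximate multiplicity queries: the minimum counter over an item's slots lower-bounds how often it was added. A minimal sketch (class name and parameters are illustrative, not the paper's configuration):

```python
import hashlib

class CountingBloom:
    """Bloom filter with counters: `add` increments k counters per item,
    `count` returns the minimum of those counters, a lower bound on the
    item's multiplicity (collisions can only inflate counters).
    A repeated-packet monitor adds each packet and alarms on high counts."""

    def __init__(self, size=1024, hashes=4):
        self.size, self.hashes = size, hashes
        self.counters = [0] * size

    def _slots(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for s in self._slots(item):
            self.counters[s] += 1

    def count(self, item):
        return min(self.counters[s] for s in self._slots(item))
```

The structure never undercounts, so a worm payload repeated across many flows reliably crosses any threshold, while false alarms are governed by the filter size and hash count.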

Journal ArticleDOI
TL;DR: It is shown that, in the presence of a malicious signer, key substitution attacks can be mounted against several signature schemes that are secure in the sense introduced by Menezes and Smart.
Abstract: Given a signature s for some message m along with a corresponding public verification key y, in a key substitution attack an attacker derives another verification key $$\overline{y}$$ ≠ y, possibly along with a matching secret key, such that s is also a valid signature of m for the verification key $$\overline{y}$$. Menezes and Smart have shown that, with suitable parameter restrictions, DSA and EC-DSA are immune to such attacks. Here, we show that in the presence of a malicious signer, key substitution attacks can be mounted against several signature schemes that are secure in the sense introduced by Menezes and Smart. While for EC-DSA such an attack is feasible, other established signature schemes, including EC-KCDSA, can be shown to be secure in this sense.

Journal ArticleDOI
TL;DR: In this paper, a formal model for the security of verifiable shuffles and a new verifiable shuffle system based on the Paillier encryption scheme are proposed. The model is general and can be extended to provide provable security for verifiable shuffle decryption.
Abstract: A shuffle takes a list of ciphertexts and outputs a permuted list of re-encryptions of the input ciphertexts. Mix-nets, a popular method for anonymous routing, can be constructed from a sequence of shuffles and decryption. We propose a formal model for security of verifiable shuffles and a new verifiable shuffle system based on the Paillier encryption scheme, and prove its security in the proposed model. The model is general and can be extended to provide provable security for verifiable shuffle decryption.
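The shuffle operation itself is simple once one recalls that a Paillier ciphertext is re-randomised by multiplying in r^n mod n^2 for a random unit r (an encryption of zero). A toy sketch of the unverified shuffle step only — the paper's contribution is proving such a shuffle correct in zero knowledge, which is not modelled here:

```python
import math, random

def reencrypt_shuffle(cts, n, seed=0):
    """Re-randomise each Paillier ciphertext (same plaintext, fresh
    randomness) and permute the list. n is the Paillier modulus."""
    rng = random.Random(seed)
    n2 = n * n

    def unit():
        while True:                       # draw r coprime to n
            r = rng.randrange(2, n)
            if math.gcd(r, n) == 1:
                return r

    out = [c * pow(unit(), n, n2) % n2 for c in cts]
    rng.shuffle(out)
    return out
```

An observer without the decryption key cannot link outputs to inputs, since every ciphertext changes value while its plaintext is preserved; chaining such shuffles across servers yields a mix-net.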

Journal ArticleDOI
TL;DR: This work combines the method of searching for an invariant subspace of the unbalanced Oil and Vinegar signature scheme and the Minrank method to defeat the new TTS signature scheme.
Abstract: We combine the method of searching for an invariant subspace of the unbalanced Oil and Vinegar signature scheme and the Minrank method to defeat the new TTS signature scheme, which was suggested for low-cost smart card applications at CHES 2004. We show that the attack complexity is less than 2^50.

Journal ArticleDOI
TL;DR: This paper proposes a formal authorization language that can not only express nonmonotonic delegation policies which have not been considered in previous approaches, but also represent delegation with depth, separation of duty, and positive and negative authorizations, through Answer Set Programming.
Abstract: Distributed authorization is an essential issue in computer security. Recent research shows that trust management is a promising approach for the authorization in distributed environments. There are two key issues for a trust management system: how to design an expressive high-level policy language and how to solve the compliance-checking problem (Blaze et al. in Proceedings of the Symposium on Security and Privacy, pp. 164–173, 1996; Proceedings of 2nd International Conference on Financial Cryptography (FC’98). LNCS, vol.1465, pp. 254–274, 1998), where ordinary logic programming has been used to formalize various distributed authorization policies (Li et al. in Proceedings of the 2002 IEEE Symposium on Security and Privacy, pp. 114–130, 2002; ACM Trans. Inf. Syst. Secur. (TISSEC) 6(1):128–171, 2003). In this paper, we employ Answer Set Programming to deal with many complex issues associated with the distributed authorization along the trust management approach. In particular, we propose a formal authorization language $$\mathcal {AL}$$ providing its semantics through Answer Set Programming. Using language $$\mathcal {AL}$$, we can not only express nonmonotonic delegation policies which have not been considered in previous approaches, but also represent delegation with depth, separation of duty, and positive and negative authorizations. We also investigate basic computational properties related to our approach. Through two case studies, we further illustrate the application of our approach in distributed environments.

Journal ArticleDOI
TL;DR: This paper uses trusted computing platforms linked with peer-to-peer networks to create a network of trustworthy mediators and improve availability and uses threshold cryptography to build a back-up and migration technique which allows recovery from a mediator crashing while also avoiding having all mediators share all secrets.
Abstract: The security-mediated approach to PKI offers several advantages, such as instant revocation and compatibility with standard RSA tools. In this paper, we present a design and prototype that addresses its trust and scalability problems. We use trusted computing platforms linked with peer-to-peer networks to create a network of trustworthy mediators and improve availability. We use threshold cryptography to build a back-up and migration technique which allows recovery from a mediator crashing while also avoiding having all mediators share all secrets. We then use strong forward secrecy with this migration, to mitigate the damage should a crashed mediator actually be compromised.
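The back-up and migration technique rests on threshold secret sharing, under which any t of n shares reconstruct a secret while fewer reveal nothing. A minimal Shamir-style sketch over a toy field (the paper applies thresholding to mediator key material; the names and field choice here are illustrative):

```python
import random

P = 2**61 - 1  # Mersenne prime field modulus (toy choice)

def share(secret, t, n, seed=0):
    """Split `secret` into n points on a random degree-(t-1) polynomial
    whose constant term is the secret; any t points reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Distributing shares across mediators means a crashed mediator can be replaced from any t survivors, without any single machine (or any set smaller than t) ever holding the whole secret — the property the paper combines with trusted hardware and forward secrecy.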

Journal ArticleDOI
TL;DR: A variant of the Complex Multiplication method that generates ECs of cryptographically strong order based on the computation of Weber polynomials is presented, which can serve as useful guidelines to potential implementers of EC cryptosystems involving generation ofECs of a desirable order on resource limited hardware devices or in systems operating under strict timing response constraints.
Abstract: In many cryptographic applications it is necessary to generate elliptic curves (ECs) whose order possesses certain properties. The method that is usually employed for the generation of such ECs is the so-called Complex Multiplication method. This method requires the use of the roots of certain class field polynomials defined on a specific parameter called the discriminant. The most commonly used polynomials are the Hilbert and Weber ones. The former can be used to generate the EC directly, but they are characterized by high computational demands. The latter usually have much lower computational requirements, but they do not directly construct the desired EC. This can be achieved if transformations of their roots to the roots of the corresponding (generated by the same discriminant) Hilbert polynomials are provided. In this paper we present a variant of the Complex Multiplication method that generates ECs of cryptographically strong order. Our variant is based on the computation of Weber polynomials. We present in a simple and unifying manner a complete set of transformations of the roots of a Weber polynomial to the roots of its corresponding Hilbert polynomial for all values of the discriminant. In addition, we prove a theoretical estimate of the precision required for the computation of Weber polynomials for all values of the discriminant. We present an extensive experimental assessment of the computational efficiency of the Hilbert and Weber polynomials along with their precision requirements for various discriminant values and we compare them with the theoretical estimates. We further investigate the time efficiency of the new Complex Multiplication variant under different implementations of a crucial step of the variant.
Our results can serve as useful guidelines to potential implementers of EC cryptosystems involving generation of ECs of a desirable order on resource limited hardware devices or in systems operating under strict timing response constraints.

Journal ArticleDOI
TL;DR: This paper proposes two new schemes for validating digital signatures as non-repudiation evidence that minimize the trusted third party's involvement.
Abstract: A digital signature applied on a message could serve as irrefutable cryptographic evidence to prove its origin and integrity. However, evidence solely based on digital signatures may not enforce strong non-repudiation. Additional mechanisms are needed to make digital signatures valid non-repudiation evidence in the settlement of possible disputes. Most existing mechanisms for maintaining the validity of digital signatures rely on supporting services from trusted third parties, e.g., time-stamping and certificate revocation. Obviously, this is less efficient for on-line transactions. In this paper, we propose two new schemes for validating digital signatures as non-repudiation evidence that minimize the trusted third party's involvement.

Journal ArticleDOI
TL;DR: This paper presents several attacks on the encryption feature provided by the WinRAR compression software, and shows that WinRAR appears to offer slightly better security features than WinZip.
Abstract: Originally written to provide the file compression feature, computer software such as WinRAR and WinZip now also provide encryption features due to the rising need for security and privacy protection of files within a computer system or for sharing within a network. However, since compression has been much in use well before users saw the need for security, most are more familiar with compression software than with security software. Therefore, encryption-enabled compression software such as WinRAR and WinZip tend to be more widely used for security than dedicated security software. In this paper, we present several attacks on the encryption feature provided by the WinRAR compression software. These attacks are possible due to the subtlety in developing security software based on the integration of multiple cryptographic primitives. In other words, no matter how securely designed each primitive is, using them especially in association with other primitives does not always guarantee secure systems. Instead, time and again such a practice has been shown to result in flawed systems. Our results, compared to recent attacks on WinZip by Kohno, show that WinRAR appears to offer slightly better security features.

Journal ArticleDOI
TL;DR: This work automatically augments source code to dynamically catch stack and heap-based buffer overflow and underflow attacks, and recovers from them by allowing the program to continue execution, suggesting a slow-down of 20% for Apache in full-protection mode, and 1.2% with selective protection.
Abstract: We examine the problem of containing buffer overflow attacks in a safe and efficient manner. Briefly, we automatically augment source code to dynamically catch stack and heap-based buffer overflow and underflow attacks, and recover from them by allowing the program to continue execution. Our hypothesis is that we can treat each code function as a transaction that can be aborted when an attack is detected, without affecting the application's ability to correctly execute. Our approach allows us to selectively enable or disable components of this defensive mechanism in response to external events, allowing for a direct tradeoff between security and performance. We combine our defensive mechanism with a honeypot-like configuration to detect previously unknown attacks, automatically adapt an application's defensive posture at a negligible performance cost, and help determine worm signatures. Our scheme provides low impact on application performance, the ability to respond to attacks without human intervention, the capacity to handle previously unknown vulnerabilities, and the preservation of service availability. We implement a stand-alone tool, DYBOC, which we use to instrument a number of vulnerable applications. Our performance benchmarks indicate a slow-down of 20% for Apache in full-protection mode, and 1.2% with selective protection. We provide preliminary evidence toward the validity of our transactional hypothesis via two experiments: first, by applying our scheme to 17 vulnerable applications, successfully fixing 14 of them; second, by examining the behavior of Apache when each of 154 potentially vulnerable routines are made to fail, resulting in correct behavior in 139 cases (90%), with similar results for sshd (89%) and Bind (88%).

Journal ArticleDOI
TL;DR: The underlying mechanisms that make up the PolicyUpdater system are shown, including the theoretical foundation of its formal language, system structure, implementation issues and performance analysis.
Abstract: PolicyUpdater is a fully-implemented authorisation system that provides policy evaluations as well as dynamic policy updates. These functions are achieved by the use of a logic-based language, $${\cal L}$$, to represent the underlying access control policies, constraints and update propositions. The system performs access control query evaluations and conditional policy updates by translating the language $${\cal L}$$ policies to a normal logic program in a form suitable for evaluation using the Stable Model semantics. In this paper, we show the underlying mechanisms that make up the PolicyUpdater system, including the theoretical foundation of its formal language, system structure, implementation issues and performance analysis.

Journal ArticleDOI
TL;DR: It is shown that after a 1-round initialization phase during which random bits are distributed among n players, it is possible to perform each of the k XOR computations using two rounds of communication.
Abstract: In this paper we study the randomness complexity needed to distributively perform k XOR computations in a t-private way using constant-round protocols in the case in which the players are honest but curious. We show that the existence of a particular family of subsets allows the recycling of random bits for constant-round private protocols. More precisely, we show that after a 1-round initialization phase during which random bits are distributed among n players, it is possible to perform each of the k XOR computations using two rounds of communication. For $$t\leq c\sqrt{n/\log n}$$, for any c < 1/2, we design a protocol that uses O(kt^2 log n) random bits.
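The object being computed privately is the XOR of the players' input bits. A toy rendition of a single XOR computation via additive (XOR) secret sharing among honest-but-curious players; the paper's actual contribution — recycling the random bits across the k computations after the one-round initialization — is not modelled here:

```python
import random
from functools import reduce

def xor_shares(bit, n, rng):
    """Split one bit into n XOR-shares: any n-1 of them are uniformly
    random, but all n XOR back to the bit."""
    parts = [rng.getrandbits(1) for _ in range(n - 1)]
    parts.append(bit ^ reduce(lambda a, b: a ^ b, parts, 0))
    return parts

def private_xor(bits, seed=0):
    """One private XOR among n players: player i shares its bit, each
    player XORs the shares it received and broadcasts the result; the
    XOR of the broadcasts is the answer."""
    rng = random.Random(seed)
    n = len(bits)
    all_shares = [xor_shares(b, n, rng) for b in bits]
    broadcasts = [reduce(lambda a, b: a ^ b,
                         (all_shares[i][j] for i in range(n)), 0)
                  for j in range(n)]
    return reduce(lambda a, b: a ^ b, broadcasts, 0)
```

No coalition of fewer than n players learns anything beyond the final XOR from its own view; the randomness cost of the naive version above is what the paper's recycling technique reduces.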

Journal ArticleDOI
TL;DR: Public Key Infrastructure (PKI) is probably one of the most important items in the arsenal of security measures that can be brought to bear against the aforementioned growing risks and threats.
Abstract: There is no doubt that the Internet is affecting every aspect of our lives; the most significant changes are occurring in private and public sector organizations that are transforming their conventional operating models to Internet-based service models, known as eBusiness, eCommerce, and eGovernment. Companies, institutions, and organizations, irrespective of their size, are nowadays utilizing the Internet for communicating with their customers, suppliers, and partners; for facilitating the interconnection of their employees and branches; for connecting to their back-end data systems and for performing commercial transactions. In such an environment, where almost every organization relies heavily on information and communications technologies, new dependencies and risks arise. Public Key Infrastructure (PKI) is probably one of the most important items in the arsenal of security measures that can be brought to bear against the aforementioned growing risks and threats. PKI research has been active for more than 26 years. In 1978 R.L. Rivest, A. Shamir, and L. Adleman published what is now commonly called the RSA cryptosystem (Communications of the ACM 21(2), 120–128 (1978)), one of the most significant discoveries in the history of cryptography. Since the mathematical foundation of RSA rests on the intractability of factoring large composite integers, in the same year, R. Merkle demonstrated that certain computational puzzles could also be used in constructing public key cryptography (Communications of the ACM 21(4), 294–299 (1978)). As the years passed by, several countries started developing their PKIs. Inevitably, several practical problems were identified. Although adhering to international standards, such as ITU, ISO, IETF, and PKCS, different PKI systems (national or/and international) could not connect to one another. Subsequently, a number of organizations were formed to promote and support the interoperability of dif-

Journal ArticleDOI
TL;DR: This paper presents and analyzes the efficiency of a novel HTIR scheme that enables the gradual distribution of encrypted confidential information to large, distributed, (potentially) hierarchically structured user communities and the subsequent publication of corresponding short decryption keys, at a predetermined time.
Abstract: Rapid distribution of newly released confidential information is often impeded by network traffic jams, especially when the confidential information is either crucial or highly prized. This is the case for stock market values, blind auction bidding amounts, many large corporations' strategic business plans, certain news agencies' timed publications, and some licensed software updates. Hierarchical time-based information release (HTIR) schemes enable the gradual distribution of encrypted confidential information to large, distributed, (potentially) hierarchically structured user communities, and the subsequent publication of corresponding short decryption keys at a predetermined time, so that users can rapidly access the confidential information. This paper presents and analyzes the efficiency of a novel HTIR scheme.

Journal ArticleDOI
TL;DR: This paper investigates the security of the RSA cryptosystem using the Chinese remainder theorem (CRT) with respect to SCA, applies Novak's attack to other CRT-based cryptosystems, namely Multi-Prime RSA, Multi-Exponent RSA, the Rabin cryptosystem, and the HIME(R) cryptosystem, and proposes countermeasures against these attacks.
Abstract: A side channel attack (SCA) is a serious attack on the implementation of cryptosystems, which can break the secret key using side channel information such as timing, power consumption, etc. Recently, Boneh et al. showed that SSL is vulnerable to SCA if the attacker gets access to the local network of the server. Therefore, public-key infrastructure eventually becomes a target of SCA. In this paper, we investigate the security of the RSA cryptosystem using the Chinese remainder theorem (CRT) with respect to SCA. Novak first proposed a simple power analysis (SPA) against the CRT part using the difference between the message modulo p and modulo q. In this paper, we apply Novak's attack to other CRT-based cryptosystems, namely Multi-Prime RSA, Multi-Exponent RSA, the Rabin cryptosystem, and the HIME(R) cryptosystem. A Novak-type attack depends strictly on how the CRT is implemented. We examine the operations related to the CRT in these cryptosystems, and show that an extended Novak-type attack is effective on them. Moreover, we present a novel attack called the zero-multiplication attack. The attacker tries to guess the secret prime by producing ciphertexts that cause a multiplication with zero during the decryption, which is easily detected by power analysis. Our experimental results show that the timing with the zero multiplication is about 10% shorter than the standard one. Finally, we propose countermeasures against these attacks. The proposed countermeasures are based on ciphertext blinding, but they require no inversion operation. The overhead of the proposed scheme is only about 1–5% of the whole decryption if the bit length of the modulus is 1,024.
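CRT-based RSA decryption, the target of the Novak-type attacks, computes two half-size exponentiations modulo p and q and recombines them. A textbook sketch with toy parameters (no blinding countermeasure shown; the paper's proposal adds ciphertext blinding around exactly this step):

```python
def crt_decrypt(c, p, q, d):
    """RSA decryption via the Chinese remainder theorem: two half-size
    exponentiations mod p and mod q, recombined with Garner's formula.
    The mod-p / mod-q split is what Novak-type SPA attacks exploit."""
    dp, dq = d % (p - 1), d % (q - 1)
    mp, mq = pow(c % p, dp, p), pow(c % q, dq, q)
    qinv = pow(q, -1, p)            # precomputable CRT coefficient
    h = qinv * (mp - mq) % p        # Garner recombination
    return mq + h * q
```

The CRT form is roughly four times faster than a full-size exponentiation, which is why implementations use it despite the extra side-channel surface the paper analyzes.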