Showing papers in "IEEE Transactions on Dependable and Secure Computing in 2008"


Journal ArticleDOI
TL;DR: This paper models the sequence of operations in credit card transaction processing using a hidden Markov model (HMM), shows how it can be used to detect fraud, and compares the approach with other techniques available in the literature.
Abstract: Due to rapid advances in electronic commerce technology, the use of credit cards has dramatically increased. As the credit card becomes the most popular mode of payment for both online and offline purchases, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a hidden Markov model (HMM) and show how it can be used for the detection of fraud. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.
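A minimal sketch of the acceptance test described above: a toy HMM stands in for one trained on a cardholder's genuine transaction history, observations are quantized purchase amounts, and an incoming transaction is flagged when appending it drops the sequence likelihood by more than a threshold. All parameters, the threshold, and the category encoding are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy HMM over quantized purchase amounts (0 = low, 1 = medium, 2 = high);
# hidden states model spending habits.  All parameters are illustrative
# placeholders; in practice they would be learned from the cardholder's
# genuine transaction history (e.g., via Baum-Welch).
start = np.array([0.6, 0.4])                    # initial state distribution
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])                  # state transition matrix
emit = np.array([[0.6, 0.3, 0.1],               # P(observation | state)
                 [0.1, 0.3, 0.6]])

def log_likelihood(obs):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

def looks_fraudulent(history, new_obs, threshold=-1.0):
    """Flag the incoming transaction if appending it drops the sequence
    log-likelihood by more than a threshold tuned on genuine transactions."""
    delta = log_likelihood(history + [new_obs]) - log_likelihood(history)
    return delta < threshold

print(looks_fraudulent([0, 0, 1, 0], 0))   # False: consistent with habits
print(looks_fraudulent([0, 0, 1, 0], 2))   # True: unusually large purchase
```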

430 citations


Journal ArticleDOI
TL;DR: This work presents a new mechanized prover for secrecy properties of security protocols; it provides a generic method for specifying security properties of cryptographic primitives and can handle shared-key and public-key encryption, signatures, message authentication codes, and hash functions.
Abstract: We present a new mechanized prover for secrecy properties of security protocols. In contrast to most previous provers, our tool does not rely on the Dolev-Yao model, but on the computational model. It produces proofs presented as sequences of games; these games are formalized in a probabilistic polynomial-time process calculus. Our tool provides a generic method for specifying security properties of the cryptographic primitives, which can handle shared-key and public-key encryption, signatures, message authentication codes, and hash functions. Our tool produces proofs valid for a number of sessions polynomial in the security parameter, in the presence of an active adversary. We have implemented our tool and tested it on a number of examples of protocols from the literature.

161 citations


Journal ArticleDOI
TL;DR: This paper formalizes classes of security analysis problems in the context of role-based access control, and shows that in general these problems are PSPACE-complete.
Abstract: Specifying and managing access control policies is a challenging problem. We propose to develop formal verification techniques for access control policies to improve the current state of the art of policy specification and management. In this paper, we formalize classes of security analysis problems in the context of role-based access control. We show that in general these problems are PSPACE-complete. We also study the factors that contribute to the computational complexity by considering a lattice of various subcases of the problem with different restrictions. We show that several subcases remain PSPACE-complete, several further restricted subcases are NP-complete, and identify two subcases that are solvable in polynomial time. We also discuss our experiences and findings from experimentations that use existing formal method tools, such as model checking and logic programming, for addressing these problems.
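The kind of question these security analysis problems pose can be illustrated with a brute-force reachability check over role sets under hypothetical administrative rules; the exponential state space it explores hints at why the general problem is PSPACE-complete. The rule format, role names, and the `can_reach` helper are all invented for illustration.

```python
from collections import deque

# Hypothetical administrative rules (ARBAC-style, invented for illustration):
# a role may be assigned once the user already holds all roles in its
# prerequisite set; some roles may be revoked at any time.
CAN_ASSIGN = {            # target role -> prerequisite roles
    "Engineer": set(),
    "ProjectLead": {"Engineer"},
    "Auditor": {"Engineer"},
    "ReleaseManager": {"ProjectLead", "Auditor"},
}
CAN_REVOKE = {"Auditor"}

def can_reach(start_roles, target):
    """BFS over role sets: can a user starting with `start_roles` ever hold
    `target`?  The state space is 2^|roles|, which is why naive analysis
    blows up on realistic policies."""
    start = frozenset(start_roles)
    seen, queue = {start}, deque([start])
    while queue:
        roles = queue.popleft()
        if target in roles:
            return True
        successors = []
        for role, prereqs in CAN_ASSIGN.items():
            if role not in roles and prereqs <= roles:
                successors.append(roles | {role})
        for role in CAN_REVOKE & roles:
            successors.append(roles - {role})
        for nxt in map(frozenset, successors):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(can_reach({"Engineer"}, "ReleaseManager"))   # True
```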

139 citations


Journal ArticleDOI
TL;DR: A (stochastic) branching process model for characterizing the propagation of Internet worms is developed, leading to an automatic worm containment strategy that prevents the spread of a worm beyond its early stage.
Abstract: Self-propagating codes, called worms, such as Code Red, Nimda, and Slammer, have drawn significant attention due to their enormously adverse impact on the Internet. Thus, there is great interest in the research community in modeling the spread of worms and in providing adequate defense mechanisms against them. In this paper, we present a (stochastic) branching process model for characterizing the propagation of Internet worms. The model is developed for uniform scanning worms and then extended to preference scanning worms. This model leads to the development of an automatic worm containment strategy that prevents the spread of a worm beyond its early stage. Specifically, for uniform scanning worms, we are able to 1) provide a precise condition that determines whether the worm spread will eventually stop and 2) obtain the distribution of the total number of hosts that the worm infects. We then extend our results to contain preference scanning worms. Our strategy is based on limiting the number of scans to dark-address space. The limiting value is determined by our analysis. Our automatic worm containment schemes effectively contain both uniform scanning worms and local preference scanning worms, and they are validated through simulations and real trace data to be nonintrusive. We also show that our containment strategy, when used with traditional firewalls, can be deployed incrementally to provide worm containment for the local network and benefit the Internet.
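A minimal sketch of the branching-process view under simplifying assumptions (Poisson offspring, no network structure): each infected host infects a random number of new hosts per generation, and the outbreak dies out with probability 1 exactly when the mean offspring count is at most 1, which is the condition a scan-limiting containment scheme tries to enforce. The parameters and the `population_cap` are illustrative.

```python
import random

def poisson(rng, lam):
    """Knuth's method for a Poisson draw (adequate for small lam)."""
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_outbreak(mean_offspring, population_cap=10_000, seed=None):
    """Galton-Watson branching process: each infected host infects a
    Poisson(mean_offspring) number of new hosts.  Returns the total number
    of hosts ever infected (capped).  The spread dies out with probability 1
    exactly when mean_offspring <= 1, the condition containment enforces."""
    rng = random.Random(seed)
    current, total = 1, 1
    while current > 0 and total < population_cap:
        offspring = sum(poisson(rng, mean_offspring) for _ in range(current))
        current, total = offspring, total + offspring
    return total

print(simulate_outbreak(1.8, seed=1))   # supercritical: usually a large outbreak
print(simulate_outbreak(0.8, seed=1))   # subcritical: dies out after a few hosts
```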

105 citations


Journal ArticleDOI
TL;DR: It is shown that, even with partial deployment on the Internet, IDPFs can proactively limit the spoofing capability of attackers and can help localize the origin of an attack packet to a small number of candidate networks.
Abstract: The distributed denial-of-service (DDoS) attack is a serious threat to the legitimate use of the Internet. Prevention mechanisms are thwarted by the ability of attackers to forge or spoof the source addresses in IP packets. By employing IP spoofing, attackers can evade detection and put a substantial burden on the destination network for policing attack packets. In this paper, we propose an interdomain packet filter (IDPF) architecture that can mitigate the level of IP spoofing on the Internet. A key feature of our scheme is that it does not require global routing information. IDPFs are constructed from the information implicit in border gateway protocol (BGP) route updates and are deployed in network border routers. We establish the conditions under which the IDPF framework correctly works in that it does not discard packets with valid source addresses. Based on extensive simulation studies, we show that, even with partial deployment on the Internet, IDPFs can proactively limit the spoofing capability of attackers. In addition, they can help localize the origin of an attack packet to a small number of candidate networks.
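The filtering step can be pictured as a per-neighbor lookup table: from BGP updates, a border router learns which origin ASes could feasibly reach it through each neighbor and drops packets whose claimed source arrives on an infeasible interface. The topology, feasible-source sets, and prefix mapping below are invented example data, not derived from real BGP feeds.

```python
# Feasible source sets, as an IDPF-style filter might derive them from BGP
# route updates: for each neighboring AS, the origin ASes whose packets may
# legitimately arrive via that neighbor.  Invented example data.
FEASIBLE_SOURCES = {
    "AS100": {"AS100", "AS300", "AS500"},
    "AS200": {"AS200", "AS400"},
}

PREFIX_TO_AS = {            # source prefix -> origin AS (illustrative)
    "10.1.0.0/16": "AS300",
    "10.2.0.0/16": "AS400",
}

def accept(packet_src_prefix, arriving_neighbor):
    """Drop a packet if its claimed source AS could not have reached this
    router through the neighbor it actually arrived from."""
    origin = PREFIX_TO_AS.get(packet_src_prefix)
    if origin is None:
        return True     # unknown prefix: never discard possibly valid traffic
    return origin in FEASIBLE_SOURCES.get(arriving_neighbor, set())

print(accept("10.1.0.0/16", "AS100"))    # True: a feasible route exists
print(accept("10.1.0.0/16", "AS200"))    # False: likely spoofed source
```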

103 citations


Journal ArticleDOI
TL;DR: A new key assignment scheme for access control that is both efficient and secure; elliptic-curve cryptography is deployed in the scheme.
Abstract: In electronic subscription and pay TV systems, data can be organized and encrypted using symmetric key algorithms according to predefined time periods and user privileges and then broadcast to users. This requires an efficient way of managing the encryption keys. In this scenario, time-bound key management schemes for a hierarchy were proposed by Tzeng and Chien in 2002 and 2005, respectively. Both schemes are insecure against collusion attacks. In this paper, we propose a new key assignment scheme for access control, which is both efficient and secure. Elliptic-curve cryptography is deployed in this scheme. We also provide the analysis of the scheme with respect to security and efficiency issues.

83 citations


Journal ArticleDOI
TL;DR: This work analyzes the impact of two types of hard errors, namely Time-Dependent Dielectric Breakdown (TDDB) and Electromigration (EM), on FPGAs, and studies the performance degradation of FPGAs over time caused by Hot-Carrier Effects and Negative Bias Temperature Instability.
Abstract: Field-Programmable Gate Arrays (FPGAs) have been aggressively moving to lower gate length technologies. Such a scaling of technology has an adverse impact on the reliability of the underlying circuits in such architectures. Various physical phenomena have recently been explored and demonstrated to impact the reliability of circuits in the form of both transient error susceptibility and permanent failures. In this work, we analyze the impact of two different types of hard errors, namely, Time-Dependent Dielectric Breakdown (TDDB) and Electromigration (EM), on FPGAs. We also study the performance degradation of FPGAs over time caused by Hot-Carrier Effects (HCE) and Negative Bias Temperature Instability (NBTI). Each study is performed on the components of FPGAs most affected by the respective phenomena, from both the performance and reliability perspectives. Different solutions are demonstrated to counter each failure and degradation phenomenon and to increase the operating lifetime of FPGAs.
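For the electromigration part of such an analysis, relative lifetime estimates are commonly derived from Black's equation, MTTF = A · J^(-n) · exp(Ea/kT). The sketch below compares two operating points so the unknown prefactor cancels; the activation energy and current exponent are typical textbook values, not figures from the paper.

```python
import math

K_BOLTZMANN_EV = 8.617e-5      # Boltzmann constant in eV/K

def em_mttf_ratio(j1, t1, j2, t2, n=2.0, ea=0.7):
    """Black's equation, MTTF = A * J**-n * exp(Ea / (k*T)); the unknown
    prefactor A cancels when comparing two operating points.
    j: current density (consistent units), t: temperature in kelvin.
    n and ea (eV) are typical illustrative values, not paper data."""
    mttf1 = j1 ** (-n) * math.exp(ea / (K_BOLTZMANN_EV * t1))
    mttf2 = j2 ** (-n) * math.exp(ea / (K_BOLTZMANN_EV * t2))
    return mttf1 / mttf2

# Lifetime penalty of running hotter and at higher current density:
print(em_mttf_ratio(j1=1.0, t1=350.0, j2=1.2, t2=370.0))   # roughly 5x shorter life
```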

80 citations


Journal ArticleDOI
TL;DR: The main goal of this paper is to perform risk analysis of software systems based on the security patterns that they contain, and to determine to what extent specific security patterns shield from known attacks.
Abstract: Software security is of profound importance, since most attacks on software systems exploit vulnerabilities caused by poorly designed and developed software. Furthermore, the enforcement of security in software systems at the design phase can reduce the high cost and effort associated with the introduction of security during implementation. For this purpose, security patterns that offer security at the architectural level have been proposed, in analogy to the well-known design patterns. The main goal of this paper is to perform risk analysis of software systems based on the security patterns they contain. The first step is to determine to what extent specific security patterns shield against known attacks. This information is fed to a mathematical model based on fuzzy-set theory and fuzzy fault trees in order to compute the risk for each category of attack. The whole process has been automated using a methodology that extracts the risk of a software system by reading the class diagram of the system under study.
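A compact illustration of the fuzzy fault-tree step: attack likelihoods are represented as triangular fuzzy numbers, combined through AND/OR gates using the usual component-wise approximation, and defuzzified into a single risk figure. The tree, membership values, and independence assumption are invented for illustration; the paper's model and pattern-to-attack mappings are more elaborate.

```python
# Triangular fuzzy number (low, mode, high) for the likelihood that an
# attack succeeds against a component.  Values are invented; the product
# and complement rules are the usual component-wise approximations.
def fuzzy_and(*xs):
    """AND gate: all basic events must occur (independence assumed)."""
    out = (1.0, 1.0, 1.0)
    for a, b, c in xs:
        out = (out[0] * a, out[1] * b, out[2] * c)
    return out

def fuzzy_or(*xs):
    """OR gate: 1 - prod(1 - x), applied component-wise."""
    out = (1.0, 1.0, 1.0)
    for a, b, c in xs:
        out = (out[0] * (1 - a), out[1] * (1 - b), out[2] * (1 - c))
    return (1 - out[0], 1 - out[1], 1 - out[2])

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Top event "data disclosure": (weak input validation AND missing
# authorization check) OR unpatched component.
risk = fuzzy_or(
    fuzzy_and((0.2, 0.3, 0.4), (0.1, 0.2, 0.3)),
    (0.05, 0.1, 0.2),
)
print(round(defuzzify(risk), 3))
```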

77 citations


Journal ArticleDOI
TL;DR: A novel concept called "authentication through presence" is introduced; it can be used for several applications, including key establishment and broadcast authentication over an insecure radio channel, and is shown to be secure with respect to a realistic attacker model.
Abstract: Inspired by unidirectional error detecting codes that are used in situations where only one kind of bit error is possible (e.g., it is possible to change a bit "0" into a bit "1", but not the contrary), we propose integrity codes (I-codes) for a radio communication channel, which enable integrity protection of messages exchanged between entities that do not hold any mutual authentication material (i.e., public keys or shared secret keys). The construction of I-codes enables a sender to encode any message such that, if its integrity is violated in transmission over a radio channel, the receiver is able to detect it. In order to achieve this, we rely on the physical properties of the radio channel and on unidirectional error detecting codes. We analyze in detail the use of I-codes on a radio communication channel and we present their implementation on a wireless platform as a proof of concept. We further introduce a novel concept called "authentication through presence", whose broad applications include broadcast authentication, key establishment, and navigation signal protection. We perform a detailed analysis of the security of our coding scheme and we show that it is secure within a realistic attacker model.
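A toy version of the unidirectional-code idea: if the channel (or an attacker using on-off keyed radio) can turn a "0" into a "1" but not the reverse, encoding every bit as a complementary pair makes tampering detectable, because a valid codeword contains exactly one "1" per pair. This sketch covers only the coding layer; the paper's construction also relies on physical-layer signalling to enforce the unidirectional-error assumption.

```python
def encode(bits):
    """Encode each bit as a complementary pair: 1 -> '10', 0 -> '01'."""
    return "".join("10" if b else "01" for b in bits)

def decode(symbols):
    """Return the decoded bits, or None if integrity was violated.
    Under the unidirectional-error assumption (a 0 can become a 1, never
    the reverse), any modification produces an invalid '11' pair."""
    if len(symbols) % 2:
        return None
    bits = []
    for i in range(0, len(symbols), 2):
        pair = symbols[i:i + 2]
        if pair == "10":
            bits.append(1)
        elif pair == "01":
            bits.append(0)
        else:                      # '11' (or '00'): tampering detected
            return None
    return bits

codeword = encode([1, 0, 1, 1])
print(decode(codeword))                            # [1, 0, 1, 1]
tampered = codeword[:2] + "11" + codeword[4:]      # attacker adds energy to a '0' slot
print(decode(tampered))                            # None -> integrity violation
```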

47 citations


Journal ArticleDOI
TL;DR: An exploration approach centered on high-level simulation is developed, demonstrating how the large set of experiments enabled by the fast simulation engine can substantially improve the understanding and identification of weaknesses in cryptographic algorithm implementations.
Abstract: The design flow of a digital cryptographic device must take into account the evaluation of its security against attacks based on side-channel observation. The adoption of high-level countermeasures, as well as the verification of the feasibility of new attacks, presently requires the execution of time-consuming physical measurements on the prototype product or simulation at a low abstraction level. Starting from these assumptions, we developed an exploration approach centered on high-level simulation in order to evaluate the actual implementation of a cryptographic algorithm, be it software or hardware based. The simulation is performed within a unified tool based on SystemC that can model a software implementation running on a microprocessor-based architecture, a dedicated hardware implementation, or mixed software-hardware implementations with cycle-accurate resolution. Here we describe the tool and provide a large set of design explorations and characterizations based on actual implementations of the AES cryptographic algorithm, demonstrating how the large set of experiments enabled by the fast simulation engine can substantially improve the understanding and identification of weaknesses in cryptographic algorithm implementations.
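The kind of weakness such a simulator helps expose can be illustrated with a few lines of correlation power analysis on synthetic traces: leakage is modelled as the Hamming weight of a first-round S-box output plus Gaussian noise, and the correct key byte is the guess whose predictions correlate best with the traces. The random stand-in S-box, noise level, and leakage model are assumptions made to keep the sketch short; the paper's tool works at cycle-accurate SystemC level rather than on this idealized model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the AES S-box: a fixed random byte permutation (used here
# instead of pasting the real 256-entry table).  Both the simulated device
# and the attacker know it, just as both would know the AES S-box.
SBOX = rng.permutation(256)
HW = np.array([bin(x).count("1") for x in range(256)])

def simulate_traces(key_byte, n_traces=500, noise=1.0):
    """One leakage sample per trace: Hamming weight of the S-box output
    for one key byte, plus Gaussian noise (illustrative leakage model)."""
    plaintexts = rng.integers(0, 256, size=n_traces)
    leakage = HW[SBOX[plaintexts ^ key_byte]] + rng.normal(0, noise, n_traces)
    return plaintexts, leakage

def cpa_recover(plaintexts, leakage):
    """Return the key-byte guess whose predicted Hamming weights correlate
    best with the observed leakage (classic correlation power analysis)."""
    best_guess, best_corr = None, -1.0
    for guess in range(256):
        predictions = HW[SBOX[plaintexts ^ guess]]
        corr = abs(np.corrcoef(predictions, leakage)[0, 1])
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess

pts, traces = simulate_traces(key_byte=0x3C)
print(hex(cpa_recover(pts, traces)))      # expected: 0x3c
```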

34 citations


Journal ArticleDOI
TL;DR: This paper creates vulnerability signatures which are guaranteed to have zero false positives, and shows how to automate signature creation for any vulnerability that can be detected by a runtime monitor.
Abstract: In this paper, we explore the problem of creating vulnerability signatures. A vulnerability signature is based on a program vulnerability, and is not specific to any particular exploit. The advantage of vulnerability signatures is that their quality can be guaranteed. In particular, we create vulnerability signatures which are guaranteed to have zero false positives. We show how to automate signature creation for any vulnerability that can be detected by a runtime monitor. We provide a formal definition of a vulnerability signature, and investigate the computational complexity of creating and matching vulnerability signatures. We systematically explore the design space of vulnerability signatures. We also provide specific techniques for creating vulnerability signatures in a variety of language classes. In order to demonstrate our techniques, we have built a prototype system. Our experiments show that we can, using a single exploit, automatically generate a vulnerability signature as a regular expression, as a small program, or as a system of constraints. We demonstrate techniques for creating signatures of vulnerabilities which can be exploited via multiple program paths. Our results indicate that our approach is a viable option for signature generation, especially when guarantees are desired.
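The distinction between a vulnerability signature and an exploit signature can be shown with a toy protocol: the vulnerability signature matches every input that reaches the vulnerable condition (here, an over-long field), while the exploit signature matches only one known payload. The "protocol", buffer size, and regular expressions are invented for illustration and are not the paper's generated signatures.

```python
import re

BUFFER_SIZE = 64   # hypothetical vulnerable parser copies the USER argument
                   # into a 64-byte buffer without a length check

# Exploit signature: matches one specific attack payload (brittle).
EXPLOIT_SIG = re.compile(rb"USER " + re.escape(b"\x90" * 70 + b"\xcc\xcc"))

# Vulnerability signature: matches *any* input that triggers the overflow
# condition, regardless of payload bytes (for this toy parser it has no
# false positives by construction).
VULN_SIG = re.compile(rb"USER (.{%d,})\r?\n" % (BUFFER_SIZE + 1))

def classify(message: bytes) -> str:
    if VULN_SIG.search(message):
        return "exploits the vulnerability"
    return "benign for this vulnerability"

print(classify(b"USER alice\n"))
print(classify(b"USER " + b"A" * 200 + b"\n"))          # polymorphic variant caught
print(bool(EXPLOIT_SIG.search(b"USER " + b"A" * 200)))  # missed by the exploit signature
```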

Journal ArticleDOI
TL;DR: This work proposes and analyzes the just-enough redundancy transmission (JERT) scheme that uses the powerful maximum-distance separable (MDS) codes to address the problem of random key predistribution for wireless sensor networks.
Abstract: In random key predistribution techniques for wireless sensor networks, a relatively small number of keys are randomly chosen from a large key pool and are loaded on the sensors prior to deployment. After deployment, each sensor tries to find a common key shared with each of its neighbors to establish a link key to protect the wireless communication between them. One intrinsic disadvantage of such techniques is that some neighboring sensors do not share any common key. In order to establish a link key among these neighbors, a multihop secure path may be used to deliver the secret. Unfortunately, the possibility of sensors being compromised on the path may render such an establishment process insecure. In this work, we propose and analyze the just-enough redundancy transmission (JERT) scheme that uses the powerful maximum-distance separable (MDS) codes to address the problem. In the JERT scheme, the secret link key is encoded in an (n, k) MDS code and transmitted through multiple multihop paths. To reduce the total information that needs to be transmitted, the redundant symbols of the MDS codes are transmitted only if the destination fails to decode the secret. The JERT scheme is demonstrated to be efficient and resilient against node capture. One salient feature of the JERT scheme is its flexibility in trading transmission overhead for lower information disclosure.
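A small illustration of the (n, k) MDS idea behind JERT, using Reed-Solomon-style polynomial evaluation over a prime field: the secret is split into k symbols, any k of the n codeword symbols reconstruct it, and the redundant symbols can be held back until the receiver reports missing shares. The field size, path model, and retry logic are simplified assumptions, not the paper's exact construction.

```python
P = 257  # prime field, large enough for byte-valued symbols (toy parameter)

def lagrange_eval(points, x):
    """Evaluate the unique degree-(k-1) polynomial through `points` at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(secret, n):
    """Systematic (n, k) MDS code: symbols at x = 0..k-1 are the secret itself,
    symbols at x = k..n-1 are redundancy.  Any k symbols reconstruct the secret."""
    k = len(secret)
    base = list(enumerate(secret))                       # (x, value) for x = 0..k-1
    redundancy = [(x, lagrange_eval(base, x)) for x in range(k, n)]
    return base + redundancy

def decode(received, k):
    """Reconstruct the k secret symbols from any k received (x, value) pairs."""
    assert len(received) >= k
    pts = received[:k]
    return [lagrange_eval(pts, x) for x in range(k)]

# JERT-style flow (simplified): send the k systematic symbols over k paths
# first; transmit redundancy only if some path fails to deliver.
secret = [42, 7, 199, 23]                # k = 4 key symbols
codeword = encode(secret, n=6)
delivered = [codeword[0], codeword[2], codeword[3], codeword[4]]  # one path lost
print(decode(delivered, k=4) == secret)  # True
```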

Journal ArticleDOI
TL;DR: A generalized ring signature scheme and a generalized multi-signer ring signature scheme based on the original ElGamal signature scheme are proposed; they achieve unconditional signer ambiguity and are secure against adaptive chosen-message attacks in the random oracle model.
Abstract: Ring signatures were first introduced in 2001. In a ring signature, instead of revealing the actual identity of the message signer, a set of possible signers is specified. The verifier can be convinced that the signature was indeed generated by one of the ring members; however, she is unable to tell which member actually produced it. In this paper, we propose a generalized ring signature scheme and a generalized multi-signer ring signature scheme based on the original ElGamal signature scheme. The proposed ring signature can achieve unconditional signer ambiguity and is secure against adaptive chosen-message attacks in the random oracle model. Compared to ring signatures based on the RSA algorithm, the proposed generalized ring signature scheme has three advantages: (1) all ring members can share the same prime number and all operations can be performed in the same domain; (2) by combining with multi-signatures, we can develop generalized multi-signer ring signature schemes to enforce cross-organizational involvement in message leaking, which may result in a higher level of confidence or broader coverage of the message source; and (3) the proposed ring signature is a convertible ring signature, enabling the actual message signer to prove to a verifier that only she is capable of generating the ring signature.
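For flavour, here is a textbook discrete-log ring signature in the Abe-Ohkubo-Suzuki style over a deliberately tiny group: it shows the ring-closure idea the verifier checks (the chain of challenges must close), but it is not the paper's ElGamal-based construction, and the toy parameters are far too small for real use.

```python
import hashlib
import random

# Toy Schnorr group (illustration only, far too small for real use):
# p = 2q + 1, and g has prime order q in Z_p*.
P, Q, G = 23, 11, 4

def H(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen(rng):
    x = rng.randrange(1, Q)
    return x, pow(G, x, P)          # (secret key, public key)

def ring_sign(msg, pubs, signer_idx, signer_secret, rng):
    """AOS-style ring signature: challenges chain around the ring and the
    real signer closes the loop with their secret key."""
    n = len(pubs)
    e = [None] * n
    s = [rng.randrange(Q) for _ in range(n)]
    k = rng.randrange(1, Q)
    e[(signer_idx + 1) % n] = H(msg, pow(G, k, P))
    i = (signer_idx + 1) % n
    while i != signer_idx:
        e[(i + 1) % n] = H(msg, pow(G, s[i], P) * pow(pubs[i], e[i], P) % P)
        i = (i + 1) % n
    s[signer_idx] = (k - signer_secret * e[signer_idx]) % Q
    return e[0], s

def ring_verify(msg, pubs, sig):
    e0, s = sig
    e = e0
    for i in range(len(pubs)):
        e = H(msg, pow(G, s[i], P) * pow(pubs[i], e, P) % P)
    return e == e0   # the chain of challenges must close

rng = random.Random(7)
keys = [keygen(rng) for _ in range(3)]
pubs = [pk for _, pk in keys]
sig = ring_sign("leak this", pubs, signer_idx=1, signer_secret=keys[1][0], rng=rng)
print(ring_verify("leak this", pubs, sig))   # True
```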

Journal ArticleDOI
TL;DR: This work provides a precise termination condition for the PPM algorithm and names the new algorithm the rectified PPM (RPPM) algorithm, which guarantees the correctness of the constructed attack graph under different probabilities that a router marks the attack packets.
Abstract: The probabilistic packet marking (PPM) algorithm is a promising way to discover the Internet map or an attack graph that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination condition, the attack graph constructed by the PPM algorithm would be wrong. In this work, we provide a precise termination condition for the PPM algorithm and name the new algorithm the rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm is that when the algorithm terminates, the algorithm guarantees that the constructed attack graph is correct, with a specified level of confidence. We carry out simulations on the RPPM algorithm and show that the RPPM algorithm can guarantee the correctness of the constructed attack graph under 1) different probabilities that a router marks the attack packets and 2) different structures of the network graph. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm.
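A simplified simulation of the underlying marking process: each router on the attack path overwrites the mark with probability p, the victim collects marks, and a coupon-collector-style union bound gives a stopping rule with a chosen confidence. The stopping rule below only illustrates the "terminate when the graph is complete with high probability" idea; it is not the paper's exact RPPM termination condition.

```python
import math
import random

def send_packet(path, p, rng):
    """Each router on the path overwrites the single mark field with
    probability p; the victim sees the last router that marked (or None)."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def packets_for_confidence(path_len, p, confidence=0.99):
    """Union-bound, coupon-collector-style stopping rule (illustrative only):
    after this many packets, every router on the path has appeared at least
    once with probability >= confidence."""
    q_min = p * (1 - p) ** (path_len - 1)      # rarest router's survival prob.
    return math.ceil(math.log((1 - confidence) / path_len) / math.log(1 - q_min))

rng = random.Random(3)
path = [f"R{i}" for i in range(1, 11)]          # 10-hop attack path
budget = packets_for_confidence(len(path), p=0.1)
seen = set()
for _ in range(budget):
    mark = send_packet(path, 0.1, rng)
    if mark:
        seen.add(mark)
print(budget, len(seen) == len(path))           # graph complete with high probability
```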

Journal ArticleDOI
TL;DR: This work introduces two strategies toward realizing low-cost ScPs: a decrypt-only-when-necessary (DOWN) policy, and authentication strategies that employ only symmetric cryptographic primitives, based on novel ID-based key predistribution schemes that demand very low operational complexity from the ScP and can take good advantage of the DOWN policy.
Abstract: Trustworthy computing modules like secure coprocessors (ScP) are already in extensive use today, albeit limited predominantly to scenarios where constraints on cost are not a serious limiting factor. However, inexpensive trustworthy computers are required for many evolving application scenarios. The problem of realizing inexpensive ScPs for large-scale networks consisting of low-complexity devices has not received adequate consideration thus far. We introduce two strategies toward realizing low-cost ScPs. The first is the decrypt only when necessary (DOWN) policy, which can substantially improve the ability of low-cost ScPs to protect their secrets. The DOWN policy relies on the ability to operate with fractional parts of secrets. Taking full advantage of the DOWN policy requires consideration of the nature of computations performed with secrets and even the mechanisms employed for distribution of secrets. We discuss the feasibility of extending the DOWN policy to various asymmetric and symmetric cryptographic primitives. The second is cryptographic authentication strategies which employ only symmetric cryptographic primitives, based on novel ID-based key predistribution schemes that demand very low complexity of operations to be performed by the ScP and can take good advantage of the DOWN policy.
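As one concrete member of the class of symmetric-key, ID-based key predistribution schemes the second strategy builds on, the classic Blundo et al. symmetric-polynomial scheme is sketched below: each node stores a univariate share of a secret bivariate polynomial and any two nodes derive a pairwise key from each other's IDs alone. This illustrates the primitive class only, not the paper's specific schemes or the DOWN policy mechanics.

```python
import random

P = (1 << 61) - 1          # prime modulus (a Mersenne prime), toy parameter

def setup(t, rng):
    """Trusted authority picks a random symmetric bivariate polynomial
    f(x, y) = sum a_ij x^i y^j with a_ij = a_ji, secure against coalitions
    of up to t compromised nodes."""
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(P)
    return a

def node_share(a, node_id):
    """Material loaded on a node: the univariate polynomial g(y) = f(node_id, y),
    stored as its coefficient list."""
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(t + 1)) % P
            for j in range(t + 1)]

def pairwise_key(share, peer_id):
    """Evaluate the stored share at the peer's ID: f(my_id, peer_id)."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(share)) % P

rng = random.Random(11)
poly = setup(t=3, rng=rng)
alice, bob = node_share(poly, 1001), node_share(poly, 2002)
print(pairwise_key(alice, 2002) == pairwise_key(bob, 1001))   # True: same key
```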

Journal ArticleDOI
TL;DR: This paper provides the first practical solution for estimating exact bounds in two-dimensional irregular data cubes (that is, data cubes in which certain cell values are known to a snooper) and proposes a new approach to improve the classic Frechet bounds for any high-dimensional data cube in the most general case.
Abstract: The fundamental problem for inference control in data cubes is how to efficiently calculate the lower and upper bounds for each cell value given the aggregations of cell values over multiple dimensions. In this paper, we provide the first practical solution for estimating exact bounds in two-dimensional irregular data cubes (that is, data cubes in which certain cell values are known to a snooper). Our results imply that the exact bounds cannot be obtained by a direct application of the Frechet bounds in some cases. We then propose a new approach to improve the classic Frechet bounds for any high-dimensional data cube in the most general case. The proposed approach improves upon the Frechet bounds in the sense that it gives bounds that are at least as tight as those computed by Frechet yet is simpler in terms of time complexity. Based on our solutions to the fundamental problem, we discuss various security applications such as privacy protection of released data, fine-grained access control, and auditing, and identify some future research directions.
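For the two-dimensional case, the classical Frechet bounds that the paper starts from fit in a few lines: given a cell's row marginal, column marginal, and the grand total of a nonnegative table, the cell is at least max(0, row + column - total) and at most min(row, column). The snippet reproduces only these classical bounds; the paper's contribution is the tighter analysis when some cells are already known to a snooper.

```python
def frechet_bounds(row_sum, col_sum, total):
    """Classical Frechet bounds on a cell of a nonnegative two-dimensional
    table, given its row marginal, column marginal, and grand total."""
    lower = max(0, row_sum + col_sum - total)
    upper = min(row_sum, col_sum)
    return lower, upper

# Example: a released cube with row total 40, column total 15, and grand
# total 50 pins the cell into a narrow interval, which is the basis of
# inference-control auditing.
print(frechet_bounds(40, 15, 50))   # (5, 15)
```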

Journal ArticleDOI
TL;DR: A dynamic MPLS/GMPLS path management strategy is presented in which the path recovery mechanism can rapidly find an optimal backup path that satisfies the resilience constraints under multiple link failures.
Abstract: Most previous research on MPLS/GMPLS recovery management has focused on efficient routing or signaling methods for recovery from single failures. However, multiple simultaneous failures may occur in the large-scale, complex virtual paths of MPLS/GMPLS networks. In this paper, we present a dynamic MPLS/GMPLS path management strategy in which the path recovery mechanism can rapidly find an optimal backup path that satisfies the resilience constraints under multiple link failures. We derive the conditions for testing the existence of a resilience-guaranteed backup path, and develop a decomposition theorem and a backup path construction algorithm for the fast restoration of resilience-guaranteed backup paths for a primary path with an arbitrary configuration. Finally, simulation results are presented to evaluate the performance of the proposed approach.
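A minimal version of the recovery step, using networkx as a stand-in for the paper's construction algorithm: remove the primary path's links and search for the cheapest alternative whose hop count stays within a simple resilience bound. The topology, weights, and hop-count constraint are invented assumptions; the paper's decomposition theorem handles far more general primary-path configurations and failure patterns.

```python
import networkx as nx

def backup_path(graph, primary_path, src, dst, max_hops):
    """Find a backup path that avoids every link of the primary path and
    respects a simple resilience constraint (a hop-count bound)."""
    g = graph.copy()
    g.remove_edges_from(zip(primary_path, primary_path[1:]))
    try:
        path = nx.shortest_path(g, src, dst, weight="weight")
    except nx.NetworkXNoPath:
        return None
    return path if len(path) - 1 <= max_hops else None

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("C", "D", 1),   # primary route
    ("A", "E", 2), ("E", "F", 2), ("F", "D", 2),   # disjoint detour
    ("B", "F", 5),
])
print(backup_path(G, ["A", "B", "C", "D"], "A", "D", max_hops=4))  # ['A', 'E', 'F', 'D']
```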

Journal ArticleDOI
TL;DR: This paper presents the mechanisms for the temporal partitioning of communication resources in the dependable embedded components and systems (DECOS) integrated architecture and uses an experimental framework with an implementation of virtual networks on top of a time division multiple access (TDMA)-controlled Ethernet network.
Abstract: Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better coordination of application subsystems compared to federated systems. An integrated architecture shares the system's communication resources by using a single physical network for exchanging messages of multiple application subsystems. Similarly, the computational resources (for example, memory and CPU time) of each node computer are available to multiple software components. In order to support a seamless system integration without unintended side effects in such an integrated architecture, it is important to ensure that the software components do not interfere through the use of these shared resources. For this reason, the DECOS integrated architecture encapsulates application subsystems and their constituting software components. At the level of the communication system, virtual networks on top of an underlying time-triggered physical network exhibit predefined temporal properties (that is, bandwidth, latency, and latency jitter). Due to encapsulation, the temporal properties of messages sent by a software component are independent from the behavior of other software components, in particular from those within other application subsystems. This paper presents the mechanisms for the temporal partitioning of communication resources in the dependable embedded components and systems (DECOS) integrated architecture. Furthermore, experimental evidence is provided in order to demonstrate that the messages sent by one software component do not affect the temporal properties of messages exchanged by other software components. Rigid temporal partitioning is achievable while at the same time meeting the performance requirements imposed by present-day automotive applications and those envisioned for the future (for example, X-by-wire). For this purpose, we use an experimental framework with an implementation of virtual networks on top of a time division multiple access (TDMA)-controlled Ethernet network.
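The temporal-partitioning guarantee can be pictured with a static TDMA table: each virtual network owns fixed slots in the communication round, so its bandwidth share and worst-case queuing latency follow directly from the schedule and cannot be disturbed by other software components. The slot duration, round layout, and virtual-network names below are invented numbers, not DECOS configuration data.

```python
SLOT_US = 250            # duration of one TDMA slot (illustrative)
ROUND = ["VN_brake", "VN_brake", "VN_body", "VN_multimedia",
         "VN_brake", "VN_body", "idle", "VN_multimedia"]   # one communication round

def bandwidth_share(vn):
    """Fraction of the round owned by a virtual network."""
    return ROUND.count(vn) / len(ROUND)

def worst_case_latency_us(vn):
    """Worst case: a message becomes ready just after one of the virtual
    network's slots starts, so it waits until the VN's next slot."""
    slots = [i for i, owner in enumerate(ROUND) if owner == vn]
    gaps = []
    for idx, slot in enumerate(slots):
        nxt = slots[(idx + 1) % len(slots)] + (len(ROUND) if idx + 1 == len(slots) else 0)
        gaps.append(nxt - slot)
    return max(gaps) * SLOT_US

for vn in ("VN_brake", "VN_body", "VN_multimedia"):
    print(vn, bandwidth_share(vn), worst_case_latency_us(vn))
```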