
Showing papers in "ACM Transactions on Information and System Security in 2004"


Journal ArticleDOI
TL;DR: This paper introduces the family of UCONABC models for usage control (UCON), which integrate Authorizations (A), oBligations (B), and Conditions (C), and addresses the essence of UCON, leaving administration, delegation, and other important but second-order issues for later work.
Abstract: In this paper, we introduce the family of UCONABC models for usage control (UCON), which integrate Authorizations (A), oBligations (B), and Conditions (C). We call these core models because they address the essence of UCON, leaving administration, delegation, and other important but second-order issues for later work. The term usage control is a generalization of access control to cover authorizations, obligations, conditions, continuity (ongoing controls), and mutability. Traditionally, access control has dealt only with authorization decisions on users' access to target resources. Obligations are requirements that have to be fulfilled by obligation subjects for allowing access. Conditions are subject and object independent environmental or system requirements that have to be satisfied for access. In today's highly dynamic, distributed environment, obligations and conditions are also crucial decision factors for richer and finer controls on usage of digital resources. Although they have been discussed occasionally in recent literature, most authors have been motivated by specific target problems and thereby limited in their approaches. The UCONABC model integrates these diverse concepts in a unified framework. Traditional authorization decisions are generally made at the time of requests but hardly recognize ongoing controls for relatively long-lived access or for immediate revocation. Moreover, mutability issues that deal with updates on related subject or object attributes as a consequence of access have not been systematically studied. Unlike other studies that have targeted specific problems or issues, the UCONABC model seeks to enrich and refine the access control discipline in its definition and scope. UCONABC covers traditional access controls such as mandatory, discretionary, and role-based access control. Digital rights management and other modern access controls are also covered. UCONABC lays the foundation for next-generation access controls that are required for today's real-world information and systems security. This paper articulates the core of this new area of UCON and develops several detailed models.

983 citations
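
A minimal Python sketch of the UCONABC decision factors. The attribute names and predicates below are illustrative, not the paper's formal notation; the point is how a single usage decision combines an authorization over subject/object attributes, an obligation, a condition, and attribute mutability:

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    attrs: dict = field(default_factory=dict)   # e.g. {"credit": 10}

@dataclass
class Obj:
    attrs: dict = field(default_factory=dict)   # e.g. {"cost": 3}

def authorized(s: Subject, o: Obj) -> bool:
    # Authorization (A): a predicate over subject and object attributes.
    return s.attrs.get("credit", 0) >= o.attrs.get("cost", 0)

def obligation_fulfilled(s: Subject) -> bool:
    # oBligation (B): an action the subject must have performed,
    # e.g. accepting a license agreement.
    return s.attrs.get("license_accepted", False)

def condition_holds(env: dict) -> bool:
    # Condition (C): subject- and object-independent system state.
    return env.get("business_hours", False)

def try_usage(s: Subject, o: Obj, env: dict) -> bool:
    if authorized(s, o) and obligation_fulfilled(s) and condition_holds(env):
        s.attrs["credit"] -= o.attrs["cost"]   # mutability: access updates an attribute
        return True
    return False

s = Subject({"credit": 10, "license_accepted": True})
print(try_usage(s, Obj({"cost": 3}), {"business_hours": True}), s.attrs["credit"])  # True 7
```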


Journal ArticleDOI
TL;DR: This work investigates a novel group key agreement approach which blends key trees with Diffie-Hellman key exchange and yields a secure protocol suite called Tree-based Group Diffie-Hellman (TGDH) that is both simple and fault-tolerant.
Abstract: Secure and reliable group communication is an active area of research. Its popularity is fueled by the growing importance of group-oriented and collaborative applications. The central research challenge is secure and efficient group key management. While centralized methods are often appropriate for key distribution in large multicast-style groups, many collaborative group settings require distributed key agreement techniques. This work investigates a novel group key agreement approach which blends key trees with Diffie-Hellman key exchange. It yields a secure protocol suite called Tree-based Group Diffie-Hellman (TGDH) that is both simple and fault-tolerant. Moreover, the efficiency of TGDH appreciably surpasses that of prior art.

521 citations
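
A toy sketch of the blended structure (parameters are illustrative; a real deployment needs a proper Diffie-Hellman group and per-member blinded-key exchange): each leaf holds a member's secret, and each internal node's secret is the Diffie-Hellman combination of its children, folded up to the root group key:

```python
# Toy TGDH-style key tree. In the real protocol a member never sees another
# member's secret: it combines its own secret with the *blinded* key
# g^x of the sibling subtree, which is exactly what node_secret() computes.
P = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, prime; illustration only
G = 5

def node_secret(left_secret: int, right_secret: int) -> int:
    blinded_left = pow(G, left_secret, P)     # public value g^l
    return pow(blinded_left, right_secret, P) # shared secret g^(l*r)

def group_key(member_secrets):
    # Fold leaf secrets pairwise up a key tree; each node's secret becomes
    # the exponent at the next level, until the root group key remains.
    level = list(member_secrets)
    while len(level) > 1:
        nxt = [node_secret(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

print(hex(group_key([1234, 5678, 9012, 3456])))
```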


Journal ArticleDOI
TL;DR: To handle large collections of alerts, this paper presents a set of interactive analysis utilities that facilitate the investigation of large sets of intrusion alerts, together with a toolkit named TIAA that provides system support for interactive intrusion analysis.
Abstract: Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies, and raise alerts independently, though there may be logical connections between them. In situations where there are intensive attacks, not only will actual alerts be mixed with false alerts, but the amount of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. This paper presents a sequence of techniques to address this issue. The first technique constructs attack scenarios by correlating alerts on the basis of prerequisites and consequences of attacks. Intuitively, the prerequisite of an attack is the necessary condition for the attack to be successful, while the consequence of an attack is the possible outcome of the attack. Based on the prerequisites and consequences of different types of attacks, the proposed method correlates alerts by (partially) matching the consequences of some prior alerts with the prerequisites of some later ones. Moreover, to handle large collections of alerts, this paper presents a set of interactive analysis utilities aimed at facilitating the investigation of large sets of intrusion alerts. This paper also presents the development of a toolkit named TIAA, which provides system support for interactive intrusion analysis. This paper finally reports the experiments conducted to validate the proposed techniques with the 2000 DARPA intrusion detection scenario-specific datasets, and the data collected at the DEFCON 8 Capture the Flag event.

274 citations
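
The correlation step is easy to make concrete. In the sketch below the knowledge base of prerequisites and consequences is invented for illustration; alerts are linked whenever a consequence of an earlier alert intersects a prerequisite of a later one:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    ts: int
    kind: str

# Hypothetical knowledge base: attack type -> (prerequisites, consequences).
KB = {
    "IPSweep":        (set(),                  {"host_discovered"}),
    "SadmindPing":    ({"host_discovered"},    {"vuln_service_found"}),
    "SadmindExploit": ({"vuln_service_found"}, {"root_access"}),
}

def correlate(alerts):
    """Link alert a -> b when a consequence of a (partially) matches a
    prerequisite of b and a occurred earlier."""
    return [(a, b) for a in alerts for b in alerts
            if a.ts < b.ts and KB[a.kind][1] & KB[b.kind][0]]

scenario = [Alert(1, "IPSweep"), Alert(2, "SadmindPing"), Alert(3, "SadmindExploit")]
for a, b in correlate(scenario):
    print(f"{a.kind} -> {b.kind}")   # reconstructed attack scenario edges
```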


Journal ArticleDOI
TL;DR: It is proved that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to the attack; this result places an upper bound on how long existing protocols can maintain anonymity in the face of the attacks described.
Abstract: There have been a number of protocols proposed for anonymous network communication. In this paper, we investigate attacks by corrupt group members that degrade the anonymity of each protocol over time. We prove that when a particular initiator continues communication with a particular responder across path reformations, existing protocols are subject to the attack. We use this result to place an upper bound on how long existing protocols, including Crowds, Onion Routing, Hordes, Web Mixes, and DC-Net, can maintain anonymity in the face of the attacks described. This provides a basis for comparing these protocols against each other. Our results show that fully connected DC-Net is the most resilient to these attacks, but it suffers from scalability issues that keep anonymity group sizes small. We also show through simulation that the underlying topology of the DC-Net affects the resilience of the protocol: as the number of neighbors a node has increases, the strength of the protocol increases, at the cost of higher communication overhead.

228 citations
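
A Monte Carlo sketch of the degradation (a toy Crowds-like model of my own, not the paper's analysis): on every path reformation each corrupt relay logs its immediate predecessor, and over many reformations the true initiator tops the log:

```python
import random

def simulate(n=20, c=4, pf=0.66, rounds=300, seed=1):
    """n members, of which members 0..c-1 collude; pf is the forwarding
    probability. Each round a fresh path is formed; the first corrupt node
    on it records who handed it the message."""
    rng = random.Random(seed)
    corrupt = set(range(c))
    initiator = n - 1
    counts = [0] * n
    for _ in range(rounds):
        prev, cur = initiator, rng.randrange(n)
        while True:
            if cur in corrupt:
                counts[prev] += 1          # corrupt node logs its predecessor
                break
            if rng.random() > pf:          # path ends at the responder
                break
            prev, cur = cur, rng.randrange(n)
    suspect = max(range(n), key=counts.__getitem__)
    return suspect, counts[suspect], sorted(counts)[-2]

# The highest count almost surely belongs to the initiator: it precedes a
# corrupt node in ~c/n of all rounds, while honest relays appear only rarely.
print(simulate())
```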


Journal ArticleDOI
TL;DR: A thorough performance evaluation of five notable distributed key management techniques integrated with a reliable group communication system, together with an in-depth comparison and analysis of the five techniques, is presented based on experimental results obtained in actual local- and wide-area networks.
Abstract: Group key agreement is a fundamental building block for secure peer group communication systems. Several group key management techniques were proposed in the last decade, all assuming the existence of an underlying group communication infrastructure to provide reliable and ordered message delivery as well as group membership information. Despite analysis, implementation, and deployment of some of these techniques, the actual costs associated with group key management have been poorly understood so far. This resulted in an undesirable tendency: on the one hand, adopting suboptimal security for reliable group communication, while, on the other hand, constructing excessively costly group key management protocols.This paper presents a thorough performance evaluation of five notable distributed key management techniques (for collaborative peer groups) integrated with a reliable group communication system. An in-depth comparison and analysis of the five techniques is presented based on experimental results obtained in actual local- and wide-area networks. The extensive performance measurement experiments conducted for all methods offer insights into their scalability and practicality. Furthermore, our analysis of the experimental results highlights several observations that are not obvious from the theoretical analysis.

168 citations


Journal ArticleDOI
TL;DR: Just Fast Keying (JFK) is a new key-exchange protocol, primarily designed for use in the IP security architecture, that is simple, efficient, and secure; a proof of the latter property is sketched.
Abstract: We describe Just Fast Keying (JFK), a new key-exchange protocol, primarily designed for use in the IP security architecture. It is simple, efficient, and secure; we sketch a proof of the latter property. JFK also has a number of novel engineering parameters that permit a variety of tradeoffs, most notably the ability to balance the need for perfect forward secrecy against susceptibility to denial-of-service attacks.

159 citations
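
JFK's exact message formats are not reproduced in the abstract, so the sketch below (all names mine) only illustrates one engineering idea the protocol is known for: a responder that stays stateless, and defers expensive Diffie-Hellman work, until the initiator echoes back an authenticator proving reachability at its claimed address:

```python
import hmac, hashlib, os, time

SERVER_KEY = os.urandom(32)   # periodically rotated in JFK-style designs

def make_authenticator(initiator_nonce: bytes, initiator_addr: str) -> bytes:
    # The responder keeps NO per-connection state: the "cookie" binds the
    # exchange to the initiator's nonce and address under a private key.
    epoch = str(int(time.time()) // 60).encode()
    msg = initiator_nonce + initiator_addr.encode() + epoch
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

def check_authenticator(tok: bytes, initiator_nonce: bytes,
                        initiator_addr: str) -> bool:
    # Only after this check does the responder do expensive work
    # (e.g. Diffie-Hellman); spoofed floods never get this far.
    for skew in (0, 1):  # tolerate a recent epoch boundary
        epoch = str(int(time.time()) // 60 - skew).encode()
        msg = initiator_nonce + initiator_addr.encode() + epoch
        if hmac.compare_digest(tok, hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()):
            return True
    return False

nonce = os.urandom(16)
tok = make_authenticator(nonce, "192.0.2.7")
print(check_authenticator(tok, nonce, "192.0.2.7"))  # True
```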


Journal ArticleDOI
TL;DR: This work presents an engineering process for context constraints that is based on goal-oriented requirements engineering techniques, and describes how the design and implementation of an existing RBAC service was extended to enable the enforcement of context constraints.
Abstract: We present an approach that uses special purpose role-based access control (RBAC) constraints to base certain access control decisions on context information. In our approach a context constraint is defined as a dynamic RBAC constraint that checks the actual values of one or more contextual attributes for predefined conditions. If these conditions are satisfied, the corresponding access request can be permitted. Accordingly, a conditional permission is an RBAC permission that is constrained by one or more context constraints. We present an engineering process for context constraints that is based on goal-oriented requirements engineering techniques, and describe how we extended the design and implementation of an existing RBAC service to enable the enforcement of context constraints. With our approach we aim to preserve the advantages of RBAC and offer an additional means for the definition and enforcement of fine-grained context-dependent access control policies.

145 citations
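
An illustrative sketch of a conditional permission (constraint and attribute names are mine): an RBAC permission is granted only if every attached context constraint holds over the current contextual attributes:

```python
from datetime import datetime, time

def office_hours(ctx: dict) -> bool:
    return time(8) <= ctx["now"].time() <= time(18)

def on_site(ctx: dict) -> bool:
    return ctx.get("location") == "campus"

PERMISSIONS = {
    # role -> list of (operation, object, context constraints)
    "physician": [("read", "patient_record", [office_hours, on_site])],
}

def access(role: str, op: str, obj: str, ctx: dict) -> bool:
    # A conditional permission applies only when all its constraints are met.
    for p_op, p_obj, constraints in PERMISSIONS.get(role, []):
        if (p_op, p_obj) == (op, obj) and all(c(ctx) for c in constraints):
            return True
    return False

ctx = {"now": datetime(2004, 5, 3, 10, 30), "location": "campus"}
print(access("physician", "read", "patient_record", ctx))  # True
```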


Journal ArticleDOI
TL;DR: It is concluded that the 802.11b WEP standard is completely insecure, and recommendations on how this vulnerability could be mitigated and repaired are provided.
Abstract: In this paper, we present a practical key recovery attack on WEP, the link-layer security protocol for 802.11b wireless networks. The attack is based on a partial key exposure vulnerability in the RC4 stream cipher discovered by Fluhrer, Mantin, and Shamir. This paper describes how to apply this flaw to breaking WEP, our implementation of the attack, and optimizations that can be used to reduce the number of packets required for the attack. We conclude that the 802.11b WEP standard is completely insecure, and we provide recommendations on how this vulnerability could be mitigated and repaired.

133 citations
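
For context, the flaw is structural: WEP builds each packet's RC4 key by prepending the cleartext IV to the shared secret, so an attacker knows the first key bytes fed into the key scheduling algorithm. A minimal sketch of that setup (toy key values, no attack code):

```python
# RC4 key scheduling as WEP uses it. The 3-byte IV is transmitted in the
# clear with every packet, so the first bytes of the per-packet RC4 key
# are attacker-known -- the weakness the Fluhrer-Mantin-Shamir attack exploits.
def rc4_ksa(key: bytes) -> list:
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

iv = bytes([3, 255, 7])            # public, sent with the packet
secret = b"\x01\x02\x03\x04\x05"   # long-lived shared WEP key
state = rc4_ksa(iv + secret)       # per-packet key = IV || secret
print(state[:4])
```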


Journal ArticleDOI
TL;DR: New simple schemes for verifiable encryption of digital signatures that make use of a trusted third party (TTP), but in an optimistic sense: the TTP takes part in the protocol only if one user cheats or simply crashes.
Abstract: This paper presents new simple schemes for verifiable encryption of digital signatures. We make use of a trusted third party (TTP), but in an optimistic sense; that is, the TTP takes part in the protocol only if one user cheats or simply crashes. Our schemes can be used as primitives to build efficient fair exchange and certified e-mail protocols.

110 citations


Journal ArticleDOI
TL;DR: The secure shell (SSH) protocol is one of the most popular cryptographic protocols on the Internet; unfortunately, its current authenticated encryption mechanism is insecure, and this paper proposes provably secure fixes that require relatively little modification to the protocol.
Abstract: The secure shell (SSH) protocol is one of the most popular cryptographic protocols on the Internet. Unfortunately, the current SSH authenticated encryption mechanism is insecure. In this paper, we propose several fixes to the SSH protocol and, using techniques from modern cryptography, we prove that our modified versions of SSH meet strong new chosen-ciphertext privacy and integrity requirements. Furthermore, our proposed fixes will require relatively little modification to the SSH protocol and to SSH implementations. We believe that our new notions of privacy and integrity for encryption schemes with stateful decryption algorithms will be of independent interest.

108 citations
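
The following sketch is not the paper's construction; it illustrates, with AES-GCM from the `cryptography` package, the property class the paper formalizes: authenticated encryption whose decryption is stateful, so a replayed, dropped, or reordered ciphertext fails to authenticate:

```python
# Requires: pip install cryptography
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class Channel:
    """One direction of a session. Real protocols use a separate key per
    direction; this toy sends one way only, so nonces never repeat."""
    def __init__(self, key: bytes):
        self.aead = AESGCM(key)
        self.send_seq = 0
        self.recv_seq = 0   # receiver state: expected sequence number

    def _nonce(self, seq: int) -> bytes:
        return seq.to_bytes(12, "big")

    def seal(self, plaintext: bytes) -> bytes:
        ct = self.aead.encrypt(self._nonce(self.send_seq), plaintext, None)
        self.send_seq += 1
        return ct

    def open(self, ciphertext: bytes) -> bytes:
        # Decryption is keyed to the receiver's own counter, so anything
        # out of order raises an authentication error instead of decrypting.
        pt = self.aead.decrypt(self._nonce(self.recv_seq), ciphertext, None)
        self.recv_seq += 1
        return pt

key = AESGCM.generate_key(bit_length=128)
sender, receiver = Channel(key), Channel(key)
assert receiver.open(sender.seal(b"ls -l")) == b"ls -l"
```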


Journal ArticleDOI
TL;DR: This paper addresses the identifier ownership problem by using characteristics of Statistical Uniqueness and Cryptographic Verifiability (SUCV) of certain entities which this document calls SUCV Identifiers and Addresses, or, alternatively, Crypto-based Identifiers.
Abstract: This paper addresses the identifier ownership problem. It does so by using characteristics of Statistical Uniqueness and Cryptographic Verifiability (SUCV) of certain entities which this document calls SUCV Identifiers and Addresses, or, alternatively, Crypto-based Identifiers. Their characteristics allow them to severely limit certain classes of denial-of-service attacks and hijacking attacks. SUCV addresses are particularly applicable to solve the address ownership problem that hinders mechanisms like Binding Updates in Mobile IPv6.
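
A toy sketch of the crypto-based identifier idea (the key encoding and truncation details here are illustrative, not the SUCV specification): the interface-identifier half of an IPv6 address is derived from a hash of the node's public key, so only the key holder can sign Binding Updates for that address:

```python
import hashlib

def sucv_interface_id(public_key_bytes: bytes) -> str:
    # Hash the public key and keep 64 bits as the IPv6 interface identifier.
    digest = hashlib.sha1(public_key_bytes).digest()
    iid = digest[:8]
    return ":".join(iid[i:i + 2].hex() for i in range(0, 8, 2))

pk = b"-----BEGIN PUBLIC KEY----- ...example bytes..."
prefix = "2001:db8:1:2"   # example /64 routing prefix
# A correspondent can verify address ownership by rehashing the sender's
# public key and comparing it with the claimed interface identifier.
print(prefix + ":" + sucv_interface_id(pk))
```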

Journal ArticleDOI
TL;DR: The consistency approach for performing verification, the implementation of run-time tools that implement this approach, the anomalous situations found in an LSM-patched Linux 2.4.16 kernel, and an implementation of a static analysis version of this approach are described.
Abstract: We present a consistency analysis approach to assist the Linux community in verifying the correctness of authorization hook placement in the Linux Security Modules (LSM) framework. The LSM framework consists of a set of authorization hooks inserted into the Linux kernel to enable additional authorizations to be performed (e.g., for mandatory access control). When compared to system call interposition, authorization within the kernel has both security and performance advantages, but it is more difficult to verify that placement of the LSM hooks ensures that all the kernel's security-sensitive operations are authorized. Static analysis has been used previously to verify mediation (i.e., that some hook mediates access to a security-sensitive operation), but that work did not determine whether the necessary set of authorizations was checked. In this paper, we develop an approach to test the consistency of the relationships between security-sensitive operations and LSM hooks. The idea is that whenever a security-sensitive operation is performed as part of a specifiable event, a particular set of LSM hooks must have mediated that operation. This work demonstrates that the number of events that impact consistency is manageable and that the notion of consistency is useful for verifying correctness. We describe our consistency approach for performing verification, the implementation of run-time tools that implement this approach, the anomalous situations found in an LSM-patched Linux 2.4.16 kernel, and an implementation of a static analysis version of this approach.
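
The consistency idea can be sketched in a few lines (the trace data below is made up): group run-time traces by event and security-sensitive operation, and flag any operation that is mediated by different LSM hook sets across runs of the same event:

```python
from collections import defaultdict

def find_anomalies(traces):
    """traces: (event, operation, frozenset_of_hooks) tuples.
    Returns (event, operation) pairs seen with more than one hook set."""
    seen = defaultdict(set)
    for event, op, hooks in traces:
        seen[(event, op)].add(hooks)
    return {k: v for k, v in seen.items() if len(v) > 1}

traces = [
    ("open(2)", "file_write", frozenset({"security_inode_permission"})),
    ("open(2)", "file_write", frozenset({"security_inode_permission"})),
    ("open(2)", "file_write", frozenset()),  # unmediated run: inconsistent
]
print(find_anomalies(traces))  # flags ('open(2)', 'file_write') for audit
```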

Journal ArticleDOI
TL;DR: The experimental results in this paper demonstrate the potential of techniques to hypothesize and reason about attacks possibly missed by intrusion detection systems in building high-level attack scenarios.
Abstract: Several alert correlation methods have been proposed over the past several years to construct high-level attack scenarios from low-level intrusion alerts reported by intrusion detection systems (IDSs). However, all of these methods depend heavily on the underlying IDSs, and cannot deal with attacks missed by IDSs. In order to improve the performance of intrusion alert correlation and reduce the impact of missed attacks, this paper presents a series of techniques to hypothesize and reason about attacks possibly missed by the IDSs. In addition, this paper also discusses techniques to infer attribute values for hypothesized attacks, to validate hypothesized attacks through raw audit data, and to consolidate hypothesized attacks to generate concise attack scenarios. The experimental results in this paper demonstrate the potential of these techniques in building high-level attack scenarios.

Journal ArticleDOI
TL;DR: This article proposes a general-purpose access control model designed to detect whenever sensitive information is being transmitted and to determine whether the sender or receiver is authorized, addressing how to protect the sensitive content that clients disclose to and receive from servers.
Abstract: The focus of access control in client/server environments is on protecting sensitive server resources by determining whether or not a client is authorized to access those resources. The set of resources is usually static, and an access control policy associated with each resource specifies who is authorized to access the resource. In this article, we turn the traditional client/server access control model on its head and address how to protect the sensitive content that clients disclose to and receive from servers. Since client content is often dynamically generated at run-time, the usual approach of associating a policy with the resource (content) a priori does not work. We propose a general-purpose access control model designed to detect whenever sensitive information is being transmitted, and determine whether the sender or receiver is authorized. The model identifies sensitive content, maps the sensitive content to an access control policy, and establishes the trustworthiness of the sender or receiver before the sensitive content is disclosed or received. We have implemented the model within TrustBuilder, an architecture for negotiating trust between strangers based on properties other than identity. The implementation targets open systems, where clients and servers do not have preexisting trust relationships. The implementation is the first example of content-triggered trust negotiation. It currently supports access control for sensitive content disclosed by web and email clients.
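
A minimal sketch of content-triggered policy lookup (patterns and policy names are mine, far simpler than TrustBuilder): scan outbound content for sensitive items, map each hit to a policy, and release the content only once the peer has satisfied every triggered policy:

```python
import re

SENSITIVE = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "ssn_policy"),          # US SSN
    (re.compile(r"\b4\d{15}\b"),           "credit_card_policy"),  # toy card pattern
]

def triggered_policies(content: str) -> set:
    # Map dynamically generated content to policies at transmission time.
    return {policy for pattern, policy in SENSITIVE if pattern.search(content)}

def may_disclose(content: str, peer_satisfied: set) -> bool:
    # Disclose only if the peer has already satisfied every triggered policy.
    return triggered_policies(content) <= peer_satisfied

msg = "My SSN is 123-45-6789."
print(may_disclose(msg, set()))           # False: trust negotiation required first
print(may_disclose(msg, {"ssn_policy"}))  # True
```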

Journal ArticleDOI
TL;DR: The "fast-track" mechanism provides a client-side cache of a server's public parameters and negotiated parameters in the course of an initial, enabling handshake, and the "client-side session caching" mechanism allows the server to store an encrypted version of the session information on a client, allowing a server to maintain a much larger number of active sessions in a given memory footprint.
Abstract: We propose two new mechanisms for caching handshake information on TLS clients. The "fast-track" mechanism provides a client-side cache of a server's public parameters and negotiated parameters in the course of an initial, enabling handshake. These parameters need not be resent on subsequent handshakes. Fast-track reduces both network traffic and the number of round trips, and requires no additional server state. These savings are most useful in high-latency environments such as wireless networks. The second mechanism, "client-side session caching," allows the server to store an encrypted version of the session information on a client, allowing a server to maintain a much larger number of active sessions in a given memory footprint. Our design is fully backward-compatible with TLS: extended clients can interoperate with servers unaware of our extensions and vice versa. We have implemented our fast-track proposal to demonstrate the resulting efficiency improvements.
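
A sketch of the client-side session caching idea, with Fernet (from the `cryptography` package) standing in for the server's authenticated encryption: the server hands the client an encrypted, self-contained session blob and keeps no per-session state:

```python
# Requires: pip install cryptography
import json
from cryptography.fernet import Fernet

server_key = Fernet.generate_key()   # known only to the server
sealer = Fernet(server_key)

def issue_session_blob(session: dict) -> bytes:
    # The client stores this blob; the server forgets the session entirely.
    return sealer.encrypt(json.dumps(session).encode())

def resume_from_blob(blob: bytes) -> dict:
    # Decryption fails loudly if the client tampered with the blob.
    return json.loads(sealer.decrypt(blob))

blob = issue_session_blob({"cipher": "AES128-SHA", "master_secret": "…"})
print(resume_from_blob(blob)["cipher"])  # server restores state from the client
```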

Journal ArticleDOI
TL;DR: A novel secure group keying scheme using hash chain for many-to-many secure group communication and is suitable for applications where the population of a system is stable, group size is moderate, subgroup formation is frequent, and the application is delay sensitive.
Abstract: We propose a novel secure group keying scheme using hash chain for many-to-many secure group communication. This scheme requires a key predistribution center to generate multiple hash chains and allocates exactly one hash value from each chain to a group member. A group member can use its allocated hash values (secrets) to generate group and subgroup keys. Key distribution can be offline or online via the key distribution protocol. Once keys are distributed, this scheme enables a group member to communicate with any possible subgroups without the help of the key distribution center, and without having to leave the overall group, thus avoiding any setup delay. Our scheme is suitable for applications where the population of a system is stable, group size is moderate, subgroup formation is frequent, and the application is delay sensitive. Through analysis, we present effectiveness of our approach.
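
The building block is forward derivation along a hash chain: whoever holds position i can compute any later position by rehashing, but never an earlier one. The toy below shows a single chain; the paper's scheme allocates one value from each of several chains per member so that only the intended subgroup can combine them into a key:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    vals, v = [], seed
    for _ in range(n):
        v = h(v)
        vals.append(v)
    return vals           # the KDC allocates vals[i] to member i

def forward(value: bytes, steps: int) -> bytes:
    # Holding position i, derive position i + steps with cheap hashing.
    for _ in range(steps):
        value = h(value)
    return value

chain = make_chain(b"kdc-secret", 8)
subgroup = [1, 3, 6]
target = max(subgroup)
key = forward(chain[1], target - 1)   # member 1 hashes forward 5 steps
assert key == chain[target]           # agrees with member 6's own value
print(key.hex())
```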

Journal ArticleDOI
TL;DR: This paper formulates the trade-off between the nested certification overhead and the time improvement on certificate path verification, which is numerically analyzed for a 4-level 20-ary balanced tree-shaped PKI.
Abstract: Certification is a common mechanism for authentic public key distribution. In order to obtain a public key, verifiers need to extract a certificate path from a network of certificates, which is called a public key infrastructure (PKI), and verify the certificates on this path recursively. This is the classical methodology. Nested certification is a novel methodology for efficient certificate path verification. The basic idea is to issue special certificates (called nested certificates) for other certificates. Nested certificates can be used together with classical certificates in PKIs. Such a PKI, which is called a nested certificate-based PKI (NPKI), is proposed in this paper as an alternative to classical PKI. The concept of "certificates for other certificates" results in nested certificate paths in which the first certificate is verified cryptographically while the others are verified by just fast hash computations. Thus, we can employ efficiently verifiable nested certificate paths instead of classical certificate paths. NPKI is a dynamic system and involves several authorities in order to add a new user to the system. This uses the authorities' idle time to the benefit of the verifiers. We formulate the trade-off between the nested certification overhead and the time improvement on certificate path verification. This trade-off is numerically analyzed for a 4-level 20-ary balanced tree-shaped PKI, and it has been shown that the extra cost of nested certification is within acceptable limits in order to generate quickly verifiable certificate paths for certain applications. Moreover, the PKI-to-NPKI transition preserves the existing hierarchy and trust relationships in the PKI, so that it can be used for PKIs with fixed topology. Although there are many certificates in NPKI, certificate revocation is no more of a problem than with classical PKIs. NPKI even has an advantage in the number of certificate revocation controls: at most two certificate revocation controls are sufficient, independent of the path length. Nested certificates can be easily adopted into the X.509 standard certificate structure. Both the verification efficiency and the revocation advantage of NPKI and nested certificates make them suitable for hierarchical PKIs of wireless applications where wireless end users have limited processing power.
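
A sketch of why nested paths verify quickly (structures heavily simplified, field names mine): a nested certificate commits to the hash of its subject certificate, so only the first certificate on a path costs a signature verification; every later hop is a hash comparison:

```python
import hashlib

def sha(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def verify_classical(cert: dict) -> bool:
    # Stand-in for a real signature check: the single expensive
    # public-key operation on the whole path.
    return True

def verify_nested_path(path) -> bool:
    """path[0] is verified cryptographically; each later certificate is
    checked against the hash its predecessor committed to."""
    if not verify_classical(path[0]):
        return False
    return all(nested["subject_cert_hash"] == sha(subject["raw"])
               for nested, subject in zip(path, path[1:]))

leaf = {"raw": b"cert: alice's public key", "subject_cert_hash": None}
mid  = {"raw": b"nested cert over leaf", "subject_cert_hash": sha(leaf["raw"])}
root = {"raw": b"nested cert over mid",  "subject_cert_hash": sha(mid["raw"])}
print(verify_nested_path([root, mid, leaf]))  # True: one signature, two hashes
```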

Journal ArticleDOI
TL;DR: The Session Token Protocol (STOP) is a new protocol that can assist in the forensic analysis of a computer involved in malicious network activity and utilizes the Identification Protocol infrastructure, improving both its capabilities and user privacy.
Abstract: In this paper we present the Session Token Protocol (STOP), a new protocol that can assist in the forensic analysis of a computer involved in malicious network activity. It has been designed to help automate the process of tracing attackers who log on to a series of hosts to hide their identity. STOP utilizes the Identification Protocol infrastructure, improving both its capabilities and user privacy. On request, the STOP protocol saves user-level and application-level data associated with a particular TCP connection and returns a random token specifically related to that session. The saved data are not revealed to the requester unless the token is returned to the local administrator, who verifies the legitimacy of the need for the release of information. The protocol supports recursive traceback requests to gather information about the entire path of a connection. This allows an incident investigator to trace attackers to their home systems, but does not violate the privacy of normal users. This paper details the new protocol and presents implementation and performance results.
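
A sketch of the token idea (field names mine): session data stay on the local host and only an opaque random token travels back to the requester; the data are released only when the token returns through the local administrator:

```python
import secrets, time

VAULT = {}   # token -> saved session record, held by the local host

def save_session(user: str, process: str, connection: tuple) -> str:
    token = secrets.token_hex(16)     # random, reveals nothing about the user
    VAULT[token] = {"user": user, "process": process,
                    "conn": connection, "ts": time.time()}
    return token                      # only this leaves the host

def release(token: str, admin_approved: bool):
    # Data leave the vault only with the local administrator's approval.
    if admin_approved:
        return VAULT.get(token)
    return None

t = save_session("alice", "sshd", ("10.0.0.5", 51022, "10.0.0.9", 22))
print(release(t, admin_approved=False))  # None: user privacy preserved
print(release(t, admin_approved=True))   # full record for the investigator
```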

Journal ArticleDOI
TL;DR: This paper develops a role-based access control (RBAC) approach for specifying distributed administration requirements and procedures between administrators, or administration teams, extending earlier work on distributed (modular) authorization.
Abstract: In large organizations the administration of access privileges (such as the assignment of access rights to a user in a particular role) is handled cooperatively through distributed administrators in various different capacities. A quorum may be necessary, or a veto may be possible for such a decision. In this paper, we present two major contributions: We develop a role-based access control (RBAC) approach for specifying distributed administration requirements, and procedures between administrators, or administration teams, extending earlier work on distributed (modular) authorization. While a comprehensive specification in such a language is conceivable, it would be quite tedious to evaluate or analyze its operational aspects and properties in practice. For this reason we create a new class of extended Petri nets called Administration Nets (Adm-Nets) such that any RBAC specification of (cooperative) administration requirements (given in terms of predicate logic formulas) can be embedded into an Adm-Net. This net behaves within the constraints specified by the logical formulas and, at the same time, explicitly exhibits all needed operational details, allowing for an efficient and comprehensive formal analysis of administrative behavior. We introduce the new concepts and illustrate their use in several examples. While Adm-Nets are much more refined and (behaviorally) explicit than workflow systems, our work provides a constructive step towards novel workflow management tools as well. We demonstrate the usefulness of Adm-Nets by modeling typical examples of administration processes concerned with sets of distributed authorization rules.
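
A deliberately tiny Petri-net sketch (my own encoding, far simpler than Adm-Nets) of one cooperative-administration rule: a "grant" transition that fires only when a quorum of approval places hold tokens:

```python
class PetriNet:
    def __init__(self, marking):
        self.m = dict(marking)   # place -> token count

    def enabled(self, inputs) -> bool:
        return all(self.m.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, inputs, outputs):
        if not self.enabled(inputs):
            raise RuntimeError("transition not enabled")
        for p, n in inputs.items():
            self.m[p] -= n
        for p, n in outputs.items():
            self.m[p] = self.m.get(p, 0) + n

# Two of three administrators have approved a role assignment.
net = PetriNet({"approve_A": 1, "approve_B": 1, "approve_C": 0})
quorum = {"approve_A": 1, "approve_B": 1}   # 2-of-3 quorum rule
net.fire(quorum, {"role_granted": 1})       # consumes approvals, grants role
print(net.m["role_granted"])                # 1: the assignment took effect
```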

Journal ArticleDOI
TL;DR: This work examines a case where confidentiality is irrelevant to the process being modeled and presents a new security model that captures the recordation process and augments, rather than supplants, existing models.
Abstract: Security models generally incorporate elements of both confidentiality and integrity. We examine a case where confidentiality is irrelevant to the process being modeled. In this case, integrity includes not only the authentication of origin and the lack of unauthorized changes to a document, but also the acceptance of all parties that the document is complete, signed by all parties, and cannot be modified further. This is especially critical when the document is recorded, so that it is legally the agreement or statement of record, and any copies of the document have no legal force. We show that current security models do not capture the details of this process. We then present a new security model for this process. This model captures the recordation process, and augments, rather than supplants, existing models. Hence it can also be used with existing security models to describe other situations.