
Showing papers in "ACM Transactions on Information and System Security in 1999"


Journal ArticleDOI
TL;DR: A language is presented to express both static and dynamic authorization constraints as clauses in a logic program; formal notions of constraint consistency are provided; and algorithms are proposed to check the consistency of constraints and to assign users and roles to the tasks that constitute the workflow in such a way that no constraints are violated.
Abstract: In recent years, workflow management systems (WFMSs) have gained popularity in both research and commercial sectors. WFMSs are used to coordinate and streamline business processes. Very large WFMSs are often used in organizations with users in the range of several thousands and process instances in the range of tens of thousands. To simplify the complexity of security administration, it is common practice in many businesses to allocate a role for each activity in the process and then assign one or more users to each role, granting an authorization to roles rather than to users. Typically, security policies are expressed as constraints (or rules) on users and roles; separation of duties is a well-known constraint. Unfortunately, current role-based access control models are not adequate to model such constraints. To address this issue we (1) present a language to express both static and dynamic authorization constraints as clauses in a logic program; (2) provide formal notions of constraint consistency; and (3) propose algorithms to check the consistency of constraints and to assign users and roles to the tasks that constitute the workflow in such a way that no constraints are violated.
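The constraint-respecting assignment in contribution (3) can be sketched as a small backtracking search. The sketch below is illustrative only: the paper expresses constraints as logic-program clauses, whereas here a separation-of-duty constraint is reduced to a Python predicate, and all task and user names are hypothetical.

```python
# Minimal sketch: assign one user per workflow task without violating
# any separation-of-duty (SoD) pair, via backtracking search.

def violates_sod(assignment, task_a, task_b):
    """True if the same user is assigned to both tasks of an SoD pair."""
    return assignment.get(task_a) is not None and \
           assignment.get(task_a) == assignment.get(task_b)

def assign(tasks, candidates, sod_pairs):
    """Return a task -> user mapping satisfying all SoD pairs, or None."""
    def backtrack(assignment, remaining):
        if not remaining:
            return dict(assignment)
        task, rest = remaining[0], remaining[1:]
        for user in candidates[task]:
            assignment[task] = user
            if not any(violates_sod(assignment, a, b) for a, b in sod_pairs):
                result = backtrack(assignment, rest)
                if result is not None:
                    return result
            del assignment[task]
        return None
    return backtrack({}, list(tasks))

# Example: 'prepare' and 'approve' must be done by different users.
plan = assign(
    tasks=["prepare", "approve"],
    candidates={"prepare": ["alice"], "approve": ["alice", "bob"]},
    sod_pairs=[("prepare", "approve")],
)
```

The search forces "approve" away from alice because she already holds "prepare", mirroring how the paper's algorithms reject assignments that would violate a constraint.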

595 citations


Journal ArticleDOI
TL;DR: The motivation, intuition, and formal definition of a new role-based model for RBAC administration are described for the first time. The model is called ARBAC97 (administrative RBAC '97) and has three components, URA97 (user-role assignment '97), PRA97 (permission-role assignment '97), and RRA97 (role-role assignment '97), dealing with different aspects of RBAC administration.
Abstract: In role-based access control (RBAC), permissions are associated with roles, and users are made members of roles, thereby acquiring the roles' permissions. RBAC's motivation is to simplify administration of authorizations. An appealing possibility is to use RBAC itself to manage RBAC, to further provide administrative convenience and scalability, especially in decentralizing administrative authority, responsibility, and chores. This paper describes the motivation, intuition, and formal definition of a new role-based model for RBAC administration. This model is called ARBAC97 (administrative RBAC '97) and has three components: URA97 (user-role assignment '97), PRA97 (permission-role assignment '97), and RRA97 (role-role assignment '97), dealing with different aspects of RBAC administration. URA97, PRA97, and an outline of RRA97 were defined in 1997, hence the designation given to the entire model. RRA97 was completed in 1998. ARBAC97 is described completely in this paper for the first time. We also discuss possible extensions of ARBAC97.
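The URA97 component's central idea, that an administrative role may grant a target role only to users who satisfy a prerequisite condition, can be sketched as a rule table plus a check. This is a simplified stand-in for the paper's formal can_assign relation; the role names and rules below are invented for illustration.

```python
# Hypothetical URA97-style rules: (admin_role, prerequisite_role,
# set of roles that admin_role may assign).
CAN_ASSIGN = [
    ("dept-admin", "employee", {"engineer"}),
    ("senior-admin", "engineer", {"lead"}),
]

def can_assign(admin_role, user_roles, target_role):
    """May admin_role add target_role to a user holding user_roles?"""
    for rule_admin, prereq, assignable in CAN_ASSIGN:
        if admin_role == rule_admin and prereq in user_roles \
                and target_role in assignable:
            return True
    return False
```

A dept-admin can promote an employee to engineer but not directly to lead; decentralizing administration amounts to giving each administrative role its own restricted rules.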

573 citations


Journal ArticleDOI
TL;DR: A computationally cheap method is described for making all log entries generated prior to the logging machine's compromise impossible for the attacker to read, and also impossible to modify or destroy undetectably.
Abstract: In many real-world applications, sensitive information must be kept in log files on an untrusted machine. In the event that an attacker captures this machine, we would like to guarantee that he will gain little or no information from the log files and to limit his ability to corrupt the log files. We describe a computationally cheap method for making all log entries generated prior to the logging machine's compromise impossible for the attacker to read, and also impossible to modify or destroy undetectably.
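The core mechanism behind such schemes can be sketched as key evolution: each entry is authenticated with the current key, which is then replaced by a one-way hash of itself, so a captured machine yields only the current key and nothing about past epochs. This is a minimal illustration of the idea, not the paper's exact construction (which also encrypts entries and hash-chains them).

```python
# Forward-secure logging sketch: MAC each entry, then evolve the key
# through a one-way hash and discard the old key.
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    """Irreversibly derive the next epoch's key."""
    return hashlib.sha256(b"evolve|" + key).digest()

class ForwardSecureLog:
    def __init__(self, initial_key: bytes):
        self.key = initial_key
        self.entries = []  # list of (message, tag) pairs

    def append(self, message: bytes):
        tag = hmac.new(self.key, message, hashlib.sha256).digest()
        self.entries.append((message, tag))
        self.key = evolve(self.key)  # old key is discarded

def verify(initial_key: bytes, entries) -> bool:
    """A verifier holding the initial key recomputes every epoch key."""
    key = initial_key
    for message, tag in entries:
        expected = hmac.new(key, message, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        key = evolve(key)
    return True
```

An attacker who steals the machine holds only the latest key, so tampering with an earlier entry fails verification against the initial key.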

450 citations


Journal ArticleDOI
TL;DR: NIST's enhanced RBAC model and the approach to designing and implementing RBAC features for networked Web servers are described, which provides administrators with a means of managing authorization data at the enterprise level, in a manner consistent with the current set of laws, regulations, and practices.
Abstract: This paper describes NIST's enhanced RBAC model and our approach to designing and implementing RBAC features for networked Web servers. The RBAC model formalized in this paper is based on the properties that were first described in Ferraiolo and Kuhn [1992] and Ferraiolo et al. [1995], with adjustments resulting from experience gained by prototype implementations, market analysis, and observations made by Jansen [1988] and Hoffman [1996]. The implementation of RBAC for the Web (RBAC/Web) provides an alternative to the conventional means of administering and enforcing authorization policy on a server-by-server basis. RBAC/Web provides administrators with a means of managing authorization data at the enterprise level, in a manner consistent with the current set of laws, regulations, and practices.

442 citations


Journal ArticleDOI
TL;DR: An approach that transforms temporal sequences of discrete, unordered observations into a metric space via a similarity measure that encodes intra-attribute dependencies and demonstrates that it can accurately differentiate the profiled user from alternative users when the available features encode sufficient information.
Abstract: The anomaly-detection problem can be formulated as one of learning to characterize the behaviors of an individual, system, or network in terms of temporal sequences of discrete data. We present an approach based on instance-based learning (IBL) techniques. To cast the anomaly-detection task in an IBL framework, we employ an approach that transforms temporal sequences of discrete, unordered observations into a metric space via a similarity measure that encodes intra-attribute dependencies. Classification boundaries are selected from an a posteriori characterization of valid user behaviors, coupled with a domain heuristic. An empirical evaluation of the approach on user command data demonstrates that we can accurately differentiate the profiled user from alternative users when the available features encode sufficient information. Furthermore, we demonstrate that the system detects anomalous conditions quickly, an important quality for reducing potential damage by a malicious user. We present several techniques for reducing data storage requirements of the user profile, including instance-selection methods and clustering. An empirical evaluation shows that a new greedy clustering algorithm reduces the size of the user model by 70%, with only a small loss in accuracy.
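A similarity measure of the flavor described, one that rewards runs of consecutive positional matches so that adjacent-element dependencies raise the score, can be sketched as below. This is a simplified stand-in for the authors' measure, shown only to make the "intra-attribute dependencies" idea concrete.

```python
# Sketch of a run-weighted similarity between two command sequences:
# each positional match adds 1 plus the length of the current run,
# so contiguous matches score more than scattered ones.

def similarity(seq_a, seq_b):
    score, run = 0, 0
    for a, b in zip(seq_a, seq_b):
        if a == b:
            run += 1
            score += run   # longer runs contribute superlinearly
        else:
            run = 0        # a mismatch breaks the dependency chain
    return score
```

Three consecutive matches score 1 + 2 + 3 = 6, while the same three matches interrupted by a mismatch score far less, which is exactly how adjacency dependencies are encoded.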

395 citations


Journal ArticleDOI
TL;DR: This work presents and analyzes several simple password authentication protocols, shows optimal resistance to off-line password guessing attacks under the choice of suitable public key encryption functions, and introduces the notion of public passwords, which enables the use of the above protocols in situations where the client's machine does not have the means to validate the server's public key.
Abstract: We study protocols for strong authentication and key exchange in asymmetric scenarios where the authentication server possesses a pair of private and public keys while the client has only a weak human-memorizable password as its authentication key. We present and analyze several simple password authentication protocols in this scenario, and show that the security of these protocols can be formally proven based on standard cryptographic assumptions. Remarkably, our analysis shows optimal resistance to off-line password guessing attacks under the choice of suitable public key encryption functions. In addition to user authentication, we describe ways to enhance these protocols to provide two-way authentication, authenticated key exchange, defense against server compromise, and user anonymity. We complement these results with a proof that strongly indicates that public key techniques are unavoidable for password protocols that resist off-line guessing attacks. As a further contribution, we introduce the notion of public passwords, which enables the use of the above protocols in situations where the client's machine does not have the means to validate the server's public key. Public passwords serve as "hand-held certificates" that the user can carry without the need for special computing devices.

338 citations


Journal ArticleDOI
TL;DR: This work describes in more detail the reference model for role-based access control introduced by Nyanchama and Osborn, and the role-graph model with its accompanying algorithms, which is one way of implementing role-role relationships.
Abstract: We describe in more detail than before the reference model for role-based access control introduced by Nyanchama and Osborn, and the role-graph model with its accompanying algorithms, which is one way of implementing role-role relationships. An alternative role insertion algorithm is added, and it is shown how the role creation policies of Fernandez et al. correspond to role addition algorithms in our model. We then use our reference model to provide a taxonomy for kinds of conflict. We then go on to consider in some detail privilege-privilege and role-role conflicts in conjunction with the role graph model. We show how role-role conflicts lead to a partitioning of the role graph into nonconflicting collections that can together be safely authorized to a given user. Finally, in an appendix, we present the role graph algorithms with additional logic to disallow roles that contain conflicting privileges.
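The partitioning idea reduces, at the level of a single user, to checking that a proposed set of roles contains no declared conflicting pair. The sketch below is a minimal illustration with invented role names, not the paper's role-graph algorithms.

```python
# Sketch: a user may be authorized for a role set only if no two of
# its roles are declared role-role conflicts.

def conflict_free(roles, conflicts):
    """True if no declared conflicting pair is wholly inside roles."""
    roles = set(roles)
    return not any(a in roles and b in roles for a, b in conflicts)

# Example conflict declaration: purchasing and auditing must not
# be held together (a classic separation-of-duty pairing).
CONFLICTS = [("purchasing", "auditing")]
```

Each nonconflicting collection in the paper's partitioning is exactly a maximal role set for which this check succeeds.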

304 citations


Journal ArticleDOI
TL;DR: In this article, an inductive analysis of TLS (a descendant of SSL 3.0) has been performed using the theorem prover Isabelle, based on higher-order logic and making no assumptions concerning beliefs or finiteness.
Abstract: Internet browsers use security protocols to protect sensitive messages. An inductive analysis of TLS (a descendant of SSL 3.0) has been performed using the theorem prover Isabelle. Proofs are based on higher-order logic and make no assumptions concerning beliefs or finiteness. All the obvious security goals can be proved; session resumption appears to be secure even if old session keys are compromised. The proofs suggest minor changes to simplify the analysis. TLS, even at an abstract level, is much more complicated than most protocols verified by researchers. Session keys are negotiated rather than distributed, and the protocol has many optional parts. Nevertheless, the resources needed to verify TLS are modest: six man-weeks of effort and three minutes of processor time.

250 citations


Journal ArticleDOI
TL;DR: A set of guiding principles for the design of metrics to evaluate the confidence afforded by a set of paths is developed, and a new metric is proposed that appears to meet these principles, and so to be a satisfactory metric of authentication.
Abstract: Authentication using a path of trusted intermediaries, each able to authenticate the next in the path, is a well-known technique for authenticating entities in a large-scale system. Recent work has extended this technique to include multiple paths in an effort to bolster authentication, but the success of this approach may be unclear in the face of intersecting paths, ambiguities in the meaning of certificates, and interdependencies in the use of different keys. Thus, several authors have proposed metrics to evaluate the confidence afforded by a set of paths. In this paper we develop a set of guiding principles for the design of such metrics. We motivate our principles by showing how previous approaches failed with respect to these principles and what the consequences to authentication might be. We then propose a new metric that appears to meet our principles, and so to be a satisfactory metric of authentication.

167 citations


Journal ArticleDOI
TL;DR: This work presents a protocol for unlinkable serial transactions suitable for a variety of network-based subscription services, and is the first protocol to use cryptographic blinding to enable subscription services.
Abstract: We present a protocol for unlinkable serial transactions suitable for a variety of network-based subscription services. It is the first protocol to use cryptographic blinding to enable subscription services. The protocol prevents the service from tracking the behavior of its customers, while protecting the service vendor from abuse due to simultaneous or cloned use by a single subscriber. Our basic protocol structure and recovery protocol are robust against failure in protocol termination. We evaluate the security of the basic protocol and extend the basic protocol to include auditing, which further deters subscription sharing. We describe other applications of unlinkable serial transactions, including pay-per-use within a subscription, third-party subscription management, multivendor coupons, proof of group membership, and voting.
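The cryptographic-blinding primitive the protocol relies on can be illustrated with textbook RSA blind signatures: the subscriber blinds a token before the vendor signs it, so the issued signature cannot later be linked to the token that is spent. The sketch below uses deliberately tiny, insecure parameters purely for illustration and is not the paper's protocol.

```python
# Textbook RSA blind-signature sketch (toy parameters, NOT secure).

def blind_sign_demo(m):
    p, q, e = 61, 53, 17                   # toy RSA modulus and exponent
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent
    r = 7                                  # blinding factor, gcd(r, n) == 1
    blinded = (m * pow(r, e, n)) % n       # vendor sees only this value
    blind_sig = pow(blinded, d, n)         # vendor signs blindly
    sig = (blind_sig * pow(r, -1, n)) % n  # subscriber removes the blind
    return sig, pow(sig, e, n) == m % n    # sig verifies against m

sig, ok = blind_sign_demo(42)
```

Because the vendor only ever saw the blinded value, the final signature on the token is unlinkable to the signing transaction, which is what keeps serial transactions by one subscriber unlinkable to each other.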

77 citations


Journal ArticleDOI
TL;DR: A security architecture that enables system and application access control requirements to be enforced on applications composed from downloaded executable content and an implementation of this architecture is described, called the Lava security architecture.
Abstract: We present a security architecture that enables system and application access control requirements to be enforced on applications composed from downloaded executable content. Downloaded executable content consists of messages downloaded from remote hosts that contain executables that run, upon receipt, on the downloading principal's machine. Unless restricted, this content can perform malicious actions, including accessing its downloading principal's private data and sending messages on this principal's behalf. Current security architectures for controlling downloaded executable content (e.g., JDK 1.2) enable specification of access control requirements for content based on its provider and identity. Since these access control requirements must cover every legal use of the class, they may include rights that are not necessary for a particular application of content. Therefore, using these systems, an application composed from downloaded executable content cannot enforce its access control requirements without the addition of application-specific security mechanisms. In this paper, we define an access control model with the following properties: (1) system administrators can define system access control requirements on applications and (2) application developers can use the same model to enforce application access control requirements without the need for ad hoc security mechanisms. This access control model uses features of role-based access control models to enable (1) specification of a single role that applies to multiple application instances; (2) selection of a content's access rights based on the content's application and role in the application; (3) consistency maintained between application state and content access rights; and (4) control of role administration. We detail a system architecture that uses this access control model to implement secure collaborative applications.
Lastly, we describe an implementation of this architecture, called the Lava security architecture.
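The model's central check, granting downloaded content only the rights of the role it plays in a particular application rather than every right its provider was ever granted, can be sketched as a lookup keyed by (application, role). The application, role, and right names below are hypothetical illustrations, not identifiers from the Lava architecture.

```python
# Sketch: rights are selected by the content's application and its
# role within that application, not by the content's provider alone.

ROLE_RIGHTS = {
    ("whiteboard-app", "viewer"): {"read-canvas"},
    ("whiteboard-app", "editor"): {"read-canvas", "draw"},
}

def content_may(application: str, role: str, right: str) -> bool:
    """May content playing `role` in `application` exercise `right`?"""
    return right in ROLE_RIGHTS.get((application, role), set())
```

The same piece of content could act as an editor in one application instance and a viewer in another, receiving different rights in each, which is what provider-based schemes like the JDK 1.2 model described above cannot express.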

Journal ArticleDOI
TL;DR: This work shows that a timing attack yields the Hamming weight of the key used by both DES implementations, and shows that all the design characteristics of the target system can be inferred from timing measurements.
Abstract: We study the vulnerability of two implementations of the Data Encryption Standard (DES) cryptosystem under a timing attack. A timing attack is a method, recently proposed by Paul Kocher, that is designed to break cryptographic systems. It exploits the engineering aspects involved in the implementation of cryptosystems and might succeed even against cryptosystems that remain impervious to sophisticated cryptanalytic techniques. A timing attack is, essentially, a way of obtaining some user's private information by carefully measuring the time it takes the user to carry out cryptographic operations. In this work, we analyze two implementations of DES. We show that a timing attack yields the Hamming weight of the key used by both DES implementations. Moreover, the attack is computationally inexpensive. We also show that all the design characteristics of the target system, necessary to carry out the timing attack, can be inferred from timing measurements.
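The Hamming-weight leak can be illustrated with a simulation: if each set key bit adds a fixed amount of work, averaging many timing samples recovers the number of set bits despite per-measurement noise. The timing model below is hypothetical and chosen only to make the statistical idea concrete; it is not a model of the DES implementations analyzed in the paper.

```python
# Simulated timing leak: running time = base cost + cost per set key
# bit + noise; averaging many samples reveals the Hamming weight.
import random

def simulated_encrypt_time(key_bits, noise=0.5):
    return 10.0 + 1.0 * sum(key_bits) + random.uniform(-noise, noise)

def estimate_hamming_weight(key_bits, samples=1000):
    avg = sum(simulated_encrypt_time(key_bits)
              for _ in range(samples)) / samples
    return round(avg - 10.0)  # invert the (known) timing model

random.seed(1)
key = [1, 0, 1, 1, 0, 0, 1, 0]  # Hamming weight 4
```

With 1000 samples the noise averages out to well under half a bit, so the estimate matches the true weight; a real attack must additionally infer the timing model itself, which the paper shows is possible from measurements alone.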

Journal ArticleDOI
TL;DR: Janus as mentioned in this paper is a cryptographic engine, which assists clients in establishing and maintaining secure and pseudonymous relationships with multiple servers on the Internet, where each interaction preserves privacy by neither revealing a client's true identity nor the set of servers with which a particular client interacts.
Abstract: This paper introduces a cryptographic engine, Janus, which assists clients in establishing and maintaining secure and pseudonymous relationships with multiple servers. The setting is such that clients reside on a particular subnet (e.g., corporate intranet, ISP) and the servers reside anywhere on the Internet. The Janus engine allows each client-server relationship to use either weak or strong authentication on each interaction. At the same time, each interaction preserves privacy by neither revealing a client's true identity (except for the subnet) nor the set of servers with which a particular client interacts. Furthermore, clients do not need any secure long-term memory, enabling scalability and mobility. The interaction model extends to allow servers to send data back to clients via e-mail at a later date. Hence, our results complement the functionality of current network anonymity tools and remailers. The paper also describes the design and implementation of the Lucent Personalized Web Assistant (LPWA), which is a practical system that provides secure and pseudonymous relations with multiple servers on the Internet. LPWA employs the Janus function to generate site-specific personae, which consist of alias usernames, passwords, and e-mail addresses.
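The site-specific persona idea can be sketched by deriving an alias and password from one master secret and the server's name with a keyed hash, so the same client presents unlinkable, yet reproducible, identities to different servers. This is a simplified stand-in for illustration, not LPWA's actual Janus function.

```python
# Sketch: deterministic, per-site persona derived from a master secret.
import hashlib
import hmac

def persona(master_secret: bytes, site: str):
    """Derive a site-specific alias username and password."""
    digest = hmac.new(master_secret, site.encode(), hashlib.sha256).hexdigest()
    return {"username": "u" + digest[:8], "password": digest[8:24]}
```

Because the derivation is deterministic, the client needs no secure long-term memory (the persona is recomputed on demand), and because HMAC outputs for different sites are unrelated, the personae cannot be linked across servers.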

Journal ArticleDOI
TL;DR: This paper proposes a novel firewall design philosophy, called Quality of Firewalling (QoF), that applies security measures of different strength to traffic with different risk levels and shows how it can be implemented in the firewall.
Abstract: A router-based packet-filtering firewall is an effective way of protecting an enterprise network from unauthorized access. However, it will not work efficiently in an ATM network because it requires the termination of end-to-end ATM connections at a packet-filtering router, which incurs huge overhead of SAR (Segmentation and Reassembly). Very few approaches to this problem have been proposed in the literature, and none is completely satisfactory. In this paper we present the hardware design of a high-speed ATM firewall that does not require the termination of an end-to-end connection in the middle. We propose a novel firewall design philosophy, called Quality of Firewalling (QoF), that applies security measures of different strength to traffic with different risk levels and show how it can be implemented in our firewall. Compared with the traditional firewalls, this ATM firewall performs exactly the same packet-level filtering without compromising the performance and has the same "look and feel" by sitting at the chokepoint between the trusted ATM LAN and untrusted ATM WAN. It is also easy to manage and flexible to use.
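The Quality-of-Firewalling idea, applying filtering of different strength to traffic of different risk, can be sketched as a classifier from traffic attributes to an inspection level. The zone names, port rules, and level names below are invented for illustration; the paper implements this in hardware on unterminated ATM connections.

```python
# Hypothetical QoF sketch: map traffic risk to filtering strength.

RISK_LEVELS = {
    "low": "pass-through",         # trusted flows: minimal checks
    "medium": "header-filtering",  # per-packet header inspection
    "high": "full-inspection",     # reassemble and inspect payload
}

def firewall_action(src_zone: str, dst_port: int) -> str:
    """Choose a filtering strength for a flow based on its risk."""
    if src_zone == "trusted-lan":
        level = "low"
    elif dst_port in (80, 443):
        level = "medium"
    else:
        level = "high"
    return RISK_LEVELS[level]
```

The point of QoF is that only high-risk flows pay the cost of deep inspection, which is what lets the design avoid terminating (and reassembling) every ATM connection at the firewall.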