
Showing papers presented at "Annual Computer Security Applications Conference in 2002"


Proceedings Article•DOI•
M.M. Williamson1•
09 Dec 2002
TL;DR: A simple technique to limit the rate of connections to "new" machines that is remarkably effective at both slowing and halting virus propagation without affecting normal traffic is described.
Abstract: Modern computer viruses spread incredibly quickly, far faster than human-mediated responses. This greatly increases the damage that they cause. This paper presents an approach to restricting this high speed propagation automatically. The approach is based on the observation that during virus propagation, an infected machine will connect to as many different machines as fast as possible. An uninfected machine has a different behaviour: connections are made at a lower rate, and are locally correlated (repeat connections to recently accessed machines are likely). This paper describes a simple technique to limit the rate of connections to "new" machines that is remarkably effective at both slowing and halting virus propagation without affecting normal traffic. Results of applying the filter to Web browsing data are included. The paper concludes by suggesting an implementation and discussing the potential and limitations of this approach.
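The new-machine rate limit described above can be sketched in a few lines. This is a toy illustration, not Williamson's implementation: the class name, the working-set size, and the one-release-per-tick policy are assumptions made for the example.

```python
from collections import deque

class VirusThrottle:
    """Toy sketch of a Williamson-style connection throttle.

    Connections to hosts in a small working set of recently contacted
    machines pass immediately; connections to new hosts are queued and
    released at a fixed rate (here, one per time unit).
    """

    def __init__(self, working_set_size=5):
        self.working_set = deque(maxlen=working_set_size)  # recent hosts
        self.delay_queue = deque()                         # pending new-host connections

    def request(self, host):
        """Return True if the connection may proceed immediately."""
        if host in self.working_set:
            return True
        self.delay_queue.append(host)
        return False

    def tick(self):
        """Once per time unit: release one queued connection, if any."""
        if self.delay_queue:
            host = self.delay_queue.popleft()
            self.working_set.append(host)
            return host
        return None
```

With normal traffic, dominated by repeat connections, the delay queue stays near empty; a scanning worm that contacts many new hosts quickly backs up behind the queue, which both slows propagation and makes the infection visible.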

599 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: Controlled physical random functions (CPUFs) are introduced which are PUFs that can only be accessed via an algorithm that is physically bound to the PUF in an inseparable way.
Abstract: A physical random function (PUF) is a random function that can only be evaluated with the help of a complex physical system. We introduce controlled physical random functions (CPUFs) which are PUFs that can only be accessed via an algorithm that is physically bound to the PUF in an inseparable way. CPUFs can be used to establish a shared secret between a physical device and a remote user. We present protocols that make this possible in a secure and flexible way, even in the case of multiple mutually mistrusting parties. Once established, the shared secret can be used to enable a wide range of applications. We describe certified execution, where a certificate is produced that proves that a specific computation was carried out on a specific processor. Certified execution has many benefits, including protection against malicious nodes in distributed computation networks. We also briefly discuss a software licensing application.

430 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: This document provides a concrete realization of a generalized access control model that makes significant use of contextual information in policy definition by presenting a system-level service architecture, as well as early implementation experience with the framework.
Abstract: We describe an approach to building security services for context-aware environments. Specifically, we focus on the design of security services that incorporate the use of security-relevant "context" to provide flexible access control and policy enforcement. We previously presented a generalized access control model that makes significant use of contextual information in policy definition. This document provides a concrete realization of such a model by presenting a system-level service architecture, as well as early implementation experience with the framework. Through our context-aware security services, our system architecture offers enhanced authentication services, more flexible access control and a security subsystem that can adapt itself based on current conditions in the environment. We discuss our architecture and implementation and show how it can be used to secure several sample applications.

217 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: The central contribution of this paper is to describe a model to dynamically assign users to roles based on a finite set of rules defined by the enterprise, which provides a language to express these rules and defines a mechanism to determine seniority among different rules.
Abstract: The role-based access control (RBAC) model is traditionally used to manually assign users to appropriate roles, based on a specific enterprise policy, thereby authorizing them to use the roles' permissions. In environments where the service-providing enterprise has a huge customer base this task becomes formidable. An appealing solution is to automatically assign users to roles. The central contribution of this paper is to describe a model to dynamically assign users to roles based on a finite set of rules defined by the enterprise. These rules take into consideration the attributes of users and any constraints set forth by the enterprise's security policy. The model also allows dynamic revocation of assigned roles based on conditions specified in the security policy. The model provides a language to express these rules and defines a mechanism to determine seniority among different rules. The paper also shows how to use the model to express mandatory access controls (MAC).
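The rule-based assignment idea can be illustrated with a small sketch. The rule set, the attribute names, and the numeric seniority scheme below are hypothetical; the paper defines its own rule language and seniority mechanism.

```python
# Hypothetical rules in the spirit of the model: each rule pairs a
# predicate over user attributes with the roles it grants and a
# seniority rank. Among applicable rules, the most senior one wins.
RULES = [
    (2, lambda u: u["age"] >= 18 and u["spend"] > 1000, {"premium_customer"}),
    (1, lambda u: u["age"] >= 18,                       {"customer"}),
]

def assign_roles(user):
    """Return the roles granted by the most senior applicable rule."""
    for seniority, predicate, roles in sorted(RULES, key=lambda r: r[0], reverse=True):
        if predicate(user):
            return roles
    return set()  # no rule applies: no roles are assigned
```

Resolving conflicts by seniority mirrors the paper's need for a mechanism to determine precedence when several rules apply to the same user.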

184 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: The authors used an extended set of content-free e-mail document features such as style markers, structural characteristics and gender-preferential language features together with a support vector machine learning algorithm.
Abstract: This paper describes an investigation of authorship gender attribution mining from e-mail text documents. We used an extended set of predominantly topic content-free e-mail document features such as style markers, structural characteristics and gender-preferential language features together with a support vector machine learning algorithm. Experiments using a corpus of e-mail documents generated by a large number of authors of both genders gave promising results for author gender categorisation.

174 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: A network model and an algorithm are presented that allows the IRS to select the response among several alternatives which fulfills the security requirements and has a minimal negative effect on legitimate users.
Abstract: Intrusion detection systems (IDSs) have reached a high level of sophistication and are able to detect intrusions with a variety of methods. Unfortunately, system administrators neither can keep up with the pace that an IDS is delivering alerts, nor can they react upon these within adequate time limits. Automatic response systems have to take over that task. In case of an identified intrusion, these components have to initiate appropriate actions to counter emerging threats. Most current intrusion response systems (IRSs) utilize static mappings to determine adequate response actions in reaction to detected intrusions. The problem with this approach is its inherent inflexibility. Countermeasures (such as changes of firewall rules) often do not only defend against the detected attack but may also have negative effects on legitimate users of the network and its services. To prevent a situation where a response action causes more damage than the actual attack, a mechanism is needed that compares the severity of an attack to the effects of a possible response mechanism. In this paper, we present a network model and an algorithm to evaluate the impact of response actions on the entities of a network. This allows the IRS to select, among several alternatives, the response that fulfills the security requirements and has a minimal negative effect on legitimate users.
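The selection criterion in the last sentence can be made concrete with a toy sketch: among responses that are effective enough, choose the one with the least collateral impact. The candidate responses and their scores are invented for illustration; the paper derives impact from a model of the network and its services rather than from fixed numbers.

```python
def select_response(candidates, required_effectiveness):
    """candidates: (name, effectiveness, user_impact) tuples.

    Return the name of the response that meets the effectiveness
    requirement with the least impact on legitimate users, or None
    if no candidate is acceptable.
    """
    viable = [c for c in candidates if c[1] >= required_effectiveness]
    if not viable:
        return None  # no countermeasure satisfies the security requirement
    return min(viable, key=lambda c: c[2])[0]

CANDIDATES = [
    ("block_host", 0.9, 0.7),   # effective but disruptive
    ("rate_limit", 0.8, 0.2),   # nearly as effective, far less disruptive
    ("log_only",   0.1, 0.0),   # harmless but ineffective
]
```

Under this criterion a slightly less effective but far less disruptive countermeasure is preferred over one that would hurt legitimate users more than the attack itself.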

162 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: An access control system that automates the creation and enforcement of access control policies for different configurations of an Active Space, and explicitly recognizes different modes of cooperation between groups of users, and the dependence between physical and virtual aspects of security in Active Spaces.
Abstract: Active Spaces are physical spaces augmented with heterogeneous computing and communication devices along with supporting software infrastructure. This integration facilitates collaboration between users, and promotes greater levels of interaction between users and devices. An Active Space can be configured for different types of applications at different times. We present an access control system that automates the creation and enforcement of access control policies for different configurations of an Active Space. Our system explicitly recognizes different modes of cooperation between groups of users, and the dependence between physical and virtual aspects of security in Active Spaces. Our model provides support for both discretionary and mandatory access control policies, and uses role-based access control techniques for easy administration of users and permissions. We dynamically assign permissions to user roles based on context information. We show how we can create dynamic protection domains. This gives administrators and application developers the ability to customize access control policies on a need-to-protect basis. We also provide a semi-formal specification and analysis of our model and show how we preserve safety properties in spite of dynamic changes to access control permissions.

139 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: An efficient solution for packet header compression, which is called cIPsec, for VoIPsec traffic, and results show that the proposed compression scheme significantly reduces the overhead of packet headers, thus increasing the effective bandwidth used by the transmission.
Abstract: In this paper we present the results of the experimental analysis of the transmission of voice over secure communication links implementing IPsec. Critical parameters characterizing the real-time transmission of voice over an IPsec-secured Internet connection, as well as techniques that could be adopted to overcome some of the limitations of VoIPsec (Voice over IPsec), are presented. Our results show that the effective bandwidth can be reduced by up to 50% with respect to VoIP in the case of VoIPsec. Furthermore, we show that the cryptographic engine may hurt the performance of voice traffic because of the impossibility of scheduling access to it in order to prioritize traffic. We present an efficient solution for packet header compression, which we call cIPsec, for VoIPsec traffic. Simulation results show that the proposed compression scheme significantly reduces the overhead of packet headers, thus increasing the effective bandwidth used by the transmission. In particular, when cIPsec is adopted, the average packet size is only 2% bigger than in the plain (VoIP) case, which makes VoIPsec and VoIP equivalent from the point of view of bandwidth usage.
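The headline bandwidth numbers can be sanity-checked with simple arithmetic. The payload and header sizes below are common textbook values, not the paper's measured figures, so the result is only a rough cross-check.

```python
# Back-of-the-envelope check of the VoIP vs. VoIPsec overhead claim.
# Sizes in bytes; all are illustrative assumptions.
PAYLOAD = 20                 # voice payload per packet (e.g. two G.729 frames)
VOIP_HDRS = 20 + 8 + 12      # IP + UDP + RTP headers
IPSEC_EXTRA = 20 + 16 + 12   # assumed: outer IP + ESP header/IV + padding/auth

def efficiency(payload, overhead):
    """Fraction of transmitted bytes that is actual voice."""
    return payload / (payload + overhead)

voip = efficiency(PAYLOAD, VOIP_HDRS)                  # plain VoIP
voipsec = efficiency(PAYLOAD, VOIP_HDRS + IPSEC_EXTRA) # tunnel-mode VoIPsec
```

With these assumed sizes, VoIPsec's effective bandwidth drops to roughly 56% of plain VoIP, in the same ballpark as the reported reduction of up to 50%.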

111 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: The concept called "security layer" is discussed as the core part of the security architecture, which basically is an open interface that hides the security-relevant functionality of the citizen card on a high abstraction level.
Abstract: When admitting electronic media as a means for citizens to approach public authorities (e-government), security is an indispensable precondition for concerns of legal certainty and for achieving acceptance by the citizens. While the security-enabling technologies such as smartcards, digital signatures, and PKI are mature, questions of scalability, technology-neutrality, and forward-compatibility arise when being deployed on the large scale. The security architecture of the Austrian citizen card is presented. We briefly present the legal provisions that enable e-government. We then reflect on requirements to be fulfilled to achieve a lasting security architecture that provides swift deployment of applications, but provides the flexibility to not discriminate against service providers and technologies that will emerge in future. The concept called "security layer" is discussed as the core part of the security architecture, which basically is an open interface that hides the security-relevant functionality of the citizen card on a high abstraction level. A few e-government applications that are being launched in the short-term are sketched.

110 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: The organization of Strata is described and its extension is demonstrated by building two SVE systems: system call interposition and stack-smashing prevention, which ensures that SVE applications implemented in Strata are available to a wide variety of host systems.
Abstract: Safe virtual execution (SVE) allows a host computer system to reduce the risks associated with running untrusted programs. SVE prevents untrusted programs from directly accessing system resources, thereby giving the host the ability to control how individual resources may be used. SVE is used in a variety of safety-conscious software systems, including the Java Virtual Machine (JVM), software fault isolation (SFI), system call interposition layers, and execution monitors. While SVE is the conceptual foundation for these systems, each uses a different implementation technology. The lack of a unifying framework for building SVE systems results in a variety of problems: many useful SVE systems are not portable and therefore are usable only on a limited number of platforms; code reuse among different SVE systems is often difficult or impossible; and building SVE systems from scratch can be both time consuming and error prone. To address these concerns, we have developed a portable, extensible framework for constructing SVE systems. Our framework, called Strata, is based on software dynamic translation (SDT), a technique for modifying binary programs as they execute. Strata is designed to be ported easily to new platforms and to date has been targeted to SPARC/Solaris, x86/Linux, and MIPS/IRIX. This portability ensures that SVE applications implemented in Strata are available to a wide variety of host systems. Strata also affords the opportunity for code reuse among different SVE applications by establishing a common implementation framework. Strata implements a basic safe virtual execution engine using SDT. The base functionality supplied by this engine is easily extended to implement specific SVE systems. In this paper we describe the organization of Strata and demonstrate its extension by building two SVE systems: system call interposition and stack-smashing prevention. To illustrate the use of the system call interposition extensions, the paper presents implementations of several useful security policies.

108 citations


Proceedings Article•DOI•
09 Dec 2002
TL;DR: A model of network connectivity at multiple levels of the TCP/IP stack appropriate for use in a model checker is presented, and it is possible to represent realistic networks including common network security devices such as firewalls, filtering routers, and switches.
Abstract: The individual vulnerabilities of hosts on a network can be combined by an attacker to gain access that would not be possible if the hosts were not interconnected. Currently available tools report vulnerabilities in isolation and in the context of individual hosts in a network. Topological vulnerability analysis (TVA) extends this by searching for sequences of interdependent vulnerabilities, distributed among the various network hosts. Model checking has been applied to the analysis of this problem with some interesting initial results. However, previous efforts did not take into account a realistic representation of network connectivity. These models were enough to demonstrate the usefulness of the model checking approach but would not be sufficient to analyze real-world network security problems. This paper presents a model of network connectivity at multiple levels of the TCP/IP stack appropriate for use in a model checker. With this enhancement, it is possible to represent realistic networks including common network security devices such as firewalls, filtering routers, and switches.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: A security evaluation of Multics for potential use as a two-level (Secret/Top Secret) system in the Air Force Data Services Center (AFDSC) concludes that Multics as implemented today is not certifiably secure and cannot be used in an open use multi-level system.
Abstract: A security evaluation of Multics for potential use as a two-level (Secret/Top Secret) system in the Air Force Data Services Center (AFDSC) is presented. An overview is provided of the present implementation of the Multics Security controls. The report then details the results of a penetration exercise of Multics on the HIS 645 computer. In addition, preliminary results of a penetration exercise of Multics on the new HIS 6180 computer are presented. The report concludes that Multics as implemented today is not certifiably secure and cannot be used in an open use multi-level system. However, the Multics security design principles are significantly better than other contemporary systems. Thus, Multics as implemented today, can be used in a benign Secret/Top Secret environment. In addition, Multics forms a base from which a certifiably secure open use multi-level system can be developed.

Proceedings Article•DOI•
P.A. Karger1, Roger R. Schell•
09 Dec 2002
TL;DR: The lessons learned from the vulnerability assessment of Multics are highly applicable today as governments and industry strive to "secure" today's weaker operating systems through add-ons, "hardening", and intrusion detection schemes.
Abstract: Almost thirty years ago a vulnerability assessment of Multics identified significant vulnerabilities, despite the fact that Multics was more secure than other contemporary (and current) computer systems. Considerably more important than any of the individual design and implementation flaws was the demonstration of subversion of the protection mechanism using malicious software (e.g., trap doors and Trojan horses). A series of enhancements were suggested that enabled Multics to serve in a relatively benign environment. These included the addition of "mandatory access controls", and these enhancements were greatly enabled by the fact that Multics was designed from the start for security. However, the bottom-line conclusion was that "restructuring is essential" around a verifiable "security kernel" before using Multics (or any other system) in an open environment (as in today's Internet) with the existence of well-motivated professional attackers employing subversion. The lessons learned from the vulnerability assessment are highly applicable today as governments and industry strive (unsuccessfully) to "secure" today's weaker operating systems through add-ons, "hardening", and intrusion detection schemes.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: Penetration testing is the art of finding an open door as discussed by the authors, and it is not a science as science depends on falsifiable hypotheses, such as the list of potential insecurities, which is unknown and hence unenumerable.
Abstract: Penetration testing is the art of finding an open door. It is not a science, as science depends on falsifiable hypotheses. The most penetration testing can hope for is to be the science of insecurity - not the science of security - inasmuch as penetration testing can at most prove insecurity by falsifying the hypothesis that any system, network, or application is secure. To be a science of security would require falsifiable hypotheses that any given system, network, or application was insecure, something that could only be done if the number of potential insecurities were known and enumerated such that the penetration tester could thereby falsify (test) a known-to-be-complete list of vulnerabilities claimed to not be present. Because the list of potential insecurities is unknowable and hence unenumerable, no penetration tester can prove security, just as no doctor can prove that you are without occult disease. Putting it as Picasso did, "Art is a lie that shows the truth": security by penetration testing is a lie in that, on a good day, it can show the truth. These incompleteness and proof-by-demonstration characteristics of penetration testing ensure that it remains an art so long as technical advance remains brisk and the enumeration of vulnerabilities therefore impossible. Brisk technical advance equals productivity growth and thereby wealth creation, so it is forbidden to long for a day when penetration testing could achieve the status of science.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This paper presents a framework in which organisational control principles can be formally expressed and analysed using the Alloy specification language and its constraint analysis tools.
Abstract: Organisational control principles, such as those expressed in the separation of duties, supervision, review and delegation, support the main business goals and activities of an organisation. Some of these principles have previously been described and analysed within the context of role- and policy-based distributed systems, but little has been done with respect to the more general context they are placed in and the analysis of relationships between them. This paper presents a framework in which organisational control principles can be formally expressed and analysed using the Alloy specification language and its constraint analysis tools.

Proceedings Article•DOI•
Peng Liu1•
09 Dec 2002
TL;DR: Four architectures for intrusion-tolerant database systems are proposed that have the ability to deliver differential, quantitative QoIA services to customers who have subscribed for these services even in the face of attacks.
Abstract: In this paper we propose four architectures for intrusion-tolerant database systems. While traditional secure database systems rely on prevention controls, an intrusion-tolerant database system can operate through attacks in such a way that the system can continue delivering essential services in the face of attacks. With a focus on attacks by malicious transactions, Architecture I can detect intrusions, and locate and repair the damage caused by the intrusions. Architecture II enhances Architecture I with the ability to isolate attacks so that the database can be immunized from the damage caused by many attacks. Architecture III enhances Architecture I with the ability to dynamically contain the damage in such a way that no damage will leak out during the attack recovery process. Architecture IV enhances Architectures II and III with the ability to adapt the intrusion-tolerance controls to the changing environment, so that a stabilized level of trustworthiness can be maintained, and with the ability to deliver differential, quantitative QoIA services to customers who have subscribed for these services, even in the face of attacks.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This work presents the investigation into structural feature analysis, the development of these ideas into the PEAT prototype, and results that illustrate PEAT's practical effectiveness.
Abstract: We present PEAT: the Portable Executable Analysis Toolkit. It is a software prototype designed to provide a selection of tools that an analyst may use in order to examine structural aspects of a Windows Portable Executable (PE) file, with the goal of determining whether malicious code has been inserted into an application after compilation. These tools rely on structural features of executables that are likely to indicate the presence of inserted malicious code. The underlying premise is that typical application programs are compiled into one binary, homogeneous from beginning to end with respect to certain structural features; any disruption of this homogeneity is a strong indicator that the binary has been tampered with. For example, it could now harbor a virus or a Trojan horse program. We present our investigation into structural feature analysis, the development of these ideas into the PEAT prototype, and results that illustrate PEAT's practical effectiveness.
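The homogeneity premise can be illustrated with a byte-entropy scan: compute the entropy of fixed-size windows of a binary and flag windows that deviate sharply from the rest. PEAT itself uses a richer set of structural features; this sketch, with made-up window size and threshold, only demonstrates the outlier idea.

```python
import math
from collections import Counter

def entropy(window):
    """Shannon entropy (bits per byte) of a byte string."""
    counts = Counter(window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def anomalous_windows(data, size=64, threshold=2.0):
    """Indices of windows whose entropy deviates from the mean by more
    than `threshold` bits - candidate regions of inserted code."""
    ents = [entropy(data[i:i + size]) for i in range(0, len(data) - size + 1, size)]
    mean = sum(ents) / len(ents)
    return [i for i, e in enumerate(ents) if abs(e - mean) > threshold]
```

A homogeneous binary yields no flagged windows; a low-entropy file with one high-entropy region (e.g. appended packed or encrypted code) flags exactly that region.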

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This article defines enterprise roles capable of spanning all IT systems in an organisation and shows how the enterprise role-based access control (ERBAC) model exploits the RBAC model outlined in the NIST standard draft and describes its extensions.
Abstract: The administration of users and access rights in large enterprises is a complex and challenging task. Roles are a powerful concept for simplifying access control, but their implementation is normally restricted to single systems and applications. In this article we define enterprise roles capable of spanning all IT systems in an organisation. We show how the enterprise role-based access control (ERBAC) model exploits the RBAC model outlined in the NIST standard draft and describe its extensions. We have implemented ERBAC as a basic concept of SAM Jupiter, a commercial security administration tool. Based on practical experience with the deployment of Enterprise Roles during SAM implementation projects in large organisations, we have enhanced the ERBAC model by including different ways of parametrising the roles. We show that using parameters can significantly reduce the number of roles needed in an enterprise and simplify the role structure, thereby reducing the administration effort considerably. The enhanced ERBAC features are illustrated by real-life examples.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: A methodology for discovering storage and timing channels that can be used through all phases of the software life cycle to increase confidence that all channels have been identified is presented.
Abstract: Secure computer systems use both mandatory and discretionary access controls to restrict the flow of information through legitimate communication channels such as files, shared memory and process signals. Unfortunately, in practice one finds that computer systems are built such that users are not limited to communicating only through the intended communication channels. As a result, a well-founded concern of security-conscious system designers is the potential exploitation of system storage locations and timing facilities to provide unforeseen communication channels to users. These illegitimate channels are known as covert storage and timing channels. Prior to the presentation of this paper twenty years ago the covert channel analysis that took place was mostly ad hoc. Methods for discovering and dealing with these channels were mostly informal, and the formal methods were restricted to a particular specification language. This paper presents a methodology for discovering storage and timing channels that can be used through all phases of the software life cycle to increase confidence that all channels have been identified. In the original paper the methodology was presented and applied to an example system having three different descriptions: English, formal specification, and high order language implementation. In this paper only the English requirements are considered. However the paper also presents how the methodology has evolved and the influence it had on other work.

Proceedings Article•DOI•
M. Schmid1, F. Hill1, A.K. Ghosh2•
09 Dec 2002
TL;DR: A prototype Windows NT/2000 tool that addresses malicious software threats to user data by extending the existing set of file-access permissions is presented, producing an intuitive data-centric method of protecting valuable documents that provides an additional layer of defense beyond existing antivirus solutions.
Abstract: Corruption or disclosure of sensitive user documents can be among the most lasting and costly effects of malicious software attacks. Many malicious programs specifically target files that are likely to contain important user data. Researchers have approached this problem by developing techniques for restricting access to resources on an application-by-application basis. These so-called "sandbox environments," though effective, are cumbersome and difficult to use. In this paper, we present a prototype Windows NT/2000 tool that addresses malicious software threats to user data by extending the existing set of file-access permissions. Management and configuration options make the tool unobtrusive and easy to use. We have conducted preliminary experiments to assess the usability of the tool and to evaluate the effects of improvements we have made. Our work has produced an intuitive data-centric method of protecting valuable documents that provides an additional layer of defense beyond existing antivirus solutions.

Proceedings Article•DOI•
Tuomas Aura1, Michael Roe1, Jari Arkko2•
09 Dec 2002
TL;DR: This paper discusses several threats created by location management that go beyond unauthentic location data, and introduces and analyze protection mechanisms with focus on ones that work for all Internet nodes and do not need a PKI or other new security infrastructure.
Abstract: In the Mobile IPv6 protocol, the mobile node sends binding updates to its correspondents to inform them about its current location. It is well-known that the origin of this location information must be authenticated. This paper discusses several threats created by location management that go beyond unauthentic location data. In particular, the attacker can redirect data to bomb third parties and induce unnecessary authentication. We introduce and analyze protection mechanisms with focus on ones that work for all Internet nodes and do not need a PKI or other new security infrastructure. Our threat analysis and assessment of the defense mechanisms formed the basis for the design of a secure location management protocol for Mobile IPv6. Many of the same threats should be considered when designing any location management mechanism for open networks.

Proceedings Article•DOI•
Marcel Waldvogel1•
09 Dec 2002
TL;DR: This work analyzes the effects of an attacker using GOSSIB against CEFS and shows that the attacker can seed misinformation much more efficiently than the network is able to contribute real traceback information, rendering PPM effectively useless.
Abstract: To identify sources of distributed denial-of-service attacks, path traceback mechanisms have been proposed. Traceback mechanisms relying on probabilistic packet marking (PPM) have received most attention, as they are easy to implement and deploy incrementally. We introduce a new concept, namely Groups Of Strongly SImilar Birthdays (GOSSIB), that can be used by an attacker to obtain effects similar to a successful birthday attack on PPM schemes. The original and most widely known IP traceback mechanism, compressed edge fragment sampling (CEFS), was developed by Savage et al. (2000). We analyze the effects of an attacker using GOSSIB against CEFS and show that the attacker can seed misinformation much more efficiently than the network is able to contribute real traceback information. Thus, GOSSIB will render PPM effectively useless. It can be expected that GOSSIB has similar effects on other PPM traceback schemes and that standard modifications to the systems will not solve the problem.
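The birthday-attack flavor of the result can be illustrated with the standard birthday bound: the probability that k uniform draws from a space of n values contain a collision. The parameters below are the classic textbook example, not GOSSIB's actual marking-field sizes.

```python
import math

def collision_probability(k, n):
    """P(at least one collision) among k uniform draws from n values.

    Uses the exact product form P(all distinct) = prod_{i<k} (1 - i/n),
    computed in log space for numerical stability.
    """
    p_all_distinct = math.exp(sum(math.log(1 - i / n) for i in range(k)))
    return 1 - p_all_distinct
```

Because collisions become likely once k approaches sqrt(n), an attacker injecting spoofed markings needs comparatively few packets to produce convincing false edge fragments, which is what makes seeding misinformation so cheap.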

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This work describes an approach based on load-time verification of onboard device drivers against a standard security policy designed to limit access to system resources, and describes the ongoing effort to construct a prototype of this technique for open firmware boot platforms.
Abstract: Malicious boot firmware is a largely unrecognized but significant security risk to our global information infrastructure. Since boot firmware executes before the operating system is loaded, it can easily circumvent any operating system-based security mechanism. Boot firmware programs are typically written by third-party device manufacturers and may come from various suppliers of unknown origin. We describe an approach to this problem based on load-time verification of onboard device drivers against a standard security policy designed to limit access to system resources. We also describe our ongoing effort to construct a prototype of this technique for open firmware boot platforms.

Proceedings Article•DOI•
Mary Ellen Zurko1, C. Kaufman1, K. Spanbauer1, C. Bassett1•
09 Dec 2002
TL;DR: It is found that the default configuration of the majority of users did not allow unsigned active content to run; however, when presented with a choice during their workflow, many of those otherwise secured users would allow unsigned active content to run.
Abstract: Designers are often faced with difficult tradeoffs between easing the user's burden by making security decisions for them and offering features that ensure that users can make the security decisions that are right for them and their environment. Users often do not understand enough about the impact of a security decision to make an informed choice. We report on the experience in a 500-person organization on the security of each user's Lotus Notes client against unsigned active content. We found that the default configuration of the majority of users did not allow unsigned active content to run. However, we found that when presented with a choice during their workflow, many of those otherwise secured users would allow unsigned active content to run. We discuss the features that are in Lotus Notes that provide security for active content and that respond to the usability issues from this study.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This work introduces and discusses in detail the secure distributed storage of sensitive information using HTTP cookie encryption, and is able to employ One-Time Pads to encrypt the cookies, because encryption and decryption are both done by the server.
Abstract: Booming e-commerce demands better methods to protect online users' privacy, especially the credit card information that is widely used in online shopping. Holding all these data in a central database at the Web site would attract hackers' attacks, impose unnecessary liability on the merchant Web sites, and raise the customers' privacy concerns. We introduce and discuss in detail the secure distributed storage of sensitive information using HTTP cookie encryption. We are able to employ One-Time Pads to encrypt the cookies, because encryption and decryption are both done by the server, which is an interesting characteristic overlooked by the existing systems. We implemented this protocol and showed that it is simple, fast, and easy to program with.
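The observation that the server performs both encryption and decryption is what makes a one-time pad practical here: the pad never has to travel, only the ciphertext does, inside the cookie. A minimal sketch of that idea follows; the in-memory pad store, function names, and cookie format are assumptions for illustration, not the paper's protocol.

```python
import os
import base64

# Server-side pad store, keyed per user (a database in practice).
PADS = {}

def issue_cookie(user_id: str, secret: bytes) -> str:
    """XOR the secret with a fresh random pad; the pad stays on the
    server, the ciphertext goes into the user's cookie."""
    pad = os.urandom(len(secret))           # one-time pad, never reused
    PADS[user_id] = pad
    ct = bytes(a ^ b for a, b in zip(secret, pad))
    return base64.b64encode(ct).decode()

def read_cookie(user_id: str, cookie: str) -> bytes:
    """Decrypt by XORing the cookie ciphertext with the stored pad."""
    ct = base64.b64decode(cookie)
    pad = PADS[user_id]
    return bytes(a ^ b for a, b in zip(ct, pad))

c = issue_cookie("alice", b"4111-1111-1111-1111")
print(read_cookie("alice", c))
```

A compromised cookie alone reveals nothing, since without the server-held pad the ciphertext is information-theoretically secure.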

Proceedings Article•DOI•
09 Dec 2002
TL;DR: A general security architecture for a large-scale object-based distributed system that includes ways for servers to authenticate clients, clients to authenticate servers, new secure servers to be instantiated without manual intervention, and ways to restrict which client can perform which operation on which object.
Abstract: Large-scale distributed systems present numerous security problems not present in local systems. We present a general security architecture for a large-scale object-based distributed system. Its main features include ways for servers to authenticate clients, clients to authenticate servers, new secure servers to be instantiated without manual intervention, and ways to restrict which client can perform which operation on which object. All of these features are done in a platform- and application-independent way, so the results are quite general. The basic idea behind the scheme is to have each object owner issue cryptographically sealed certificates to users to prove which operations they may request and to servers to prove which operations they are authorized to execute. These certificates are used to ensure secure binding and secure method invocation. The paper discusses the required certificates and security protocols for using them.
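The cryptographically sealed certificates described above can be sketched as signed claims checked at invocation time. The paper's scheme would plausibly use public-key signatures; the sketch below substitutes an HMAC under a hypothetical owner key purely to stay self-contained, and all names are illustrative.

```python
import hmac
import hashlib
import json

OWNER_KEY = b"object-owner-secret"   # hypothetical owner signing key

def issue_cert(user, obj, ops):
    """Object owner seals a claim stating which operations `user`
    may request on `obj`."""
    body = json.dumps({"user": user, "object": obj, "ops": sorted(ops)})
    tag = hmac.new(OWNER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body, tag

def check_invocation(cert, user, obj, op):
    """Server verifies the seal, then checks the requested operation
    against the certificate's claims before executing it."""
    body, tag = cert
    expected = hmac.new(OWNER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                 # tampered certificate
    claims = json.loads(body)
    return claims["user"] == user and claims["object"] == obj and op in claims["ops"]

cert = issue_cert("alice", "obj42", {"read"})
print(check_invocation(cert, "alice", "obj42", "read"))   # True
print(check_invocation(cert, "alice", "obj42", "write"))  # False
```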

Proceedings Article•DOI•
09 Dec 2002
TL;DR: NetMap is presented, a security tool for network modeling, discovery, and analysis that relies on a comprehensive network model that is not limited to a specific network level; it integrates network information throughout the layers.
Abstract: Security analysis should take advantage of a reliable knowledge base that contains semantically-rich information about a protected network. This knowledge is provided by network mapping tools. These tools rely on models to represent the entities of interest, and they leverage off network discovery techniques to populate the model structure with the data that is pertinent to a specific target network. Unfortunately, existing tools rely on incomplete data models. Networks are complex systems and most approaches oversimplify their target models in an effort to limit the problem space. In addition, the techniques used to populate the models are limited in scope and are difficult to extend. This paper presents NetMap, a security tool for network modeling, discovery, and analysis. NetMap relies on a comprehensive network model that is not limited to a specific network level; it integrates network information throughout the layers. The model contains information about topology, infrastructure, and deployed services. In addition, the relationships among different entities in different layers of the model are made explicit. The modeled information is managed by using a suite of composable network tools that can determine various aspects of network configurations through scanning techniques and heuristics. Tools in the suite are responsible for a single, well-defined task.
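The key design point — explicit relationships among entities at different layers — can be illustrated with a toy cross-layer model. Entity names, attributes, and the query below are hypothetical and much simpler than NetMap's actual model.

```python
# A toy cross-layer network model: hosts, interfaces, and services live
# at different layers, but the links between them are explicit.
model = {
    "hosts":      {"h1": {"os": "Linux"}},
    "interfaces": {"eth0@h1": {"host": "h1", "ip": "10.0.0.5"}},
    "services":   {"http@h1": {"interface": "eth0@h1", "port": 80}},
}

def services_on_host(host):
    """Follow the explicit cross-layer links: service -> interface -> host."""
    return [name for name, meta in model["services"].items()
            if model["interfaces"][meta["interface"]]["host"] == host]

print(services_on_host("h1"))  # ['http@h1']
```

Because the links are explicit rather than implied by naming conventions, analysis tools can answer cross-layer questions (which services are reachable on which host) without re-deriving the topology.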

Proceedings Article•DOI•
09 Dec 2002
TL;DR: Techniques for remote identification of web servers, even where server information has been omitted are described and methodologies for detecting and limiting such activity are discussed.
Abstract: Cyber attacks continue to increase in sophistication. Advanced attackers often gather information about a target system before launching a precise attack to exploit a discovered vulnerability. This paper discusses techniques for remote identification of web servers and suggests possible defenses to the probing activity. General concepts of fingerprinting, and their application to identifying Web servers even where server information has been omitted, are described, and methodologies for detecting and limiting such activity are discussed.
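The principle behind such fingerprinting is that implementations differ in incidental behavior — for example, the order in which they emit response headers — even after the Server banner has been stripped. The sketch below illustrates that idea with made-up signatures; real fingerprinting uses many more probes (malformed requests, error wording, protocol quirks), and the mappings here are not actual server signatures.

```python
# Toy behavioral fingerprint: match the observed response-header order
# against known signatures (signatures hypothetical, for illustration).
SIGNATURES = {
    ("Date", "Server", "Content-Type"): "Apache-like",
    ("Server", "Date", "Content-Type"): "IIS-like",
}

def fingerprint(header_order):
    """Ignore noisy optional headers, then look up the ordering."""
    key = tuple(h for h in header_order if h != "X-Powered-By")
    return SIGNATURES.get(key, "unknown")

print(fingerprint(["Date", "Server", "Content-Type"]))  # Apache-like
print(fingerprint(["Content-Type", "Date", "Server"]))  # unknown
```

The corresponding defense, as the paper suggests, is to normalize or randomize such incidental behavior so the signature carries less information.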

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This hardware-based approach has brought the LOCK project into many uncharted areas in the design, verification, and evaluation of an integrated information security system.
Abstract: LOCK is an advanced development of hardware-based computer security and cryptographic service modules. Much of the design and some of the implementation specifications are complete. The Formal Top Level Specification (FTLS) also is complete and the advanced noninterference proofs are beginning. This hardware-based approach has brought the LOCK project into many uncharted areas in the design, verification, and evaluation of an integrated information security system. System integration promises to be the single largest programmatic problem. Our verification tools seem able to verify design only and not implementation.

Proceedings Article•DOI•
09 Dec 2002
TL;DR: This paper presents a technique that is used in the fourth stage to detect the class of worms that use a horizontal scan to propagate, and an argument is made that detection in the fourth stage is a viable, but under-used technique.
Abstract: Worms continue to be a leading security threat on the Internet. This paper analyzes several of the more widespread worms and develops a general life-cycle for them. This life-cycle, from the point of view of the victim host, consists of four stages: target selection, exploitation, infection, and propagation. While not all worms fall into this framework perfectly, by understanding them in this way, it becomes apparent that the majority of detection techniques used today focus on the first three stages. This paper presents a technique that is used in the fourth stage to detect the class of worms that use a horizontal scan to propagate. An argument is also made that detection in the fourth stage is a viable, but under-used technique.
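Propagation-stage detection of a horizontal scan reduces to a counting problem: a single source contacting many distinct destinations on the same port in a short window is behaving like a scanning worm. The sketch below shows that core idea; the threshold, flow format, and addresses are illustrative, not the paper's tuned parameters.

```python
from collections import defaultdict

THRESHOLD = 20   # distinct destinations on one port before flagging (tunable)

def detect_horizontal_scans(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    A worm doing a horizontal scan contacts many distinct hosts on the
    same destination port, unlike normal locally-correlated traffic."""
    seen = defaultdict(set)
    alerts = set()
    for src, dst, port in flows:
        seen[(src, port)].add(dst)
        if len(seen[(src, port)]) > THRESHOLD:
            alerts.add((src, port))
    return alerts

# A scanner sweeping port 445 versus a client hammering one web server.
flows = [("10.0.0.9", f"192.168.1.{i}", 445) for i in range(30)]
flows += [("10.0.0.1", "192.168.1.1", 80)] * 50
print(detect_horizontal_scans(flows))  # {('10.0.0.9', 445)}
```

Note that the repeat connections from 10.0.0.1 never trigger an alert: only the breadth of distinct destinations matters, which is exactly what distinguishes propagation traffic from normal use.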