
Showing papers presented at "Annual Computer Security Applications Conference in 2003"


Proceedings ArticleDOI
08 Dec 2003
TL;DR: Experimental results show that the accuracy of the event classification process is significantly improved using the proposed Bayesian networks, which improve the aggregation of different model outputs and allow one to seamlessly incorporate additional information.
Abstract: Intrusion detection systems (IDSs) attempt to identify attacks by comparing collected data to predefined signatures known to be malicious (misuse-based IDSs) or to a model of legal behavior (anomaly-based IDSs). Anomaly-based approaches have the advantage of being able to detect previously unknown attacks, but they suffer from the difficulty of building robust models of acceptable behavior, which may result in a large number of false alarms. Almost all current anomaly-based intrusion detection systems classify an input event as normal or anomalous by analyzing its features, utilizing a number of different models. A decision for an input event is made by aggregating the results of all employed models. We have identified two reasons for the large number of false alarms, caused by incorrect classification of events in current systems. One is the simplistic aggregation of model outputs in the decision phase. Often, only the sum of the model results is calculated and compared to a threshold. The other reason is the lack of integration of additional information into the decision process. This additional information can be related to the models, such as the confidence in a model's output, or can be extracted from external sources. To mitigate these shortcomings, we propose an event classification scheme that is based on Bayesian networks. Bayesian networks improve the aggregation of different model outputs and allow one to seamlessly incorporate additional information. Experimental results show that the accuracy of the event classification process is significantly improved using our proposed approach.
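
Below is a minimal, illustrative sketch (not the paper's actual Bayesian network) contrasting threshold-sum aggregation with a confidence-weighted, naive-Bayes-style combination of model outputs; the model names, scores, confidences, and prior are invented for illustration.

```python
# Illustrative combination of per-model anomaly outputs for one event. The
# Bayesian network in the paper is richer; a confidence-weighted naive-Bayes
# combination stands in to show how extra information (here, per-model
# confidence) can be folded into the decision instead of a plain score sum.

# Hypothetical model outputs: probability each model assigns to "anomalous",
# plus a confidence weight saying how much that model's output is trusted.
model_outputs = {
    "string_length":     {"p_anomalous": 0.90, "confidence": 0.8},
    "char_distribution": {"p_anomalous": 0.20, "confidence": 0.9},
    "structure":         {"p_anomalous": 0.75, "confidence": 0.5},
}
PRIOR_ANOMALOUS = 0.01      # assumed prior probability that an event is an attack

def threshold_sum(outputs, threshold=1.5):
    """The simplistic aggregation criticized in the paper: sum and compare."""
    return sum(m["p_anomalous"] for m in outputs.values()) > threshold

def confidence_weighted_posterior(outputs, prior=PRIOR_ANOMALOUS):
    """Naive-Bayes-style combination: a low-confidence model is flattened
    toward 0.5, so it pulls the posterior less in either direction."""
    like_anom = like_norm = 1.0
    for m in outputs.values():
        p = 0.5 + m["confidence"] * (m["p_anomalous"] - 0.5)
        like_anom *= p
        like_norm *= 1.0 - p
    return prior * like_anom / (prior * like_anom + (1 - prior) * like_norm)

print("threshold-sum flags event as anomalous:", threshold_sum(model_outputs))
print("P(anomalous | model outputs) = %.4f" % confidence_weighted_posterior(model_outputs))
```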

392 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: Honeypot technologies can be used to detect, identify, and gather information on one of the most dangerous threats: the advanced insider, the trusted individual who knows the internal organization.
Abstract: In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advanced insider, the trusted individual who knows our internal organization. These individuals are not after our systems; they are after our information. We discuss how honeypot technologies can be used to detect, identify, and gather information on these specific threats.

361 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work goes beyond attack paths to compute actual sets of hardening measures that guarantee the safety of given critical resources, and uses an efficient exploit-dependency representation based on monotonic logic that has polynomial complexity, as opposed to many previous attack graph representations having exponential complexity.
Abstract: In-depth analysis of network security vulnerability must consider attacker exploits not just in isolation, but also in combination. The general approach to this problem is to compute attack paths (combinations of exploits), from which one can decide whether a given set of network hardening measures guarantees the safety of given critical resources. We go beyond attack paths to compute actual sets of hardening measures (assignments of initial network conditions) that guarantee the safety of given critical resources. Moreover, for given costs associated with individual hardening measures, we compute assignments that minimize overall cost. By doing our minimization at the level of initial conditions rather than exploits, we resolve hardening irrelevancies and redundancies in a way that cannot be done through previously proposed exploit-level approaches. Also, we use an efficient exploit-dependency representation based on monotonic logic that has polynomial complexity, as opposed to many previous attack graph representations having exponential complexity.
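
The sketch below illustrates the underlying idea under simplifying assumptions: exploits fire when all their preconditions hold (monotonic logic), and the cheapest set of initial conditions to disable is found by brute-force enumeration. The exploits, conditions, and costs are invented, and the search is far simpler than the paper's method.

```python
from itertools import combinations

# Hypothetical exploit-dependency data: each exploit needs all of its
# preconditions and grants its postconditions (monotonic: nothing is retracted).
exploits = {
    "sshd_bof(a,b)":   ({"ftp(a,b)", "sshd(b)"}, {"root(b)"}),
    "ftp_rhosts(a,b)": ({"ftp(a,b)"}, {"trust(b,a)"}),
    "rsh_login(a,b)":  ({"trust(b,a)", "rsh(b)"}, {"user(b)"}),
    "local_bof(b)":    ({"user(b)"}, {"root(b)"}),
}
initial = {"ftp(a,b)", "sshd(b)", "rsh(b)"}          # initial network conditions
cost = {"ftp(a,b)": 10, "sshd(b)": 5, "rsh(b)": 3}   # cost to disable each
goal = "root(b)"                                      # critical resource

def reachable(active_initial):
    """Forward-chain exploits from the surviving initial conditions."""
    conds = set(active_initial)
    changed = True
    while changed:
        changed = False
        for pre, post in exploits.values():
            if pre <= conds and not post <= conds:
                conds |= post
                changed = True
    return conds

def cheapest_hardening():
    """Enumerate subsets of initial conditions to disable; return the cheapest
    one that makes the goal unreachable (fine for toy-sized examples)."""
    best = None
    for k in range(len(initial) + 1):
        for disabled in combinations(sorted(initial), k):
            if goal not in reachable(initial - set(disabled)):
                c = sum(cost[d] for d in disabled)
                if best is None or c < best[0]:
                    best = (c, disabled)
    return best

# Disabling ftp(a,b) alone works but costs 10; disabling sshd(b) and rsh(b)
# together is cheaper (cost 8), which the condition-level search finds.
print(cheapest_hardening())
```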

271 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: A secure version of ARP that provides protection against ARP poisoning is presented; performance measurements show that PKI-based strong authentication is feasible even for low-level protocols, as long as the overhead for key validity verification is kept small.
Abstract: Tapping into the communication between two hosts on a LAN has become quite simple thanks to tools that can be downloaded from the Internet. Such tools use the address resolution protocol (ARP) poisoning technique, which relies on hosts caching reply messages even though the corresponding requests were never sent. Since no message authentication is provided, any host of the LAN can forge a message containing malicious information. We present a secure version of ARP that provides protection against ARP poisoning. Each host has a public/private key pair certified by a local trusted party on the LAN, which acts as a certification authority. Messages are digitally signed by the sender, thus preventing the injection of spurious and/or spoofed information. As a proof of concept, the proposed solution was implemented on a Linux box. Performance measurements show that PKI-based strong authentication is feasible even for low-level protocols, as long as the overhead for key validity verification is kept small.
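
A rough illustration of the sign-on-send / verify-on-receive pattern follows; it is not the S-ARP message format or its cryptographic setup (the paper uses a LAN-local certification authority and its own primitives). Ed25519 from the Python cryptography package stands in, and the addresses are invented.

```python
# Illustrative only: the paper defines its own signed ARP message format and
# key distribution; this sketch just shows signing a reply and verifying it
# before the receiver updates its ARP cache.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def arp_reply(ip, mac):
    return f"ARP-REPLY ip={ip} mac={mac}".encode()

# Host B owns 192.168.0.2 and holds a key pair whose public part would be
# certified by the local trusted party (certification omitted here).
b_private = Ed25519PrivateKey.generate()
b_public = b_private.public_key()

# B answers a request for its IP and signs the reply.
reply = arp_reply("192.168.0.2", "aa:bb:cc:dd:ee:02")
signature = b_private.sign(reply)

# A verifies the signature before updating its ARP cache.
try:
    b_public.verify(signature, reply)
    print("reply accepted, cache updated")
except InvalidSignature:
    print("reply discarded")

# A forged reply (attacker pointing the IP at its own MAC) fails verification.
forged = arp_reply("192.168.0.2", "de:ad:be:ef:00:01")
try:
    b_public.verify(signature, forged)
    print("forged reply accepted (should not happen)")
except InvalidSignature:
    print("forged reply discarded")
```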

190 citations


Proceedings ArticleDOI
Thomas Gross1
08 Dec 2003
TL;DR: This work presents a security analysis of the SAML single sign-on browser/artifact profile, which is the first one for such a protocol standard and reveals several flaws in the specification that can lead to vulnerable implementations.
Abstract: Many influential industrial players are currently pursuing the development of new protocols for federated identity management. The security assertion markup language (SAML) is an important standardized example of this new protocol class and will be widely used in business-to-business scenarios to reduce user-management costs. SAML utilizes a constraint-based specification that is a popular design technique of this protocol class. It does not include a general security analysis, but provides an attack-by-attack list of countermeasures as security consideration. We present a security analysis of the SAML single sign-on browser/artifact profile, which is the first one for such a protocol standard. Our analysis of the protocol design reveals several flaws in the specification that can lead to vulnerable implementations. To demonstrate their impact, we exploit some of these flaws to mount attacks on the protocol.

172 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work addresses the problem of detecting masquerading by using a semiglobal alignment and a unique scoring system to measure similarity between a sequence of commands produced by a potential intruder and the user signature, which is a sequence of commands collected from a legitimate user.
Abstract: We address the problem of detecting masquerading, a security attack in which an intruder assumes the identity of a legitimate user. Many approaches based on hidden Markov models and various forms of finite state automata have been proposed to solve this problem. The novelty of our approach results from the application of techniques used in bioinformatics for pair-wise sequence alignment to compare the monitored session with past user behavior. Our algorithm uses a semiglobal alignment and a unique scoring system to measure similarity between a sequence of commands produced by a potential intruder and the user signature, which is a sequence of commands collected from a legitimate user. We tested this algorithm on a standard intrusion data collection set. The results of the test show that the described algorithm yields a promising combination of intrusion detection rate and false positive rate when compared to published intrusion detection algorithms.
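
A toy version of semiglobal alignment scoring is sketched below; the match, mismatch, and gap scores and the command sequences are illustrative and do not reproduce the paper's scoring system.

```python
# Toy semiglobal alignment score between a user signature and a monitored
# command block. "Semiglobal" here means gaps at the ends of the signature are
# free, so the session block may align anywhere inside the signature.
def semiglobal_score(signature, session, match=2, mismatch=-1, gap=-2):
    n, m = len(signature), len(session)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for j in range(1, m + 1):              # skipping session commands is penalized
        score[0][j] = score[0][j - 1] + gap
    for i in range(1, n + 1):
        score[i][0] = 0                    # free leading gaps in the signature
        for j in range(1, m + 1):
            sub = match if signature[i - 1] == session[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # free trailing gaps in the signature: best value in the last column
    return max(score[i][m] for i in range(n + 1))

user_signature = ["ls", "cd", "vi", "make", "gcc", "ls", "cd"]
monitored      = ["ls", "cd", "vi", "make", "ls"]
suspicious     = ["netstat", "ps", "wget", "chmod", "rm"]
print(semiglobal_score(user_signature, monitored))    # relatively high score
print(semiglobal_score(user_signature, suspicious))   # relatively low score
```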

148 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: WebSTAT is presented, an intrusion detection system that analyzes Web requests looking for evidence of malicious behavior and provides a sophisticated language to describe multistep attacks in terms of states and transitions to achieve more effective detection of Web-based attacks.
Abstract: Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom Web-based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, Web servers are a popular target for hackers. To mitigate the security exposure associated with Web servers, intrusion detection systems are deployed to analyze and screen incoming requests. The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. Even though intrusion detection is critical for the security of Web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. In addition, most systems do not provide sophisticated attack languages that allow a system administrator to specify custom, complex attack scenarios to be detected. We present WebSTAT, an intrusion detection system that analyzes Web requests looking for evidence of malicious behavior. The system is novel in several ways. First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system-level audit data produced by the server host, and the access logs produced by the Web server. By correlating different streams of events, it is possible to achieve more effective detection of Web-based attacks.
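
The following is a hand-rolled toy of the state/transition idea, not WebSTAT's actual attack language: a two-step scenario (scanner user agent, then a directory-traversal request from the same client) invented for illustration.

```python
import re

# Toy multistep scenario in the spirit of a state/transition attack description:
# state 0 --(scanner user-agent seen from ip)--> state 1
# state 1 --(request containing '../' from the same ip)--> ALERT
class TraversalAfterScan:
    def __init__(self):
        self.state = {}  # per-client-IP scenario state

    def consume(self, ip, request, user_agent):
        s = self.state.get(ip, 0)
        if s == 0 and re.search(r"nikto|nessus|sqlmap", user_agent, re.I):
            self.state[ip] = 1
        elif s == 1 and "../" in request:
            print(f"ALERT: directory traversal after scan from {ip}: {request}")
            self.state[ip] = 0          # reset the scenario for this client

scenario = TraversalAfterScan()
access_log = [
    ("10.0.0.5", "GET /index.html", "Mozilla/5.0"),
    ("10.0.0.9", "GET /robots.txt", "Nikto/2.1.6"),
    ("10.0.0.9", "GET /cgi-bin/../../etc/passwd", "Nikto/2.1.6"),
]
for entry in access_log:
    scenario.consume(*entry)
```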

140 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work presents a taxonomy of different types of context, investigates the data the information system must manage in order to deal with these different contexts, and explains how to model them in the Or-BAC model.
Abstract: As computer infrastructures become more complex, security models must provide means to handle more flexible and dynamic requirements. In the organization based access control (Or-BAC) model, it is possible to express such requirements using the notion of context. In Or-BAC, each privilege (permission or obligation or prohibition) only applies in a given context. A context is viewed as an extra condition that must be satisfied to activate a given privilege. We present a taxonomy of different types of context and investigate the data the information system must manage in order to deal with these different contexts. We then explain how to model them in the Or-BAC model.
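
A minimal sketch of context-conditioned permissions in the spirit of Or-BAC follows; the roles, views, and the "working hours" and "emergency" contexts are invented, and obligations and prohibitions are omitted.

```python
from datetime import datetime

# Toy encoding of the idea that a privilege only applies in a given context.
# Contexts are modeled as predicates over the environment.
def working_hours(env):
    return 8 <= env["now"].hour < 18

def emergency(env):
    return env.get("emergency", False)

# permission(role, activity, view, context)
permissions = [
    ("nurse",     "read",   "medical_record", working_hours),
    ("physician", "modify", "medical_record", lambda env: True),   # any context
    ("nurse",     "modify", "medical_record", emergency),
]

def is_permitted(role, activity, view, env):
    return any(r == role and a == activity and v == view and ctx(env)
               for r, a, v, ctx in permissions)

env = {"now": datetime(2003, 12, 8, 22, 30), "emergency": True}
print(is_permitted("nurse", "read",   "medical_record", env))   # False: after hours
print(is_permitted("nurse", "modify", "medical_record", env))   # True: emergency context
```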

132 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work argues the need for correlating data among different logs to improve intrusion detection accuracy, and shows that the use of data mining tools (RIPPER) and correlation among logs improves the effectiveness of an intrusion detection system while reducing false positives.
Abstract: Intrusion detection is an important part of networked-systems security protection. Although commercial products exist, finding intrusions has proven to be a difficult task with limitations under current techniques. Therefore, improved techniques are needed. We argue the need for correlating data among different logs to improve the accuracy of intrusion detection systems. We show how different attacks are reflected in different logs and argue that some attacks are not evident when a single log is analyzed. We present experimental results using anomaly detection for the virus Yaha. Through the use of data mining tools (RIPPER) and correlation among logs, we improve the effectiveness of an intrusion detection system while reducing false positives.
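
The sketch below only illustrates the correlation step, pairing entries from two logs that fall within a small time window; the log entries are invented, and the paper's actual approach learns rules with RIPPER over features drawn from the correlated logs.

```python
from datetime import datetime, timedelta

# Minimal sketch of correlating two logs by time window: an event that looks
# benign in one log can be flagged when a nearby event in another log is taken
# into account. All entries are invented.
firewall_log = [
    (datetime(2003, 12, 8, 10, 0, 12), "outbound SMTP 10.0.0.7 -> 203.0.113.9"),
    (datetime(2003, 12, 8, 10, 0, 14), "outbound SMTP 10.0.0.7 -> 198.51.100.3"),
]
mail_log = [
    (datetime(2003, 12, 8, 10, 0, 13), "10.0.0.7 sent attachment invoice.scr"),
]

def correlate(log_a, log_b, window=timedelta(seconds=5)):
    """Pair entries from two logs whose timestamps fall within `window`."""
    for ts_a, msg_a in log_a:
        for ts_b, msg_b in log_b:
            if abs(ts_a - ts_b) <= window:
                yield ts_a, msg_a, msg_b

for ts, fw, mail in correlate(firewall_log, mail_log):
    print(f"{ts}: correlated firewall event [{fw}] with mail event [{mail}]")
```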

123 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: The authors present the design and implementation of a collaborative intrusion detection system (CIDS) for accurate and efficient intrusion detection in a distributed system; CIDS employs multiple specialized detectors at the different layers - network, kernel, and application - and a manager-based framework for aggregating the alarms from the different detectors to provide a combined alarm for an intrusion.
Abstract: We present the design and implementation of a collaborative intrusion detection system (CIDS) for accurate and efficient intrusion detection in a distributed system. CIDS employs multiple specialized detectors at the different layers - network, kernel, and application - and a manager-based framework for aggregating the alarms from the different detectors to provide a combined alarm for an intrusion. The premise is that a carefully designed and configured CIDS can increase the accuracy of detection compared to individual detectors, without a substantial degradation in performance. In order to validate the premise, we present the design and implementation of a CIDS which employs Snort, Libsafe, and a new kernel-level IDS called Sysmon. The manager has a graph-based and a Bayesian network based aggregation method for combining the alarms to finally come up with a decision about the intrusion. The system is evaluated using a Web-based electronic storefront application and under three different classes of attacks - buffer overflow, flooding, and script-based attacks. The results show performance degradations, compared to no detection, of 3.9% and 6.3% under normal workload and a buffer overflow attack, respectively. The experiments to evaluate the accuracy of the system show that the normal workload generates false alarms for Snort and that the elementary detectors produce missed alarms. CIDS does not flag the false alarm and reduces the incidence of missed alarms to 1 of the 7 cases. CIDS can also be used to measure the propagation time of an intrusion, which is useful in choosing an appropriate response strategy.

114 citations


Proceedings ArticleDOI
08 Dec 2003
TL;DR: Key benefits of this approach are that it requires no changes to the untrusted programs (to be isolated) or the underlying operating system; it cannot be subverted by malicious programs; and it achieves these benefits with acceptable runtime overheads.
Abstract: We present a new approach for safe execution of untrusted programs by isolating their effects from the rest of the system. Isolation is achieved by intercepting file operations made by untrusted processes, and redirecting any change operations to a "modification cache" that is invisible to other processes in the system. File read operations performed by the untrusted process are also correspondingly modified, so that the process has a consistent view of system state that incorporates the contents of the file system as well as the modification cache. On termination of the untrusted process, its user is presented with a concise summary of the files modified by the process. Additionally, the user can inspect these files using various software utilities (e.g., helper applications to view multimedia files) to determine if the modifications are acceptable. The user then has the option to commit these modifications, or simply discard them. Essentially, our approach provides "play" and "rewind" buttons for running untrusted software. Key benefits of our approach are that it requires no changes to the untrusted programs (to be isolated) or the underlying operating system; it cannot be subverted by malicious programs; and it achieves these benefits with acceptable runtime overheads. We describe a prototype implementation of this system for Linux called Alcatraz and discuss its performance and effectiveness.
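
A user-level toy of the "modification cache" idea follows; Alcatraz intercepts file operations at the operating-system level, whereas this sketch merely shows the copy-on-write view with an assumed cache directory.

```python
import os
import shutil

# Toy of the modification-cache idea: writes by untrusted code go to a shadow
# directory, reads prefer the shadow copy, and the user can later commit or
# discard. The cache location and helpers are illustrative assumptions.
CACHE = "/tmp/mod_cache"

def shadow_path(path):
    return os.path.join(CACHE, path.lstrip("/"))

def isolated_open(path, mode="r"):
    shadow = shadow_path(path)
    if "w" in mode or "a" in mode:
        os.makedirs(os.path.dirname(shadow), exist_ok=True)
        if "a" in mode and not os.path.exists(shadow) and os.path.exists(path):
            shutil.copy2(path, shadow)       # copy-on-write before appending
        return open(shadow, mode)
    # reads see the modification cache first, then the real file system
    return open(shadow if os.path.exists(shadow) else path, mode)

def modified_files():
    """Concise summary presented to the user on termination."""
    for root, _, files in os.walk(CACHE):
        for f in files:
            yield os.path.join("/", os.path.relpath(os.path.join(root, f), CACHE))

def commit():
    """The 'play' button: apply cached modifications to the real file system."""
    for real in modified_files():
        shutil.copy2(shadow_path(real), real)

def discard():
    """The 'rewind' button: drop all modifications made by the untrusted code."""
    shutil.rmtree(CACHE, ignore_errors=True)
```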

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The data generation verifies a methodology previously developed by the authors in which authentic normal and fraudulent data are used as a seed for generating synthetic data, ensuring that important statistical properties of the authentic data are preserved and creating the necessary adaptation of the system to a specific environment.
Abstract: We report an experiment aimed at generating synthetic test data for fraud detection in an IP based video-on-demand service. The data generation verifies a methodology previously developed by the present authors [E. Lundin et al., (2002)] that ensures that important statistical properties of the authentic data are preserved by using authentic normal data and fraud as a seed for generating synthetic data. This enables us to create realistic behavior profiles for users and attackers. The data is used to train the fraud detection system itself, thus creating the necessary adaptation of the system to a specific environment. Here we aim to verify the usability and applicability of the synthetic data, by using them to train a fraud detection system. The system is then exposed to a set of authentic data to measure parameters such as detection capability and false alarm rate as well as to a corresponding set of synthetic data, and the results are compared.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The first prototype of a tool is described that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system, and an evasion attack that was discovered as a result of analyzing the test results is presented.
Abstract: Signature-based intrusion detection systems use a set of attack descriptions to analyze event streams, looking for evidence of malicious behavior. If the signatures are expressed in a well-defined language, it is possible to analyze the attack signatures and automatically generate events or series of events that conform to the attack descriptions. This approach has been used in tools whose goal is to force intrusion detection systems to generate a large number of detection alerts. The resulting "alert storm" is used to desensitize intrusion detection system administrators and hide attacks in the event stream. We apply a similar technique to perform testing of intrusion detection systems. Signatures from one intrusion detection system are used as input to an event stream generator that produces randomized synthetic events that match the input signatures. The resulting event stream is then fed to a number of different intrusion detection systems and the results are analyzed. This paper presents the general testing approach and describes the first prototype of a tool, called Mucus, that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system. The paper describes preliminary cross-testing experiments with both an open-source and a commercial tool and reports the results. An evasion attack that was discovered as a result of analyzing the test results is also presented.
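
Below is a heavily simplified sketch of the idea: extract the content strings from a Snort-style rule and stitch together a matching payload. The rule shown is invented and stripped down; the actual tool parses real Snort signatures (with many more options) and emits actual network traffic.

```python
import re
import random
import string

# Toy payload synthesis from a simplified Snort-style rule: the generated bytes
# contain every content string, padded with random filler.
rule = ('alert tcp any any -> any 80 (msg:"WEB-ATTACKS /etc/passwd"; '
        'content:"GET "; content:"/etc/passwd";)')

def contents(snort_rule):
    return re.findall(r'content:"([^"]+)"', snort_rule)

def synthesize_payload(snort_rule, max_filler=8):
    payload = b""
    for c in contents(snort_rule):
        filler = "".join(random.choices(string.ascii_lowercase,
                                        k=random.randint(0, max_filler)))
        payload += c.encode() + filler.encode()
    return payload

random.seed(0)
p = synthesize_payload(rule)
print(p)
# sanity check: every content string appears in the generated payload
assert all(c.encode() in p for c in contents(rule))
```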

Proceedings ArticleDOI
08 Dec 2003
TL;DR: An expert system with a decision tree is presented that uses predetermined invariant relationships between redundant digital objects to detect semantic incongruities, automatically identifying relevant evidence so that experts can focus on the relevant files, users, times, and other facts first.
Abstract: When computer security violations are detected, computer forensic analysts attempting to determine the relevant causes and effects are forced to perform the tedious tasks of finding and preserving useful clues in large networks of operational machines. To augment a computer crime investigator's efforts, we present an expert system with a decision tree that uses predetermined invariant relationships between redundant digital objects to detect semantic incongruities. By analyzing data from a host or network and searching for violations of known data relationships, particularly when an attacker is attempting to hide his presence, an attacker's unauthorized changes may be automatically identified. Examples of such invariant data relationships are provided, as are techniques to identify new, useful ones. By automatically identifying relevant evidence, experts can focus on the relevant files, users, times and other facts first.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work presents an approach to classify server traffic based on decision trees learned during a training phase, which provides a more accurate classification in the presence of malicious activity.
Abstract: Understanding the nature of the information flowing into and out of a system or network is fundamental to determining if there is adherence to a usage policy. Traditional methods of determining traffic type rely on the port label carried in the packet header. This method can fail, however, in the presence of proxy servers that remap port numbers or host services that have been compromised to act as backdoors or covert channels. We present an approach to classify server traffic based on decision trees learned during a training phase. The trees are constructed from traffic described using a set of features we designed to capture stream behavior. Because our classification of the traffic type is independent of port label, it provides a more accurate classification in the presence of malicious activity. An empirical evaluation illustrates that models of both aggregate protocol behavior and host-specific protocol behavior obtain classification accuracies ranging from 82% to 100%.
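
A small sketch of port-independent classification follows, using scikit-learn's decision tree as a stand-in learner; the flow features and numbers are invented and are not the paper's feature set.

```python
# Port-independent traffic classification sketch: flows are described by
# behavioral features, and the learned tree labels a flow regardless of the
# port it arrives on.
from sklearn.tree import DecisionTreeClassifier

# features per flow: [mean packet size, mean inter-arrival (ms), client->server byte ratio]
flows = [
    [ 950, 12, 0.10],   # bulk transfer, server-heavy  -> "http"
    [1020, 15, 0.08],
    [  80, 90, 0.55],   # small, chatty, interactive   -> "ssh"
    [  95, 70, 0.60],
    [ 300, 40, 0.45],   # mixed command/data           -> "smtp"
    [ 280, 35, 0.50],
]
labels = ["http", "http", "ssh", "ssh", "smtp", "smtp"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(flows, labels)

# A flow on port 80 that behaves like interactive SSH (e.g. a backdoor or
# covert channel) is classified by behavior, not by its port label.
suspect_flow = [[90, 85, 0.58]]
print(clf.predict(suspect_flow))
```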

Proceedings ArticleDOI
M.M. Williamson1
08 Dec 2003
TL;DR: An approach to preventing the damage caused by viruses that travel via email is presented, using a rate-limiter or virus throttle that does not affect normal traffic, but quickly slows and stops viral traffic.
Abstract: We present an approach to preventing the damage caused by viruses that travel via email. The approach prevents an infected machine from spreading the virus further. This directly addresses the two ways that viruses cause damage: fewer machines spreading the virus reduces the number of machines infected and reduces the traffic generated by the virus. The approach relies on the observation that normal emailing behaviour is quite different from the behaviour of a spreading virus, with the virus sending messages at a much higher rate, to different addresses. To limit propagation, a rate-limiter or virus throttle is described that does not affect normal traffic, but quickly slows and stops viral traffic. We include an analysis of normal emailing behaviour, and details of the throttle design. In addition, an implementation is described and tested with real viruses, showing that the approach is practical.
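
A compact sketch of the throttle idea follows; the working-set size, release rate, and queue-length alarm are illustrative values, not the paper's parameters.

```python
from collections import deque

# Sketch of the rate-limiting idea: mail to recently-contacted addresses goes
# out immediately; mail to "new" addresses is queued and released at a fixed
# rate, so viral traffic (many distinct new addresses) is slowed and exposed.
class VirusThrottle:
    def __init__(self, working_set_size=4, per_tick=1):
        self.working_set = deque(maxlen=working_set_size)  # recent recipients
        self.delay_queue = deque()                         # pending new recipients
        self.per_tick = per_tick

    def send(self, recipient, deliver):
        if recipient in self.working_set:
            deliver(recipient)                 # normal behaviour: not delayed
        else:
            self.delay_queue.append((recipient, deliver))

    def tick(self):
        """Called once per time unit: release a limited number of new contacts."""
        for _ in range(min(self.per_tick, len(self.delay_queue))):
            recipient, deliver = self.delay_queue.popleft()
            self.working_set.append(recipient)
            deliver(recipient)
        if len(self.delay_queue) > 20:         # queue growth signals an outbreak
            print(f"WARNING: {len(self.delay_queue)} messages queued; likely viral traffic")

throttle = VirusThrottle()
deliver = lambda r: print("delivered to", r)
for r in ["alice", "bob", "alice"]:            # normal traffic: few distinct addresses
    throttle.send(r, deliver)
throttle.tick()
for i in range(50):                            # virus: many distinct addresses
    throttle.send(f"victim{i}@example.com", deliver)
throttle.tick()                                # only one released; the rest stay queued
```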

Proceedings ArticleDOI
08 Dec 2003
TL;DR: It is argued that security is emerging as an inherent design issue for online games, after graphics and artificial intelligence, which have been important design issues for most games for decades.
Abstract: The emergence of online games has fundamentally changed security requirements for computer games, which previously were largely concerned with copy protection. We examine how new security requirements impact the design of online games by using online bridge, a simple client-server game, as our case study. We argue that security is emerging as an inherent design issue for online games, after graphics and artificial intelligence, which have been important design issues for most games for decades. The most important new security concern in online game design is fairness enforcement, and most security mechanisms contribute to a single objective, namely, making the play fair for each user.

Proceedings Article
08 Dec 2003
TL;DR: A retrospective view of the design of SRI's Provably Secure Operating System (PSOS), a formally specified tagged-capability hierarchical system architecture, examines PSOS in the light of what has happened in computer system developments since 1980.
Abstract: We provide a retrospective view of the design of SRI's Provably Secure Operating System (PSOS), a formally specified tagged-capability hierarchical system architecture. We examine PSOS in the light of what has happened in computer system developments since 1980, and assess the relevance of the PSOS concepts in that light.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The HSAP technique can be applied to any type of processor to defend against buffer overflow attacks, and its applicability is shown to be independent of architecture.
Abstract: Buffer overflow attacks have been causing serious security problems for decades. With more embedded systems networked, it becomes an important research problem to defend embedded systems against buffer overflow attacks. We propose the hardware/software address protection (HSAP) technique to solve this problem. We first classify buffer overflow attacks into two categories (stack smashing attacks and function pointer attacks) and then provide two corresponding defending strategies. In our technique, a hardware boundary check method and a function pointer XOR method are used to protect a system against stack smashing attacks and function pointer attacks, respectively. Although the focus of the HSAP technique is on embedded systems because of the availability of hardware support, we show that the HSAP technique can be applied to any type of processor to defend against buffer overflow attacks. We use four classes of processors to illustrate that the applicability of our technique is independent of architecture. We experiment with our HSAP technique in the ARM Evaluator-7T simulation development environment. The results show that our HSAP technique defends a system against more types of buffer overflow attacks with little overhead.

Proceedings ArticleDOI
John Viega1
08 Dec 2003
TL;DR: It is demonstrated that universal hash functions are a theoretically appealing and efficient mechanism for accumulating entropy, and it is argued that systems should provide both computational security and information theoretic security through separate interfaces.
Abstract: There is a large gap between the theory and practice of random number generation. For example, on most operating systems, using /dev/random to generate a 256-bit AES key is highly likely to produce a key with no more than 160 bits of security. We propose solutions to many of the issues that real software-based random number infrastructures have encountered. In particular, we demonstrate that universal hash functions are a theoretically appealing and efficient mechanism for accumulating entropy, show how to deal with forking processes without using a two-phase commit, explore better metrics for estimating entropy, and argue that systems should provide both computational security and information-theoretic security through separate interfaces.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work proposes a general process model for automatically analyzing a collection of fragments to reconstruct the original document by placing the fragments in proper order, and shows the problem of finding the optimal ordering to be equivalent to finding a maximum weight Hamiltonian path in a complete graph.
Abstract: Reassembly of fragmented objects from a collection of randomly mixed fragments is a common problem in classical forensics. We address the digital forensic equivalent, i.e., reassembly of document fragments, using statistical modelling tools applied in data compression. We propose a general process model for automatically analyzing a collection of fragments to reconstruct the original document by placing the fragments in proper order. Probabilities are assigned to the likelihood that two given fragments are adjacent in the original using context modelling techniques from data compression. The problem of finding the optimal ordering is shown to be equivalent to finding a maximum weight Hamiltonian path in a complete graph. Heuristics are designed and explored, and implementation results are provided that demonstrate the validity of the proposed technique.
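
The toy below stands in for the paper's approach: adjacency likelihoods come from a crude bigram count against a reference string rather than a compression-based context model, and the maximum-weight path is found by brute force, which only works for a handful of fragments.

```python
from itertools import permutations

# Toy reconstruction of a shuffled text from fragments. Adjacency is scored by
# how often the two-character sequence spanning the cut appears in a small
# reference text (a crude order-1 stand-in for a context model).
reference = "the quick brown fox jumps over the lazy dog"
fragments = ["mps over the l", "wn fox ju", "azy dog.", "The quick bro"]

def adjacency_score(left, right):
    """Likelihood proxy that `right` follows `left` in the original."""
    bigram = (left[-1] + right[0]).lower()
    return reference.count(bigram)

def best_ordering(frags):
    best, best_score = None, float("-inf")
    for order in permutations(range(len(frags))):      # exponential: toy sizes only
        score = sum(adjacency_score(frags[i], frags[j])
                    for i, j in zip(order, order[1:]))
        if score > best_score:
            best, best_score = order, score
    return [frags[i] for i in best]

print("".join(best_ordering(fragments)))
# -> "The quick brown fox jumps over the lazy dog."
```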

Proceedings ArticleDOI
Dirk Balfanz1
08 Dec 2003
TL;DR: This work proposes a usable access control system for the World Wide Web, i.e., a system that is easy to use both for content providers and for content consumers (who want hassle-free access to such protected content).
Abstract: While publishing content on the World Wide Web has moved within reach of the nontechnical mainstream, controlling access to published content still requires expertise in Web server configuration, public-key certification, and a variety of access control mechanisms. Lack of such expertise results in unnecessary exposure of content published by nonexperts, or forces cautious nonexperts to leave their content off-line. Recent research has focused on making access control systems more flexible and powerful, but not on making them easier to use. We propose a usable access control system for the World Wide Web, i.e., a system that is easy to use both for content providers (who want to protect their content from unauthorized access) and (authorized) content consumers (who want hassle-free access to such protected content). Our system is constructed with judicious use of conventional building blocks, such as access control lists and public-key certificates. We point out peculiarities in existing software that make it unnecessarily hard to achieve our goal of usable access control, and assess the security provided by our usable system.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The results show that the model fulfills its goals and serves as a successful runtime policy-based intrusion detector on a standard OS.
Abstract: In 2002 we proposed a model for policy-based intrusion detection, based on information flow control. In the present paper, we show its applicability and effectiveness on a standard OS. We present results of two sets of experiments, one carried out in a completely controlled environment, the other on an operational server with real network traffic. Our results show that the model fulfills its goals and serves as a successful runtime policy-based intrusion detector.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: A brief chronology of both the spread and eradication of the program, a presentation of how the program worked, and details of the aftermath are provided; the discussion supports the title: the community has failed to learn from the past.
Abstract: On the evening of 2 November 1988, someone "infected" the Internet with a worm program. That program exploited flaws in utility programs in systems based on BSD-derived versions of UNIX. The flaws allowed the program to break into those machines and copy itself, thus infecting those systems. This program eventually spread to thousands of machines, and disrupted normal activities and Internet connectivity for many days. It was the first major network-wide attack on computer systems, and thus was a matter of considerable interest. We provide a brief chronology of both the spread and eradication of the program, a presentation of how the program worked, and details of the aftermath. That is followed by a discussion of some observations of what has happened in the years since that incident. The discussion supports the title: the community has failed to learn from the past.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The proposed marking scheme contains three algorithms, namely the marking, reflection, and reconstruction algorithms, which have been well tested through extensive simulation experiments; the results show that the marking scheme can achieve high performance in tracing the sources of potential attack packets.
Abstract: The reflector attack [Vern Paxson (2001)] is one of the most serious types of denial-of-service (DoS) attacks, and can hardly be traced by contemporary traceback techniques, since the marked information written by any routers between the attacker and the reflectors is lost in the replied packets from the reflectors. We propose a reflective algebraic marking scheme for tracing DoS and DDoS attacks, as well as reflector attacks. The proposed marking scheme contains three algorithms, namely the marking, reflection, and reconstruction algorithms, which have been well tested through extensive simulation experiments. The results show that the marking scheme can achieve high performance in tracing the sources of potential attack packets. In addition, it produces negligible false positives, whereas other current methods usually produce a certain amount of false positives.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: A functional model and a verified formal specification are presented for MLS-PCA, a new high-assurance security architecture based upon DARPA polymorphic computing architecture (PCA) advances and a new distributed process-level encryption scheme.
Abstract: DOD Joint Vision 2020 (JV2020) is the integrated multiservice planning document for the conduct of future warfare among coalition forces. It requires the confluence of a number of key avionics technical developments: integrating the network-centric battlefield, management of hundreds of thousands of distributed processors, high assurance multilevel security (MLS) in the battlefield, and low cost high assurance engineering. We describe the results of a study and modeling of a new security architecture, MLS-PCA, that yields a practical solution for JV2020 based upon DARPA polymorphic computing architecture (PCA) advances and a new distributed process-level encryption scheme. We define a functional model and a verified formal specification of MLS-PCA, for high assurance, with the constraints PCA software and hardware morphware must support. Also, we show a viable mapping of the MLS-PCA model to the PCA hardware. MLS-PCA is designed to support upwards of 500,000 CPUs predicted by Moore's law to be available circa 2020. To test such speculation, we conclude with a description of an in-progress proof-of-concept implementation of MLS-PCA using a 100-node grid computing system and an MLS distributed targeting application.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: The overall design and philosophy of Poly², an approach to build a hardened framework for network services from commodity hardware and software, is discussed, an initial implementation is presented, and future work is outlined.
Abstract: General-purpose operating systems provide a rich computing environment both to the user and the attacker. The declining cost of hardware and the growing security concerns of software necessitate a revalidation of the many assumptions made in network service architectures. Enforcing sound design principles while retaining usability and flexibility is key to practical security. Poly² is an approach to build a hardened framework for network services from commodity hardware and software. Guided by well-known security design principles such as least common mechanism and economy of mechanism, and driven by goals such as psychological acceptability and immediate usability, Poly² provides a secure platform for network services. It also serves as a testbed for several security-related research areas such as intrusion detection, forensics, and high availability. This paper discusses the overall design and philosophy of Poly², presents an initial implementation, and outlines future work.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work introduces practical techniques for on-line attack recovery, which include rules for locating damage and rules for execution order, and introduces multiversion data objects to reduce unnecessary blocking of normal task execution and improve the performance of the whole system.
Abstract: Workflow systems are popular in daily business processing. Since vulnerabilities cannot be totally removed from a system, recovery from successful attacks is unavoidable. We focus on attacks that inject malicious tasks into workflow management systems. We introduce practical techniques for on-line attack recovery, which include rules for locating damage and rules for execution order. In our system, an independent intrusion detection system reports identified malicious tasks periodically. The recovery system detects all damage caused by the malicious tasks and automatically repairs the damage according to dependency relations. Without multiple versions of data objects, recovery tasks may be corrupted by executing normal tasks when we try to run damage analysis and normal tasks concurrently. We address this problem by introducing multiversion data objects to reduce unnecessary blocking of normal task execution and improve the performance of the whole system. We analyze the integrity level and performance of our system. The analytic results provide guidelines for designing such systems.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: Methods are presented to increase resiliency to server failures by migrating long-running, secure TCP-based connections to backup servers, thus mitigating damage from servers disabled by attacks or accidental failures and providing an immediate way to enhance reliability, and thus resistance to attack.
Abstract: Methods are presented to increase resiliency to server failures by migrating long-running, secure TCP-based connections to backup servers, thus mitigating damage from servers disabled by attacks or accidental failures. The failover mechanism described is completely transparent to the client. Using these techniques, simple, practical systems can be built that can be retrofitted into the existing infrastructure, i.e. without requiring changes either to the TCP/IP protocol or to the client system. The end result is a drop-in method of adding significant robustness to secure network connections such as those using the secure shell protocol (SSH). As there is a large installed universe of TCP-based user agent software, it will be some time before other approaches designed to withstand these kinds of service failures are widely adopted; our methods provide an immediate way to enhance reliability, and thus resistance to attack, without having to wait for clients to upgrade their software. The practical viability of our approach is demonstrated by providing details of a system we have built that satisfies these requirements.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: A password authentication system that can tolerate server compromises and can be used to build intrusion-tolerant applications is described.
Abstract: In a password-based authentication system, to authenticate a user, a server typically stores password verification data (PVD), which is a value derived from the user's password using publicly known functions. For those users whose passwords fall within an attacker's dictionary, their PVDs, if stolen (for example, through server compromise), allow the attacker to mount off-line dictionary attacks. We describe a password authentication system that can tolerate server compromises. The described system uses multiple (say n) servers to share password verification data and never reconstructs the shared PVD during user authentications. Only a threshold number (say t, t ≤ n) of these servers are required for a user authentication, and compromising up to (t-1) of these servers will not allow an attacker to mount off-line dictionary attacks, even if a user's password falls within the attacker's dictionary. The described system can still function if some of the servers are unavailable. We give the system architecture and implementation details. Our experimental results show that the described system works well. The given system can be used to build intrusion-tolerant applications.
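
The described protocol never reconstructs the shared PVD; the sketch below only shows the t-of-n sharing arithmetic that underlies such designs, using textbook Shamir secret sharing over a prime field with invented parameters.

```python
import random

# Textbook Shamir secret sharing: split a value into n shares so that any t of
# them reconstruct it, while fewer than t reveal nothing useful. The prime,
# threshold, and the stand-in "PVD" value are illustrative.
P = 2**127 - 1          # a Mersenne prime large enough for a toy secret

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

pvd = 123456789123456789                 # stand-in for password verification data
shares = share(pvd, t=3, n=5)
print(reconstruct(shares[:3]) == pvd)    # any 3 shares suffice -> True
print(reconstruct(shares[:2]) == pvd)    # 2 shares do not -> False
```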