
Showing papers presented at "Annual Computer Security Applications Conference in 2005"


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This survey tries to answer two important questions: "Are graphical passwords as secure as text-based passwords?" and "What are the major design and implementation issues for graphical passwords?"
Abstract: The most common computer authentication method is to use alphanumerical usernames and passwords. This method has been shown to have significant drawbacks. For example, users tend to pick passwords that can be easily guessed. On the other hand, if a password is hard to guess, then it is often hard to remember. To address this problem, some researchers have developed authentication methods that use pictures as passwords. In this paper, we conduct a comprehensive survey of the existing graphical password techniques. We classify these techniques into two categories: recognition-based and recall-based approaches. We discuss the strengths and limitations of each method and point out the future research directions in this area. We also try to answer two important questions: "Are graphical passwords as secure as text-based passwords?" and "What are the major design and implementation issues for graphical passwords?" This survey will be useful for information security researchers and practitioners who are interested in finding an alternative to text-based authentication methods.

585 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: The sHype hypervisor security architecture enforces strong isolation at the granularity of a virtual machine, thus providing a robust foundation on which higher software layers can enact finer-grained controls.
Abstract: We present the sHype hypervisor security architecture and examine in detail its mandatory access control facilities. While existing hypervisor security approaches aiming at high assurance have been proven useful for high-security environments that prioritize security over performance and code reuse, our approach aims at commercial security where near-zero performance overhead, non-intrusive implementation, and usability are of paramount importance. sHype enforces strong isolation at the granularity of a virtual machine, thus providing a robust foundation on which higher software layers can enact finer-grained controls. We provide the rationale behind the sHype design and describe and evaluate our implementation for the Xen open-source hypervisor.

304 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work proposes a dynamic solution that tags and tracks user input at runtime and prevents its improper use to maliciously affect the execution of the program.
Abstract: Improperly validated user input is the underlying root cause for a wide variety of attacks on Web-based applications. Static approaches for detecting this problem help at the time of development, but require source code and report a number of false positives. Hence, they are of little use for securing fully deployed and rapidly evolving applications. We propose a dynamic solution that tags and tracks user input at runtime and prevents its improper use to maliciously affect the execution of the program. Our implementation can be transparently applied to Java classfiles, and does not require source code. Benchmarks show that the overhead of this runtime enforcement is negligible and can prevent a number of attacks.

259 citations
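
The dynamic tainting approach described above can be sketched in a few lines. This is a simplified Python illustration of the general technique, not the paper's Java bytecode instrumentation; all names here are hypothetical:

```python
# Sketch of runtime taint tracking: user input is wrapped in a tainted
# string type, taint survives concatenation, and security-sensitive
# sinks reject data that still carries taint.

class TaintedStr(str):
    """A str subclass marking data that originated from user input."""
    tainted = True

def taint(value):
    return TaintedStr(value)

def concat(a, b):
    # Taint propagates: if either operand is tainted, the result is too.
    result = str(a) + str(b)
    if getattr(a, "tainted", False) or getattr(b, "tainted", False):
        return TaintedStr(result)
    return result

def execute_sql(query):
    # Sink: refuse queries still carrying taint (unvalidated user input).
    if getattr(query, "tainted", False):
        raise ValueError("tainted data reached SQL sink")
    return "executed: " + query

user_input = taint("1 OR 1=1")
query = concat("SELECT * FROM t WHERE id = ", user_input)
```

A real implementation would propagate taint through all string operations and clear it only at validation routines; the wrapper-class trick is just the smallest way to show the flow.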


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper proposes a method for automatically generating new service-emulation scripts for Honeyd, a popular tool developed by Niels Provos that offers a simple way to emulate services offered by several machines on a single PC.
Abstract: Honeyd (N. Provos, 2004) is a popular tool developed by Niels Provos that offers a simple way to emulate services offered by several machines on a single PC. It is a so-called low-interaction honeypot. Responses to incoming requests are generated by ad hoc scripts that need to be written by hand. As a result, few scripts exist, especially for services handling proprietary protocols. In this paper, we propose a method to alleviate these problems by automatically generating new scripts. We explain the method and describe its limitations. We analyze the quality of the generated scripts using two different methods. On the one hand, we have launched known attacks against a machine running our scripts; on the other hand, we have deployed that machine on the Internet, next to a high-interaction honeypot, for two months. For those attackers that have targeted both machines, we can verify whether our scripts were able to fool them. We also discuss the various tuning parameters of the algorithm that can be set to either increase the quality of the scripts or, on the contrary, reduce their complexity.

188 citations
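
The learning idea behind automatic script generation can be illustrated roughly as follows. This is a hypothetical Python sketch of a request/response table learned from observed traffic, not the paper's actual algorithm:

```python
# Learn responses for a service by observing (request, response) pairs
# from real traffic, then answer future requests from the learned table,
# falling back to the most common response for unseen requests.

from collections import Counter, defaultdict

class LearnedResponder:
    def __init__(self):
        self.table = defaultdict(Counter)

    def observe(self, request, response):
        self.table[request][response] += 1

    def respond(self, request):
        if request in self.table:
            # Most frequently seen response for this exact request.
            return self.table[request].most_common(1)[0][0]
        # Fallback: the most common response overall.
        overall = Counter()
        for counts in self.table.values():
            overall.update(counts)
        return overall.most_common(1)[0][0] if overall else ""
```

A real emulation script must also handle protocol state; this table-lookup view only captures the stateless core of the idea.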


Proceedings ArticleDOI
05 Dec 2005
TL;DR: The Bell-La Padula security model produced conceptual tools for the analysis and design of secure computer systems and these principles are reviewed in the context of today's computer and network environment.
Abstract: The Bell-La Padula security model produced conceptual tools for the analysis and design of secure computer systems. Together with its sibling engineering initiatives, it identified and elucidated security principles that endure today. This paper reviews those security principles, first in their own time, and then in the context of today's computer and network environment.

148 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: The preliminary experimental evaluation on both real and synthetic database traces shows that the proposed ID mechanisms work well in practical situations, and the use of roles makes the approach usable even for databases with a large user population.
Abstract: A considerable effort has recently been devoted to the development of database management systems (DBMS) which guarantee high assurance security and privacy. An important component of any strong security solution is represented by intrusion detection (ID) systems, able to detect anomalous behavior by applications and users. To date, however, there have been very few ID mechanisms specifically tailored to database systems. In this paper, we propose such a mechanism. The approach we propose to ID is based on mining database traces stored in log files. The result of the mining process is used to form user profiles that can model normal behavior and identify intruders. An additional feature of our approach is that we couple our mechanism with role-based access control (RBAC). Under an RBAC system, permissions are associated with roles, usually grouping several users, rather than with single users. Our ID system is able to determine role intruders, that is, individuals who, while holding a specific role, behave differently from the normal behavior of the role. An important advantage of providing an ID mechanism specifically tailored to databases is that it can also be used to protect against insider threats. Furthermore, the use of roles makes our approach usable even for databases with a large user population. Our preliminary experimental evaluation on both real and synthetic database traces shows that our methods work well in practical situations.

147 citations
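
A role-based anomaly profile of the kind described can be sketched as follows. This is an illustrative Python simplification; the feature choice and threshold are assumptions, not the paper's mining method:

```python
# Per-role model of "normal" query features mined from audit logs.
# A user acting under a role is flagged when a query feature is rarely
# (or never) seen in that role's training data.

from collections import Counter

class RoleProfile:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def train(self, feature):
        # feature could be e.g. a (command, table) pair from the log.
        self.counts[feature] += 1
        self.total += 1

    def is_anomalous(self, feature, threshold=0.05):
        # Flag features whose empirical probability is below threshold.
        p = self.counts[feature] / self.total if self.total else 0.0
        return p < threshold
```

A profile like this is kept per role rather than per user, which is what makes the approach scale to large user populations.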


Proceedings ArticleDOI
05 Dec 2005
TL;DR: A graphical technique is introduced that shows multiple-step attacks by matching rows and columns of the clustered adjacency matrix that allows attack impact/responses to be identified and prioritized according to the number of attack steps to victim machines, and allows attack origins to be determined.
Abstract: We apply adjacency matrix clustering to network attack graphs for attack correlation, prediction, and hypothesizing. We self-multiply the clustered adjacency matrices to show attacker reachability across the network for a given number of attack steps, culminating in transitive closure for attack prediction over all possible numbers of steps. This reachability analysis provides a concise summary of the impact of network configuration changes on the attack graph. Using our framework, we also place intrusion alarms in the context of vulnerability-based attack graphs, so that false alarms become apparent and missed detections can be inferred. We introduce a graphical technique that shows multiple-step attacks by matching rows and columns of the clustered adjacency matrix. This allows attack impact/responses to be identified and prioritized according to the number of attack steps to victim machines, and allows attack origins to be determined. Our techniques have quadratic complexity in the size of the attack graph.

127 citations
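
The self-multiplication of the adjacency matrix amounts to boolean matrix powers; a minimal Python sketch of the reachability computation (illustrative only, not the paper's clustered implementation):

```python
# A[i][j] = 1 means an attacker on machine i can compromise machine j in
# one attack step. Multiplying under boolean algebra gives multi-step
# reachability; iterating to a fixed point yields the transitive closure
# used for attack prediction over all possible numbers of steps.

def bool_mult(a, b):
    n = len(a)
    return [[int(any(a[i][k] and b[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def bool_or(a, b):
    return [[int(x or y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def transitive_closure(adj):
    reach = adj
    while True:
        nxt = bool_or(reach, bool_mult(reach, adj))
        if nxt == reach:          # fixed point: no new machines reachable
            return reach
        reach = nxt
```

For a three-machine chain 0 → 1 → 2, the closure adds the two-step edge 0 → 2, which is exactly the prediction the abstract describes.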


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper provides a novel alternative approach to network vulnerability analysis by utilizing a penetration tester's perspective of the maximal level of penetration possible on a host, and argues that suboptimal solutions are an unavoidable cost of scalability, and hence practical utility.
Abstract: The typical means by which an attacker breaks into a network is through a chain of exploits, where each exploit in the chain lays the groundwork for subsequent exploits. Such a chain is called an attack path, and the set of all possible attack paths forms an attack graph. Researchers have proposed a variety of methods to generate attack graphs. In this paper, we provide a novel alternative approach to network vulnerability analysis by utilizing a penetration tester's perspective of the maximal level of penetration possible on a host. Our approach has the following benefits: it provides a more intuitive model in which an analyst can work, and its algorithmic complexity is polynomial in the size of the network, and so it has the potential of scaling well to practical networks. The drawback is that we track only "good" attack paths, as opposed to all possible attack paths. Hence, an analyst may make suboptimal choices when repairing the network. Since attack graphs grow exponentially with the size of the network, we argue that suboptimal solutions are an unavoidable cost of scalability, and hence practical utility. A working prototype tool has been implemented to demonstrate the practicality of our approach.

102 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper presents the design and implementation of Nitpicker, an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility with legacy operating systems.
Abstract: Malware such as Trojan horses and spyware remains a persistent security threat that exploits the overly complex graphical user interfaces of today's commodity operating systems. In this paper, we present the design and implementation of Nitpicker, an extremely minimized secure graphical user interface that addresses these problems while retaining compatibility with legacy operating systems. We describe our approach of kernelizing the window server and present the deployed security mechanisms and protocols. Our implementation comprises only 1,500 lines of code while supporting commodity software such as X11 applications alongside protected graphical security applications. We discuss key techniques such as client-side window handling, a new floating-labels mechanism, drag-and-drop, and denial-of-service-preventing resource management. Furthermore, we present an application scenario to evaluate the feasibility, performance, and usability of our approach.

99 citations


Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper presents the concept of stealth breakpoints and discusses the design and implementation of VAMPiRE, a realization of this concept, which cannot be detected or countered and provides an unlimited number of breakpoints that can be set on code, data, and I/O with the same precision as hardware breakpoints.
Abstract: Microscopic analysis of malicious code (malware) requires the aid of a variety of powerful tools. Chief among them is a debugger that enables runtime binary analysis at an instruction level. One of the important services provided by a debugger is the ability to stop execution of code at an arbitrary point during runtime, using breakpoints. Software breakpoints support an unlimited number of breakpoint locations by changing the code being debugged so that it can be interrupted during runtime. Most, if not all, malware employs self-modifying and/or self-checking (SM-SC) techniques that make it highly sensitive to code modification, rendering software breakpoints limited in their scope. Hardware breakpoints supported by the underlying processor, on the other hand, use a subset of the processor register set and exception mechanisms to provide breakpoints that do not entail code modification. This makes hardware breakpoints the most powerful breakpoint mechanism for malware analysis. However, current processors provide a very limited number of hardware breakpoints (typically 2-4 locations). Thus, a serious restriction is imposed on the debugger, which cannot set a desired number of breakpoints without resorting to the limited alternative of software breakpoints. Also, with the ever-evolving nature of malware, techniques are being employed that prevent the use of hardware breakpoints. This calls for a new breakpoint mechanism that retains the features of hardware breakpoints while providing an unlimited number of breakpoints that cannot be detected or countered. In this paper, we present the concept of stealth breakpoints and discuss the design and implementation of VAMPiRE, a realization of this concept. VAMPiRE cannot be detected or countered and provides an unlimited number of breakpoints that can be set on code, data, and I/O with the same precision as hardware breakpoints. It does so by employing a subtle combination of simple stealth techniques using virtual memory and hardware single-stepping mechanisms that are available on all processors, old and new. This makes VAMPiRE portable to any architecture, providing powerful breakpoint ability similar to hardware breakpoints for microscopic malware analysis.

87 citations
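
The fault-then-single-step mechanism can be simulated abstractly. The following is a toy Python model of the idea only, far removed from VAMPiRE's actual x86 virtual-memory implementation; all names are invented:

```python
# Pages containing breakpoints are marked non-executable; the resulting
# "fault" on instruction fetch transfers control to the debugger, which
# reports a hit if the faulting address is a breakpoint, then
# single-steps the real instruction with the page temporarily re-enabled.
# The debugged code itself is never modified, so SM-SC checks pass.

PAGE = 4  # instructions per simulated page

class StealthDebugger:
    def __init__(self, code):
        self.code = code          # list of no-arg instruction callables
        self.breakpoints = set()
        self.protected = set()    # pages marked non-executable
        self.hits = []

    def set_breakpoint(self, addr):
        self.breakpoints.add(addr)
        self.protected.add(addr // PAGE)   # no code bytes are changed

    def run(self):
        for pc, insn in enumerate(self.code):
            page = pc // PAGE
            if page in self.protected:
                # Simulated page fault on instruction fetch. Every fetch
                # from the page faults, not only the breakpoint itself.
                if pc in self.breakpoints:
                    self.hits.append(pc)   # breakpoint fires
                # Single-step: re-enable, execute, re-protect.
                self.protected.discard(page)
                insn()
                self.protected.add(page)
            else:
                insn()
```

The per-page faulting overhead visible in the model mirrors a real cost of the technique: instructions sharing a page with a breakpoint also fault and must be single-stepped.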


Proceedings ArticleDOI
05 Dec 2005
TL;DR: It is shown that Wurster et al.'s page-replication attack can be detected by self-checksumming programs with self-modifying code, and is robust up to attacks using either costly interpretive emulation or specialized hardware.
Abstract: Recent research has proposed self-checksumming as a method by which a program can detect any possibly malicious modification to its code. Wurster et al. developed an attack against such programs that renders code modifications undetectable to any self-checksumming routine. The attack replicated pages of program text and altered values in hardware data structures so that data reads and instruction fetches retrieved values from different memory pages. A cornerstone of their attack was its applicability to a variety of commodity hardware: they could alter memory accesses using only a malicious operating system. In this paper, we show that their page-replication attack can be detected by self-checksumming programs with self-modifying code. Our detection is efficient, adding less than 1 microsecond to each checksum computation in our experiments on three processor families, and is robust up to attacks using either costly interpretive emulation or specialized hardware.

Proceedings ArticleDOI
Mary Ellen Zurko
05 Dec 2005
TL;DR: This essay discusses the systemic roadblocks at the social, technical, and pragmatic levels that user-centered security must overcome to make substantial breakthroughs.
Abstract: User-centered security has been identified as a grand challenge in information security and assurance. It is on the brink of becoming an established subdomain of both security and human/computer interface (HCI) research, and an influence on the product development lifecycle. Both security and HCI rely on the reality of interactions with users to prove the utility and validity of their work. As practitioners and researchers in those areas, we still face major issues when applying even the most foundational tools used in either of these fields across both of them. This essay discusses the systemic roadblocks at the social, technical, and pragmatic levels that user-centered security must overcome to make substantial breakthroughs. Expert evaluation and user testing are producing effective usable security today. Principles such as safe staging, enumerating usability failure risks, integrated security, transparent security and reliance on trustworthy authorities can also form the basis of improved systems.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work improves upon existing implementations of port knocking by presenting a novel port knocking architecture that provides strong authentication while addressing the weaknesses of existing port knocking systems.
Abstract: It is sometimes desirable to allow access to open ports on a firewall only to authorized external users and present closed ports to all others. We examine ways to construct an authentication service to achieve this goal, and then examine one such method, "port knocking", and its existing implementations, in detail. We improve upon these existing implementations by presenting a novel port knocking architecture that provides strong authentication while addressing the weaknesses of existing port knocking systems.
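
One plausible way to give a knock sequence strong authentication, in the spirit described above, is to derive the sequence from an HMAC over a time window so that a sniffed sequence cannot be replayed later. This is a hypothetical Python sketch, not the paper's concrete design; key, window size, and knock count are illustrative:

```python
# Client and firewall share a secret; the knock sequence (a list of port
# numbers) is an HMAC over the client IP and the current time window.

import hashlib
import hmac
import struct

def knock_sequence(secret, client_ip, window, n_knocks=4):
    """Derive n_knocks port numbers in [1024, 65534] from an HMAC."""
    msg = client_ip.encode() + struct.pack(">Q", window)
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    ports = []
    for i in range(n_knocks):
        val = struct.unpack(">H", digest[2 * i:2 * i + 2])[0]
        ports.append(1024 + val % (65535 - 1024))
    return ports

def verify_knocks(secret, client_ip, window, observed):
    expected = knock_sequence(secret, client_ip, window)
    # Constant-time comparison of the serialized sequences.
    return hmac.compare_digest(repr(expected).encode(),
                               repr(observed).encode())
```

Because the window value changes over time, an attacker who records one valid sequence gains nothing once the window rolls over.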

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper traces the ten-plus-year history of the Naval Research Laboratory's Pump, designed to minimize the covert channel threat from the necessary message acknowledgements, without penalizing system performance and reliability.
Abstract: This paper traces the ten-plus-year history of the Naval Research Laboratory's Pump idea. The Pump was theorized, designed, and built at the Naval Research Laboratory's Center for High Assurance Computer Systems. The reason for the Pump is the need to send messages from a "low" enclave to a "high" enclave, in a secure and reliable manner. In particular, the Pump was designed to minimize the covert channel threat from the necessary message acknowledgements, without penalizing system performance and reliability. We review the need for the Pump, the design of the Pump, the variants of the Pump, and the current status of the Pump, along with manufacturing and certification difficulties.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: The results indicate that model checking can be both a feasible and integral part of the software development process.
Abstract: Software model checking has become a popular tool for verifying programs' behavior. Recent results suggest that it is viable for finding and eradicating security bugs quickly. However, even state-of-the-art model checkers are limited in use when they report an overwhelming number of false positives, or when their lengthy running time dwarfs other software development processes. In this paper we report our experiences with software model checking for security properties on an extremely large scale - an entire Linux distribution consisting of 839 packages and 60 million lines of code. To date, we have discovered 108 exploitable bugs. Our results indicate that model checking can be both a feasible and integral part of the software development process.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A new approach is developed that can learn the characteristics of a particular attack, and filter out future instances of the same attack or its variants, and significantly increases the availability of servers subjected to repeated attacks.
Abstract: Buffer overflows have become the most common target for network-based attacks. They are also the primary mechanism used by worms and other forms of automated attacks. Although many techniques have been developed to prevent server compromises due to buffer overflows, these defenses still lead to server crashes. When attacks occur repeatedly, as is common with automated attacks, these protection mechanisms lead to repeated restarts of the victim application, rendering its service unavailable. To overcome this problem, we develop a new approach that can learn the characteristics of a particular attack, and filter out future instances of the same attack or its variants. By doing so, our approach significantly increases the availability of servers subjected to repeated attacks. The approach is fully automatic, does not require source code, and has low runtime overheads. In our experiments, it was effective against most attacks, and did not produce any false positives.
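
A crude version of "learning the characteristics of a particular attack" is to extract the longest byte sequence common to the captured attack instances and filter future inputs on it. This is an illustrative Python sketch with assumed thresholds, not the paper's actual method:

```python
# Build a content filter from repeated attack payloads: the invariant
# part (e.g. a NOP sled plus shellcode) survives across variants, while
# the surrounding request bytes differ.

from difflib import SequenceMatcher

def common_signature(payloads):
    """Longest substring shared by all captured attack payloads."""
    sig = payloads[0]
    for p in payloads[1:]:
        m = SequenceMatcher(None, sig, p).find_longest_match(
            0, len(sig), 0, len(p))
        sig = sig[m.a:m.a + m.size]
    return sig

def make_filter(payloads, min_len=8):
    sig = common_signature(payloads)
    if len(sig) < min_len:
        # Signature too short to deploy without false positives.
        return lambda data: False
    return lambda data: sig in data
```

Dropping matching requests instead of crashing is what keeps the server available under repeated automated attacks.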

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work proposes, develops and evaluates a system that automatically generates memorable mnemonics for a given password based on a text-corpus, and initial experimental results suggest that automatic mnemonic generation is a promising technique for making text-password systems more usable.
Abstract: Text-password based authentication schemes are a popular means of authenticating users in computer systems. Standard security practices that were intended to make passwords more difficult to crack, such as requiring users to have passwords that "look random" (high entropy), have made password systems less usable and, paradoxically, less secure. In this work, we address the need for enhancing the usability of existing text-password systems without necessitating any modifications to the existing password authentication infrastructure. We propose, develop and evaluate a system that automatically generates memorable mnemonics for a given password based on a text corpus. Initial experimental results suggest that automatic mnemonic generation is a promising technique for making text-password systems more usable. Our system was able to generate mnemonics for 80.5% of six-character passwords and 62.7% of seven-character passwords containing lower-case characters (a-z), even when the text corpus size is extremely small (1,000 sentences).
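
The core of mnemonic generation, choosing a corpus word for each password character, can be sketched naively. This is a hypothetical Python simplification; the paper's generator works with a real corpus and produces fluent sentences rather than a greedy word list:

```python
# Index corpus words by first letter, then pick one word per password
# character. Generation fails when some character has no matching word,
# which is why small corpora cover only a fraction of passwords.

from collections import defaultdict

def build_index(corpus_sentences):
    index = defaultdict(list)
    for sentence in corpus_sentences:
        for word in sentence.lower().split():
            if word and word[0].isalpha():
                index[word[0]].append(word)
    return index

def mnemonic(password, index):
    words = []
    for ch in password.lower():
        if not index.get(ch):
            return None        # no corpus word starts with this character
        words.append(index[ch][0])   # naive: take the first word seen
    return " ".join(words)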

Proceedings ArticleDOI
05 Dec 2005
TL;DR: The main thrust of the work is to build a multimodal biometric feedback mechanism into the operating system so that verification failure can automatically lock up the computer within some estimate of the time it takes to subvert the computer.
Abstract: In this paper we describe the theory, architecture, implementation, and performance of a multimodal passive biometric verification system that continually verifies the presence/participation of a logged-in user. We assume that the user logged in using strong authentication prior to the starting of the continuous verification process. While the implementation described in the paper combines digital camera-based face verification with a mouse-based fingerprint reader, the architecture is generic enough to accommodate additional biometric devices with different accuracy of classifying a given user from an imposter. The main thrust of our work is to build a multimodal biometric feedback mechanism into the operating system so that verification failure can automatically lock up the computer within some estimate of the time it takes to subvert the computer. This must be done with low false positives in order to realize a usable system. We show through experimental results that combining multiple suitably chosen modalities in our theoretical framework can effectively do that with currently available off-the-shelf components.
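
Combining modalities with different accuracies can be illustrated with naive-Bayes fusion of per-modality likelihood ratios. This is an assumed model for illustration, not necessarily the paper's theoretical framework:

```python
# Each modality reports a likelihood ratio
#   LR = P(observation | genuine user) / P(observation | imposter).
# Assuming modality independence, the ratios multiply into the prior
# odds, giving a fused posterior probability that the genuine user is
# still present; the system locks when this drops below a threshold.

def fused_posterior(likelihood_ratios, prior=0.5):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)
```

A strong face match (LR 4) together with a strong fingerprint match (LR 9) pushes the posterior near certainty, while an uninformative reading (LR 1) leaves it unchanged; this is how a weak modality can be added without hurting the decision.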

Proceedings ArticleDOI
05 Dec 2005
TL;DR: The evidence graph is proposed as a novel graph model to facilitate the presentation and manipulation of intrusion evidence and a hierarchical reasoning framework that includes local reasoning and global reasoning is developed for automated evidence analysis.
Abstract: In this paper, we present techniques for a network forensics analysis mechanism that includes effective evidence presentation, manipulation and automated reasoning. We propose the evidence graph as a novel graph model to facilitate the presentation and manipulation of intrusion evidence. For automated evidence analysis, we develop a hierarchical reasoning framework that includes local reasoning and global reasoning. Local reasoning aims to infer the roles of suspicious hosts from local observations. Global reasoning aims to identify groups of strongly correlated hosts in the attack and derive their relationships. By using the evidence graph model, we effectively integrate analyst feedback into the automated reasoning process. Experimental results demonstrate the potential and effectiveness of our proposed approaches.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A host-based system, BINDER (Break-IN DEtectoR), detects break-ins by capturing user-unintended malicious outbound connections, inferring user intent at the process level under the assumption that user intent is implied by user-driven input.
Abstract: An increasing variety of malware, such as worms, spyware, and adware, threatens both personal and business computing. Remotely controlled bot networks of compromised systems are growing quickly. In this paper, we tackle the problem of automated detection of break-ins caused by unknown malware targeting personal computers. We develop a host-based system, BINDER (Break-IN DEtectoR), to detect break-ins by capturing user-unintended malicious outbound connections (referred to as extrusions). To infer user intent, BINDER correlates outbound connections with user-driven input at the process level, under the assumption that user intent is implied by user-driven input. Thus BINDER can detect a large class of unknown malware, such as worms, spyware, and adware, without requiring signatures. We have successfully used BINDER to detect real-world spyware on daily-used computers and email worms on a controlled testbed, with very few false positives.

Proceedings ArticleDOI
P.A. Karger
05 Dec 2005
TL;DR: This paper looks at the requirements that users of MLS systems have and discusses their implications on the design of multi-level secure hypervisors.
Abstract: Using hypervisors or virtual machine monitors for security has become very popular in recent years, and a number of proposals have been made for supporting multi-level security on secure hypervisors, including PR/SM, NetTop, sHype, and others. This paper looks at the requirements that users of MLS systems have and discusses their implications on the design of multi-level secure hypervisors. It contrasts the new directions for secure hypervisors with the earlier efforts of KVM/370 and Digital's A1-secure VMM kernel.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: TARP implements security by distributing centrally issued secure MAC/IP address mapping attestations through existing ARP messages and reduces the cost of implementing ARP security by as much as two orders of magnitude compared to existing protocols.
Abstract: IP networks fundamentally rely on the address resolution protocol (ARP) for proper operation. Unfortunately, vulnerabilities in ARP enable a raft of IP-based impersonation, man-in-the-middle, and DoS attacks. Proposed countermeasures to these vulnerabilities have yet to simultaneously address backward compatibility and cost requirements. This paper introduces the ticket-based address resolution protocol (TARP). TARP implements security by distributing centrally issued secure MAC/IP address mapping attestations through existing ARP messages. We detail the TARP protocol and its implementation within the Linux operating system. Our experimental analysis shows that TARP reduces the cost of implementing ARP security by as much as two orders of magnitude compared to existing protocols. We conclude by exploring a range of operational issues associated with deploying and administering ARP security.
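
A centrally issued MAC/IP attestation with an expiry can be sketched as follows. This Python illustration uses an HMAC with a shared key as a stand-in for TARP's centrally signed tickets; the key and field layout are hypothetical:

```python
# A ticket binds a MAC/IP pair for a limited time and is authenticated
# by a central issuer. A host receiving an ARP reply accepts the mapping
# only if the attached ticket verifies and has not expired.

import hashlib
import hmac
import time

def issue_ticket(key, mac, ip, ttl=3600, now=None):
    now = int(time.time()) if now is None else now
    expiry = now + ttl
    payload = f"{mac}|{ip}|{expiry}".encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_ticket(key, payload, tag, claimed_mac, claimed_ip, now=None):
    now = int(time.time()) if now is None else now
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False               # forged or tampered ticket
    mac, ip, expiry = payload.decode().split("|")
    return mac == claimed_mac and ip == claimed_ip and now < int(expiry)
```

Because the attestation rides inside otherwise ordinary ARP messages, hosts without ticket support simply ignore it, which is the backward-compatibility property the abstract emphasizes.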

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work proposes a privacy-preserving alert correlation approach based on concept hierarchies that aims to guide the alert sanitization process with the entropy or differential entropy of sanitized attributes.
Abstract: With the increasing security threats from infrastructure attacks such as worms and distributed denial of service attacks, it is clear that cooperation among different organizations is necessary to defend against these attacks. However, organizations' privacy concerns for the incident and security alert data require that sensitive data be sanitized before they are shared with other organizations. Such a sanitization process usually has negative impacts on intrusion analysis (such as alert correlation). To balance the privacy requirements and the need for intrusion analysis, we propose a privacy-preserving alert correlation approach based on concept hierarchies. Our approach consists of two phases. The first phase is entropy-guided alert sanitization, where sensitive alert attributes are generalized to high-level concepts to introduce uncertainty into the dataset with partial semantics. To balance the privacy and the usability of alert data, we propose to guide the alert sanitization process with the entropy or differential entropy of sanitized attributes. The second phase is sanitized alert correlation. We focus on defining similarity functions between sanitized attributes and building attack scenarios from sanitized alerts. Our preliminary experimental results demonstrate the effectiveness of the proposed techniques.
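
Generalizing an alert attribute up a concept hierarchy under an entropy criterion can be sketched loosely. This Python illustration is a simplification of the paper's entropy definitions; the hierarchy (IP octets to wildcards) and target value are assumptions:

```python
# Generalize IP addresses up a concept hierarchy (host -> subnet ->
# network) until the empirical entropy of the sanitized attribute falls
# to a target, trading attribute specificity for privacy.

import math
from collections import Counter

def entropy(values):
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def generalize_ip(ip, level):
    """Replace the last `level` octets with a wildcard."""
    octets = ip.split(".")
    return ".".join(octets[:4 - level] + ["*"] * level)

def sanitize(ips, target_entropy):
    level = 0
    values = ips
    while entropy(values) > target_entropy and level < 4:
        level += 1
        values = [generalize_ip(ip, level) for ip in ips]
    return values
```

The similarity functions of the second phase then compare these generalized values (e.g. two alerts in the same "10.0.1.*" subnet) instead of exact addresses.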

Proceedings ArticleDOI
05 Dec 2005
TL;DR: It is proved that the OIAP protocol is exposed to replay attacks, which could be used to compromise the correct behavior of a TP, and a countermeasure is proposed to prevent this and any other replay attack on the protocol.
Abstract: We prove the existence of a flaw we identified in the design of the object-independent authorization protocol (OIAP), one of the building blocks of the trusted platform module (TPM), the core of trusted computing platforms (TPs) as devised by the Trusted Computing Group (TCG) standards. In particular, we prove, also with the support of a model checker, that the protocol is exposed to replay attacks, which could be used to compromise the correct behavior of a TP. We also propose a countermeasure that prevents this attack, as well as any other replay attack on the protocol.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper describes a survivability architecture combining elements of protection, detection, and adaptive reaction, applied to a DoD information system; the resulting defense-enabled system was first evaluated internally and then deployed for an external Red Team exercise.
Abstract: Many techniques and mechanisms exist today, some COTS products and others less mature research prototypes, that can be used to deflect, detect, or even recover from specific types of cyber attacks. None of them individually is sufficient to provide an all-around defense for a mission-critical distributed system. A mission-critical system must operate despite sustained attacks throughout the mission cycle, which, in the case of military systems, can range from hours to days. A comprehensive survivability architecture, where individual security tools and defense mechanisms are used as building blocks, is required to achieve this level of survivability. We have recently designed a survivability architecture that combines elements of protection, detection, and adaptive reaction, and applied it to a DoD information system. The resulting defense-enabled system was first evaluated internally, and then deployed for an external Red Team exercise. In this paper, we describe the survivability architecture of the system and explain the rationale that motivated the design.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This work discusses an implementation of independent state for the Bro NIDS and examines how it can be leveraged for distributed processing, load parallelization, selective preservation of state across restarts and crashes, dynamic reconfiguration, high level policy maintenance, and support for profiling and debugging.
Abstract: Network intrusion detection systems (NIDSs) critically rely on processing a great deal of state. Often much of this state resides solely in the volatile processor memory accessible to a single user-level process on a single machine. In this work, we highlight the power of independent state, i.e., internal fine-grained state that can be propagated from one instance of a NIDS to others running either concurrently or subsequently. Independent state provides us with a wealth of possible applications that hold promise for enhancing the capabilities of NIDSs. We discuss an implementation of independent state for the Bro NIDS and examine how we can then leverage independent state for distributed processing, load parallelization, selective preservation of state across restarts and crashes, dynamic reconfiguration, high-level policy maintenance, and support for profiling and debugging. We have experimented with each of these applications in several large environments and are now working to integrate them into the sites' operational monitoring. A performance evaluation shows that our implementation is suitable for use even in large-scale environments.
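The notion of independent state can be illustrated with a toy checkpoint: serialize an analyzer's internal tables and let another instance, concurrent or restarted, pick them up and continue counting. This sketch uses Python's pickle purely for illustration; Bro has its own serialization framework, and the state layout here is invented:

```python
import pickle

# One NIDS instance accumulates fine-grained per-connection state...
state = {("10.0.0.5", 80): {"scan_count": 12, "last_seen": 1133740800}}
blob = pickle.dumps(state)  # checkpoint to disk, or ship to a peer instance

# ...and a concurrent or restarted instance rehydrates it and carries on,
# so counters survive crashes and can be shared across a cluster.
restored = pickle.loads(blob)
restored[("10.0.0.5", 80)]["scan_count"] += 1
```

Because the state is decoupled from any one process's address space, the same mechanism supports checkpoint/restart, hand-off between load-balanced workers, and offline profiling of live state.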

Proceedings ArticleDOI
05 Dec 2005
TL;DR: It is shown that some simple changes to Snort signatures can effectively verify the result of attacks against application servers, thus significantly improving the quality of alerts.
Abstract: We propose a method to verify the result of attacks detected by signature-based network intrusion detection systems using lightweight protocol analysis. The observation is that network protocols often have short, meaningful status codes at the beginning of server responses to client requests. A successful intrusion that alters the behavior of a network application server often results in an unexpected server response that does not contain the valid protocol status code; this can be used to verify the result of the intrusion attempt. We then extend this method to verify the result of attacks whose server responses still contain a valid protocol status code. We evaluate this approach by augmenting Snort signatures and testing on real-world data. We show that some simple changes to Snort signatures can effectively verify the result of attacks against application servers, thus significantly improving the quality of alerts.
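A minimal sketch of the verification step, assuming HTTP as the monitored protocol (the regular expression and the notion of "verified" are illustrative): after a signature match, the verifier inspects the first bytes of the server's response and escalates the alert only when the expected status line is absent.

```python
import re

# A successful exploit often yields a response whose first line is not a
# well-formed status line (e.g. raw shell output instead of "HTTP/1.1 404 ...").
STATUS_LINE = re.compile(rb"^HTTP/\d\.\d [1-5]\d\d ")

def attack_likely_succeeded(server_response: bytes) -> bool:
    """Flag a signature alert as verified when the response lacks the
    protocol status code a healthy server would emit."""
    return STATUS_LINE.match(server_response) is None

failed = attack_likely_succeeded(b"HTTP/1.1 404 Not Found\r\n\r\n")
succeeded = attack_likely_succeeded(b"uid=0(root) gid=0(root)\n")
```

An alert whose follow-up response still looks like a normal status line can then be demoted to "attempted", cutting down on false alarms for blocked or failed exploits.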

Proceedings ArticleDOI
05 Dec 2005
TL;DR: e-NeXSh as discussed by the authors is a security approach that utilises kernel and LIBC support for efficiently defending systems against process-subversion attacks by monitoring all LIBC function and system-call invocations and validating them against process-specific information that strictly prescribes the permissible behaviour for the program.
Abstract: We present e-NeXSh, a novel security approach that utilises kernel and LIBC support for efficiently defending systems against process-subversion attacks. Such attacks exploit vulnerabilities in software to override its program control-flow and consequently invoke system calls, causing out-of-process damage. Our technique defeats such attacks by monitoring all LIBC function and system-call invocations, and validating them against process-specific information that strictly prescribes the permissible behaviour for the program (unlike general sandboxing techniques that require manually maintained, explicit policies, we use the program code itself as a guideline for an implicit policy). Any deviation from this behaviour is considered malicious, and we halt the attack, limiting its damage to within the subverted process. We implemented e-NeXSh as a set of modifications to the Linux-2.4.18-3 kernel and a new user-space shared library (e-NeXSh.so). The technique is transparent, requiring no modifications to existing libraries or applications. e-NeXSh was able to successfully defeat both code-injection and LIBC-based attacks in our effectiveness tests. The technique is simple and lightweight, demonstrating no measurable overhead for select UNIX utilities and a negligible 1.55% performance impact on the Apache Web server.
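The validation idea can be sketched abstractly: a system call is permitted only if it is issued from a call site that actually appears in the program text, so code injected onto the stack or heap fails the check. Everything below, addresses included, is hypothetical; the real e-NeXSh performs this validation inside the kernel and LIBC, not in user-level Python.

```python
# Toy model of call-site validation. The addresses are invented; a real
# monitor would extract valid call sites from the binary's text section
# and walk the process stack at system-call time.
VALID_CALL_SITES = {0x8048AB0, 0x8048C14}

def validate_syscall(syscall_name: str, return_address: int) -> bool:
    # Injected shellcode executes from the stack or heap, so its return
    # address cannot match any call site found in the program text.
    return return_address in VALID_CALL_SITES

legit = validate_syscall("execve", 0x8048AB0)      # in-text call site
injected = validate_syscall("execve", 0xBFFFF7A0)  # stack address: shellcode
```

This is what the abstract means by an implicit policy: the set of permissible behaviours is derived from the program code itself rather than from a hand-written sandbox configuration.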

Proceedings ArticleDOI
05 Dec 2005
TL;DR: The vulnerability assessment support system described in this paper can also automate the entire process of vulnerability testing and thus for the first time makes it feasible to run vulnerability testing autonomously and frequently.
Abstract: As the number of system vulnerabilities has multiplied in recent years, vulnerability assessment has emerged as a powerful system security administration tool that can identify vulnerabilities in existing systems before they are exploited. Although there are many commercial vulnerability assessment tools on the market, none of them can formally guarantee that the assessment process never compromises the computer systems being tested. This paper proposes a featherweight virtual machine (FVM) technology to address the safety issue associated with vulnerability testing. Compared with other virtual machine technologies, FVM is designed to facilitate sharing between virtual machines while still providing strong protection between them. The FVM technology allows a vulnerability assessment tool to test an exact replica of a production-mode network service, including both hardware and system software components, while guaranteeing that the production-mode network service is fully isolated from the testing process. In addition to safety, the vulnerability assessment support system described in this paper can also automate the entire process of vulnerability testing, and thus for the first time makes it feasible to run vulnerability testing autonomously and frequently. Experiments on a Windows-based prototype show that Nessus assessment results against an FVM virtual machine are identical to those against a real machine. Furthermore, modifications to the file system and registry state made by vulnerability assessment runs are completely isolated from the host machine. Finally, the performance impact of vulnerability assessment runs on production network services is as low as 3%.
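The isolation model this relies on can be sketched as a copy-on-write overlay: the virtual machine reads shared host state (files, registry keys) until it writes, at which point the change lands in a private layer and the host copy stays untouched. The class and key names below are invented for illustration:

```python
class CopyOnWriteView:
    """Toy FVM-style overlay: reads fall through to shared host state,
    writes land in a private per-virtual-machine layer."""

    def __init__(self, host_state: dict):
        self.host = host_state   # shared with the real machine, never mutated
        self.private = {}        # this VM's copy-on-write layer

    def read(self, key):
        # Prefer the private copy; otherwise fall through to the host.
        return self.private.get(key, self.host.get(key))

    def write(self, key, value):
        self.private[key] = value   # the host copy stays untouched

host = {"HKLM/Software/Service": "1.0"}
vm = CopyOnWriteView(host)
vm.write("HKLM/Software/Service", "tampered-by-scan")  # e.g. a probe's side effect
after_scan = vm.read("HKLM/Software/Service")
host_copy = host["HKLM/Software/Service"]
```

Sharing-by-default keeps the replica cheap and faithful (a scanner sees exactly the production state), while the private write layer guarantees that whatever the assessment run damages never reaches the host.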

Proceedings ArticleDOI
05 Dec 2005
TL;DR: An anomaly-based detection technique is proposed that detects the propagation of scanning worms within individual network cells, protecting internal networks from infection by internal clients.
Abstract: Signature-based schemes for detecting Internet worms often fail on zero-day worms, and their ability to react rapidly to new threats is typically limited by the requirement of some form of human involvement to formulate updated attack signatures. We propose an anomaly-based detection technique, detailing a method to detect the propagation of scanning worms within individual network cells and thus protect internal networks from infection by internal clients. Our software implementation indicates that this technique is both accurate and rapid enough to enable automatic containment and suppression of worm propagation within a network cell. Our approach relies on an aggregate anomaly score derived from the correlation of address resolution protocol (ARP) activity from individual network-attached devices. Our preliminary analysis and prototype indicate that this technique can be used to rapidly detect zero-day worms within a very small number of scans.
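The aggregate scoring idea can be sketched as follows, with invented weights: ARP requests directed at previously uncontacted addresses contribute heavily to a host's score, while repeat resolutions of known peers contribute little, so a scanning worm stands out within a handful of probes.

```python
from collections import Counter

def arp_anomaly_score(arp_requests):
    """Toy aggregate score over (src, dst) ARP request pairs: requests
    for previously uncontacted addresses weigh far more than repeats.
    The weights (1.0 novel, 0.1 repeat) are illustrative, not the paper's."""
    seen = Counter()
    score = 0.0
    for src, dst in arp_requests:
        score += 1.0 if seen[(src, dst)] == 0 else 0.1
        seen[(src, dst)] += 1
    return score

# A scanning worm probing 50 fresh addresses vs. a normal host
# repeatedly re-resolving a single known peer.
scan = [("10.0.0.9", "10.0.0.%d" % i) for i in range(50)]
normal = [("10.0.0.3", "10.0.0.1")] * 50
scan_score = arp_anomaly_score(scan)      # 50 novel targets: high score
normal_score = arp_anomaly_score(normal)  # 1 novel + 49 repeats: low score
```

Because ARP requests precede any IP contact on the local segment, a per-cell monitor can score hosts and trigger containment before the worm completes more than a few scans.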