
Showing papers presented at "USENIX Security Symposium in 2009"


Proceedings Article
10 Aug 2009
TL;DR: A novel malware detection approach is proposed that is both effective and efficient, and thus, can be used to replace or complement traditional antivirus software at the end host.
Abstract: Malware is one of the most serious security threats on the Internet today. In fact, most Internet problems such as spam e-mails and denial of service attacks have malware as their underlying cause. That is, computers that are compromised with malware are often networked together to form botnets, and many attacks are launched using these malicious, attacker-controlled networks. With the increasing significance of malware in Internet attacks, much research has concentrated on developing techniques to collect, study, and mitigate malicious code. Without doubt, it is important to collect and study malware found on the Internet. However, it is even more important to develop mitigation and detection techniques based on the insights gained from the analysis work. Unfortunately, current host-based detection approaches (i.e., anti-virus software) suffer from ineffective detection models. These models concentrate on the features of a specific malware instance, and are often easily evadable by obfuscation or polymorphism. Also, detectors that check for the presence of a sequence of system calls exhibited by a malware instance are often evadable by system call reordering. In order to address the shortcomings of ineffective models, several dynamic detection approaches have been proposed that aim to identify the behavior exhibited by a malware family. Although promising, these approaches are unfortunately too slow to be used as real-time detectors on the end host, and they often require cumbersome virtual machine technology. In this paper, we propose a novel malware detection approach that is both effective and efficient, and thus, can be used to replace or complement traditional antivirus software at the end host. Our approach first analyzes a malware program in a controlled environment to build a model that characterizes its behavior. Such models describe the information flows between the system calls essential to the malware's mission, and therefore, cannot be easily evaded by simple obfuscation or polymorphic techniques. Then, we extract the program slices responsible for such information flows. For detection, we execute these slices to match our models against the runtime behavior of an unknown program. Our experiments show that our approach can effectively detect running malicious code on an end user's host with a small overhead.
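
To make the matching step concrete, here is a minimal sketch, not the paper's system: a behavior model is a set of required information flows between system calls, and a taint-tagged trace matches the model when every required flow is observed. The syscall names, taint tags, and toy trace are illustrative assumptions.

```python
# A model is a set of required flows: (source syscall, sink syscall),
# meaning data returned by the source must reach an argument of the sink.
BOT_MODEL = {("recv", "open"), ("recv", "write")}  # e.g., write network data to disk

def flows_in_trace(trace):
    """Derive observed (source, sink) flows from a tagged trace.

    Each trace entry is (syscall, out_tags, in_tags): taint tags produced
    by the call's outputs and tags carried by its arguments.
    """
    produced = {}          # taint tag -> syscall that produced it
    observed = set()
    for syscall, out_tags, in_tags in trace:
        for tag in in_tags:
            if tag in produced:
                observed.add((produced[tag], syscall))
        for tag in out_tags:
            produced[tag] = syscall
    return observed

def matches(model, trace):
    return model <= flows_in_trace(trace)

# Toy trace: data tagged 't1' comes from recv() and later reaches open()/write().
trace = [("recv", {"t1"}, set()),
         ("open", set(), {"t1"}),
         ("write", set(), {"t1"})]
print(matches(BOT_MODEL, trace))  # True: the behavior model is matched
```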

498 citations


Proceedings Article
10 Aug 2009
TL;DR: A better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.
Abstract: Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.

437 citations


Proceedings Article
10 Aug 2009
TL;DR: Vanish is presented, a system that makes all copies of sensitive data unreadable after a user-specified time through a novel integration of cryptographic techniques with global-scale, P2P distributed hash tables (DHTs), and is shown to meet these privacy-preserving goals.
Abstract: Today's technical and legal landscape presents formidable challenges to personal data privacy. First, our increasing reliance on Web services causes personal data to be cached, copied, and archived by third parties, often without our knowledge or control. Second, the disclosure of private data has become commonplace due to carelessness, theft, or legal actions. Our research seeks to protect the privacy of past, archived data -- such as copies of emails maintained by an email provider -- against accidental, malicious, and legal attacks. Specifically, we wish to ensure that all copies of certain data become unreadable after a user-specified time, without any specific action on the part of a user, and even if an attacker obtains both a cached copy of that data and the user's cryptographic keys and passwords. This paper presents Vanish, a system that meets this challenge through a novel integration of cryptographic techniques with global-scale, P2P, distributed hash tables (DHTs). We implemented a proof-of-concept Vanish prototype to use both the million-plus-node Vuze BitTorrent DHT and the restricted-membership OpenDHT. We evaluate experimentally and analytically the functionality, security, and performance properties of Vanish, demonstrating that it is practical to use and meets the privacy-preserving goals described above. We also describe two applications that we prototyped on Vanish: a Firefox plugin for Gmail and other Web sites and a Vanishing File application.
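
The mechanism can be sketched under heavy simplifying assumptions: an n-of-n XOR split stands in for the threshold secret sharing the full system uses, a plain dict with manual expiry stands in for the Vuze DHT, a SHA-256 keystream stands in for a real cipher, and the function names are invented.

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def encapsulate(plaintext: bytes, dht: dict, n: int = 10):
    key = secrets.token_bytes(32)                    # random data key K
    shares = [secrets.token_bytes(32) for _ in range(n - 1)]
    last = key
    for s in shares:                                 # n-of-n XOR split of K
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    locations = [secrets.token_bytes(20) for _ in range(n)]  # pseudorandom DHT keys
    for loc, share in zip(locations, shares):
        dht[loc] = share                             # DHT nodes age these out
    ciphertext = keystream_xor(key, plaintext)
    del key                                          # no copy of K is kept
    return ciphertext, locations                     # this pair is the "vanishing" object

def decapsulate(ciphertext: bytes, locations, dht: dict) -> bytes:
    key = bytes(32)
    for loc in locations:
        share = dht[loc]                             # KeyError once shares expire
        key = bytes(a ^ b for a, b in zip(key, share))
    return keystream_xor(key, ciphertext)

dht = {}                                             # stand-in for the DHT
vdo = encapsulate(b"meet at noon", dht)
print(decapsulate(*vdo, dht))                        # readable before expiry
dht.clear()                                          # DHT churn: shares vanish
# decapsulate(*vdo, dht) now raises KeyError: the data is unrecoverable
```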

404 citations


Proceedings Article
10 Aug 2009
TL;DR: A backwards compatible bounds checking technique that substantially reduces performance overhead and is more than two times faster than the fastest previous technique and about five times faster--using less memory--than recording object bounds using a splay tree.
Abstract: Attacks that exploit out-of-bounds errors in C and C++ programs are still prevalent despite many years of research on bounds checking. Previous backwards compatible bounds checking techniques, which can be applied to unmodified C and C++ programs, maintain a data structure with the bounds for each allocated object and perform lookups in this data structure to check if pointers remain within bounds. This data structure can grow large and the lookups are expensive. In this paper we present a backwards compatible bounds checking technique that substantially reduces performance overhead. The key insight is to constrain the sizes of allocated memory regions and their alignment to enable efficient bounds lookups and hence efficient bounds checks at runtime. Our technique has low overhead in practice--only 8% throughput decrease for Apache--and is more than two times faster than the fastest previous technique and about five times faster--using less memory--than recording object bounds using a splay tree.
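
The key insight lends itself to a short sketch (the technique is often called "baggy bounds checking"). This is not the paper's C implementation: addresses and the allocator are simulated, and the slot size is an assumed parameter, but the shift-and-mask lookup is the idea the abstract describes.

```python
SLOT_LOG = 4                      # one table entry per 16-byte slot (assumption)
table = {}                        # slot index -> log2(allocation size)
next_free = 1 << 20               # toy bump-allocator cursor

def alloc(requested: int) -> int:
    global next_free
    size_log = max(SLOT_LOG, (requested - 1).bit_length())  # round up to 2^k
    size = 1 << size_log
    base = (next_free + size - 1) & ~(size - 1)              # align to own size
    next_free = base + size
    for slot in range(base >> SLOT_LOG, (base + size) >> SLOT_LOG):
        table[slot] = size_log                               # tiny, constant-time table
    return base

def check_deref(p: int, q: int) -> bool:
    """May pointer q, derived from in-bounds pointer p, be dereferenced?"""
    size_log = table[p >> SLOT_LOG]       # one shift + one load at runtime
    base = p & ~((1 << size_log) - 1)     # mask recovers the object's base
    return base <= q < base + (1 << size_log)

buf = alloc(100)                    # padded to 128 bytes
print(check_deref(buf, buf + 99))   # True: within the (padded) object
print(check_deref(buf, buf + 200))  # False: out-of-bounds derived pointer
```

Because every object's size is a power of two and its base is aligned to that size, the bounds lookup needs no search structure at all, which is where the speedup over splay trees comes from.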

263 citations


Proceedings Article
10 Aug 2009
TL;DR: The design and implementation of a system that fully automates the process of constructing instruction sequences usable by an attacker for malicious computations is presented, along with a practical attack that can bypass existing kernel integrity protection mechanisms.
Abstract: Protecting the kernel of an operating system against attacks, especially injection of malicious code, is an important factor for implementing secure operating systems. Several kernel integrity protection mechanisms were proposed recently, all of which have a particular shortcoming: they cannot protect against attacks in which the attacker re-uses existing code within the kernel to perform malicious computations. In this paper, we present the design and implementation of a system that fully automates the process of constructing instruction sequences that can be used by an attacker for malicious computations. We evaluate the system on different commodity operating systems and show the portability and universality of our approach. Finally, we describe the implementation of a practical attack that can bypass existing kernel integrity protection mechanisms.
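
As a rough illustration of the first stage such a system automates, the sketch below scans a code image for "gadgets", short byte sequences ending in the x86 ret opcode (0xc3), which are the raw material for return-oriented computations; real systems disassemble the candidates and chain them automatically. The byte string is a made-up stand-in for kernel code.

```python
MAX_GADGET_BYTES = 5

def find_gadget_offsets(code: bytes):
    """Yield (offset, byte span) for every short suffix that ends in a ret."""
    for i, b in enumerate(code):
        if b != 0xC3:                       # x86 near return
            continue
        for back in range(1, MAX_GADGET_BYTES + 1):
            start = i - back
            if start < 0:
                break
            yield start, code[start:i + 1]  # candidate gadget; needs disassembly

kernel_image = bytes.fromhex("4889c35dc39090580fc3")  # toy "code" bytes
for off, gadget in find_gadget_offsets(kernel_image):
    print(f"+0x{off:x}: {gadget.hex()}")
```

Note that candidates may begin in the middle of an intended instruction; such unintended instruction boundaries are exactly what makes code re-use attacks hard to eliminate.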

256 citations


Proceedings Article
10 Aug 2009
TL;DR: It is concluded that most modern computer keyboards generate compromising emanations (mainly because of manufacturer cost pressures in the design), and hence they are not safe for transmitting confidential information.
Abstract: Computer keyboards are often used to transmit confidential data such as passwords. Since they contain electronic components, keyboards inevitably emit electromagnetic waves. These emanations could reveal sensitive information such as keystrokes. The technique generally used to detect compromising emanations is based on a wide-band receiver, tuned to a specific frequency. However, this method may not be optimal since a significant amount of information is lost during the signal acquisition. Our approach is to acquire the raw signal directly from the antenna and to process the entire captured electromagnetic spectrum. Thanks to this method, we detected four different kinds of compromising electromagnetic emanations generated by wired and wireless keyboards. These emissions lead to a full or a partial recovery of the keystrokes. We implemented these side-channel attacks and our best practical attack fully recovered 95% of the keystrokes of a PS/2 keyboard at a distance of up to 20 meters, even through walls. We tested 12 different keyboard models bought between 2001 and 2008 (PS/2, USB, wireless and laptop). They are all vulnerable to at least one of the four attacks. We conclude that most modern computer keyboards generate compromising emanations (mainly because of manufacturer cost pressures in the design). Hence, they are not safe to transmit confidential information.

255 citations


Proceedings Article
10 Aug 2009
TL;DR: GATEKEEPER is a highly extensible system with a rich, expressive policy language that allows the hosting site administrator to formulate policies as succinct Datalog queries; statically checking nine representative policies results in 1,341 verified warnings in 684 widgets, with no false negatives, due to the soundness of the analysis, and false positives affecting only two widgets.
Abstract: The advent of Web 2.0 has led to the proliferation of client-side code that is typically written in JavaScript. This code is often combined -- or mashed-up -- with other code and content from disparate, mutually untrusting parties, leading to undesirable security and reliability consequences. This paper proposes GATEKEEPER, a mostly static approach for soundly enforcing security and reliability policies for JavaScript programs. GATEKEEPER is a highly extensible system with a rich, expressive policy language, allowing the hosting site administrator to formulate their policies as succinct Datalog queries. The primary application of GATEKEEPER this paper explores is in reasoning about JavaScript widgets such as those hosted by widget portals Live.com and Google/IG. Widgets submitted to these sites can be either malicious or just buggy and poorly written, and the hosting site has the authority to reject the submission of widgets that do not meet the site's security policies. To show the practicality of our approach, we describe nine representative security and reliability policies. Statically checking these policies results in 1,341 verified warnings in 684 widgets, no false negatives, due to the soundness of our analysis, and false positives affecting only two widgets.
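
For flavor, here is a toy rendition of the policy style, not GATEKEEPER itself: one Datalog-like rule ("flag a widget whose code may reach eval") evaluated as a naive fixpoint over call-graph facts, which are themselves invented for the example.

```python
calls = {("widget_main", "render"), ("render", "helper"),
         ("helper", "eval")}                 # extracted call-edge facts

def may_reach(facts, src, banned):
    """reaches(X,Y) :- calls(X,Y).  reaches(X,Z) :- calls(X,Y), reaches(Y,Z)."""
    reaches = set(facts)
    changed = True
    while changed:                           # naive Datalog fixpoint
        changed = False
        for (x, y) in list(reaches):
            for (y2, z) in facts:
                if y == y2 and (x, z) not in reaches:
                    reaches.add((x, z))
                    changed = True
    return (src, banned) in reaches

print(may_reach(calls, "widget_main", "eval"))  # True: reject this widget
```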

236 citations


Proceedings Article
10 Aug 2009
TL;DR: The effectiveness of NOZZLE is measured by demonstrating that it successfully detects 12 published and 2,000 synthetically generated heap-spraying exploits, and it is shown that even with a detection threshold set six times lower than is required to detect published malicious attacks, NOZZLE reports no false positives when run over 150 popular Internet sites.
Abstract: Heap spraying is a security attack that increases the exploitability of memory corruption errors in type-unsafe applications. In a heap-spraying attack, an attacker coerces an application to allocate many objects containing malicious code in the heap, increasing the success rate of an exploit that jumps to a location within the heap. Because heap layout randomization necessitates new forms of attack, spraying has been used in many recent security exploits. Spraying is especially effective in web browsers, where the attacker can easily allocate the malicious objects using JavaScript embedded in a web page. In this paper, we describe NOZZLE, a runtime heap-spraying detector. NOZZLE examines individual objects in the heap, interpreting them as code and performing a static analysis on that code to detect malicious intent. To reduce false positives, we aggregate measurements across all heap objects and define a global heap health metric. We measure the effectiveness of NOZZLE by demonstrating that it successfully detects 12 published and 2,000 synthetically generated heap-spraying exploits. We also show that even with a detection threshold set six times lower than is required to detect published malicious attacks, NOZZLE reports no false positives when run over 150 popular Internet sites. Using sampling and concurrent scanning to reduce overhead, we show that the performance overhead of NOZZLE is less than 7% on average. While NOZZLE currently targets heap-based spraying attacks, its techniques can be applied to any attack that attempts to fill the address space with malicious code objects (e.g., stack spraying [42]).
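
A heavily simplified sketch of the detection idea follows: instead of real x86 disassembly and control-flow analysis, a byte counts as "sled-like" if it is a NOP-style filler, and the global heap health metric is just the mean per-object score. The filler set and threshold are assumptions, not NOZZLE's.

```python
SLED_BYTES = {0x90, 0x0C, 0x0D}        # nop and common single-byte sled fillers

def object_attack_surface(obj: bytes) -> float:
    """Fraction of the object that looks like executable sled padding."""
    if not obj:
        return 0.0
    return sum(b in SLED_BYTES for b in obj) / len(obj)

def heap_health(heap_objects, threshold=0.5):
    """Aggregate across the heap to cut false positives from single objects."""
    scores = [object_attack_surface(o) for o in heap_objects]
    metric = sum(scores) / len(scores)
    return metric, metric > threshold       # True -> report heap spraying

benign = [b"\x00ordinary data\x00" * 8 for _ in range(100)]
sprayed = [b"\x90" * 200 + b"\xcc" * 16 for _ in range(100)]   # NOP sleds
print(heap_health(benign))     # low metric, no alarm
print(heap_health(sprayed))    # high metric, alarm
```

The aggregation step is the point: a single benign object that happens to look like code should not trip the detector, but thousands of near-identical sled-like objects will.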

234 citations


Proceedings Article
10 Aug 2009
TL;DR: New methods for discovering integer bugs using dynamic test generation on x86 binaries are introduced, implemented in the tool SmartFuzz and evaluated alongside the black-box fuzz testing tool zzuf, and key design choices in efficient symbolic execution of such programs are described.
Abstract: Recently, integer bugs, including integer overflow, width conversion, and signed/unsigned conversion errors, have risen to become a common root cause for serious security vulnerabilities. We introduce new methods for discovering integer bugs using dynamic test generation on x86 binaries, and we describe key design choices in efficient symbolic execution of such programs. We implemented our methods in a prototype tool SmartFuzz, which we use to analyze Linux x86 binary executables. We also created a reporting service, metafuzz.com, to aid in triaging and reporting bugs found by SmartFuzz and the black-box fuzz testing tool zzuf. We report on experiments applying these tools to a range of software applications, including the mplayer media player, the exiv2 image metadata library, and ImageMagick convert. We also report on our experience using SmartFuzz, zzuf, and metafuzz.com to perform testing at scale with the Amazon Elastic Compute Cloud (EC2). To date, the metafuzz.com site has recorded more than 2,614 test runs, comprising 2,361,595 test cases. Our experiments found approximately 77 total distinct bugs in 864 compute hours, costing us an average of $2.24 per bug at current EC2 rates. We quantify the overlap in bugs found by the two tools, and we show that SmartFuzz finds bugs missed by zzuf, including one program where SmartFuzz finds bugs but zzuf does not.
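
The kind of solver query such test generation issues can be sketched briefly; this is illustrative, not SmartFuzz. Assuming the z3-solver package, a made-up path constraint, and 32-bit symbolic inputs, the query asks for concrete values that make an unsigned addition wrap.

```python
from z3 import BitVec, Solver, ULT, sat

x = BitVec("x", 32)
y = BitVec("y", 32)

s = Solver()
s.add(ULT(x, 0x10000))        # path constraint observed on a concrete run
s.add(ULT(x + y, x))          # unsigned wraparound: x + y < x (mod 2^32)

if s.check() == sat:
    m = s.model()
    # Concrete inputs that drive the program down the overflow path:
    print("x =", m[x].as_long(), "y =", m[y].as_long())
```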

221 citations


Proceedings Article
10 Aug 2009
TL;DR: A tree-based data structure is described that can generate tamper-evident proofs with logarithmic size and space, improving over previous linear constructions and allowing large-scale log servers to selectively delete old events, in an agreed-upon fashion, while generating efficient proofs that no inappropriate events were deleted.
Abstract: Many real-world applications wish to collect tamper-evident logs for forensic purposes. This paper considers the case of an untrusted logger, serving a number of clients who wish to store their events in the log, and kept honest by a number of auditors who will challenge the logger to prove its correct behavior. We propose semantics of tamper-evident logs in terms of this auditing process. The logger must be able to prove that individual logged events are still present, and that the log, as seen now, is consistent with how it was seen in the past. To accomplish this efficiently, we describe a tree-based data structure that can generate such proofs with logarithmic size and space, improving over previous linear constructions. Where a classic hash chain might require an 800 MB trace to prove that a randomly chosen event is in a log with 80 million events, our prototype returns a 3 KB proof with the same semantics. We also present a flexible mechanism for the log server to present authenticated and tamper-evident search results for all events matching a predicate. This can allow large-scale log servers to selectively delete old events, in an agreed-upon fashion, while generating efficient proofs that no inappropriate events were deleted. We describe a prototype implementation and measure its performance on an 80 million event syslog trace at 1,750 events per second using a single CPU core. Performance improves to 10,500 events per second if cryptographic signatures are offloaded, corresponding to 1.1 TB of logging throughput per week.
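
The logarithmic-proof claim is easy to see in a sketch of a plain Merkle membership proof; the paper's history tree additionally supports consistency proofs between log versions, which this sketch omits. A proof is just the sibling hashes on the leaf-to-root path.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def root_and_proof(leaves, index):
    """Return (root hash, sibling hashes bottom-up for leaves[index])."""
    level = [H(b"\x00" + leaf) for leaf in leaves]    # domain-separated leaves
    proof, i = [], index
    while len(level) > 1:
        if len(level) % 2:                            # duplicate last node if odd
            level.append(level[-1])
        proof.append(level[i ^ 1])                    # sibling at this level
        level = [H(b"\x01" + level[j] + level[j + 1])
                 for j in range(0, len(level), 2)]
        i //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    h = H(b"\x00" + leaf)
    for sibling in proof:
        pair = (h + sibling) if index % 2 == 0 else (sibling + h)
        h = H(b"\x01" + pair)
        index //= 2
    return h == root

events = [f"event {i}".encode() for i in range(80)]
root, proof = root_and_proof(events, 42)
print(len(proof), "hashes in the proof")      # ~log2(80) = 7 hashes, not 80
print(verify(root, events[42], 42, proof))    # True
```

Scaled up, a log of 80 million events needs about 27 sibling hashes per proof, which is why the prototype's proofs are kilobytes rather than hundreds of megabytes.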

219 citations


Proceedings Article
10 Aug 2009
TL;DR: Gazelle is introduced, a secure web browser constructed as a multi-principal OS that exclusively manages resource protection and sharing across web site principals and exposes intricate design issues that no previous work has identified.
Abstract: Original web browsers were applications designed to view static web content. As web sites evolved into dynamic web applications that compose content from multiple web sites, browsers have become multi-principal operating environments with resources shared among mutually distrusting web site principals. Nevertheless, no existing browsers, including new architectures like IE 8, Google Chrome, and OP, have a multi-principal operating system construction that gives a browser-based OS the exclusive control to manage the protection of all system resources among web site principals. In this paper, we introduce Gazelle, a secure web browser constructed as a multi-principal OS. Gazelle's browser kernel is an operating system that exclusively manages resource protection and sharing across web site principals. This construction exposes intricate design issues that no previous work has identified, such as cross-protection-domain display and events protection. We elaborate on these issues and provide comprehensive solutions. Our prototype implementation and evaluation experience indicate that it is realistic to turn an existing browser into a multi-principal OS that yields significantly stronger security and robustness with acceptable performance.

Proceedings Article
10 Aug 2009
TL;DR: This work strengthens the original congestion attack by combining it with a novel bandwidth amplification attack based on a flaw in the Tor design that lets us build long circuits that loop back on themselves, and demonstrates a working attack on today's deployed Tor network.
Abstract: In 2005, Murdoch and Danezis demonstrated the first practical congestion attack against a deployed anonymity network. They could identify which relays were on a target Tor user's path by building paths one at a time through every Tor relay and introducing congestion. However, the original attack was performed on only 13 Tor relays on the nascent and lightly loaded Tor network. We show that the attack from their paper is no longer practical on today's 1500-relay heavily loaded Tor network. The attack doesn't scale because a) the attacker needs a tremendous amount of bandwidth to measure enough relays during the attack window, and b) there are too many false positives now that many other users are adding congestion at the same time as the attacks. We then strengthen the original congestion attack by combining it with a novel bandwidth amplification attack based on a flaw in the Tor design that lets us build long circuits that loop back on themselves. We show that this new combination attack is practical and effective by demonstrating a working attack on today's deployed Tor network. By coming up with a model to better understand Tor's routing behavior under congestion, we further provide a statistical analysis characterizing how effective our attack is in each case.

Proceedings Article
10 Aug 2009
TL;DR: The results indicate that physical-layer identification of RFID transponders can be practical and thus has a potential to be used in a number of applications including product and document counterfeiting detection.
Abstract: In this work we perform the first comprehensive study of physical-layer identification of RFID transponders. We propose several techniques for the extraction of RFID physical-layer fingerprints. We show that RFID transponders can be accurately identified in a controlled environment based on stable fingerprints corresponding to their physical-layer properties. We tested our techniques on a set of 50 RFID smart cards of the same manufacturer and type, and we show that these techniques enable the identification of individual transponders with an Equal Error Rate of 2.43% (single run) and 4.38% (two runs). We further applied our techniques to a smaller set of electronic passports, where we obtained a similar identification accuracy. Our results indicate that physical-layer identification of RFID transponders can be practical and thus has a potential to be used in a number of applications including product and document counterfeiting detection.
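
The reported metric can be made concrete with a short sketch of how an Equal Error Rate is computed from matching scores: the operating point where the false accept rate (impostor fingerprints matched) equals the false reject rate (genuine fingerprints missed). The scores below are fabricated; real ones would come from comparing physical-layer fingerprints.

```python
def equal_error_rate(genuine, impostor):
    """Sweep thresholds; return the rate where FAR and FRR cross."""
    best_gap, eer = float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

genuine  = [0.91, 0.88, 0.95, 0.86, 0.93, 0.90]   # same-card comparisons
impostor = [0.55, 0.62, 0.70, 0.48, 0.87, 0.60]   # different-card comparisons
print(f"EER = {equal_error_rate(genuine, impostor):.2%}")
```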

Proceedings Article
10 Aug 2009
TL;DR: An automated reputation engine, SNARE, is built based on network-level features that can be ascertained without ever looking at a packet's contents, such as the distance in IP space to other email senders or the geographic distance between sender and receiver.
Abstract: Users and network administrators need ways to filter email messages based primarily on the reputation of the sender. Unfortunately, conventional mechanisms for sender reputation--notably, IP blacklists--are cumbersome to maintain and evadable. This paper investigates ways to infer the reputation of an email sender based solely on network-level features, without looking at the contents of a message. First, we study first-order properties of network-level features that may help distinguish spammers from legitimate senders. We examine features that can be ascertained without ever looking at a packet's contents, such as the distance in IP space to other email senders or the geographic distance between sender and receiver. We derive features that are lightweight, since they do not require seeing a large amount of email from a single IP address and can be gleaned without looking at an email's contents--many such features are apparent from even a single packet. Second, we incorporate these features into a classification algorithm and evaluate the classifier's ability to automatically classify email senders as spammers or legitimate senders. We build an automated reputation engine, SNARE, based on these features using labeled data from a deployed commercial spam-filtering system. We demonstrate that SNARE can achieve comparable accuracy to existing static IP blacklists: about a 70% detection rate for less than a 0.3% false positive rate. Third, we show how SNARE can be integrated into existing blacklists, essentially as a first-pass filter.
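
A toy version of the pipeline, with fabricated data: two features in the spirit of the paper (nearest-neighbor distance in IP space and sender-receiver geographic distance) feed a supervised classifier. The paper's learner is swapped for scikit-learn's RandomForestClassifier as a stand-in, and the labels are invented.

```python
from ipaddress import ip_address
from sklearn.ensemble import RandomForestClassifier

def ip_space_distance(sender: str, other_senders) -> int:
    """Numeric distance to the nearest other recently seen sender."""
    s = int(ip_address(sender))
    return min(abs(s - int(ip_address(o))) for o in other_senders)

# Rows: (nearest-neighbor IP distance, sender->receiver distance in km)
train_X = [[120, 900], [80, 1200], [1 << 20, 150], [1 << 22, 300]]
train_y = [1, 1, 0, 0]            # 1 = spammer, 0 = legitimate (toy labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(train_X, train_y)

neighbors = ["203.0.113.7", "203.0.113.9"]       # other recent senders
features = [ip_space_distance("203.0.113.5", neighbors), 1000]
print("spammer" if clf.predict([features])[0] else "legitimate")
```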

Proceedings Article
10 Aug 2009
TL;DR: A solution called Uncoordinated DSSS (UDSSS) is proposed that enables spread-spectrum anti-jamming broadcast communication without the requirement of shared secrets and can handle an unlimited number of receivers while being secure against malicious receivers.
Abstract: Jamming-resistant broadcast communication is crucial for safety-critical applications such as emergency alert broadcasts or the dissemination of navigation signals in adversarial settings. These applications share the need for guaranteed authenticity and availability of messages which are broadcast by base stations to a large and unknown number of (potentially untrusted) receivers. Common techniques to counter jamming attacks such as Direct-Sequence Spread Spectrum (DSSS) and Frequency Hopping are based on secrets that need to be shared between the sender and the receivers before the start of the communication. However, broadcast anti-jamming communication that relies on either secret pairwise or group keys is likely to be subject to scalability and key-setup problems or provides weak jamming-resistance, respectively. In this work, we therefore propose a solution called Uncoordinated DSSS (UDSSS) that enables spread-spectrum anti-jamming broadcast communication without the requirement of shared secrets. It is applicable to broadcast scenarios in which receivers hold an authentic public key of the sender but do not share a secret key with it. UDSSS can handle an unlimited number of receivers while being secure against malicious receivers. We analyze the security and latency of UDSSS and complete our work with an experimental evaluation on a prototype implementation.
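
The uncoordinated part is easy to sketch: the spreading code is drawn per message from a public set, so a jammer cannot predict it in time, yet any receiver can despread by trying every code in the set. BPSK chips, a noiseless channel, and the set and code sizes below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
PUBLIC_CODES = rng.choice([-1, 1], size=(16, 64))     # published code set

def spread(bits, code):
    symbols = np.repeat(2 * np.array(bits) - 1, len(code))  # 0/1 -> -1/+1
    return symbols * np.tile(code, len(bits))

def despread(signal, codes, nbits):
    best = None
    for code in codes:                                 # receiver's code search
        chips = signal.reshape(nbits, len(code))
        corr = chips @ code / len(code)                # per-bit correlation
        score = np.abs(corr).mean()
        if best is None or score > best[0]:
            best = (score, (corr > 0).astype(int))
    return best[1]

bits = [1, 0, 1, 1, 0]
tx_code = PUBLIC_CODES[rng.integers(len(PUBLIC_CODES))]  # fresh random pick
rx = despread(spread(bits, tx_code), PUBLIC_CODES, len(bits))
print(rx.tolist())    # [1, 0, 1, 1, 0]
```

The receiver pays with a search over the code set; the sender's unpredictability is what denies the jammer the processing-gain advantage it would need.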

Proceedings Article
10 Aug 2009
TL;DR: VPriv provides the first practical protocol to compute path functions for various kinds of tolling, speed and delay estimation, and insurance calculations in a way that does not reveal anything more than the result of the function to the server, and an out-of-band enforcement mechanism using random spot checks that allows the server and application to handle misbehaving users.
Abstract: A variety of location-based vehicular services are currently being woven into the national transportation infrastructure in many countries. These include usage- or congestion-based road pricing, traffic law enforcement, traffic monitoring, "pay-as-you-go" insurance, and vehicle safety systems. Although such applications promise clear benefits, there are significant potential violations of the location privacy of drivers under standard implementations (i.e., GPS monitoring of cars as they drive, surveillance cameras, and toll transponders). In this paper, we develop and evaluate VPriv, a system that can be used by several such applications without violating the location privacy of drivers. The starting point is the observation that in many applications, some centralized server needs to compute a function of a user's path--a list of time-position tuples. VPriv provides two components: 1) the first practical protocol to compute path functions for various kinds of tolling, speed and delay estimation, and insurance calculations in a way that does not reveal anything more than the result of the function to the server, and 2) an out-of-band enforcement mechanism using random spot checks that allows the server and application to handle misbehaving users. Our implementation and experimental evaluation of VPriv shows that a modest infrastructure of a few multi-core PCs can easily serve 1 million cars. Using analysis and simulation based on real vehicular data collected over one year from the CarTel project's testbed of 27 taxis running in the Boston area, we demonstrate that VPriv is resistant to a range of possible attacks.

Proceedings Article
10 Aug 2009
TL;DR: A new attack is presented that allows a malicious user to eavesdrop on other users' keystrokes by taking advantage of the stack information of a process disclosed by its virtual file within procfs, the process file system supported by Linux.
Abstract: A multi-user system usually involves a large amount of information shared among its users. The security implications of such information can never be underestimated. In this paper, we present a new attack that allows a malicious user to eavesdrop on other users' keystrokes using such information. Our attack takes advantage of the stack information of a process disclosed by its virtual file within procfs, the process file system supported by Linux. We show that on a multi-core system, the ESP of a process when it is making system calls can be effectively sampled by a "shadow" program that continuously reads the public statistical information of the process. Such a sampling is shown to be reliable even in the presence of multiple users, when the system is under a realistic workload. From the ESP content, a keystroke event can be identified if it triggers system calls. As a result, we can accurately determine inter-keystroke timings and launch a timing attack to infer the characters the victim entered. We developed techniques for automatically analyzing an application's binary executable to extract the ESP pattern that fingerprints a keystroke event. The occurrences of such a pattern are identified from an ESP trace the shadow program records from the application's runtime to calculate timings. These timings are further analyzed using a Hidden Markov Model and other public information related to the victim on a multi-user system. Our experimental study demonstrates that our attack greatly facilitates password cracking and also works very well on recognizing English words.
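
A minimal sketch of the sampling step, with assumptions spelled out: on the Linux versions the paper studied, the kstkesp field of /proc/<pid>/stat exposed the target's stack pointer to any user (newer kernels restrict this field, largely in response to attacks of this kind), and the fingerprint check below is a placeholder for the paper's binary-derived ESP patterns.

```python
import time

def sample_esp(pid: int) -> int:
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(") ", 1)[1].split()  # skip "(comm)" safely
    return int(fields[26])    # kstkesp: overall field 29, index 26 after comm

def collect_trace(pid: int, seconds: float = 2.0):
    trace = []
    end = time.monotonic() + seconds
    while time.monotonic() < end:     # the "shadow" program's tight loop
        trace.append((time.monotonic(), sample_esp(pid)))
    return trace

def keystroke_times(trace, fingerprint):
    """Timestamps where the sampled ESP matches a keystroke's ESP pattern."""
    return [t for t, esp in trace if esp in fingerprint]

# usage sketch: times = keystroke_times(collect_trace(victim_pid), patterns)
# successive differences of `times` give the inter-keystroke timings fed to
# the Hidden Markov Model in the paper's inference stage.
```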

Proceedings Article
10 Aug 2009
TL;DR: This paper proposes a TCP-over-DTLS (Datagram Transport Layer Security) transport between routers that gives each stream of data its own TCP connection, and protects the TCP headers--which would otherwise give stream identification information to an attacker--with DTLS.
Abstract: The Tor network gives anonymity to Internet users by relaying their traffic through a variety of routers around the world. All traffic between any pair of routers, even if they represent circuits for different clients, is multiplexed over a single TCP connection. This results in interference across circuits during congestion control, packet dropping and packet reordering. This interference greatly contributes to Tor's notorious latency problems. Our solution is to use a TCP-over-DTLS (Datagram Transport Layer Security) transport between routers. We give each stream of data its own TCP connection, and protect the TCP headers--which would otherwise give stream identification information to an attacker--with DTLS. We perform experiments on our implemented version to illustrate that our proposal has indeed resolved the cross-circuit interference.

Proceedings Article
10 Aug 2009
TL;DR: It is demonstrated that network warfare competitions can be instrumented to generate modern labeled datasets and such games can thus be used as engines to produce future datasets on a routine basis.
Abstract: Unlabeled network traffic data is readily available to the security research community, but there is a severe shortage of labeled datasets that allow validation of experimental results. The labeled DARPA datasets of 1998 and 1999, while innovative at the time, are of only marginal utility in today's threat environment. In this paper we demonstrate that network warfare competitions can be instrumented to generate modern labeled datasets. Our contributions include design parameters for competitions as well as results and analysis from a test implementation of our techniques. Our results indicate that network warfare competitions can be used to generate scientifically valuable labeled datasets and such games can thus be used as engines to produce future datasets on a routine basis.

Proceedings Article
10 Aug 2009
TL;DR: This work presents a novel framework for building privacy-preserving social networking applications that retains the functionality offered by the current social networks and uses information flow models to control what untrusted applications can do with the information they receive.
Abstract: Social networking websites have recently evolved from being service providers to platforms for running third party applications. Users have typically trusted the social networking sites with personal data, and assume that their privacy preferences are correctly enforced. However, they are now being asked to trust each third-party application they use in a similar manner. This has left the users' private information vulnerable to accidental or malicious leaks by these applications. In this work, we present a novel framework for building privacy-preserving social networking applications that retains the functionality offered by the current social networks. We use information flow models to control what untrusted applications can do with the information they receive. We show the viability of our design by means of a platform prototype. The usability of the platform is further evaluated by developing sample applications using the platform APIs. We also discuss both security and nonsecurity challenges in designing and implementing such a framework.

Proceedings Article
10 Aug 2009
TL;DR: Nemesis is presented, a novel methodology for mitigating authentication bypass and access control vulnerabilities in existing web applications and can improve the precision of existing security tools, such as DIFT analyses for SQL injection prevention, by providing runtime information about user authentication.
Abstract: This paper presents Nemesis, a novel methodology for mitigating authentication bypass and access control vulnerabilities in existing web applications. Authentication attacks occur when a web application authenticates users unsafely, granting access to web clients that lack the appropriate credentials. Access control attacks occur when an access control check in the web application is incorrect or missing, allowing users unauthorized access to privileged resources such as databases and files. Such attacks are becoming increasingly common, and have occurred in many high-profile applications, such as IIS [10] and WordPress [31], as well as 14% of surveyed web sites [30]. Nevertheless, none of the currently available tools can fully mitigate these attacks. Nemesis automatically determines when an application safely and correctly authenticates users, by using Dynamic Information Flow Tracking (DIFT) techniques to track the flow of user credentials through the application's language runtime. Nemesis combines authentication information with programmer-supplied access control rules on files and database entries to automatically ensure that only properly authenticated users are granted access to any privileged resources or data. A study of seven popular web applications demonstrates that a prototype of Nemesis is effective at mitigating attacks, requires little programmer effort, and imposes minimal runtime overhead. Finally, we show that Nemesis can also improve the precision of existing security tools, such as DIFT analyses for SQL injection prevention, by providing runtime information about user authentication.
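
A toy sketch of the idea, not Nemesis itself: a shadow "authenticated as" tag is set only where submitted credentials are actually compared against stored ones, and every privileged access re-checks that tag against programmer-supplied rules. The user table, ACL, and names are invented.

```python
import hashlib, hmac

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
ACL = {"/admin/report": {"alice"}}           # programmer-supplied access rules

class Request:
    def __init__(self, user, password):
        self.user, self.password = user, password
        self.auth_tag = None                 # DIFT-style shadow authentication

def authenticate(req: Request):
    stored = USERS.get(req.user)
    digest = hashlib.sha256(req.password.encode()).hexdigest()
    # The tag is set ONLY on a real credential comparison; a buggy handler
    # that merely sets its own session flag would leave auth_tag as None.
    if stored and hmac.compare_digest(stored, digest):
        req.auth_tag = req.user

def open_privileged(req: Request, resource: str):
    if req.auth_tag is None or req.auth_tag not in ACL.get(resource, set()):
        raise PermissionError("authentication bypass blocked")
    return f"contents of {resource}"

req = Request("alice", "s3cret")
authenticate(req)
print(open_privileged(req, "/admin/report"))           # allowed
try:
    open_privileged(Request("eve", "wrong"), "/admin/report")
except PermissionError as e:
    print("blocked:", e)                               # bypass attempt fails
```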

Proceedings Article
10 Aug 2009
TL;DR: This work presents a web application framework that leverages existing work on strong type systems to statically enforce a separation between the structure and content of both web documents and database queries generated by a web application, and shows how this approach can automatically prevent the introduction of both server-side cross-site scripting and SQL injection vulnerabilities.
Abstract: Security vulnerabilities continue to plague web applications, allowing attackers to access sensitive data and co-opt legitimate web sites as a hosting ground for malware. Accordingly, researchers have focused on various approaches to detecting and preventing common classes of security vulnerabilities in web applications, including anomaly-based detection mechanisms, static and dynamic analyses of server-side web application code, and client-side security policy enforcement. This paper presents a different approach to web application security. In this work, we present a web application framework that leverages existing work on strong type systems to statically enforce a separation between the structure and content of both web documents and database queries generated by a web application, and show how this approach can automatically prevent the introduction of both server-side cross-site scripting and SQL injection vulnerabilities. We present an evaluation of the framework, and demonstrate both the coverage and correctness of our sanitization functions. Finally, experimental results suggest that web applications developed using this framework perform competitively with applications developed using traditional frameworks.
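
The separation the framework enforces statically can be mimicked at runtime in a short sketch; the class names are invented, and a real implementation relies on a strong type system rather than Python's runtime checks. Untrusted strings can only enter a document through an escaping constructor, and query parameters never concatenate into query structure.

```python
import html

class Html:
    """Structure: only combinable with other Html, never with raw str."""
    def __init__(self, fragment: str):
        self._s = fragment
    def __add__(self, other: "Html") -> "Html":
        if not isinstance(other, Html):
            raise TypeError("raw strings cannot enter the document")
        return Html(self._s + other._s)
    def __str__(self):
        return self._s

def text(untrusted: str) -> Html:
    return Html(html.escape(untrusted))      # the only str -> Html path

class Query:
    """Structure and parameters stay apart until the driver binds them."""
    def __init__(self, sql: str, params: tuple = ()):
        self.sql, self.params = sql, params

page = Html("<p>Hello, ") + text('<script>alert(1)</script>') + Html("</p>")
print(page)   # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>

q = Query("SELECT * FROM users WHERE name = ?", ("alice';--",))
# The literal never concatenates into q.sql, so SQL injection cannot arise.
```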

Proceedings Article
10 Aug 2009
TL;DR: Potential use cases are discussed in order to motivate the integration of VPST with other testbeds, requirements of interconnected testbeds are identified, and the design for integration with VPST is described.
Abstract: The Virtual Power System Testbed (VPST) at University of Illinois at Urbana-Champaign is part of the Trustworthy Cyber Infrastructure for the Power Grid (TCIP) and is maintained by members of the Information Trust Institute (ITI). VPST is designed to be integrated with other testbeds across the country to explore performance and security of Supervisory Control And Data Acquisition (SCADA) protocols and equipment. We discuss potential use cases in order to motivate the integration of VPST with other testbeds, identify requirements of interconnected testbeds, and describe our design for integration with VPST.

Proceedings Article
10 Aug 2009
TL;DR: Storage Capsules are encrypted file containers that allow a compromised machine to securely view and edit sensitive files without malware being able to steal confidential data.
Abstract: Protecting confidential information is a major concern for organizations and individuals alike, who stand to suffer huge losses if private data falls into the wrong hands. One of the primary threats to confidentiality is malicious software on personal computers, which is estimated to already reside on 100 to 150 million machines. Current security controls, such as firewalls, anti-virus software, and intrusion detection systems, are inadequate at preventing malware infection. This paper introduces Storage Capsules, a new approach for protecting confidential files on a personal computer. Storage Capsules are encrypted file containers that allow a compromised machine to securely view and edit sensitive files without malware being able to steal confidential data. The system achieves this goal by taking a checkpoint of the current system state and disabling device output before allowing access to a Storage Capsule. Writes to the Storage Capsule are then sent to a trusted module. When the user is done editing files in the Storage Capsule, the system is restored to its original state and device output resumes normally. Finally, the trusted module declassifies the Storage Capsule by re-encrypting its contents, and exports it for storage in a low-integrity environment. This work presents the design, implementation, and evaluation of Storage Capsules, with a focus on exploring covert channels.

Proceedings Article
10 Aug 2009
TL;DR: This work identifies a class of Web browser implementation vulnerabilities, cross-origin JavaScript capability leaks, which occur when the browser leaks a JavaScript pointer from one security origin to another and proposes an approach to mitigate this class of vulnerabilities by adding access control checks to browser JavaScript engines.
Abstract: We identify a class of Web browser implementation vulnerabilities, cross-origin JavaScript capability leaks, which occur when the browser leaks a JavaScript pointer from one security origin to another. We devise an algorithm for detecting these vulnerabilities by monitoring the "points-to" relation of the JavaScript heap. Our algorithm finds a number of new vulnerabilities in the open-source WebKit browser engine used by Safari. We propose an approach to mitigate this class of vulnerabilities by adding access control checks to browser JavaScript engines. These access control checks are backwards-compatible because they do not alter the semantics of the Web platform. Through an application of the inline cache, we implement these checks with an overhead of 1-2% on industry-standard benchmarks.
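
The core of the detection algorithm is a labeled graph walk, sketched below with a toy heap: objects carry their security origin, and any points-to edge that crosses origins outside a sanctioned crossing (such as a window reference) is flagged as a leaked capability. The heap contents and allowed crossings are invented for the example.

```python
from collections import deque

# object id -> (origin, set of referenced object ids)
heap = {
    "a_window":  ("https://a.com", {"a_doc"}),
    "a_doc":     ("https://a.com", {"leaked_fn"}),   # b.com function reachable!
    "leaked_fn": ("https://b.com", set()),
    "b_window":  ("https://b.com", {"leaked_fn"}),
}
ALLOWED_CROSSINGS = {("a_window", "b_window")}       # sanctioned references

def find_leaks(heap, roots):
    leaks, seen, work = [], set(roots), deque(roots)
    while work:
        src = work.popleft()
        src_origin, edges = heap[src]
        for dst in edges:
            dst_origin = heap[dst][0]
            if dst_origin != src_origin and (src, dst) not in ALLOWED_CROSSINGS:
                leaks.append((src, dst))             # JS pointer crossed origins
            if dst not in seen:
                seen.add(dst)
                work.append(dst)
    return leaks

print(find_leaks(heap, ["a_window", "b_window"]))    # [('a_doc', 'leaked_fn')]
```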

Proceedings Article
10 Aug 2009
TL;DR: This work presents a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system, and adds these techniques to a compiler-based virtual machine called Secure Virtual Architecture.
Abstract: Systems that enforce memory safety for today's operating system kernels and other system software do not account for the behavior of low-level software/hardware interactions such as memory-mapped I/O, MMU configuration, and context switching. Bugs in such low-level interactions can lead to violations of the memory safety guarantees provided by a safe execution environment and can lead to exploitable vulnerabilities in system software. In this work, we present a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system. Our design introduces a small set of abstractions and interfaces for manipulating processor state, kernel stacks, memory-mapped I/O objects, MMU mappings, and self-modifying code to achieve this goal, without moving resource allocation and management decisions out of the kernel. We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques. Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA. Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.

Proceedings Article
10 Aug 2009
TL;DR: This work proposes Cryptographic Computational Continuation Passing (CCCP), a mechanism that amplifies programmable passive RFID tags' capabilities by exploiting an often overlooked, plentiful resource: low-power radio communication.
Abstract: Passive RFID tags harvest their operating energy from an interrogating reader, but constant energy shortfalls severely limit their computational and storage capabilities. We propose Cryptographic Computational Continuation Passing (CCCP), a mechanism that amplifies programmable passive RFID tags' capabilities by exploiting an often overlooked, plentiful resource: low-power radio communication. While radio communication is more energy intensive than flash memory writes in many embedded devices, we show that the reverse is true for passive RFID tags. A tag can use CCCP to checkpoint its computational state to an untrusted reader using less energy than an equivalent flash write, thereby allowing it to devote a greater share of its energy to computation. Security is the major challenge in such remote checkpointing. Using scant and fleeting energy, a tag must enforce confidentiality, authenticity, integrity, and data freshness while communicating with potentially untrustworthy infrastructure. Our contribution synthesizes well-known cryptographic and low-power techniques with a novel flash memory storage strategy, resulting in a secure remote storage facility for an emerging class of devices. Our evaluation of CCCP consists of energy measurements of a prototype implementation on the batteryless, MSP430-based WISP platform. Our experiments show that--despite cryptographic overhead--remote checkpointing consumes less energy than checkpointing to flash for data sizes above roughly 64 bytes. CCCP enables secure and flexible remote storage that would otherwise outstrip batteryless RFID tags' resources.
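
The security core of remote checkpointing can be sketched with stdlib primitives: encrypt-then-MAC over the state plus a monotonic counter gives confidentiality, integrity, and freshness against the untrusted reader. HMAC-SHA256 here stands in for tag-appropriate ciphers, and the counter lives in a variable rather than the tag's real non-volatile memory.

```python
import hmac, hashlib, secrets

K_ENC, K_MAC = secrets.token_bytes(32), secrets.token_bytes(32)
nv_counter = 0                                     # tag-side freshness counter

def _stream(key, ctr, n):
    out, i = b"", 0
    while len(out) < n:
        out += hmac.new(key, ctr.to_bytes(8, "big") + i.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        i += 1
    return out[:n]

def checkpoint(state: bytes):
    global nv_counter
    nv_counter += 1                                # bump before releasing data
    ct = bytes(a ^ b for a, b in zip(state, _stream(K_ENC, nv_counter, len(state))))
    tag = hmac.new(K_MAC, nv_counter.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    return nv_counter, ct, tag                     # stored by the untrusted reader

def restore(ctr, ct, tag):
    expect = hmac.new(K_MAC, ctr.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("forged checkpoint")
    if ctr != nv_counter:
        raise ValueError("stale checkpoint (replay)")
    return bytes(a ^ b for a, b in zip(ct, _stream(K_ENC, ctr, len(ct))))

blob = checkpoint(b"partial computation state")
print(restore(*blob))                              # survives the round trip
```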

Proceedings Article
10 Aug 2009
TL;DR: A robust scheme named LOCK is described for LOCating the prefix hijacKer ASes based on distributed Internet measurements, which is able to pinpoint the prefix hijacker AS with an accuracy of up to 94.3%.
Abstract: Prefix hijacking is one of the top known threats on today's Internet. A number of measurement based solutions have been proposed to detect prefix hijacking events. In this paper we take these solutions one step further by addressing the problem of locating the attacker in each detected hijacking event. Being able to locate the attacker is critical for conducting necessary mitigation mechanisms at the earliest possible time to limit the impact of the attack, successfully stopping the attack and restoring the service. We propose a robust scheme named LOCK, for LOCating the prefix hijacKer ASes based on distributed Internet measurements. LOCK locates each attacker AS by actively monitoring paths (either in the control-plane or in the data-plane) to the victim prefix from a small number of carefully selected monitors distributed on the Internet. Moreover, LOCK is robust against various countermeasures that the hijackers may employ. This is achieved by taking advantage of two observations: that the hijacker cannot manipulate the AS path before the path reaches the hijacker, and that the paths to the victim prefix "converge" around the hijacker AS. We have deployed LOCK on a number of PlanetLab nodes and conducted several large scale measurements and experiments to evaluate the performance. Our results show that LOCK is able to pinpoint the prefix hijacker AS with an accuracy of up to 94.3%.
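
The second observation can be sketched directly: rank ASes by how many monitor paths they terminate, trusting only the portion of each path laid down by honest ASes before the hijacker. The AS paths below are fabricated.

```python
from collections import Counter

# AS-level paths observed by monitors toward the victim prefix during a hijack.
monitor_paths = [
    [701, 1239, 666],          # AS 666 plays the hijacker in this toy data
    [3356, 174, 1239, 666],
    [2914, 666],
    [701, 3356, 666],
]

def rank_hijacker_candidates(paths):
    """Score each AS by how often it is the last hop monitors reach.

    Hops *after* the hijacker can be forged, but the path up to the
    hijacker is laid down by honest ASes, so the point where the monitors'
    paths converge is trustworthy evidence of the culprit's location.
    """
    return Counter(path[-1] for path in paths).most_common()

print(rank_hijacker_candidates(monitor_paths))   # [(666, 4)] -> suspect AS 666
```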

Proceedings Article
10 Aug 2009
TL;DR: This paper is a collaborative work on the various tools and techniques used and the overall effectiveness of live-attack exercises in teaching information security.
Abstract: The Cyber Defense Exercise (CDX) is a four day Information Assurance exercise run by the National Security Agency/Central Security Service (NSA/CSS) to help train federal service academy students in secure network operations. This paper is a collaborative work on the various tools and techniques used and the overall effectiveness of live-attack exercises in teaching information security.

Proceedings Article
10 Aug 2009
TL;DR: This work believes that clinical trials can provide solid evidence of the efficacy of security products, much as they have in the field of medicine, and proposes an alternative evaluation method, computer security clinical trials.
Abstract: One of the largest challenges faced by purchasers of security products is evaluating their relative merits. While customers can get reliable information on characteristics such as runtime overhead, user interface, and support quality, the actual level of protection provided by different security products is mostly unranked--or, worse yet, ranked using criteria that do not generally reflect their performance in practice. Even though researchers have been working on improving testing methodologies, given the complex interactions of users, uses, evolving threats, and different deployment environments, there are fundamental limitations on the ability of lab-based measurements to determine real world performance. To address these issues, we propose an alternative evaluation method, computer security clinical trials. In this method, security products are deployed in randomly selected subsets of targeted populations and are monitored to determine their performance in normal use. We believe that clinical trials can provide solid evidence of the efficacy of security products, much as they have in the field of medicine.